WO2018088380A1 - Transmission method, transmission device, and program - Google Patents

Transmission method, transmission device, and program

Info

Publication number
WO2018088380A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
image
signal
receiver
mode
Prior art date
Application number
PCT/JP2017/040032
Other languages
French (fr)
Japanese (ja)
Inventor
秀紀 青山
大嶋 光昭
Original Assignee
Panasonic Intellectual Property Corporation of America
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Corporation of America
Priority to JP2018550203A, patent JP7023239B2
Priority to CN201780069560.4A, patent CN110114988B
Publication of WO2018088380A1
Priority to US16/408,537, patent US10819428B2

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114Indoor or close-range type systems
    • H04B10/116Visible light communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114Indoor or close-range type systems
    • H04B10/1141One-way transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114Indoor or close-range type systems
    • H04B10/1149Arrangements for indoor wireless networking of information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Definitions

  • The present invention relates to a visible light signal transmission method, a transmission device, a program, and the like.
  • Patent Literature 1 describes a technology that, in an optical space transmission device which transmits information to free space using light, efficiently realizes communication between devices within a limited transmission apparatus by performing communication using a plurality of monochromatic light sources of illumination light.
  • However, the conventional method is limited to cases where the device to which it is applied has a three-color light source, such as a lighting device.
  • The present invention solves such a problem and provides a transmission method and the like that enables communication between various devices, including devices other than lighting devices having a three-color light source.
  • A transmission method according to one aspect of the present invention is a transmission method for transmitting a signal by a change in luminance of a light source, the method including: a reception step of accepting a dimming degree designated for the light source as a designated dimming degree; and a transmission step of, when the designated dimming degree is less than or equal to a first value, transmitting the signal encoded in a first mode by a luminance change while causing the light source to emit light at the designated dimming degree, and, when the designated dimming degree is greater than the first value and less than or equal to a second value, transmitting the signal encoded in a second mode by a luminance change while causing the light source to emit light at the designated dimming degree, wherein the value of the peak current of the light source for transmitting the signal encoded in the second mode by a luminance change when the designated dimming degree is greater than the first value and less than or equal to the second value is smaller than the value of the peak current of the light source for transmitting the signal encoded in the first mode by a luminance change when the designated dimming degree is the first value.
  • According to the present invention, it is possible to realize a transmission method that enables communication between various devices, including devices other than a lighting device having a three-color light source.
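The mode-switching rule summarized above can be sketched as follows. This is a minimal illustration only: the threshold values, mode names, and function are hypothetical placeholders and are not taken from the disclosure.

```python
# Minimal sketch of the mode-switching rule described above.
# FIRST_VALUE and SECOND_VALUE are hypothetical placeholder thresholds (percent),
# not values defined by the disclosure.
FIRST_VALUE = 50.0
SECOND_VALUE = 100.0

def select_encoding_mode(designated_dimming: float) -> str:
    """Return which encoding mode to use for a designated dimming degree (0-100%)."""
    if designated_dimming <= FIRST_VALUE:
        return "first mode"   # signal encoded in the first mode, sent by luminance change
    if designated_dimming <= SECOND_VALUE:
        return "second mode"  # second mode uses a smaller peak current (larger duty ratio)
    raise ValueError("designated dimming degree exceeds the supported range")

print(select_encoding_mode(30.0))   # -> first mode
print(select_encoding_mode(80.0))   # -> second mode
```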
  • FIG. 1 is a diagram illustrating an example of an observation method of luminance of a light emitting unit in the first embodiment.
  • FIG. 2 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in the first embodiment.
  • FIG. 3 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in the first embodiment.
  • FIG. 4 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in the first embodiment.
  • FIG. 5A is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5B is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5C is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5D is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5E is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5F is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5G is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5H is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 6A is a flowchart of the information communication method in Embodiment 1.
  • FIG. 6B is a block diagram of the information communication apparatus according to Embodiment 1.
  • FIG. 7 is a diagram illustrating an example of a photographing operation of the receiver in the second embodiment.
  • FIG. 8 is a diagram illustrating another example of the photographing operation of the receiver in the second embodiment.
  • FIG. 9 is a diagram illustrating another example of the photographing operation of the receiver in the second embodiment.
  • FIG. 10 is a diagram illustrating an example of display operation of the receiver in Embodiment 2.
  • FIG. 11 is a diagram illustrating an example of display operation of the receiver in Embodiment 2.
  • FIG. 12 is a diagram illustrating an example of operation of a receiver in Embodiment 2.
  • FIG. 13 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 14 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 15 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 16 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 17 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 18 is a diagram illustrating an example of operations of the receiver, the transmitter, and the server in the second embodiment.
  • FIG. 19 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 20 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 21 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 22 is a diagram illustrating an example of operation of a transmitter in Embodiment 2.
  • FIG. 23 is a diagram illustrating another example of operation of a transmitter in Embodiment 2.
  • FIG. 24 is a diagram illustrating an example of application of a receiver in Embodiment 2.
  • FIG. 25 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 26 is a diagram illustrating an example of processing operations of the receiver, the transmitter, and the server in Embodiment 3.
  • FIG. 27 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 3.
  • FIG. 28 is a diagram illustrating an example of operations of a transmitter, a receiver, and a server in Embodiment 3.
  • FIG. 29 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 3.
  • FIG. 30 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 4.
  • FIG. 31 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 4.
  • FIG. 32 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 4.
  • FIG. 33 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 4.
  • FIG. 34 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 4.
  • FIG. 35 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 4.
  • FIG. 36 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 4.
  • FIG. 37 is a diagram for describing notification of visible light communication to a human in the fifth embodiment.
  • FIG. 38 is a diagram for explaining an application example to the route guidance in the fifth embodiment.
  • FIG. 39 is a diagram for explaining an application example to use log accumulation and analysis in the fifth embodiment.
  • FIG. 40 is a diagram for explaining an application example to screen sharing in the fifth embodiment.
  • FIG. 41 is a diagram illustrating an application example of the information communication method according to the fifth embodiment.
  • FIG. 42 is a diagram illustrating an example of application of the transmitter and the receiver in the sixth embodiment.
  • FIG. 43 is a diagram illustrating an example of application of the transmitter and the receiver in the sixth embodiment.
  • FIG. 44 is a diagram illustrating an example of a receiver in Embodiment 7.
  • FIG. 45 is a diagram illustrating an example of a reception system in the seventh embodiment.
  • FIG. 46 is a diagram illustrating an example of a signal transmission / reception system according to the seventh embodiment.
  • FIG. 47 is a flowchart showing a reception method in which interference is eliminated in the seventh embodiment.
  • FIG. 48 is a flowchart showing a method for estimating the orientation of a transmitter in the seventh embodiment.
  • FIG. 49 is a flowchart showing a reception start method according to the seventh embodiment.
  • FIG. 50 is a flowchart showing an ID generation method using information of another medium together in the seventh embodiment.
  • FIG. 51 is a flowchart showing a reception method selection method based on frequency separation in the seventh embodiment.
  • FIG. 52 is a flowchart showing a signal reception method when the exposure time is long in the seventh embodiment.
  • FIG. 53 is a diagram illustrating an example of a transmitter dimming (adjusting brightness) method in Embodiment 7.
  • FIG. 54 is a diagram illustrating an example of a method for configuring a dimming function of a transmitter in the seventh embodiment.
  • FIG. 55 is a diagram for explaining the EX zoom.
  • FIG. 56 is a diagram illustrating an example of a signal reception method in Embodiment 9.
  • FIG. 57 is a diagram illustrating an example of a signal reception method in Embodiment 9.
  • FIG. 58 is a diagram illustrating an example of a signal reception method in Embodiment 9.
  • FIG. 59 is a diagram illustrating an example of a screen display method of a receiver in Embodiment 9.
  • FIG. 60 is a diagram illustrating an example of a signal reception method in Embodiment 9.
  • FIG. 61 is a diagram illustrating an example of a signal reception method according to the ninth embodiment.
  • FIG. 62 is a flowchart illustrating an example of a signal reception method in the ninth embodiment.
  • FIG. 63 is a diagram illustrating an example of a signal reception method in the ninth embodiment.
  • FIG. 64 is a flowchart showing processing of the reception program in the ninth embodiment.
  • FIG. 65 is a block diagram of a receiving apparatus according to the ninth embodiment.
  • FIG. 66 is a diagram illustrating an example of display on the receiver when a visible light signal is received.
  • FIG. 67 is a diagram illustrating an example of display on the receiver when a visible light signal is received.
  • FIG. 68 is a diagram illustrating an example of the display of the acquired data image.
  • FIG. 69 is a diagram illustrating an example of an operation when saving or discarding acquired data.
  • FIG. 70 is a diagram illustrating a display example when browsing acquired data.
  • FIG. 71 is a diagram illustrating an example of a transmitter in Embodiment 9.
  • FIG. 72 is a diagram illustrating an example of a reception method in Embodiment 9.
  • FIG. 73 is a flowchart illustrating an example of a reception method in Embodiment 10.
  • FIG. 74 is a flowchart illustrating an example of a reception method in Embodiment 10.
  • FIG. 75 is a flowchart illustrating an example of a reception method in Embodiment 10.
  • FIG. 76 is a diagram for describing a reception method in which the receiver according to Embodiment 10 uses an exposure time longer than the period of the modulation frequency (modulation period).
  • FIG. 77 is a diagram for describing a reception method in which the receiver according to Embodiment 10 uses an exposure time longer than the period of the modulation frequency (modulation period).
  • FIG. 78 is a diagram showing an efficient division number with respect to the size of transmission data in the tenth embodiment.
  • FIG. 79A is a diagram illustrating an example of a setting method in Embodiment 10.
  • FIG. 79B is a diagram illustrating another example of the setting method according to the tenth embodiment.
  • FIG. 80 is a flowchart showing processing of the information processing program in the tenth embodiment.
  • FIG. 81 is a diagram for describing an application example of the transmission and reception system in the tenth embodiment.
  • FIG. 82 is a flowchart showing processing operations of the transmission / reception system in the tenth embodiment.
  • FIG. 83 is a diagram for describing an example of application of the transmission and reception system in the tenth embodiment.
  • FIG. 84 is a flowchart showing processing operations of the transmission / reception system in the tenth embodiment.
  • FIG. 85 is a diagram for describing an application example of the transmission and reception system in the tenth embodiment.
  • FIG. 86 is a flowchart showing processing operations of the transmission / reception system in the tenth embodiment.
  • FIG. 87 is a diagram for describing an example of application of a transmitter in Embodiment 10.
  • FIG. 88 is a diagram for describing an application example of the transmission and reception system in the eleventh embodiment.
  • FIG. 89 is a diagram for explaining an application example of the transmission and reception system in the eleventh embodiment.
  • FIG. 90 is a diagram for describing an example of application of a transmission and reception system in Embodiment 11.
  • FIG. 91 is a diagram for describing an application example of the transmission and reception system in the eleventh embodiment.
  • FIG. 92 is a diagram for describing an application example of the transmission and reception system in the eleventh embodiment.
  • FIG. 93 is a diagram for describing an application example of the transmission / reception system in Embodiment 11.
  • FIG. 94 is a diagram for describing an application example of the transmission and reception system in the eleventh embodiment.
  • FIG. 95 is a diagram for describing an application example of the transmission / reception system in Embodiment 11.
  • FIG. 96 is a diagram for describing an application example of the transmission / reception system in Embodiment 11.
  • FIG. 97 is a diagram for describing an application example of the transmission / reception system in Embodiment 11.
  • FIG. 98 is a diagram for describing an application example of the transmission / reception system in Embodiment 11.
  • FIG. 99 is a diagram for describing an application example of the transmission and reception system in Embodiment 11.
  • FIG. 100 is a diagram for describing an example of application of the transmission and reception system in Embodiment 11.
  • FIG. 101 is a diagram for explaining an application example of the transmission and reception system in the eleventh embodiment.
  • FIG. 102 is a diagram for describing operation of a receiver in Embodiment 12.
  • FIG. 103A is a diagram for describing another operation of the receiver in Embodiment 12.
  • FIG. 103B is a diagram illustrating an example of an indicator displayed by the output unit 1215 in the twelfth embodiment.
  • FIG. 103C is a diagram illustrating a display example of AR in the twelfth embodiment.
  • FIG. 104A is a diagram for describing an example of a transmitter in Embodiment 12.
  • FIG. 104B is a diagram for describing another example of the transmitter in Embodiment 12.
  • FIG. 105A is a diagram for describing an example of synchronous transmission by a plurality of transmitters in Embodiment 12.
  • FIG. 105B is a diagram for describing another example of synchronous transmission by a plurality of transmitters in Embodiment 12.
  • FIG. 106 is a diagram for describing another example of synchronous transmission by a plurality of transmitters in Embodiment 12.
  • FIG. 107 is a diagram for describing signal processing by a transmitter in Embodiment 12.
  • FIG. 108 is a flowchart illustrating an example of a reception method in Embodiment 12.
  • FIG. 110 is a flowchart illustrating another example of a reception method in Embodiment 12.
  • FIG. 111 is a diagram illustrating an example of a transmission signal in Embodiment 13.
  • FIG. 112 is a diagram illustrating another example of a transmission signal in Embodiment 13.
  • FIG. 113 is a diagram illustrating another example of a transmission signal in Embodiment 13.
  • FIG. 114A is a diagram for illustrating a transmitter in Embodiment 14.
  • FIG. 114B is a diagram showing each luminance change of RGB in the fourteenth embodiment.
  • FIG. 115 is a diagram illustrating afterglow characteristics of the green fluorescent component and the red fluorescent component in the fourteenth embodiment.
  • FIG. 116 is a diagram for describing a problem newly generated in order to suppress occurrence of a barcode reading error in the fourteenth embodiment.
  • FIG. 117 is a diagram for explaining downsampling performed by a receiver in Embodiment 14.
  • FIG. 118 is a flowchart illustrating a processing operation of the receiver in Embodiment 14.
  • FIG. 119 is a diagram illustrating processing operations of the reception device (imaging device) in Embodiment 15.
  • FIG. 120 is a diagram illustrating processing operation of a reception device (imaging device) in Embodiment 15.
  • FIG. 121 is a diagram illustrating processing operation of a reception device (imaging device) in Embodiment 15.
  • FIG. 122 is a diagram illustrating processing operation of the reception device (imaging device) in Embodiment 15.
  • FIG. 123 is a diagram illustrating an example of an application according to the sixteenth embodiment.
  • FIG. 124 is a diagram illustrating an example of an application according to the sixteenth embodiment.
  • FIG. 125 is a diagram illustrating an example of the transmission signal and an example of the audio synchronization method in Embodiment 16.
  • FIG. 126 is a diagram illustrating an example of a transmission signal in Embodiment 16.
  • FIG. 127 is a diagram illustrating an example of processing flow of a receiver in Embodiment 16.
  • FIG. 128 is a diagram illustrating an example of a user interface of the receiver in Embodiment 16.
  • FIG. 129 is a diagram illustrating an example of a process flow of a receiver in Embodiment 16.
  • FIG. 130 is a diagram illustrating another example of processing flow of a receiver in Embodiment 16.
  • FIG. 131A is a diagram for explaining a specific method of synchronized playback in the sixteenth embodiment.
  • FIG. 131B is a block diagram showing a configuration of a playback device (receiver) that performs synchronized playback in the sixteenth embodiment.
  • FIG. 131C is a flowchart illustrating a processing operation of a playback device (receiver) that performs synchronized playback in the sixteenth embodiment.
  • FIG. 132 is a diagram for describing preparation for synchronized playback in the sixteenth embodiment.
  • FIG. 133 is a diagram illustrating an example of application of a receiver in Embodiment 16.
  • FIG. 134A is a front view of a receiver held by a holder in Embodiment 16.
  • FIG. 134B is a rear view of a receiver held by a holder in Embodiment 16.
  • FIG. 135 is a diagram for describing a use case of a receiver held by a holder in Embodiment 16.
  • FIG. 136 is a flowchart illustrating processing operation of a receiver held by a holder in Embodiment 16.
  • FIG. 137 is a diagram illustrating an example of an image displayed by the receiver in Embodiment 16.
  • FIG. 138 is a diagram showing another example of the holder according to the sixteenth embodiment.
  • FIG. 139A is a diagram illustrating an example of a visible light signal in Embodiment 17.
  • FIG. 140 is a diagram illustrating a configuration of a visible light signal according to the seventeenth embodiment.
  • FIG. 141 is a diagram illustrating an example of bright line images obtained by imaging of the receiver in Embodiment 17.
  • FIG. 142 is a diagram illustrating another example of bright line images obtained by imaging by the receiver in Embodiment 17.
  • FIG. 143 is a diagram illustrating another example of the bright line image obtained by imaging by the receiver in Embodiment 17.
  • FIG. 144 is a diagram for describing adaptation of the receiver in Embodiment 17 to a camera system that performs HDR synthesis.
  • FIG. 145 is a diagram for explaining the processing operation of the visible light communication system in the seventeenth embodiment.
  • FIG. 146A is a diagram illustrating an example of vehicle-to-vehicle communication using visible light in Embodiment 17.
  • FIG. 146B is a diagram illustrating another example of vehicle-to-vehicle communication using visible light in Embodiment 17.
  • FIG. 147 is a diagram illustrating an example of a method for determining the positions of a plurality of LEDs in Embodiment 17.
  • FIG. 148 is a diagram illustrating an example of bright line images obtained by capturing an image of the vehicle in the seventeenth embodiment.
  • FIG. 149 is a diagram illustrating an example of application of the receiver and the transmitter in Embodiment 17.
  • FIG. 149 is a view of the automobile from the back.
  • FIG. 150 is a flowchart illustrating an example of processing operations of a receiver and a transmitter in Embodiment 17.
  • FIG. 151 is a diagram illustrating an example of application of the receiver and the transmitter in Embodiment 17.
  • FIG. 152 is a flowchart illustrating an example of processing operations of the receiver 7007a and the transmitter 7007b in Embodiment 17.
  • FIG. 153 is a diagram illustrating a configuration of a visible light communication system applied to the inside of a train in Embodiment 17.
  • FIG. 154 is a diagram illustrating a configuration of a visible light communication system applied to a facility such as an amusement park in Embodiment 17.
  • FIG. 155 is a diagram illustrating an example of a visible light communication system including a playground device and a smartphone according to Embodiment 17.
  • FIG. 156 is a diagram illustrating an example of a transmission signal in Embodiment 18.
  • FIG. 157 is a diagram illustrating an example of a transmission signal in Embodiment 18.
  • FIG. 158 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 159 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 160 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 161 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 162 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 163 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 164 is a diagram illustrating an example of a transmission and reception system in Embodiment 19.
  • FIG. 165 is a flowchart illustrating an example of processing of the transmission / reception system in the nineteenth embodiment.
  • FIG. 166 is a flowchart showing the operation of the server in the nineteenth embodiment.
  • FIG. 167 is a flowchart illustrating an example of operation of a receiver in Embodiment 19.
  • FIG. 168 is a flowchart illustrating a method of calculating the progress status in the simple mode according to the nineteenth embodiment.
  • FIG. 169 is a flowchart illustrating a method for calculating the progress in the maximum likelihood estimation mode according to the nineteenth embodiment.
  • FIG. 170 is a flowchart showing a display method in which the progress status does not decrease in the nineteenth embodiment.
  • FIG. 171 is a flowchart illustrating a progress status display method when there are a plurality of packet lengths according to the nineteenth embodiment.
  • FIG. 172 is a diagram illustrating an example of an operation state of a receiver in Embodiment 19.
  • FIG. 173 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 174 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 175 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 176 is a block diagram illustrating an example of a transmitter in Embodiment 19.
  • FIG. 177 is a timing chart when the LED display in Embodiment 19 is driven with the optical ID modulation signal of the present invention.
  • FIG. 178 is a timing chart when the LED display in Embodiment 19 is driven with the optical ID modulation signal of the present invention.
  • FIG. 179 is a timing chart when the LED display in Embodiment 19 is driven with the optical ID modulation signal of the present invention.
  • FIG. 180A is a flowchart illustrating a transmission method according to one embodiment of the present invention.
  • FIG. 180B is a block diagram illustrating a functional configuration of the transmission device according to one embodiment of the present invention.
  • FIG. 181 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 182 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 183 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 184 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 185 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 186 is a diagram illustrating an example of a transmission signal in Embodiment 19.
  • FIG. 187 is a diagram illustrating an example of a structure of a visible light signal in Embodiment 20.
  • FIG. 188 is a diagram illustrating an example of a detailed configuration of a visible light signal in Embodiment 20.
  • FIG. 189A is a diagram illustrating another example of a visible light signal in Embodiment 20.
  • FIG. 189B is a diagram illustrating another example of a visible light signal in Embodiment 20.
  • FIG. 189C is a diagram illustrating the signal length of a visible light signal in Embodiment 20.
  • FIG. 190 is a diagram illustrating a comparison result of luminance values between the visible light signal and the standard IEC visible light signal according to the twentieth embodiment.
  • FIG. 191 is a diagram illustrating a comparison result of the number of received packets and the reliability with respect to the angle of view between the visible light signal and the standard IEC visible light signal according to the twentieth embodiment.
  • FIG. 192 is a diagram illustrating comparison results of the number of received packets and reliability with respect to noise between the visible light signal and the standard IEC visible light signal according to the twentieth embodiment.
  • FIG. 193 is a diagram illustrating a comparison result of the number of received packets and the reliability with respect to the reception-side clock error between the visible light signal and the standard IEC visible light signal according to the twentieth embodiment.
  • FIG. 194 is a diagram illustrating a structure of a transmission target signal in the twentieth embodiment.
  • FIG. 195A is a diagram illustrating a visible light signal receiving method in Embodiment 20.
  • FIG. 195B is a diagram illustrating rearrangement of visible light signals in the twentieth embodiment.
  • FIG. 196 is a diagram illustrating another example of a visible light signal in Embodiment 20.
  • FIG. 197 is a diagram illustrating another example of a detailed configuration of a visible light signal in the twentieth embodiment.
  • FIG. 198 is a diagram illustrating another example of a detailed configuration of a visible light signal in the twentieth embodiment.
  • FIG. 199 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 20.
  • FIG. 200 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 20.
  • FIG. 201 is a diagram illustrating another example of a detailed configuration of a visible light signal according to the twentieth embodiment.
  • FIG. 202 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 20.
  • FIG. 203 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197.
  • FIG. 204 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197.
  • FIG. 205 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197.
  • FIG. 206 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197.
  • FIG. 207 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197.
  • FIG. 208 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197.
  • FIG. 209 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197.
  • FIG. 210 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197.
  • FIG. 211 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197.
  • FIG. 212 is a diagram illustrating an example of a detailed configuration of a visible light signal according to the first modification of the twentieth embodiment.
  • FIG. 213 is a diagram illustrating another example of a visible light signal according to Modification 1 of Embodiment 20.
  • FIG. 214 is a diagram showing still another example of a visible light signal according to Modification 1 of Embodiment 20.
  • FIG. 215 is a diagram illustrating an example of packet modulation according to Modification 1 of Embodiment 20.
  • FIG. 216 is a diagram illustrating processing for dividing the original data into one according to the first modification of the twentieth embodiment.
  • FIG. 217 is a diagram illustrating a process of dividing the original data into two according to the first modification of the twentieth embodiment.
  • FIG. 218 is a diagram illustrating processing of dividing original data into three according to Modification 1 of Embodiment 20.
  • FIG. 219 is a diagram illustrating another example of the process of dividing the original data into three according to the first modification of the twentieth embodiment.
  • FIG. 220 is a diagram illustrating another example of the process of dividing the original data into three according to the first modification of the twentieth embodiment.
  • FIG. 221 is a diagram illustrating a process of dividing the original data into four according to the first modification of the twentieth embodiment.
  • FIG. 222 is a diagram showing processing for dividing original data into five parts according to Modification 1 of Embodiment 20.
  • FIG. 223 is a diagram illustrating processing of dividing original data into 6, 7, or 8 portions according to Modification Example 1 of Embodiment 20.
  • FIG. 224 is a diagram illustrating another example of the process of dividing the original data into 6, 7 or 8 according to the first modification of the twentieth embodiment.
  • FIG. 225 is a diagram illustrating a process of dividing the original data into nine according to the first modification of the twentieth embodiment.
  • FIG. 226 is a diagram illustrating processing of dividing original data into any number from 10 to 16 according to Modification 1 of Embodiment 20.
  • FIG. 227 is a diagram illustrating an example of a relationship among the number of original data divisions, a data size, and an error correction code according to the first modification of the twentieth embodiment.
  • FIG. 228 is a diagram illustrating another example of the relationship between the number of original data divisions, the data size, and the error correction code according to the first modification of the twentieth embodiment.
  • FIG. 229 is a diagram illustrating still another example of the relationship among the number of original data divisions, the data size, and the error correction code according to Modification 1 of Embodiment 20.
  • FIG. 230A is a flowchart illustrating a visible light signal generation method according to Embodiment 20.
  • FIG. 230B is a block diagram illustrating a configuration of the signal generation device according to Embodiment 20.
  • FIG. 231 is a diagram illustrating a method of receiving a high-frequency visible light signal in Embodiment 21.
  • FIG. 232A is a diagram illustrating another method of receiving a high-frequency visible light signal in Embodiment 21.
  • FIG. 232B is a diagram illustrating another method of receiving a high-frequency visible light signal in Embodiment 21.
  • FIG. 233 is a diagram illustrating a method of outputting a high-frequency signal in Embodiment 21.
  • FIG. 234 is a diagram for describing the autonomous flight apparatus according to the twenty-second embodiment.
  • FIG. 235 is a diagram illustrating an example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 236 is a diagram illustrating an example of a display system in Embodiment 23.
  • FIG. 237 is a diagram illustrating another example of the display system in Embodiment 23.
  • FIG. 238 is a diagram illustrating another example of the display system in Embodiment 23.
  • FIG. 239 is a flowchart illustrating an example of process operations of a receiver in Embodiment 23.
  • FIG. 240 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 241 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 242 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 243 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 244 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 245 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 246 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 23.
  • FIG. 247 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 248 is a diagram illustrating a captured display image Ppre and a decoding image Pdec acquired by capturing by the receiver in Embodiment 23.
  • FIG. 249 is a diagram illustrating an example of a captured display image Ppre displayed on the receiver in Embodiment 23.
  • FIG. 250 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 23.
  • FIG. 251 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 252 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 253 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 254 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image.
  • FIG. 255 is a diagram illustrating an example of recognition information according to the twenty-third embodiment.
  • FIG. 256 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 23.
  • FIG. 257 is a diagram illustrating an example in which the receiver in Embodiment 23 identifies bright line pattern regions.
  • FIG. 258 is a diagram illustrating another example of a receiver in Embodiment 23.
  • FIG. 259 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 23.
  • FIG. 260 is a diagram illustrating an example of a transmission system including a plurality of transmitters in Embodiment 23.
  • FIG. 261 is a diagram illustrating an example of a transmission system including a plurality of transmitters and receivers in Embodiment 23.
  • FIG. 262A is a flowchart illustrating an example of process operations of the receiver in Embodiment 23.
  • FIG. 262B is a flowchart illustrating an example of process operations of the receiver in Embodiment 23.
  • FIG. 263A is a flowchart illustrating a display method according to Embodiment 23.
  • FIG. 263B is a block diagram illustrating a structure of the display device in Embodiment 23.
  • FIG. 264 is a diagram illustrating an example in which the receiver in Modification 1 of Embodiment 23 displays an AR image.
  • FIG. 265 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image.
  • FIG. 266 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image.
  • FIG. 267 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image.
  • FIG. 268 is a diagram illustrating another example of the receiver 200 in the first modification of the twenty-third embodiment.
  • FIG. 269 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image.
  • FIG. 270 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image.
  • FIG. 271 is a flowchart illustrating an example of processing operations of the receiver 200 in the first modification of the twenty-third embodiment.
  • FIG. 272 is a diagram illustrating an example of a problem when an AR image assumed in the receiver in Embodiment 23 or the modification 1 thereof is displayed.
  • FIG. 273 is a diagram illustrating an example in which the receiver in Modification 2 of Embodiment 23 displays the AR image.
  • FIG. 274 is a flowchart illustrating an example of processing operations of a receiver in Modification 2 of Embodiment 23.
  • FIG. 275 is a diagram illustrating another example in which the receiver in the second modification of the twenty-third embodiment displays an AR image.
  • FIG. 276 is a flowchart illustrating another example of processing operations of a receiver in Modification 2 of Embodiment 23.
  • FIG. 277 is a diagram illustrating another example in which a receiver in Modification 2 of Embodiment 23 displays an AR image.
  • FIG. 278 is a diagram illustrating another example in which the receiver in the second modification of the twenty-third embodiment displays an AR image.
  • FIG. 279 is a diagram illustrating another example in which the receiver in Modification 2 of Embodiment 23 displays an AR image.
  • FIG. 280 is a diagram illustrating another example in which a receiver in Modification 2 of Embodiment 23 displays an AR image.
  • FIG. 281A is a flowchart illustrating a display method according to one embodiment of the present invention.
  • FIG. 281B is a block diagram illustrating a structure of a display device according to one embodiment of the present invention.
  • FIG. 282 is a diagram illustrating an example of expansion and movement of the AR image in the third modification of the twenty-third embodiment.
  • FIG. 283 is a diagram illustrating an example of expansion of an AR image in the third modification of the twenty-third embodiment.
  • FIG. 284 is a flowchart illustrating an example of processing operations regarding expansion and movement of an AR image by a receiver in Modification 3 of Embodiment 23.
  • FIG. 285 is a diagram illustrating an example of superimposition of AR images in the third modification of the twenty-third embodiment.
  • FIG. 286 is a diagram illustrating an example of superimposition of AR images in the third modification of the twenty-third embodiment.
  • FIG. 287 is a diagram illustrating an example of superimposition of AR images in the third modification of the twenty-third embodiment.
  • FIG. 288 is a diagram illustrating an example of superimposition of AR images in the third modification of the twenty-third embodiment.
  • FIG. 289A is a diagram illustrating an example of a captured display image obtained by imaging by the receiver in the third modification of the twenty-third embodiment.
  • FIG. 289B is a diagram illustrating an example of a menu screen displayed on the display of the receiver in Modification 3 of Embodiment 23.
  • FIG. 290 is a flowchart illustrating an example of processing operations of the receiver and the server in the third modification of the twenty-third embodiment.
  • FIG. 291 is a diagram for describing the volume of audio reproduced by the receiver in the third modification of the twenty-third embodiment.
  • FIG. 292 is a diagram illustrating a relationship between the distance from the receiver to the transmitter and the sound volume in the third modification of the twenty-third embodiment.
  • FIG. 293 is a diagram illustrating an example of superimposition of AR images by a receiver in Modification 3 of Embodiment 23.
  • FIG. 294 is a diagram illustrating an example of superimposition of AR images by a receiver in Modification 3 of Embodiment 23.
  • FIG. 295 is a diagram for describing an example of how to obtain a line scan time by a receiver in Modification 3 of Embodiment 23.
  • FIG. 296 is a diagram for describing an example of how to obtain a line scan time by a receiver in Modification 3 of Embodiment 23.
  • FIG. 297 is a flowchart illustrating an example of how to obtain a line scan time by a receiver in the third modification of the twenty-third embodiment.
  • FIG. 298 is a diagram illustrating an example of superimposition of AR images by a receiver in Modification 3 of Embodiment 23.
  • FIG. 299 is a diagram illustrating an example of superimposition of AR images by a receiver in Modification 3 of Embodiment 23.
  • FIG. 300 is a diagram illustrating an example of superimposition of AR images by a receiver in Modification 3 of Embodiment 23.
  • FIG. 301 is a diagram illustrating an example of a decoding image obtained according to the attitude of the receiver in Modification 3 of Embodiment 23.
  • FIG. 302 is a diagram illustrating another example of a decoding image acquired in accordance with the attitude of a receiver in Modification 3 of Embodiment 23.
  • FIG. 303 is a flowchart illustrating an example of processing operation of a receiver in Modification 3 of Embodiment 23.
  • FIG. 304 is a diagram illustrating an example of camera lens switching processing by a receiver in Modification 3 of Embodiment 23.
  • FIG. 305 is a diagram illustrating an example of camera switching processing by a receiver in Modification 3 of Embodiment 23.
  • FIG. 306 is a flowchart illustrating an example of processing operations of the receiver and the server in Modification 3 of Embodiment 23.
  • FIG. 307 is a diagram illustrating an example of superimposition of AR images by a receiver in Modification 3 of Embodiment 23.
  • FIG. 308 is a sequence diagram illustrating processing operations of a system including a receiver, a microwave oven, a relay server, and an electronic payment server in Modification 3 of Embodiment 23.
  • FIG. 309 is a sequence diagram illustrating processing operations of a system including a POS terminal, a server, a receiver 200, and a microwave oven in Modification 3 of Embodiment 23.
  • FIG. 310 is a diagram illustrating an example of indoor use in Modification 3 of Embodiment 23.
  • FIG. 311 is a diagram illustrating an example of an augmented reality object display in the third modification of the twenty-third embodiment.
  • FIG. 312 is a diagram showing a configuration of a display system in Modification 4 of Embodiment 23.
  • FIG. 313 is a flowchart showing processing operations of the display system in Modification 4 of Embodiment 23.
  • FIG. 314 is a flowchart illustrating a recognition method according to an aspect of the present invention.
  • FIG. 315 is a diagram illustrating an example of operation modes of visible light signals according to the twenty-fourth embodiment.
  • FIG. 316 is a diagram illustrating an example of a PPDU format in mode 1 of the packet PWM according to the twenty-fourth embodiment.
  • FIG. 317 is a diagram illustrating an example of a PPDU format in mode 2 of the packet PWM according to the twenty-fourth embodiment.
  • FIG. 318 is a diagram illustrating an example of a PPDU format in mode 3 of the packet PWM according to the twenty-fourth embodiment.
  • FIG. 319 is a diagram illustrating an example of a pulse width pattern in each SHR of modes 1 to 3 of the packet PWM according to the twenty-fourth embodiment.
  • FIG. 320 is a diagram showing an example of a PPDU format in mode 1 of the packet PPM according to the twenty-fourth embodiment.
  • FIG. 321 is a diagram illustrating an example of a PPDU format in mode 2 of the packet PPM according to the twenty-fourth embodiment.
  • FIG. 322 is a diagram illustrating an example of a PPDU format in mode 3 of the packet PPM according to the twenty-fourth embodiment.
  • FIG. 323 is a diagram illustrating an example of an interval pattern in each SHR of modes 1 to 3 of the packet PPM according to the twenty-fourth embodiment.
  • FIG. 324 is a diagram illustrating an example of 12-bit data included in the PHY payload according to the twenty-fourth embodiment.
  • FIG. 325 is a diagram illustrating processing of storing PHY frames in one packet according to Embodiment 24.
  • FIG. 326 is a diagram illustrating processing of dividing the PHY frame into 2 packets according to Embodiment 24.
  • FIG. 327 is a diagram illustrating processing of dividing a PHY frame into 3 packets according to Embodiment 24.
  • FIG. 328 is a diagram illustrating processing of dividing the PHY frame into 4 packets according to Embodiment 24.
  • FIG. 329 is a diagram illustrating processing of dividing the PHY frame into 5 packets according to Embodiment 24.
  • FIG. 331 is a diagram illustrating processing of dividing the PHY frame into 9 packets according to Embodiment 24.
  • FIG. 333A is a flowchart illustrating a visible light signal generation method according to Embodiment 24.
  • FIG. 333B is a block diagram showing a configuration of the signal generation apparatus according to Embodiment 24.
  • FIG. 334 is a diagram illustrating a format of an MPM MAC frame in the twenty-fifth embodiment.
  • FIG. 335 is a flowchart illustrating processing operations of the encoding device for generating an MPM MAC frame according to the twenty-fifth embodiment.
  • FIG. 336 is a flowchart illustrating processing operation of the decoding device for decoding the MAC frame of MPM in the twenty-fifth embodiment.
  • FIG. 337 is a diagram illustrating MAC PIB attributes according to the twenty-fifth embodiment.
  • FIG. 338 is a diagram for describing an MPM light control method according to the twenty-fifth embodiment.
  • FIG. 339 is a diagram illustrating attributes of a PHY PIB according to the twenty-fifth embodiment.
  • FIG. 340 is a diagram for describing MPM in the twenty-fifth embodiment.
  • FIG. 341 is a diagram illustrating PLCP header subfields according to Embodiment 25.
  • FIG. 342 is a diagram illustrating PLCP center subfields according to the twenty-fifth embodiment.
  • FIG. 343 is a diagram illustrating PLCP footer subfields according to the twenty-fifth embodiment.
  • FIG. 344 is a diagram illustrating a waveform in a PHY PWM mode in the MPM according to the twenty-fifth embodiment.
  • FIG. 345 is a diagram illustrating a PHY PPM mode waveform in the MPM according to the twenty-fifth embodiment.
  • FIG. 346 is a flowchart illustrating an example of the decoding method according to the twenty-fifth embodiment.
  • FIG. 347 is a flowchart illustrating an example of the coding method according to the twenty-fifth embodiment.
  • FIG. 348 is a diagram illustrating an example in which the receiver in Embodiment 26 displays an AR image.
  • FIG. 349 is a diagram illustrating an example of a captured display image on which an AR image is superimposed according to Embodiment 26.
  • FIG. 350 is a diagram illustrating another example in which the receiver in Embodiment 26 displays an AR image.
  • FIG. 351 is a flowchart illustrating operation of the receiver in Embodiment 26.
  • FIG. 352 is a diagram for describing an operation of a transmitter in Embodiment 26.
  • FIG. 353 is a diagram for describing another operation of the transmitter in Embodiment 26.
  • FIG. 354 is a diagram for describing another operation of the transmitter in Embodiment 26.
  • FIG. 355 is a diagram illustrating a comparative example for describing ease of reception of the optical ID in the twenty-sixth embodiment.
  • FIG. 356A is a flowchart illustrating operation of the transmitter in Embodiment 26.
  • FIG. 356B is a block diagram illustrating a configuration of a transmitter in Embodiment 26.
  • FIG. 357 is a diagram illustrating another example in which the receiver in Embodiment 26 displays an AR image.
  • FIG. 358 is a diagram for describing an operation of a transmitter in Embodiment 27.
  • FIG. 359A is a flowchart illustrating a transmission method according to Embodiment 27.
  • FIG. 359B is a block diagram illustrating a structure of a transmitter in Embodiment 27.
  • FIG. 360 is a diagram illustrating an example of a detailed configuration of a visible light signal in Embodiment 27.
  • FIG. 361 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 27.
  • FIG. 362 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 27.
  • FIG. 363 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 27.
  • FIG. 364 is a diagram illustrating a relationship between the total sum of the variables y0 to y3, the total time length, and the effective time length in the twenty-seventh embodiment.
  • FIG. 365A is a flowchart illustrating a transmission method according to Embodiment 27.
  • FIG. 365B is a block diagram illustrating a structure of a transmitter in Embodiment 27.
  • A transmission method according to one aspect of the present invention is a transmission method for transmitting a signal by a change in luminance of a light source, the method including: a reception step of accepting a dimming degree designated for the light source as a designated dimming degree; and a transmission step of, when the designated dimming degree is less than or equal to a first value, transmitting the signal encoded in a first mode by a luminance change while causing the light source to emit light at the designated dimming degree, and, when the designated dimming degree is greater than the first value and less than or equal to a second value, transmitting the signal encoded in a second mode by a luminance change while causing the light source to emit light at the designated dimming degree, wherein the value of the peak current of the light source for transmitting the signal encoded in the second mode by a luminance change when the designated dimming degree is greater than the first value and less than or equal to the second value is smaller than the value of the peak current of the light source for transmitting the signal encoded in the first mode by a luminance change when the designated dimming degree is the first value.
  • by switching the mode used for encoding the signal, the peak current value of the light source when the designated dimming degree is greater than the first value and less than or equal to the second value becomes smaller than the peak current value of the light source when the designated dimming degree is the first value. Therefore, even as the designated dimming degree is increased, a large peak current can be suppressed from flowing through the light source. As a result, deterioration of the light source can be suppressed, and since deterioration of the light source is suppressed, communication between various devices can be performed over a long period.
  • in the transmission step, when the designated dimming degree is greater than or equal to a third value, the signal encoded in the first mode is transmitted by a luminance change while causing the light source to emit light at the designated dimming degree, and the peak current value may be maintained constant with respect to changes in the designated dimming degree, the third value being smaller than the first value.
  • when the designated dimming degree is smaller than the third value, the light source may be made to emit light at the designated dimming degree by increasing the time during which the light source is turned off as the designated dimming degree decreases, while the peak current value is maintained at a constant value.
  • since the peak current value is kept constant, the visible light signal (that is, the light ID) transmitted by the luminance change can be easily received by the receiver.
  • the time for turning off the light source may be determined so that one cycle obtained by adding the time for transmitting the signal due to luminance change and the time for turning off the light source does not exceed 10 milliseconds.
  • the luminance change of the light source for transmitting the encoded signal may be perceived by the human eye as flicker. In the present disclosure, since the time for turning off the light source is determined so that one cycle does not exceed 10 milliseconds, flicker can be prevented from being perceived by a person.
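  • as a rough illustration of the relationship above, the following sketch (hypothetical function and parameter names, assuming average brightness scales with the on-fraction of one cycle) computes the off-time needed to reach a low designated dimming degree with a constant peak current while keeping one cycle at 10 milliseconds or less:

```python
# Hypothetical sketch: choose an off-time so that a low designated dimming degree is
# reached with a constant peak current, while one cycle (signal time + off time)
# stays at or below 10 ms so that flicker is not perceived.
MAX_CYCLE_MS = 10.0          # one cycle must not exceed 10 milliseconds (per the text)

def off_time_for_dimming(signal_time_ms: float,
                         signal_dimming: float,
                         designated_dimming: float) -> float:
    """Return the off-time (ms) appended to each signal burst.

    signal_time_ms    : duration of the luminance-change (signal) part of one cycle
    signal_dimming    : average dimming degree produced while the signal is being sent
    designated_dimming: dimming degree requested for the light source (< signal_dimming)
    """
    if designated_dimming >= signal_dimming:
        return 0.0  # no dark period needed; the signal alone is dim enough
    # Average brightness over one cycle:
    #   signal_dimming * signal_time / (signal_time + off_time) = designated_dimming
    off_time = signal_time_ms * (signal_dimming / designated_dimming - 1.0)
    cycle = signal_time_ms + off_time
    if cycle > MAX_CYCLE_MS:
        raise ValueError("cycle %.2f ms exceeds 10 ms; flicker may become visible" % cycle)
    return off_time

# Example: a 2 ms signal burst that by itself yields 40% dimming,
# dimmed down to a designated 10% by inserting dark time.
print(off_time_for_dimming(2.0, 0.40, 0.10))  # -> 6.0 ms, cycle = 8 ms <= 10 ms
```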
  • in the transmission step, the signal encoded in the first mode may be transmitted by a luminance change while causing the light source to emit light at the designated dimming degree, with the peak current value decreased to cause the light source to emit light at the designated dimming degree, and the fourth value may be smaller than the second value.
  • in this way, the light source can be made to emit light appropriately at the designated dimming degree.
  • the peak current value of the light source when the designated dimming degree is the first value may be the same as the peak current value of the light source when the designated dimming degree is the maximum value.
  • for example, the maximum value of the designated dimming degree is 100%.
  • the duty ratio of the signal encoded in the second mode may be larger than the duty ratio of the signal encoded in the first mode.
  • the first mode is a mode in which the peak current increases greatly even when the increase in the dimming degree is small, and the second mode is a mode in which the increase in the peak current is suppressed even when the increase in the dimming degree is large. Therefore, since the second mode suppresses a large peak current from flowing through the light source, deterioration of the light source can be suppressed. Further, since the first mode causes a large peak current to flow through the light source even when the dimming degree is small, the signal transmitted by the luminance change of the light source can be easily received by the receiver. Accordingly, the present disclosure can achieve both suppression of light source deterioration and ease of signal reception.
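  • the trade-off between the two modes can be sketched with a simple model (an assumption, not taken from the disclosure) in which the average brightness is proportional to peak current times duty ratio, so the peak current needed for a given dimming degree is roughly the dimming degree divided by the duty ratio:

```python
# Hypothetical sketch of why a larger duty ratio lowers the required peak current.
# Assumption (not from the source text): average brightness is roughly proportional
# to peak_current * duty_ratio, so for the same designated dimming degree:
#   peak_current ~ dimming_degree / duty_ratio.

DUTY_FIRST_MODE = 0.35    # illustrative value: smaller duty ratio (first mode)
DUTY_SECOND_MODE = 0.65   # illustrative value: larger duty ratio (second mode)

def peak_current(dimming_degree: float, duty_ratio: float, rated_current: float = 1.0) -> float:
    """Peak current relative to the rated current at 100% dimming and duty ratio 1.0."""
    return rated_current * dimming_degree / duty_ratio

for dimming in (0.2, 0.5, 0.8):
    first = peak_current(dimming, DUTY_FIRST_MODE)
    second = peak_current(dimming, DUTY_SECOND_MODE)
    # The second mode needs a smaller peak current for the same dimming degree,
    # which is why it is preferred at high designated dimming degrees.
    print(f"dimming {dimming:.0%}: first mode {first:.2f}, second mode {second:.2f}")
```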
  • the transmission of the signal due to a change in luminance of the light source may be stopped.
  • the signal may be transmitted by a luminance change using a parameter value for causing the light source to emit light at a dimming degree greater than the designated dimming degree.
  • when the usage time of the light source is equal to or longer than a predetermined time, the pulse width of the current of the light source may be made larger than when the usage time is less than the predetermined time.
  • since the pulse width of the current of the light source is increased, the signal transmitted by the luminance change of the light source can be prevented from becoming difficult for the receiver to receive.
  • the transmission method is a transmission method for transmitting a signal according to a change in luminance of a light source, including: a step of accepting a dimming degree designated for the light source as a designated dimming degree; and a transmission step of transmitting the signal encoded in a first mode or a second mode by a luminance change while causing the light source to emit light at the designated dimming degree, wherein the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode.
  • in the transmission step, when the designated dimming degree is changed from a small value to a large value, the mode used for encoding the signal is switched from the first mode to the second mode when the designated dimming degree reaches a first value, and when the designated dimming degree is changed from a large value to a small value, the mode used for encoding the signal is switched from the second mode to the first mode when the designated dimming degree reaches a second value, the second value being smaller than the first value.
  • in other words, the designated dimming degree at which the mode is switched (that is, the switching point) differs between the case where the designated dimming degree is increased and the case where it is decreased.
  • the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode. Therefore, similarly to the transmission method according to one embodiment of the present invention, a large peak current can be suppressed from flowing through the light source as the designated dimming degree is increased. As a result, deterioration of the light source can be suppressed, and since deterioration of the light source is suppressed, communication between various devices can be performed over a long period. Further, when the designated dimming degree is small, the first mode, with its smaller duty ratio, is used, so the peak current can be made large and a signal that is easy for the receiver to receive can be transmitted as the visible light signal.
  • in the transmission step, when switching from the first mode to the second mode is performed, the peak current of the light source for transmitting the encoded signal by a luminance change is changed from a first current value to a second current value smaller than the first current value, and when switching from the second mode to the first mode is performed, the peak current is changed from a third current value to a fourth current value larger than the third current value; the first current value is larger than the fourth current value, and the second current value is larger than the third current value.
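  • a minimal sketch of the hysteresis described above is shown below; the threshold values and duty ratios are illustrative, and only their ordering (second value smaller than first value, second-mode duty ratio larger than first-mode duty ratio) follows the text:

```python
# Hypothetical sketch of the mode-switching hysteresis. The threshold values and the
# duty ratios are illustrative; only the ordering (SECOND_VALUE < FIRST_VALUE,
# duty of second mode > duty of first mode) comes from the text.
FIRST_VALUE = 0.60    # up-switch threshold (designated dimming degree)
SECOND_VALUE = 0.50   # down-switch threshold, smaller than FIRST_VALUE
DUTY = {"first": 0.35, "second": 0.65}

class DimmingController:
    def __init__(self) -> None:
        self.mode = "first"

    def set_dimming(self, dimming: float) -> tuple[str, float]:
        """Update the encoding mode with hysteresis and return (mode, peak current)."""
        if self.mode == "first" and dimming >= FIRST_VALUE:
            self.mode = "second"          # peak drops (first current value -> smaller second)
        elif self.mode == "second" and dimming <= SECOND_VALUE:
            self.mode = "first"           # peak rises (third current value -> larger fourth)
        peak = dimming / DUTY[self.mode]  # same duty/peak assumption as the earlier sketch
        return self.mode, peak

ctrl = DimmingController()
for d in (0.30, 0.55, 0.62, 0.58, 0.52, 0.48):
    print(d, ctrl.set_dimming(d))
```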
  • a transmission method is a transmission method for transmitting a visible light signal according to a luminance change of a light emitter, including: a determination step of determining a pattern of the luminance change by modulating a signal; and a transmission step of transmitting the visible light signal by changing, according to the determined pattern, the luminance of red light expressed by a light source included in the light emitter, wherein the visible light signal includes data, a preamble, and a payload; in the data, a first luminance value and a second luminance value smaller than the first luminance value appear along the time axis, and the time length for which at least one of the first and second luminance values continues is less than or equal to a first predetermined value; in the preamble, the first and second luminance values appear alternately along the time axis; and in the payload, the first and second luminance values appear alternately along the time axis, and the time length for which each of the first and second luminance values continues is greater than the first predetermined value and is determined according to the signal and a predetermined method.
  • accordingly, the visible light signal includes only one payload having a waveform determined according to the signal to be modulated (that is, the L data portion or the R data portion), rather than two payloads, so that the visible light signal (that is, the packet of the visible light signal) can be shortened. That is, the visible light signal can be transmitted in a short time, and communication between various devices can be performed in a short time. As a result, for example, even if the light emission period of the red light expressed by the light source included in the light emitter is short, a packet of the visible light signal can be transmitted within that light emission period.
  • when the respective luminance values appear in the order of the first luminance value having a first time length, the second luminance value having a second time length, the first luminance value having a third time length, and the second luminance value having a fourth time length, then in the transmission step, when the sum of the first time length and the third time length is smaller than a second predetermined value, the value of the current flowing through the light source is made larger than when the sum of the first time length and the third time length is greater than the second predetermined value, the second predetermined value being greater than the first predetermined value.
  • the first luminance value having the first time length D0, the second luminance value having the second time length D1, and the first luminance value having the third time length D2 are used.
  • when the visible light signal includes the data, the preamble, and the payload, the data, the preamble, and the payload may be transmitted in this order in the transmission step.
  • in this way, the receiving apparatus that receives the packet can be notified, by the data (that is, invalid data), that the packet of the visible light signal does not include the L data portion.
  • each of the first to fourth time lengths D0 to D3 (that is, the first to fourth time lengths D′0 to D′3) is set to W0 or more.
  • a short waveform payload can be generated according to the signal.
  • alternatively, when the visible light signal includes the data, the preamble, and the payload, the payload, the preamble, and the data may be transmitted in this order in the transmission step.
  • in this way, the receiving device that receives the packet can be notified, by the data (that is, invalid data), that the packet of the visible light signal does not include the R data portion.
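  • a schematic sketch of assembling such a packet as a sequence of (luminance, duration) pulses is given below; the durations, the preamble pattern, and the payload encoding rule are placeholders, and only the field ordering (data, preamble, payload, or the reverse) follows the text:

```python
# Hypothetical sketch of assembling a visible light packet as a list of
# (luminance, duration_us) pairs. The durations, the preamble pattern and the payload
# encoding rule are illustrative only; the field ordering follows the text.
HI, LO = 1, 0

def preamble() -> list[tuple[int, int]]:
    # Alternating HI/LO with illustrative short durations.
    return [(HI, 60), (LO, 60), (HI, 60), (LO, 60)]

def payload(durations: list[int]) -> list[tuple[int, int]]:
    # HI and LO alternate; each duration D0..D3 would be determined from the signal
    # by some predetermined method (not modeled here).
    levels = [HI, LO, HI, LO]
    return list(zip(levels, durations))

def data_part() -> list[tuple[int, int]]:
    # "Invalid data" marker; content is illustrative only.
    return [(HI, 30), (LO, 30)]

def packet(d: list[int], reverse: bool = False) -> list[tuple[int, int]]:
    fields = [data_part(), preamble(), payload(d)]
    if reverse:                    # payload, preamble, data order
        fields.reverse()
    return [pulse for field in fields for pulse in field]

print(packet([120, 90, 150, 100]))            # data, preamble, payload
print(packet([120, 90, 150, 100], True))      # payload, preamble, data
```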
  • the light emitter may have a plurality of light sources including a red light source, a blue light source, and a green light source, and in the transmission step, the visible light signal may be transmitted using only the red light source among the plurality of light sources.
  • thereby, the light emitter can display an image using the red light source, the blue light source, and the green light source, and can transmit to the receiving device a visible light signal having a wavelength that is easy to receive.
  • note that these general or specific aspects may be implemented as an apparatus, a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM.
  • FIG. 1 shows an example in which imaging devices arranged in one row are exposed simultaneously, and imaging is performed by shifting the exposure start time in the order of closer rows.
  • a line of imaging elements that are exposed simultaneously is referred to as an exposure line, and the line of pixels on the image corresponding to those imaging elements is referred to as a bright line.
  • when this image capturing method is used to capture an image of a blinking light source over the entire surface of the image sensor, bright lines (lines of bright and dark pixel values) along the exposure lines appear in the captured image, as illustrated in the figure.
  • by recognizing this bright line pattern, the change in the light source luminance can be estimated at a speed exceeding the imaging frame rate. Thus, by transmitting a signal as a change in light source luminance, communication can be performed at a speed higher than the imaging frame rate.
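  • a minimal sketch of this idea, assuming a simple mean-and-threshold decoder (not the method defined in the disclosure), reduces each exposure line of one captured frame to a binary sample so that the row index acts as a time axis finer than the frame period:

```python
# Hypothetical sketch of recovering the light source's on/off pattern from a single
# frame captured with a rolling shutter. Each image row (exposure line) is reduced to
# its mean brightness and thresholded; the row index then acts as a time axis, so the
# luminance change can be sampled faster than the frame rate.
import numpy as np

def decode_bright_lines(frame: np.ndarray) -> np.ndarray:
    """frame: 2-D array (rows x columns) of pixel values from one captured image.

    Returns a 1-D array of 0/1 values, one per exposure line (row)."""
    row_brightness = frame.mean(axis=1)                # one sample per exposure line
    threshold = row_brightness.mean()                  # simple adaptive threshold
    return (row_brightness >= threshold).astype(np.uint8)

# Example with a synthetic frame: 12 exposure lines, alternating bright/dark bands.
frame = np.vstack([np.full((3, 8), 200), np.full((3, 8), 40)] * 2)
print(decode_bright_lines(frame))   # -> [1 1 1 0 0 0 1 1 1 0 0 0]
```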
  • in representing the signal, the lower luminance value is referred to as low (LO) and the higher luminance value as high (HI). Low may be a state in which the light source is not emitting light, or a state in which it emits light more weakly than high.
  • for example, when the imaging frame rate is 30 fps, a change in luminance with a period of 1.67 milliseconds can be recognized.
  • the exposure time is set shorter than 10 milliseconds, for example.
  • FIG. 2 shows a case where the exposure of the next exposure line is started after the exposure of one exposure line is completed.
  • the transmission speed is a maximum of flm bits per second.
  • if the light emission time of the light emitting unit is controlled in units of time shorter than the exposure time of each exposure line, more information can be transmitted.
  • information can be transmitted at a maximum rate of flElv bits per second.
  • the basic period of transmission can be recognized by causing the light emitting unit to emit light at a timing slightly different from the exposure timing of each exposure line.
  • FIG. 4 shows a case where the exposure of the next exposure line is started before the exposure of one exposure line is completed. That is, the exposure times of adjacent exposure lines are partially overlapped in time.
  • the S / N ratio can be improved.
  • it is also possible to adopt a configuration in which the exposure times of some adjacent exposure lines partially overlap in time while other exposure lines do not. By configuring some of the exposure lines so as not to overlap in time, the generation of intermediate colors caused by overlapping exposure times on the imaging screen can be suppressed, and bright lines can be detected more appropriately.
  • the exposure time is calculated from the brightness of each exposure line, and the light emission state of the light emitting unit is recognized.
  • when the brightness of each exposure line is determined as a binary value indicating whether the luminance is equal to or higher than a threshold, in order for the non-emitting state to be recognized, the state in which the light emitting unit does not emit light must continue for longer than the exposure time of each line.
  • FIG. 5A shows the influence of the difference in exposure time when the exposure start times of the exposure lines are equal.
  • 7500a is the case where the exposure end time of the previous exposure line is equal to the exposure start time of the next exposure line
  • 7500b is the case where the exposure time is longer than that.
  • the exposure time of adjacent exposure lines is partially overlapped in time, so that the exposure time can be increased. That is, the light incident on the image sensor increases and a bright image can be obtained.
  • in addition, since the imaging sensitivity required to capture an image of the same brightness can be kept low, an image with less noise is obtained, and communication errors are suppressed.
  • FIG. 5B shows the influence of the difference in the exposure start time of each exposure line when the exposure times are equal.
  • 7501a is the case where the exposure end time of the previous exposure line is equal to the exposure start time of the next exposure line
  • 7501b is the case where the exposure of the next exposure line is started earlier than the end of exposure of the previous exposure line.
  • as the sample interval (the difference between exposure start times) becomes finer, the change in the light source luminance can be estimated more accurately, the error rate can be reduced, and a change in the light source luminance occurring in a shorter time can be recognized.
  • by making the exposure times of adjacent exposure lines overlap, blinking of the light source that is shorter than the exposure time can be recognized using the difference in exposure amount between adjacent exposure lines.
  • it is desirable that the exposure time satisfies exposure time > (sample interval − pulse width), where the pulse width is the width of the light pulse, that is, the period during which the luminance of the light source is High. This allows the High luminance to be detected appropriately.
  • the exposure time is set to be longer than that in the normal shooting mode.
  • the communication speed can be dramatically improved.
  • the exposure time needs to be set so that exposure time ≤ 1/(8×f). Blanking that occurs during shooting is at most half the length of one frame; that is, since the blanking time is at most half of the shooting time, the actual shooting time is 1/(2f) at the shortest.
  • furthermore, since it is necessary to receive four-valued information within the time 1/(2f), the exposure time needs to be shorter than 1/(2f×4). Since the normal frame rate is 60 frames per second or less, setting the exposure time to 1/480 second or less makes it possible to generate an appropriate bright line pattern in the image data and perform high-speed signal transmission.
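  • the bound can be checked with a few lines of arithmetic; the function name is illustrative:

```python
# A small check of the bound stated above: to receive four-valued information within
# the shortest effective shooting time of 1/(2f), the exposure time must be shorter
# than 1/(2 * f * 4). With a frame rate of 60 fps this gives 1/480 second.
def max_exposure_time(frame_rate_fps: float, levels: int = 4) -> float:
    return 1.0 / (2.0 * frame_rate_fps * levels)

print(max_exposure_time(60))        # 1/480 s, about 0.00208 s
print(max_exposure_time(30))        # 1/240 s for a 30 fps sensor
```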
  • FIG. 5C shows an advantage when the exposure times are short when the exposure times of the exposure lines do not overlap.
  • when the exposure time is long, even if the light source has a binary luminance change as in 7502a, the captured image contains intermediate-color portions as in 7502e, and it becomes difficult to recognize the luminance change of the light source.
  • by providing an idle time (predetermined waiting time) tD2 during which no exposure is performed after the exposure of one exposure line ends and before the exposure of the next exposure line starts, the luminance change of the light source can be recognized easily. That is, a more appropriate bright line pattern such as 7502f can be detected.
  • the configuration in which an idle time is provided can be realized by making the exposure time tE smaller than the time difference tD between the exposure start times of the exposure lines.
  • when the exposure times of adjacent exposure lines partially overlap in the normal shooting mode, this can be realized by setting the exposure time shorter than in the normal shooting mode until a predetermined idle time occurs. Even when, in the normal shooting mode, the exposure end time of the previous exposure line is equal to the exposure start time of the next exposure line, it can be realized by setting the exposure time short until a predetermined non-exposure time occurs.
  • it is also possible to adopt a configuration in which the exposure times of some adjacent exposure lines partially overlap in time while other exposure lines do not. Further, it is not necessary to provide, for all exposure lines, an idle time (predetermined waiting time) during which no exposure is performed after the exposure of one exposure line ends and before the exposure of the next exposure line starts; some exposure lines may partially overlap in time. With such a configuration, the advantages of each configuration can be obtained.
  • the same readout method or circuit may be used to read the signal both in the normal shooting mode, in which shooting is performed at a normal frame rate (30 fps, 60 fps), and in the visible light communication mode, in which shooting is performed with an exposure time of 1/480 second or less for visible light communication.
  • FIG. 5D shows the relationship between the minimum change time tS of the light source luminance, the exposure time tE, the time difference tD between the exposure start times of the exposure lines, and the captured image.
  • FIG. 5E shows the relationship between the transition time tT of the light source luminance and the time difference tD between the exposure start times of the exposure lines.
  • when tD is large relative to tT, the number of exposure lines that take an intermediate color is reduced, and the light source luminance is easy to estimate.
  • it is desirable that the number of consecutive intermediate-color exposure lines be 2 or less.
  • since tT is less than 1 microsecond when the light source is an LED and approximately 5 microseconds when the light source is an organic EL, setting tD to 5 microseconds or more facilitates estimation of the light source luminance.
  • FIG. 5F shows the relationship between the high frequency noise tHT of the light source luminance and the exposure time tE.
  • when tE is larger than tHT, the captured image is less affected by high frequency noise, and the light source luminance is easily estimated.
  • when tE is an integral multiple of tHT, the influence of the high frequency noise is eliminated, and the light source luminance is most easily estimated.
  • for estimating the light source luminance, it is desirable that tE > tHT.
  • the main cause of high frequency noise is the switching power supply circuit; since tHT is less than 20 microseconds in many switching power supplies for lamps, setting tE to 20 microseconds or more facilitates estimation of the light source luminance.
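  • combining the two observations above (an integral multiple of tHT, and at least 20 microseconds), a simple selection rule can be sketched as follows; the function name is illustrative:

```python
import math

# A sketch of choosing an exposure time that is robust to switching-power-supply noise:
# per the text, making t_E an integral multiple of t_HT removes the influence of the
# high frequency noise, and t_HT is below 20 microseconds for many lamp power supplies.
def exposure_time_for_noise_us(t_ht_us: float, minimum_us: float = 20.0) -> float:
    """Smallest integral multiple of t_HT that is at least `minimum_us` (all in microseconds)."""
    return t_ht_us * math.ceil(minimum_us / t_ht_us)

print(exposure_time_for_noise_us(7.5))    # -> 22.5 (3 x 7.5 microseconds)
print(exposure_time_for_noise_us(20.0))   # -> 20.0
```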
  • FIG. 5G is a graph showing the relationship between the exposure time tE and the magnitude of the high frequency noise when tHT is 20 microseconds.
  • it can be confirmed that the efficiency is good when tE is set to a value equal to or greater than the value at which the amount of noise is at a maximum, that is, 15 microseconds or more, 35 microseconds or more, 54 microseconds or more, or 74 microseconds or more. From the viewpoint of reducing high frequency noise, a larger tE is desirable; on the other hand, as described above, a smaller tE makes intermediate-color portions less likely to occur and thus makes the light source luminance easier to estimate.
  • therefore, tE may be set to 15 microseconds or more when the period of the light source luminance change is 15 to 35 microseconds, to 35 microseconds or more when the period is 35 to 54 microseconds, to 54 microseconds or more when the period is 54 to 74 microseconds, and to 74 microseconds or more when the period is 74 microseconds or more.
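  • the rule above can be transcribed directly; the treatment of the interval boundaries is an assumption:

```python
# A direct transcription of the rule above: pick a minimum exposure time t_E (in
# microseconds) from the period of the light source luminance change. Treating the
# interval boundaries as half-open is an assumption.
def minimum_exposure_time_us(luminance_change_period_us: float) -> float:
    if 15 <= luminance_change_period_us < 35:
        return 15.0
    if 35 <= luminance_change_period_us < 54:
        return 35.0
    if 54 <= luminance_change_period_us < 74:
        return 54.0
    if luminance_change_period_us >= 74:
        return 74.0
    raise ValueError("periods below 15 microseconds are not covered by this rule")

print(minimum_exposure_time_us(40))   # -> 35.0
print(minimum_exposure_time_us(80))   # -> 74.0
```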
  • FIG. 5H shows the relationship between the exposure time tE and the recognition success rate. Since the exposure time tE has a relative meaning with respect to the time during which the luminance of the light source is constant, the horizontal axis is the value (relative exposure time) obtained by dividing the period tS, in which the luminance of the light source changes, by the exposure time tE. From the graph, it can be seen that to obtain a recognition success rate of almost 100%, the relative exposure time should be 1.2 or less. For example, when the transmission signal is 1 kHz, the exposure time may be set to about 0.83 milliseconds or less.
  • similarly, the relative exposure time may be set to 1.25 or less, and when a recognition success rate of 80% or more is required, the relative exposure time may be set to 1.4 or less. The recognition success rate also drops sharply when the relative exposure time is around 1.5 and becomes almost 0% at 1.6, so the relative exposure time should not be set to exceed 1.5. It can also be seen that, after the recognition rate becomes 0 at 7507c, it rises again at 7507d, 7507e, and 7507f.
  • for example, when a large exposure time is to be used, the relative exposure time may be set to 1.9 to 2.2, 2.4 to 2.6, or 2.8 to 3.0.
  • these exposure times may be used as the intermediate mode.
  • FIG. 6A is a flowchart of the information communication method in the present embodiment.
  • the information communication method in the present embodiment is an information communication method for acquiring information from a subject, and includes steps SK91 to SK93.
  • the method includes: a first exposure time setting step SK91 of setting a first exposure time of an image sensor so that, in an image obtained by the image sensor photographing the subject, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor are generated according to a change in luminance of the subject; a first image acquisition step SK92 of acquiring a bright line image including the plurality of bright lines by the image sensor photographing, with the set first exposure time, the subject whose luminance changes; and an information acquisition step SK93 of acquiring information by demodulating data specified by the patterns of the plurality of bright lines included in the acquired bright line image. In the first image acquisition step SK92, each of the plurality of exposure lines starts exposure at sequentially different times, and each exposure line starts exposure after a predetermined idle time has elapsed from the end of the exposure of the adjacent exposure line.
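  • a schematic sketch of the three steps is shown below; `camera` and `demodulate` are placeholder objects, and the thresholding in step SK93 is an illustrative stand-in for the actual demodulation:

```python
# A schematic sketch (not the patent's implementation) of the three steps SK91-SK93:
# set a short exposure time, capture a bright line image, then demodulate the bright
# line pattern into information. `camera` and `demodulate` are placeholders.
import numpy as np

def information_communication_method(camera, demodulate) -> bytes:
    # SK91: set the first exposure time so that bright lines appear.
    camera.set_exposure_time(1.0 / 480.0)            # e.g. 1/480 s or shorter

    # SK92: acquire a bright line image of the luminance-changing subject.
    bright_line_image = camera.capture_frame()       # 2-D array of pixel values

    # SK93: demodulate the data specified by the bright line pattern.
    rows = bright_line_image.mean(axis=1)
    bits = (rows >= rows.mean()).astype(np.uint8)    # one sample per exposure line
    return demodulate(bits)
```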
  • FIG. 6B is a block diagram of the information communication apparatus according to the present embodiment.
  • the information communication device K90 in the present embodiment is an information communication device that acquires information from a subject, and includes constituent elements K91 to K93.
  • the information communication apparatus K90 includes: an exposure time setting unit K91 that sets an exposure time of an image sensor so that, in an image obtained by the image sensor photographing the subject, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor are generated according to a change in luminance of the subject; an image acquisition unit K92 in which the image sensor acquires a bright line image including the plurality of bright lines by photographing, with the set exposure time, the subject whose luminance changes; and an information acquisition unit K93 that acquires information by demodulating data specified by the patterns of the plurality of bright lines included in the acquired bright line image. Each of the plurality of exposure lines starts exposure at sequentially different times, and starts exposure after a predetermined idle time has elapsed from the end of the exposure of the adjacent exposure line.
  • thus, since each of the plurality of exposure lines starts exposure after a predetermined idle time has elapsed from the end of the exposure of the adjacent exposure line, a change in luminance of the subject can be easily recognized. As a result, information can be appropriately acquired from the subject.
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • the program causes the computer to execute the information communication method shown by the flowchart of FIG. 6A.
  • hereinafter, shooting in the normal shooting mode is referred to as normal shooting, and shooting in the visible light communication mode is referred to as visible light shooting (visible light communication).
  • shooting in an intermediate mode may be used, and an intermediate image may be used instead of a composite image described later.
  • FIG. 7 is a diagram illustrating an example of the photographing operation of the receiver in this embodiment.
  • the receiver 8000 switches the shooting mode between normal shooting and visible light communication, alternating as normal shooting, visible light communication, normal shooting, and so on. The receiver 8000 then combines the normal captured image and the visible light communication image to generate a composite image in which the bright line pattern, the subject, and the surrounding area are clearly displayed, and displays the composite image on the display.
  • this composite image is generated by superimposing the bright line pattern of the visible light communication image on the portion of the normal captured image where the signal is transmitted. The bright line pattern, the subject, and the surroundings displayed in the composite image are clear and sharp enough for the user to recognize them. By displaying such a composite image, the user can know more clearly from where or from what the signal is being transmitted.
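  • a minimal sketch of such a composition, assuming a simple deviation-based mask for the signal-transmitting portion (the actual detection method is not specified here), is given below:

```python
# Hypothetical sketch of generating the composite image: the region of the visible
# light communication image where bright lines are strong is pasted onto the normal
# captured image so the user can see where the signal comes from.
import numpy as np

def composite_image(normal: np.ndarray, vlc: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """normal, vlc: grayscale frames of the same shape (rows x columns)."""
    out = normal.copy()
    # Rough bright-line mask: pixels where the VLC image deviates strongly from its
    # own mean are treated as the signal-transmitting portion.
    mask = np.abs(vlc - vlc.mean()) > threshold
    out[mask] = vlc[mask]           # overlay the bright line pattern on that portion
    return out

normal = np.full((6, 6), 128.0)
vlc = np.full((6, 6), 110.0)                                 # mostly featureless
vlc[:, 2:4] = np.tile(np.array([[200.0], [20.0]]), (3, 2))   # bright line stripes
print(composite_image(normal, vlc))
```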
  • FIG. 8 is a diagram illustrating another example of the photographing operation of the receiver in this embodiment.
  • the receiver 8000 includes a camera Ca1 and a camera Ca2.
  • the camera Ca1 performs normal photographing
  • the camera Ca2 performs visible light photographing.
  • the camera Ca1 acquires the normal captured image as described above
  • the camera Ca2 acquires the visible light communication image as described above.
  • the receiver 8000 generates the above-described combined image by combining the normal captured image and the visible light communication image, and displays the combined image on the display.
  • FIG. 9 is a diagram illustrating another example of the photographing operation of the receiver in this embodiment.
  • the camera Ca1 switches the shooting mode to normal shooting, visible light communication, normal shooting, and so on.
  • the camera Ca2 continuously performs normal shooting.
  • the receiver 8000 estimates the distance from the receiver 8000 to the subject (hereinafter referred to as the subject distance) from the normal captured images acquired by these cameras, using stereo vision (the principle of triangulation).
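  • the estimate relies on the standard stereo-triangulation relation, distance = focal length × baseline / disparity; the numbers below are illustrative only:

```python
# The standard stereo-triangulation relation behind the subject-distance estimate:
# distance = focal_length * baseline / disparity. The numbers are illustrative only.
def subject_distance_m(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("subject too far or not matched between the two cameras")
    return focal_length_px * baseline_m / disparity_px

# Example: 1400 px focal length, 1 cm between camera Ca1 and camera Ca2,
# 20 px disparity between the two normal captured images.
print(subject_distance_m(1400.0, 0.01, 20.0))   # -> 0.7 m
```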
  • FIG. 10 is a diagram illustrating an example of the display operation of the receiver in this embodiment.
  • the receiver 8000 switches the photographing mode to visible light communication, normal photographing, visible light communication, and so on.
  • the receiver 8000 activates an application program when performing visible light communication for the first time.
  • the receiver 8000 estimates its own position based on the signal received by visible light communication.
  • the receiver 8000 displays AR (Augmented Reality) information on the normal shot image acquired by the normal shooting.
  • This AR information is acquired based on the position estimated as described above.
  • the receiver 8000 estimates the movement and direction change of the receiver 8000 based on the detection result of the 9-axis sensor and the motion detection of the normal captured image, and matches the estimated movement and direction change. To move the display position of the AR information.
  • the AR information can be made to follow the subject image of the normal captured image.
  • the receiver 8000 switches the shooting mode from the normal shooting to the visible light communication, the AR information is superimposed on the latest normal shooting image acquired at the time of the normal shooting immediately before the visible light communication.
  • the receiver 8000 displays a normal captured image on which the AR information is superimposed.
  • the receiver 8000 estimates its movement and change in orientation based on the detection result of the 9-axis sensor, and moves the AR information and the normal captured image in accordance with the estimated movement and change in orientation. Thereby, the AR information can be made to follow the subject image of the normal captured image in accordance with the movement of the receiver 8000 and the like during visible light communication, as in the case of normal imaging. Further, the normal captured image can be enlarged and reduced in accordance with the movement of the receiver 8000 and the like.
  • FIG. 11 is a diagram showing an example of the display operation of the receiver in this embodiment.
  • the receiver 8000 may display the composite image on which the bright line pattern is projected, as shown in FIG.
  • alternatively, the receiver 8000 may generate a composite image by superimposing, instead of the bright line pattern, a signal explicit object, which is an image having a predetermined color for notifying that a signal is being transmitted, on the normal captured image, and may display the composite image.
  • alternatively, the receiver 8000 may display, as a composite image, a normal captured image in which the location where a signal is being transmitted is indicated by a dotted frame and an identifier (for example, ID: 101, ID: 102, etc.).
  • alternatively, the receiver 8000 may generate a composite image by superimposing, instead of the bright line pattern, a signal identification object, which is an image having a predetermined color for notifying that a specific type of signal is being transmitted, on the normal captured image, and may display the composite image. In this case, the color of the signal identification object differs depending on the type of signal output from the transmitter. For example, when the signal output from the transmitter is position information, a red signal identification object is superimposed, and when the signal output from the transmitter is a coupon, a green signal identification object is superimposed.
  • FIG. 12 is a diagram illustrating an example of the operation of the receiver in this embodiment.
  • the receiver 8000 may display a normal captured image and output a sound for notifying the user that the transmitter has been found.
  • the receiver 8000 varies the type of output sound, the number of outputs, or the output time depending on the number of transmitters found, the type of received signal, or the type of information specified by the signal. It may be allowed.
  • FIG. 13 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8000 when the user touches the bright line pattern displayed in the composite image, the receiver 8000 generates an information notification image based on a signal transmitted from the subject corresponding to the touched bright line pattern, and the information notification Display an image.
  • This information notification image indicates, for example, a store coupon or a place.
  • the bright line pattern may be a signal explicit object, a signal identification object, a dotted line frame, or the like shown in FIG. The same applies to the bright line patterns described below.
  • FIG. 14 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8000 when the user touches the bright line pattern displayed in the composite image, the receiver 8000 generates an information notification image based on a signal transmitted from the subject corresponding to the touched bright line pattern, and the information notification Display an image.
  • the information notification image indicates the current location of the receiver 8000 by a map or the like.
  • FIG. 15 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • when the user swipes on the receiver 8000 on which the composite image is displayed, the receiver 8000 displays a normal captured image having a dotted frame and an identifier, similar to the normal captured image illustrated in the figure described above, and displays a list of information so as to follow the swipe operation. This list shows the information specified by the signal transmitted from the location (transmitter) indicated by each identifier.
  • the swipe may be, for example, an operation of moving a finger from outside the right side of the display in the receiver 8000.
  • the swipe may be an operation of moving a finger from the upper side, the lower side, or the left side of the display.
  • the receiver 8000 may display an information notification image (for example, an image showing a coupon) showing the information in more detail.
  • an information notification image for example, an image showing a coupon
  • FIG. 16 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8000 displays the information notification image superimposed on the composite image so as to follow the swipe operation.
  • This information notification image shows the subject distance with an arrow in an easy-to-understand manner for the user.
  • the swipe may be, for example, an operation of moving a finger from outside the lower side of the display in the receiver 8000.
  • the swipe may be an operation of moving a finger from the left side of the display, from the upper side, or from the right side.
  • FIG. 17 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8000 images a transmitter, which is a signage indicating a plurality of stores, as a subject, and displays a normal captured image acquired by the imaging.
  • the receiver 8000 when the user taps the signage image of one store included in the subject displayed in the normal captured image, the receiver 8000 generates an information notification image based on a signal transmitted from the signage of the store Then, the information notification image 8001 is displayed.
  • This information notification image 8001 is an image showing, for example, a vacant seat situation in a store.
  • FIG. 18 is a diagram illustrating an example of operations of the receiver, the transmitter, and the server in the present embodiment.
  • the transmitter 8012 configured as a television transmits a signal to the receiver 8011 by a luminance change.
  • This signal includes, for example, information for prompting the user to purchase content related to the program being viewed.
  • the receiver 8011 displays an information notification image that prompts the user to purchase content based on the signal.
  • the receiver 8011 transmits, to the server 8013, at least one of: information included in a SIM (Subscriber Identity Module) card inserted into the receiver 8011, a user ID, a terminal ID, credit card information, billing information, a password, and a transmitter ID.
  • the server 8013 manages a user ID and payment information in association with each user.
  • the server 8013 identifies the user ID based on the information transmitted from the receiver 8011, and confirms the payment information associated with the user ID. By this confirmation, the server 8013 determines whether or not to allow the user to purchase content. If the server 8013 determines to permit, the server 8013 transmits permission information to the receiver 8011. When receiving the permission information, the receiver 8011 transmits the permission information to the transmitter 8012. The transmitter 8012 that has received the permission information acquires and reproduces the content via a network, for example.
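  • a schematic sketch of this exchange, with assumed message fields and function names, is shown below; it only illustrates the order of the checks and messages described above:

```python
# A schematic sketch (assumed message names) of the purchase-permission exchange
# described above: receiver -> server -> receiver -> transmitter.
def handle_purchase_request(server_db: dict, received: dict) -> bool:
    """server_db maps user IDs to payment information; `received` is what the
    receiver 8011 sent (e.g. SIM info, user ID, credit card information)."""
    user_id = received.get("user_id")
    payment = server_db.get(user_id)
    return payment is not None and payment.get("valid", False)   # permit or not

def receiver_flow(server_db: dict, sim_info: dict, transmitter) -> None:
    permitted = handle_purchase_request(server_db, sim_info)     # server 8013 side
    if permitted:
        transmitter.receive_permission()        # transmitter 8012 then fetches content

# Example data (illustrative only).
server_db = {"user-42": {"valid": True}}
class Tx:                                       # stand-in for transmitter 8012
    def receive_permission(self):
        print("permission received; acquiring content via the network")
receiver_flow(server_db, {"user_id": "user-42"}, Tx())
```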
  • the transmitter 8012 may transmit information including the ID of the transmitter 8012 to the receiver 8011 by changing the luminance.
  • the receiver 8011 transmits the information to the server 8013.
  • the server 8013 obtains the information, the server 8013 can determine that, for example, a television program is being viewed by the transmitter 8012, and can perform a viewing rate survey of the television program.
  • the receiver 8011 includes the content operated by the user (voting or the like) in the above information and transmits it to the server 8013, so that the server 8013 can reflect the content in the television program. That is, a viewer participation type program can be realized. Further, when the receiver 8011 accepts writing by the user, the contents of the writing are included in the above-described information and transmitted to the server 8013 so that the server 8013 can write the writing to the TV program or a bulletin board on the network. Etc. can be reflected.
  • the server 8013 can charge for viewing of a television program by pay broadcasting or on an on-demand basis. The server 8013 can also display an advertisement on the receiver 8011, display detailed information of the TV program shown on the transmitter 8012, or display the URL of a site containing that detailed information. Further, the server 8013 can obtain the number of times the advertisement was displayed on the receiver 8011 or the amount of products purchased through the advertisement, and can charge the advertiser according to that number or amount. Such billing is possible even if the user who saw the advertisement does not purchase the product immediately.
  • when the server 8013 acquires information indicating the manufacturer of the transmitter 8012 from the transmitter 8012 via the receiver 8011, the server 8013 can provide a service (for example, payment of a reward for sales of the above-described product) to the manufacturer indicated by the information.
  • FIG. 19 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8030 is configured as a head mounted display including a camera, for example.
  • the receiver 8030 starts photographing in the visible light communication mode, that is, visible light communication when the start button is pressed.
  • the receiver 8030 notifies the user of information corresponding to the received signal. This notification is performed, for example, by outputting sound from a speaker provided in the receiver 8030 or by displaying an image.
  • the visible light communication may be started when the receiver 8030 receives a voice input instructing the start, or when a signal instructing the start is received by wireless communication.
  • the visible light communication may also be started when the change width of the value obtained by the 9-axis sensor provided in the receiver 8030 exceeds a predetermined range, or when a bright line pattern appears even in a normal captured image.
  • FIG. 20 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8030 displays the composite image 8034 as described above.
  • the user performs an operation of moving the fingertip so as to surround the bright line pattern in the composite image 8034.
  • the receiver 8030 identifies the bright line pattern that is the target of the operation, and displays an information notification image 8032 based on a signal transmitted from a location corresponding to the bright line pattern.
  • FIG. 21 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8030 displays the composite image 8034 as described above.
  • the user performs an operation of placing the fingertip on the bright line pattern in the composite image 8034 for a predetermined time or more.
  • the receiver 8030 identifies the bright line pattern that is the target of the operation, and displays an information notification image 8032 based on a signal transmitted from a location corresponding to the bright line pattern.
  • FIG. 22 is a diagram illustrating an example of the operation of the transmitter according to the present embodiment.
  • the transmitter transmits the signal 1 and the signal 2 alternately at a predetermined cycle, for example. Transmission of the signal 1 and transmission of the signal 2 are performed by luminance changes such as blinking of visible light. Further, the luminance change pattern for transmitting the signal 1 and the luminance change pattern for transmitting the signal 2 are different from each other.
  • FIG. 23 is a diagram illustrating another example of the operation of the transmitter according to the present embodiment.
  • each block is arranged in the order of block 1, block 2, and block 3 in the first signal sequence, and each block is arranged in the order of block 3, block 1, and block 2 in the next signal sequence.
  • FIG. 24 is a diagram illustrating an application example of the receiver in this embodiment.
  • the receiver 7510a configured as a smartphone images the light source 7510b with a back camera (out camera) 7510c, receives a signal transmitted from the light source 7510b, and acquires the position and orientation of the light source 7510b from the received signal.
  • the receiver 7510a estimates the position and orientation of the receiver 7510a itself from how the light source 7510b is captured in the captured image and the sensor values of the 9-axis sensor provided in the receiver 7510a.
  • the receiver 7510a captures the user 7510e with a front camera (face camera, in-camera) 7510f, and estimates the position and orientation of the head of the user 7510e and the line-of-sight direction (eyeball position and orientation) by image processing.
  • the receiver 7510a transmits the estimation result to the server.
  • the receiver 7510a changes the behavior (display content and playback sound) according to the viewing direction of the user 7510e.
  • the imaging by the back camera 7510c and the imaging by the front camera 7510f may be performed simultaneously or alternately.
  • FIG. 25 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver displays the bright line pattern by the composite image or the intermediate image as described above. At this time, the receiver may not be able to receive a signal from the transmitter corresponding to the bright line pattern.
  • when the bright line pattern is selected by the user performing an operation (for example, tapping) on it, the receiver performs optical zoom and displays a composite image or an intermediate image in which the bright line pattern is enlarged.
  • the receiver can appropriately receive a signal from the transmitter corresponding to the bright line pattern. That is, even if an image obtained by imaging is too small to acquire a signal, the signal can be appropriately received by performing optical zoom. Even when an image having a size capable of acquiring a signal is displayed, fast reception can be performed by performing optical zoom.
  • the information communication method is an information communication method for acquiring information from a subject, including: a first exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by the image sensor photographing the subject, a bright line corresponding to an exposure line included in the image sensor is generated according to a change in luminance of the subject; a bright line image acquisition step of acquiring a bright line image, which is an image including the bright line, by the image sensor photographing, with the set exposure time, the subject whose luminance changes; an image display step of displaying a display image in which the subject and the surroundings of the subject are projected and in which the spatial position of the portion where the bright line appears can be identified based on the bright line image; and an information acquisition step of acquiring transmission information by demodulating the data specified by the pattern of the bright line included in the acquired bright line image.
  • a composite image or an intermediate image as shown in FIGS. 7, 8, and 11 is displayed as a display image.
  • the spatial position of the part where the bright line appears is identified by a bright line pattern, a signal explicit object, a signal identification object, a dotted line frame, or the like. Therefore, the user can easily find a subject that is transmitting a signal due to a change in luminance by viewing such a display image.
  • the information communication method may further include a second exposure time setting step of setting an exposure time longer than the above exposure time, a step in which the image sensor acquires a normal captured image by photographing the subject and the surroundings of the subject with the longer exposure time, and a composite step of generating a composite image by superimposing a signal object on the normal captured image, and in the image display step, the composite image may be displayed as the display image.
  • the signal object is a bright line pattern, a signal explicit object, a signal identification object, a dotted line frame, or the like, and a composite image is displayed as a display image as shown in FIGS.
  • the user can more easily find the subject that is transmitting the signal due to the luminance change.
  • an exposure time is set to 1/3000 sec.
  • in the bright line image acquisition step, the bright line image in which the periphery of the subject is also projected may be acquired, and in the image display step, the bright line image may be displayed as the display image.
  • the bright line image is acquired and displayed as an intermediate image. Therefore, it is not necessary to perform processing such as acquiring and synthesizing the normal captured image and the visible light communication image, and the processing can be simplified.
  • the image sensor may include a first image sensor and a second image sensor; the first image sensor may capture the normal captured image, and the bright line image may be acquired by the second image sensor capturing an image simultaneously with the capturing by the first image sensor.
  • a normal photographed image and a visible light communication image that is a bright line image are acquired by each camera. Therefore, compared with the case where a normal captured image and a visible light communication image are acquired with one camera, those images can be acquired earlier, and the processing can be speeded up.
  • when the part of the display image in which the bright line appears is designated by a user operation, the information communication method may further include an information presentation step of presenting presentation information based on the transmission information acquired from the pattern of the bright line of the designated part. The user operation is, for example, a tap, a swipe, an operation of keeping a fingertip on the part for a predetermined time, an operation of directing the line of sight to the part for a predetermined time or longer, or the like.
  • the presentation information is displayed as an information notification image. Thereby, desired information can be presented to the user.
  • the image sensor may be provided in a head mounted display, and in the image display step, a projector mounted on the head mounted display may display the display image.
  • for example, in the first exposure time setting step, the exposure time of the image sensor is set so that, in an image obtained by the image sensor photographing the subject, a bright line corresponding to an exposure line included in the image sensor is generated according to a change in luminance of the subject; in the bright line image acquisition step, a bright line image including a plurality of parts in which bright lines appear is acquired by the image sensor photographing, with the set exposure time, a plurality of subjects whose luminance changes during a period in which the image sensor is moved; in the information acquisition step, for each part, the position of the corresponding subject is acquired by demodulating the data specified by the pattern of the bright lines of that part; and the information communication method may further include a position estimation step of estimating the position of the image sensor based on the acquired positions of the plurality of subjects and the movement state of the image sensor.
  • a bright line corresponding to an exposure line included in the image sensor is generated in an image obtained by photographing the subject by an image sensor according to a change in luminance of the subject.
  • the first exposure time setting step of setting the exposure time of the image sensor, and the image including the bright line by the image sensor photographing the subject whose luminance changes with the set exposure time is performed.
  • the user can be authenticated and the convenience can be improved.
  • in another aspect, the exposure time of the image sensor is set in an exposure time setting step so that, in an image obtained by the image sensor photographing the subject, a bright line corresponding to an exposure line included in the image sensor is generated according to a change in luminance of the subject, and in an image acquisition step the image sensor acquires the bright line image including the bright line by photographing, with the set exposure time, the subject whose luminance changes. In the image acquisition step, the bright line image may be acquired by photographing a plurality of the subjects reflected on a reflection surface, and in the information acquisition step, the bright lines may be separated, according to the intensity of the bright lines included in the bright line image, into bright lines corresponding to each of the plurality of subjects, and for each subject, information may be acquired by demodulating the data specified by the bright line pattern corresponding to that subject.
  • similarly, the exposure time of the image sensor is set in an exposure time setting step so that, in an image obtained by the image sensor photographing the subject, a bright line corresponding to an exposure line included in the image sensor is generated according to a change in luminance of the subject, and in an image acquisition step the image sensor acquires the bright line image including the bright line by photographing, with the set exposure time, the subject whose luminance changes. In the image acquisition step, the bright line image may be acquired by photographing the subject reflected on a reflection surface, and the information communication method may further include a position estimation step of estimating the position of the subject based on the luminance distribution in the bright line image.
  • An information communication method for transmitting a signal by a luminance change, including: a first determination step of determining a first pattern of luminance change by modulating a first signal to be transmitted; a second determination step of determining a second pattern of luminance change by modulating a second signal to be transmitted; and a transmission step of transmitting the first and second signals by a light emitter alternately performing a luminance change according to the determined first pattern and a luminance change according to the determined second pattern.
  • the first signal and the second signal can be transmitted without delay.
  • when the luminance change is switched between the luminance change according to the first pattern and the luminance change according to the second pattern, a buffer time may be provided at the switch.
  • An information communication method for transmitting a signal by a luminance change, including: a determination step of determining a luminance change pattern by modulating a signal to be transmitted; and a transmission step in which a light emitter changes in luminance according to the determined pattern and transmits the signal to be transmitted, wherein the signal is composed of a plurality of large blocks, each of the plurality of large blocks includes first data, a preamble for the first data, and a check signal for the first data, the first data is composed of a plurality of small blocks, and each of the small blocks may include second data, a preamble for the second data, and a check signal for the second data.
  • An information communication method for transmitting a signal by a luminance change, including: a determination step in which each of a plurality of transmitters modulates a signal to be transmitted to determine a luminance change pattern; and a transmission step in which, for each transmitter, a light emitter provided in the transmitter changes in luminance according to the determined pattern and transmits the signal to be transmitted.
  • in the transmission step, signals having different frequencies or protocols may be transmitted.
  • An information communication method for transmitting a signal by a luminance change, including: a determination step in which each of a plurality of transmitters modulates a signal to be transmitted to determine a luminance change pattern; and a transmission step in which, for each transmitter, a light emitter provided in the transmitter changes in luminance according to the determined pattern and transmits the signal to be transmitted, wherein, in the transmission step, one of the plurality of transmitters may receive a signal transmitted from another transmitter and transmit its own signal in a manner that does not interfere with the received signal.
  • Embodiment 3: Application examples using a receiver such as a smartphone as in Embodiment 1 or 2 and a transmitter that transmits information as a blinking pattern of an LED or an organic EL element will be described.
  • FIG. 26 is a diagram illustrating an example of processing operations of the receiver, the transmitter, and the server in the third embodiment.
  • the receiver 8142 configured as a smartphone acquires position information indicating its own position, and transmits the position information to the server 8141.
  • The receiver 8142 acquires the position information using, for example, GPS, or by receiving other signals.
  • the server 8141 transmits the ID list associated with the position indicated by the position information to the receiver 8142.
  • the ID list includes, for each ID such as “abcd”, the ID and information associated with the ID.
  • The receiver 8142 receives a signal from a transmitter 8143 configured as, for example, a lighting device. At this time, the receiver 8142 may receive only a part of the ID (for example, “b”) as the signal. In this case, the receiver 8142 searches the ID list for an ID containing that part. If a unique ID is not found, the receiver 8142 further receives from the transmitter 8143 a signal that includes another part of the ID. As a result, the receiver 8142 acquires a larger part of the ID (for example, “bc”). The receiver 8142 then searches the ID list again for an ID containing that larger part (for example, “bc”).
  • In this way, the receiver 8142 can identify the full ID even when it has acquired only a part of the ID.
  • The receiver 8142 may receive not only a part of the ID but also a check portion such as a CRC (Cyclic Redundancy Check).
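  • A minimal sketch of this progressive narrowing (the list contents and helper name are assumptions, and a received CRC would additionally validate the final match):

```python
# Hypothetical sketch: the receiver narrows the ID list as larger parts of the
# ID are received, so the full ID can be identified from partial receptions.

def find_unique_id(id_list, received_part):
    """Return the single ID containing the received part, or None if ambiguous."""
    candidates = [entry for entry in id_list if received_part in entry]
    return candidates[0] if len(candidates) == 1 else None

id_list = ["abcd", "xbyz", "pqrs"]    # IDs obtained earlier from the server
part = "b"                            # first received part of the ID
print(find_unique_id(id_list, part))  # None: "b" matches "abcd" and "xbyz"
part = "bc"                           # larger part received later
print(find_unique_id(id_list, part))  # "abcd": now unique
```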
  • FIG. 27 is a diagram illustrating an example of operations of the transmitter and the receiver in the third embodiment.
  • the transmitter 8165 configured as a television acquires an image and an ID (ID 1000) associated with the image from the control unit 8166.
  • the transmitter 8165 displays the image and transmits the ID (ID 1000) to the receiver 8167 by changing the luminance.
  • the receiver 8167 receives the ID (ID 1000) by imaging and displays information associated with the ID (ID 1000).
  • the control unit 8166 changes the image output to the transmitter 8165 to another image.
  • the control unit 8166 also changes the ID output to the transmitter 8165. That is, the control unit 8166 outputs the other ID (ID 1001) associated with the other image to the transmitter 8165 together with the other image.
  • the transmitter 8165 displays another image and transmits another ID (ID 1001) to the receiver 8167 by changing the luminance.
  • the receiver 8167 receives the other ID (ID 1001) by imaging, and displays information associated with the other ID (ID 1001).
  • FIG. 28 is a diagram illustrating an example of operations of the transmitter, the receiver, and the server in the third embodiment.
  • The transmitter 8185, configured as a smartphone, transmits information indicating, for example, “coupon: 100 yen discount” by changing the luminance of the portion of the display 8185a excluding the barcode portion 8185b, that is, by visible light communication.
  • the transmitter 8185 displays the barcode on the barcode portion 8185b without changing the luminance of the barcode portion 8185b.
  • This bar code indicates the same information as the information transmitted by the visible light communication described above.
  • the transmitter 8185 displays a character or a picture indicating information transmitted by visible light communication, for example, a character “coupon 100 yen discount” on a portion of the display 8185a excluding the barcode portion 8185b. By displaying such characters or pictures, the user of the transmitter 8185 can easily grasp what information is being transmitted.
  • the receiver 8186 acquires the information transmitted by visible light communication and the information indicated by the barcode by imaging, and transmits the information to the server 8187.
  • the server 8187 determines whether or not these pieces of information match or relate to each other. When it determines that these pieces of information match or relate to each other, the server 8187 executes processing according to the pieces of information. Alternatively, the server 8187 transmits the determination result to the receiver 8186, and causes the receiver 8186 to execute processing according to the information.
  • the transmitter 8185 may transmit a part of the information indicated by the barcode by visible light communication.
  • the barcode may indicate the URL of the server 8187.
  • the transmitter 8185 may acquire information associated with the ID by acquiring the ID as a receiver and transmitting the ID to the server 8187.
  • the information associated with this ID is the same as the information transmitted by the above visible light communication or the information indicated by the barcode.
  • the server 8187 may transmit an ID associated with information (visible light communication information or barcode information) transmitted from the transmitter 8185 via the receiver 8186 to the transmitter 8185.
  • FIG. 29 is a diagram illustrating an example of operations of the transmitter and the receiver in the third embodiment.
  • the receiver 8183 images a subject including a plurality of persons 8197 and street lamps 8195.
  • the streetlight 8195 includes a transmitter 8195a that transmits information according to a change in luminance.
  • the receiver 8183 acquires an image in which the image of the transmitter 8195a appears as the bright line pattern described above.
  • the receiver 8183 acquires the AR object 8196a associated with the ID indicated by the bright line pattern from, for example, a server. Then, the receiver 8183 superimposes the AR object 8196a on a normal captured image 8196 obtained by normal imaging, and displays a normal captured image 8196 on which the AR object 8196a is superimposed.
  • The information communication method may be an information communication method for transmitting a signal by a luminance change, including a determination step of determining a luminance change pattern by modulating a signal to be transmitted, and a transmission step in which a light emitter transmits the signal by changing in luminance according to the determined pattern.
  • In the determination step, the pattern is determined so that the luminance change position, that is, the position of the rise or fall of the luminance within a predetermined time width, differs for each different signal to be transmitted, and so that the integrated value of the luminance of the light emitter within that time width becomes the same value corresponding to a preset brightness.
  • In other words, the rising positions of the luminance differ from one another, and the luminance change pattern is determined so that the integrated value of the luminance of the light emitter within a predetermined time width (unit time width) becomes the same value corresponding to a predetermined brightness (for example, 99% or 1%).
  • As a result, the brightness of the light emitter can be kept constant for every signal to be transmitted, flicker can be suppressed, and a receiver that images the light emitter can appropriately demodulate the luminance change pattern based on the luminance change position.
  • Because the luminance change pattern is a pattern in which one of two different luminance values (luminance H (High) or luminance L (Low)) appears at every position within the unit time width, the luminance of the light emitter can be changed continuously.
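  • As an illustrative sketch (a 4-slot scheme with one Low slot per symbol is assumed here; it is not the exact pattern of the disclosure), the rise/fall position can encode the signal while the integrated luminance per unit time width stays constant:

```python
# Hypothetical sketch: the position of the single Low slot encodes the symbol,
# so the rise/fall position differs per symbol while the integrated luminance
# per unit time width stays constant (3 of 4 slots are High, i.e. 75%).

LUMINANCE_H = 1.0
LUMINANCE_L = 0.0

def modulate_symbol(symbol_2bits):
    """Map a 2-bit symbol (0..3) to a 4-slot luminance pattern."""
    slots = [LUMINANCE_H] * 4
    slots[symbol_2bits] = LUMINANCE_L   # only the Low position changes
    return slots

def modulate(bits):
    """Modulate a bit string into a flat list of luminance slots."""
    pattern = []
    for i in range(0, len(bits), 2):
        symbol = int(bits[i:i + 2].ljust(2, "0"), 2)
        pattern.extend(modulate_symbol(symbol))
    return pattern

pattern = modulate("0110")
print(pattern)                      # [1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0]
print(sum(pattern) / len(pattern))  # 0.75 for every symbol -> no flicker
```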
  • The information communication method may further include an image display step of sequentially switching among and displaying each of a plurality of images. In the determination step, each time an image is displayed in the image display step, a luminance change pattern is determined by modulating, as the signal to be transmitted, the identification information corresponding to the displayed image. In the transmission step, each time an image is displayed in the image display step, that identification information may be transmitted by the light emitter changing in luminance according to the luminance change pattern determined for the identification information corresponding to the displayed image.
  • In this way, each time an image is displayed, identification information corresponding to that image is transmitted, so the user can easily select, on the receiver and based on the displayed image, the identification information to be received.
  • In the transmission step, the identification information corresponding to an image displayed in the past may further be transmitted by the light emitter changing in luminance according to the luminance change pattern determined for that identification information.
  • In this way, even if the receiver could not receive identification information transmitted before the displayed image was switched, the identification information corresponding to the image displayed in the past is transmitted together with the identification information corresponding to the currently displayed image, so the receiver can properly receive again the identification information transmitted before the switch.
  • In the determination step, each time an image is displayed in the image display step, a luminance change pattern may be determined by modulating, as the signal to be transmitted, the identification information corresponding to the displayed image together with the time at which the image is displayed. In the transmission step, each time an image is displayed in the image display step, the identification information and the time corresponding to the displayed image are transmitted by the light emitter changing in luminance according to the pattern determined for them, and the identification information and the time corresponding to an image displayed in the past may further be transmitted by the light emitter changing in luminance according to the luminance change pattern determined for them.
  • In this way, each time an image is displayed, a plurality of pieces of ID time information (information consisting of identification information and a time) are transmitted, so the receiver can easily select, from the received pieces of ID time information and based on the time included in each, identification information that it failed to receive in the past.
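  • A minimal selection sketch (field names and the recovery rule are assumptions):

```python
# Hypothetical sketch: recovering identification information missed earlier
# from a stream of (ID, display time) pairs.

def select_missed_ids(received_id_time_pairs, already_received_ids):
    """Return IDs of past images not yet handled by the receiver, oldest first."""
    missed = [(t, i) for (i, t) in received_id_time_pairs
              if i not in already_received_ids]
    return [i for (t, i) in sorted(missed)]

pairs = [("ID1001", 10.0), ("ID1000", 5.0)]  # current and past image IDs
print(select_missed_ids(pairs, {"ID1001"}))   # ['ID1000']
```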
  • The light emitter may have a plurality of regions that emit light, and light from regions adjacent to each other among the plurality of regions may interfere with each other. In that case, in the transmission step, only the region arranged at the end of the plurality of regions may change in luminance according to the determined luminance change pattern.
  • In this way, the receiver can appropriately capture the luminance change pattern by photographing.
  • Alternatively, in the transmission step, the region arranged at the end of the plurality of regions and the region adjacent to that end region may change in luminance according to the determined luminance change pattern.
  • In this way, compared with the case where regions separated from each other change in luminance, a wide area whose luminance changes continuously can be maintained. As a result, the receiver can appropriately capture the luminance change pattern by photographing.
  • The information communication method may be an information communication method for acquiring information from a subject, including: a position information transmission step of transmitting position information indicating the position of an image sensor used for photographing the subject; a list reception step of receiving an ID list containing a plurality of pieces of identification information associated with the position indicated by the position information; an exposure time setting step of setting the exposure time of the image sensor so that a bright line corresponding to an exposure line included in the image sensor appears, according to a change in luminance of the subject, in an image obtained by photographing the subject with the image sensor; an image acquisition step of acquiring a bright line image including the bright line by photographing, at the set exposure time, the subject whose luminance changes; an information acquisition step of acquiring information by demodulating data specified by the bright line pattern included in the bright line image; and a search step of searching the ID list for identification information that contains the acquired information.
  • When unique identification information is not found in the search step, the method may further include newly acquiring information from the bright line pattern and a re-search step of searching the ID list for identification information that contains both the previously acquired information and the new information.
  • The information communication method may be an information communication method for acquiring information from a subject, including: an exposure time setting step of setting the exposure time of an image sensor so that a bright line corresponding to an exposure line included in the image sensor appears, according to a change in luminance of the subject, in an image obtained by photographing the subject with the image sensor; an image acquisition step of acquiring a bright line image including the bright line by photographing, at the set exposure time, the subject whose luminance changes; an information acquisition step of acquiring identification information by demodulating data specified by the bright line pattern included in the acquired bright line image; a transmission step of transmitting the identification information and position information indicating the position of the image sensor; and an error reception step of receiving error notification information that notifies an error when the acquired identification information is not present in an ID list containing a plurality of pieces of identification information associated with the position indicated by the position information.
  • In this way, the user of the receiver that has received the error notification information can easily grasp that the information associated with the acquired identification information cannot be obtained.
  • Embodiment 4: An application example using a receiver such as a smartphone as in the above embodiments and a transmitter that transmits information as a blinking pattern of an LED or an organic EL element will be described.
  • FIG. 30 is a diagram illustrating an example of operations of the transmitter and the receiver in the fourth embodiment.
  • the transmitter includes an ID storage unit 8361, a random number generation unit 8362, an addition unit 8363, an encryption unit 8364, and a transmission unit 8365.
  • the ID storage unit 8361 stores the ID of the transmitter.
  • the random number generation unit 8362 generates different random numbers every certain time.
  • Adder 8363 combines the latest random number generated by random number generator 8362 with the ID stored in ID storage unit 8361, and outputs the result as an edit ID.
  • the encryption unit 8364 generates an encrypted edit ID by encrypting the edit ID.
  • the transmission unit 8365 transmits the encrypted edit ID to the receiver by changing the luminance.
  • the receiver includes a receiving unit 8366, a decoding unit 8367, and an ID acquisition unit 8368.
  • the receiving unit 8366 receives the encrypted edit ID from the transmitter by imaging the transmitter (visible light imaging).
  • the decryption unit 8367 restores the edit ID by decrypting the received encrypted edit ID.
  • the ID acquisition unit 8368 acquires the ID by extracting the ID from the restored editing ID.
  • the ID storage unit 8361 stores the ID “100”, and the random number generation unit 8362 generates the latest random number “817” (Example 1).
  • the adding unit 8363 generates and outputs the edit ID “100817” by combining the random number “817” with the ID “100”.
  • the encryption unit 8364 generates an encrypted edit ID “abced” by encrypting the edit ID “100817”.
  • the decryption unit 8367 of the receiver restores the edit ID “100817” by decrypting the encrypted edit ID “abced”.
  • the ID acquisition unit 8368 extracts the ID “100” from the restored editing ID “100817”. In other words, the ID acquisition unit 8368 acquires the ID “100” by deleting the last three digits of the edit ID.
  • the random number generation unit 8362 generates a new random number “619” (example 2).
  • the adding unit 8363 generates and outputs the edit ID “100619” by combining the random number “619” with the ID “100”.
  • the encryption unit 8364 generates an encrypted edit ID “diffia” by encrypting the edit ID “100619”.
  • The decryption unit 8367 of the receiver restores the edit ID “100619” by decrypting the encrypted edit ID “diffia”.
  • the ID acquisition unit 8368 extracts the ID “100” from the restored editing ID “100619”. In other words, the ID acquisition unit 8368 acquires the ID “100” by deleting the last three digits of the edit ID.
  • In this way, the transmitter does not simply encrypt the ID; it encrypts the ID combined with a random number that changes at fixed intervals, so the ID can be prevented from being easily deciphered from the signal transmitted by the transmission unit 8365.
  • That is, if a simply encrypted ID were transmitted from the transmitter to the receiver several times, the signal transmitted from the transmitter to the receiver would be identical whenever the ID is the same, even though it is encrypted, and the ID might therefore be deciphered.
  • In the present embodiment, a random number that changes at regular intervals is combined with the ID, and the ID combined with the random number is encrypted. Therefore, even when the same ID is transmitted to the receiver several times, the signals transmitted from the transmitter to the receiver can be made different as long as the transmission timings of those IDs differ. As a result, the ID can be prevented from being easily deciphered.
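  • As an illustrative sketch of the edit-ID scheme (the shared key, the XOR stand-in cipher, and the helper names are assumptions; a real system would use a proper cipher):

```python
# Hypothetical sketch: the ID is combined with a random number that changes at
# fixed intervals, the result is encrypted, and the receiver decrypts it and
# strips the last three digits to recover the ID.
import secrets

KEY = b"demo-shared-key"  # assumed shared secret, not from the source

def _xor(data: bytes) -> bytes:
    # Toy stand-in for encryption/decryption (XOR is its own inverse).
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def make_encrypted_edit_id(transmitter_id: str) -> bytes:
    random_part = f"{secrets.randbelow(1000):03d}"  # e.g. "817", refreshed periodically
    edit_id = transmitter_id + random_part          # "100" + "817" -> "100817"
    return _xor(edit_id.encode())                   # encrypted edit ID

def recover_id(encrypted_edit_id: bytes) -> str:
    edit_id = _xor(encrypted_edit_id).decode()      # decrypt
    return edit_id[:-3]                             # drop the 3 random digits

token1 = make_encrypted_edit_id("100")
token2 = make_encrypted_edit_id("100")
print(token1 != token2)    # usually True: same ID, different transmitted signal
print(recover_id(token1))  # "100"
```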
  • Although the receiver shown in FIG. 30 acquires the ID from the encrypted edit ID itself, the receiver may instead transmit the encrypted edit ID to a server and acquire the ID from the server.
  • FIG. 31 shows an example of a use of the present invention on a train platform.
  • the user holds the portable terminal over an electronic bulletin board or lighting, and obtains information displayed on the electronic bulletin board, train information of a station where the electronic bulletin board is installed, information on the premises of the station, or the like by visible light communication.
  • The information displayed on the electronic bulletin board may itself be transmitted to the portable terminal by visible light communication, or ID information corresponding to the electronic bulletin board may be transmitted to the portable terminal, which then acquires the displayed information by querying a server with the acquired ID information.
  • In the latter case, the server transmits the content displayed on the electronic bulletin board to the portable terminal based on the ID information.
  • The train ticket information stored in the memory of the portable terminal is compared with the information displayed on the electronic bulletin board, and if information corresponding to the user's ticket is displayed on the electronic bulletin board, an arrow guiding the user to the platform at which the user's scheduled train arrives is displayed on the display.
  • The route to the car nearest the exit or to the transfer point may also be displayed. If a seat has been reserved, the route to that seat may be displayed.
  • When the arrow is displayed, it can be recognized more easily by drawing it in the same color as the color used for that train line in the map or the train guide information.
  • The user's reservation information (platform number, car number, departure time, seat number) can also be displayed. Displaying the reservation information together helps prevent misrecognition.
  • When the ticket information is stored on a server, the portable terminal may query the server to obtain the ticket information and perform the comparison, or the comparison between the ticket information and the information displayed on the electronic bulletin board may be performed on the server side.
  • the target route may be estimated from the history of the user performing a transfer search, and the route may be displayed. Further, not only the contents displayed on the electronic bulletin board but also the train information / premises information of the station where the electronic bulletin board is installed may be acquired and compared.
  • Information related to the user may be highlighted with respect to the display of the electronic bulletin board on the display, or may be rewritten and displayed.
  • an arrow for guiding to the boarding place on each route may be displayed.
  • an arrow for guiding to a store or restroom may be displayed on the display.
  • The user's behavior characteristics may be managed in advance by a server, and when the user often stops at a store or restroom in the station, an arrow guiding the user to such a store or restroom may be displayed on the display.
  • FIG. 32 shows an example in which coupon information acquired by visible light communication is displayed or a popup is displayed on the display of the mobile terminal when the user approaches the store.
  • a user acquires coupon information of a store from an electronic bulletin board etc. by visible light communication using a portable terminal.
  • When the user enters a predetermined range around the store, the coupon information of that store or a pop-up is displayed. Whether or not the user has entered the predetermined range is determined using the GPS information of the portable terminal and the store information included in the coupon information. Not only coupon information but also ticket information may be used in this way. Since the terminal automatically alerts the user when a store where a coupon or ticket can be used comes near, the user can use coupons and tickets at the right time.
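  • A minimal proximity-check sketch (the 200 m radius and field names are assumptions):

```python
# Hypothetical sketch: deciding whether to pop up a coupon by comparing the
# terminal's GPS position with the store position carried in the coupon data.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_show_coupon(terminal_pos, coupon, radius_m=200.0):
    store_lat, store_lon = coupon["store_position"]
    return distance_m(*terminal_pos, store_lat, store_lon) <= radius_m

coupon = {"text": "coupon: 100 yen discount", "store_position": (35.6812, 139.7671)}
print(should_show_coupon((35.6815, 139.7670), coupon))  # True: within ~35 m
```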
  • FIG. 33 shows an example in which a user acquires information from a home appliance by visible light communication using a mobile terminal.
  • When ID information or information related to the home appliance is acquired from the home appliance by visible light communication, an application for operating that home appliance is automatically started.
  • FIG. 33 shows an example using a television. With such a configuration, it is possible to start an application for operating a home appliance simply by holding the portable terminal over the home appliance.
  • FIG. 34 shows an example of the configuration of the database held by the server that manages the ID transmitted by the transmitter.
  • the database has an ID-data table that holds data to be passed in response to an inquiry using the ID as a key, and an access log table that stores a record of the inquiry using the ID as a key.
  • The ID-data table holds, for each ID transmitted by the transmitter: the data passed in response to an inquiry using that ID as a key, the conditions for passing the data, the number of times an access was made using the ID as a key, and the number of times the data was actually passed with the conditions satisfied.
  • The conditions for passing data include the date and time, the number of accesses, the number of successful accesses, information on the inquiring terminal (terminal model, the application that made the inquiry, the current location of the terminal, and so on), and information on the inquiring user (age, gender, occupation, nationality, language, religion, and so on).
  • The access log table records the ID, the user ID of the requester, the time, other incidental information, whether or not the data was passed, and the content of the data that was passed.
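  • A minimal sketch of these two tables (all field names are assumptions, not the schema of the disclosure):

```python
# Hypothetical sketch of the two tables held by the server that manages
# transmitter IDs: an ID-data table and an access log table.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IdDataRecord:                    # one row of the ID-data table
    transmitter_id: str                # ID sent by the transmitter
    data: str                          # data passed when queried with this ID
    conditions: dict = field(default_factory=dict)  # e.g. {"valid_until": ..., "min_age": ...}
    access_count: int = 0              # queries made with this ID as key
    success_count: int = 0             # queries where the conditions were satisfied

@dataclass
class AccessLogRecord:                 # one row of the access log table
    transmitter_id: str
    requesting_user_id: str
    time: float
    incidental_info: dict
    data_passed: bool
    passed_data: Optional[str] = None  # content actually returned, if any

row = IdDataRecord("ID100", "coupon: 100 yen discount")
print(row)
```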
  • FIG. 35 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 4.
  • the receiver 8420a receives zone information from the base station 8420h, recognizes in which zone it is located, and selects a reception protocol.
  • the base station 8420h is configured as, for example, a mobile phone communication base station, a Wi-Fi access point, an IMES transmitter, a speaker, or a wireless transmitter (Bluetooth (registered trademark), ZigBee, specific low power wireless station, etc.).
  • The receiver 8420a may also identify its zone based on position information obtained from GPS or the like. As an example, assume that communication uses a signal frequency of 9.6 kHz in zone A, while in zone B ceiling lighting communicates at 15 kHz and signage at 4.8 kHz.
  • the receiver 8420a recognizes that the current location is zone A from the information of the base station 8420h, performs reception at a signal frequency of 9.6 kHz, and receives signals transmitted from the transmitters 8420b and 8420c.
  • At position 8420l, the receiver 8420a recognizes from the information of the base station 8420i that the current location is zone B, and further, because the in-camera is directed upward, judges that it is trying to receive a signal from the ceiling lighting; it therefore receives at a signal frequency of 15 kHz and receives the signals transmitted by the transmitters 8420e and 8420f.
  • Likewise, the receiver 8420a recognizes from the information of the base station 8420i that the current location is zone B, and further estimates from the motion of the out-camera that it is trying to receive a signal transmitted by the signage; it therefore receives at a signal frequency of 4.8 kHz and receives the signal transmitted by the transmitter 8420g. At position 8420k, the receiver 8420a receives the signals of both base station 8420h and base station 8420i and cannot determine whether the current location is zone A or zone B, so it performs reception processing at both 9.6 kHz and 15 kHz.
  • The part of the protocol that differs from zone to zone is not limited to the frequency; it may be the modulation scheme or signal format of the transmitted signal, or the server to be queried about the ID.
  • The base stations 8420h and 8420i may transmit the protocol used in their zone to the receiver, or may transmit only an ID indicating the zone, in which case the receiver acquires the protocol information from the server using the zone ID as a key.
  • the transmitters 8420b to 8420f receive the zone ID and protocol information transmitted by the base stations 8420h and 8420i, and determine the signal transmission protocol.
  • The transmitter 8420d, which can receive the signals transmitted by both base station 8420h and base station 8420i, uses the zone protocol of the base station whose signal is stronger, or uses both protocols alternately.
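  • A minimal sketch of the receiver-side zone dispatch (the zone names, frequencies, and camera hints reuse the example above; the dispatch logic itself is an assumption):

```python
# Hypothetical sketch: choosing the reception frequency from the zone reported
# by a base station and from which camera the receiver is currently using.

ZONE_PROTOCOLS = {
    ("A", "any"): [9600],          # zone A: 9.6 kHz for all transmitters
    ("B", "in_camera"): [15000],   # zone B, in-camera up: ceiling lighting, 15 kHz
    ("B", "out_camera"): [4800],   # zone B, out-camera: signage, 4.8 kHz
}

def reception_frequencies(zones, camera):
    """Return the signal frequencies (Hz) to try for the detected zones."""
    freqs = set()
    for zone in zones:
        entry = ZONE_PROTOCOLS.get((zone, camera)) or ZONE_PROTOCOLS.get((zone, "any"), [])
        freqs.update(entry)
    return sorted(freqs)

print(reception_frequencies({"A"}, "out_camera"))      # [9600]
print(reception_frequencies({"B"}, "in_camera"))       # [15000]
print(reception_frequencies({"A", "B"}, "in_camera"))  # [9600, 15000]: zone ambiguous
```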
  • FIG. 36 is a diagram illustrating an example of operation of a receiver and a transmitter in Embodiment 4.
  • the receiver 8421a recognizes the zone to which it belongs from the received signal.
  • the receiver 8421a provides services (coupon distribution, point assignment, route guidance, etc.) determined for each zone.
  • the receiver 8421a receives a signal transmitted from the left side of the transmitter 8421b and recognizes that it is in the zone A.
  • the transmitter 8421b may transmit different signals depending on the transmission direction.
  • the transmitter 8421b may transmit a signal such that a different signal is received according to the distance to the receiver by using a signal having a light emission pattern such as 2217a.
  • the receiver 8421a may recognize the positional relationship with the transmitter 8421b from the direction and size in which the transmitter 8421b is imaged, and may recognize the zone in which the receiver 8421a is located.
  • Transmitters located in the same zone may share a part of their signals to indicate that zone. For example, the IDs transmitted from the transmitter 8421b and the transmitter 8421c, both representing zone A, have a common first half. Accordingly, the receiver 8421a can recognize the zone in which it is located merely by receiving the first half of the signal.
  • The information communication method in the present embodiment may be an information communication method for transmitting signals by luminance changes, including: a determination step of determining a plurality of luminance change patterns by modulating each of a plurality of signals to be transmitted; and a transmission step in which each of a plurality of light emitters changes in luminance according to one of the determined patterns, thereby transmitting the signal corresponding to that pattern. In the transmission step, each of two or more of the light emitters outputs one of two mutually different luminances in each time unit predetermined for that light emitter, and the predetermined time unit differs among the two or more light emitters, so that they change in luminance at mutually different frequencies.
  • In this way, each of the two or more light emitters changes in luminance at a different frequency, so the receiver can distinguish and acquire the signal to be transmitted (for example, the ID of the light emitter) from each of them.
  • In the transmission step, each of the plurality of light emitters may change in luminance at any one of at least four frequencies, and two or more of the plurality of light emitters may change in luminance at the same frequency.
  • In the transmission step, when the plurality of light emitters are projected onto the light receiving surface of an image sensor that receives the plurality of signals to be transmitted, each of the plurality of light emitters may change in luminance so that the frequency of the luminance change differs among all light emitters adjacent to each other on the light receiving surface.
  • In this way, even when the number of available frequencies is smaller than the number of light emitters, the frequency of the luminance change can reliably be made different among all light emitters adjacent to each other on the light receiving surface of the image sensor, based on the four-color problem (four-color theorem). As a result, the receiver can easily distinguish and acquire each of the signals transmitted from the plurality of light emitters.
  • In the transmission step, each of the plurality of light emitters may transmit its signal by changing in luminance at a frequency specified by a hash value of the signal to be transmitted.
  • In this way, each of the plurality of light emitters changes in luminance at a frequency specified by the hash value of its signal to be transmitted (for example, the ID of the light emitter), so a receiver that specifies the frequency of the luminance change can use it, for example, to check the demodulated signal.
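  • A minimal hash-to-frequency sketch (the frequency set and hash choice are assumptions):

```python
# Hypothetical sketch: deriving the luminance-change frequency from a hash of
# the signal to be transmitted.
import hashlib

CANDIDATE_FREQS_HZ = [9600, 10000, 12000, 15000]  # at least four frequencies

def frequency_for_signal(signal_id: str) -> int:
    """Pick a transmission frequency deterministically from the signal's hash."""
    digest = hashlib.sha256(signal_id.encode()).digest()
    return CANDIDATE_FREQS_HZ[digest[0] % len(CANDIDATE_FREQS_HZ)]

# Transmitter side: modulate at this frequency.
# Receiver side: recompute it from the demodulated ID and compare it with the
# frequency actually observed to detect reception errors.
print(frequency_for_signal("ID100"))
```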
  • The information communication method may further include: a frequency calculation step of calculating, as a first frequency and according to a predetermined function, the frequency corresponding to the signal to be transmitted that is stored in a signal storage unit; a frequency determination step of determining whether or not a second frequency stored in a frequency storage unit matches the calculated first frequency; and a frequency error notification step of notifying an error when it is determined that the first frequency and the second frequency do not match. When it is determined that the first frequency and the second frequency match, a luminance change pattern is determined in the determination step by modulating the signal stored in the signal storage unit, and in the transmission step any one of the plurality of light emitters transmits the stored signal by changing in luminance at the first frequency according to the determined pattern.
  • In this way, it is determined whether or not the frequency stored in the frequency storage unit matches the frequency calculated from the signal to be transmitted that is stored in the signal storage unit (ID storage unit), and an error is notified when they do not match.
  • The information communication method may further include: a check value calculation step of calculating a first check value from the signal to be transmitted that is stored in the signal storage unit, according to a predetermined function; a check value determination step of determining whether or not a second check value stored in a check value storage unit matches the calculated first check value; and a check value error notification step of notifying an error when it is determined that the first check value and the second check value do not match. When it is determined that the first check value and the second check value match, a luminance change pattern is determined in the determination step by modulating the signal stored in the signal storage unit, and in the transmission step any one of the plurality of light emitters transmits the stored signal by changing in luminance according to the determined pattern.
  • In this way, it is determined whether or not the check value stored in the check value storage unit matches the check value calculated from the signal to be transmitted that is stored in the signal storage unit (ID storage unit), and an error is notified when they do not match, so an abnormality in the signal transmission function of the light emitter can easily be detected.
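  • A minimal self-check sketch (the use of CRC32 as the predetermined function is an assumption):

```python
# Hypothetical sketch: before transmitting, the transmitter recomputes a check
# value from the stored ID and compares it with the stored check value; an
# error is raised if the transmitter's stored data has become corrupted.
import zlib

def compute_check_value(signal_id: str) -> int:
    """Predetermined function: here a CRC32 of the stored signal (assumption)."""
    return zlib.crc32(signal_id.encode())

def verify_before_transmit(stored_signal_id: str, stored_check_value: int) -> None:
    if compute_check_value(stored_signal_id) != stored_check_value:
        raise ValueError("check value mismatch: transmitter storage is abnormal")
    # ...otherwise modulate stored_signal_id and drive the light emitter...

stored_id = "ID100"
stored_check = compute_check_value(stored_id)    # written when the unit was configured
verify_before_transmit(stored_id, stored_check)  # passes silently when consistent
```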
  • The information communication method may be an information communication method for acquiring information from a subject, including: an exposure time setting step of setting the exposure time of an image sensor so that a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear, according to a change in luminance of the subject, in an image obtained by photographing the subject with the image sensor; an image acquisition step of acquiring a bright line image including the plurality of bright lines by photographing, at the set exposure time, the subject whose luminance changes; an information acquisition step of acquiring information by demodulating data specified by the pattern of the plurality of bright lines; and a frequency specifying step of specifying the frequency of the luminance change of the subject. For example, in the frequency specifying step, a plurality of header patterns, which are predetermined patterns indicating a header, are found in the bright line pattern, and the frequency determined by the number of pixels between those header patterns is specified as the frequency of the luminance change of the subject.
  • In this way, since the frequency of the luminance change of the subject is specified, information from a plurality of subjects with different luminance change frequencies can easily be distinguished and acquired when they are photographed together.
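  • A minimal sketch of the pixel-count-to-frequency step (the per-line readout time and helper name are assumptions):

```python
# Hypothetical sketch: estimating the luminance-change frequency of a subject
# from the pixel distance between header patterns in the bright line pattern.

def frequency_from_headers(header_line_indices, line_readout_time_s=1.0 / 30000):
    """Header indices are the exposure-line (pixel row) positions of detected headers."""
    if len(header_line_indices) < 2:
        return None
    gaps = [b - a for a, b in zip(header_line_indices, header_line_indices[1:])]
    pixels_per_period = sum(gaps) / len(gaps)           # average pixel count between headers
    period_s = pixels_per_period * line_readout_time_s  # one modulation period
    return 1.0 / period_s

# Headers detected every ~100 exposure lines -> about 300 Hz with the assumed
# per-line readout time above.
print(round(frequency_from_headers([40, 140, 241, 340])))
```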
  • In the image acquisition step, a bright line image including a plurality of patterns, each pattern consisting of a plurality of bright lines, may be acquired by photographing a plurality of subjects that each change in luminance, and in the information acquisition step, information may be acquired from each of the plurality of patterns included in the acquired bright line image.
  • In the image acquisition step, a plurality of bright line images may be acquired by photographing the plurality of subjects a plurality of times at mutually different timings; in the frequency specifying step, a frequency is specified for each of the plurality of patterns included in each bright line image; and in the information acquisition step, patterns for which the same frequency was specified are searched for across the plurality of bright line images, the found patterns are combined, and the information is acquired by demodulating the data specified by the combined patterns.
  • In this way, patterns with the same specified frequency are searched for across the plurality of bright line images, combined, and used to acquire information, so even when the plurality of subjects are moving, information from them can easily be distinguished and acquired.
  • The information communication method may further include transmitting, to a server in which a frequency is registered for each piece of identification information, the identification information of the subject included in the information acquired in the information acquisition step together with the specified frequency, and acquiring from the server the related information associated with that identification information and frequency.
  • In this way, related information associated with the identification information (ID) acquired from the luminance change of the subject (transmitter) and with the frequency of the luminance change is acquired. Therefore, by changing the frequency of the luminance change of the subject and updating the frequency registered in the server to the new frequency, a receiver that acquired the identification information before the change can be prevented from acquiring the related information from the server. That is, by changing the frequency registered in the server in step with the change in the subject's luminance change frequency, a receiver that acquired the subject's identification information in the past is prevented from being able to acquire the related information from the server indefinitely.
  • The information communication method may further include an identification information acquisition step of acquiring the identification information of the subject by extracting a part of the information acquired in the information acquisition step, and a set frequency specifying step of specifying the number indicated by the remaining part of that information as the set frequency of the luminance change configured for the subject.
  • In this way, the information obtained from the bright line pattern can contain the identification information of the subject and the set frequency of the luminance change configured for the subject independently of each other, which increases the degree of freedom of each.
  • FIG. 37 is a diagram illustrating an example of operation of a transmitter in Embodiment 5.
  • As shown in FIG. 37, the light emitting unit of the transmitter 8921a alternates between blinking that is visible to humans and visible light communication. By blinking visibly, it can inform people that visible light communication is available.
  • the user notices that visible light communication is possible because the transmitter 8921a is blinking, performs visible light communication with the receiver 8921b directed to the transmitter 8921a, and performs user registration of the transmitter 8921a.
  • the transmitter in the present embodiment alternately and repeatedly performs a step in which the light emitter transmits a signal due to a change in luminance and a step in which the light emitter blinks so as to be visually recognized by human eyes.
  • The transmitter may also separately provide a visible light communication unit and a blinking unit (communication status display unit), as shown in FIG. 37.
  • Alternatively, the light emitting unit may appear to blink while continuing visible light communication. That is, for example, the transmitter alternately and repeatedly performs high-luminance visible light communication at 75% brightness and low-luminance visible light communication at 1% brightness.
  • By performing the operation shown in (c) of FIG. 37, the transmitter can attract the user's attention without stopping the visible light communication.
  • FIG. 38 is a diagram illustrating an example of application of the transmission and reception system in the fifth embodiment.
  • the receiver 8955a receives, for example, the transmission ID of the transmitter 8955b configured as a guide plate, acquires the map data displayed on the guide plate from the server, and displays the map data.
  • the server may transmit an advertisement suitable for the user of the receiver 8955a, and the receiver 8955a may also display this advertisement information.
  • the receiver 8955a displays a route from the current location to a location designated by the user.
  • FIG. 39 is a diagram illustrating an example of application of the transmission / reception system in the fifth embodiment.
  • the receiver 8957a receives the ID transmitted from the transmitter 8957b configured as a signboard, for example, acquires coupon information from the server, and displays the coupon information.
  • The receiver 8957a saves subsequent user actions, such as saving the coupon, moving to the store shown on the coupon, shopping at that store, or leaving without saving the coupon, to the server 8957c.
  • the subsequent behavior of the user who has acquired information from the sign 8957b can be analyzed, and the advertising value of the sign 8957b can be estimated.
  • FIG. 40 is a diagram illustrating an example of application of the transmission and reception system in the fifth embodiment.
  • The transmitter 8960b, configured as a projector or a display, transmits information for making a wireless connection to itself (an SSID, a password for the wireless connection, an IP address, and a password for operating the transmitter), or transmits an ID serving as a key for accessing these pieces of information.
  • the receiver 8960a configured as a smartphone, a tablet, a laptop computer, or a camera receives the signal transmitted from the transmitter 8960b, acquires the information, and establishes a wireless connection with the transmitter 8960b.
  • This wireless connection may be connected via a router, or may be directly connected by Wi-Fi Direct, Bluetooth (registered trademark), Wireless Home Digital Interface, or the like.
  • The receiver 8960a transmits a screen to be displayed by the transmitter 8960b. In this way, an image on the receiver can easily be displayed on the transmitter.
  • When the transmitter 8960b is connected to the receiver 8960a, the transmitter 8960b may notify the receiver 8960a that, in order to display the screen, a password is required in addition to the information transmitted by the transmitter, and may refuse to display the transmitted screen if that password is not sent. In this case, the receiver 8960a displays a password input screen such as 8960d and has the user enter the password.
  • The information communication method according to one or more aspects has been described above based on the embodiments, but the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceived by those skilled in the art to the embodiments, and forms constructed by combining components of different embodiments, may also be included within the scope of one or more aspects, provided they do not depart from the gist of the present invention.
  • The information communication method according to an aspect of the present invention may be applied, for example, as follows.
  • FIG. 41 is a diagram showing an example of application of the transmission / reception system in the fifth embodiment.
  • a camera configured as a receiver for visible light communication performs imaging in a normal imaging mode (Step 1).
  • the camera acquires an image file configured in a format such as EXIF (Exchangeable image file format).
  • the camera performs imaging in the visible light communication imaging mode (Step 2).
  • the camera acquires a signal (visible light communication information) transmitted by the visible light communication from the transmitter that is the subject (Step 3).
  • the camera obtains information corresponding to the key from the server by using the signal (reception information) as a key and accessing the server (Step 4).
  • The camera stores, as metadata in the image file described above, the signal transmitted from the subject by visible light communication (visible light reception data), the information acquired from the server, data indicating the position in the image at which the transmitter serving as the subject appears, and data indicating the time at which the visible light signal was received (the time within the moving image). When a plurality of transmitters appear as subjects in the image (image file) obtained by imaging, the camera saves, for each transmitter, the corresponding set of metadata to the image file.
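  • A minimal sketch of the per-transmitter metadata (all field names are assumptions; writing real EXIF or maker-note fields would be done with an image library and is omitted here):

```python
# Hypothetical sketch: one metadata record per transmitter appearing in the
# image, serialized so it can be attached to the image file.
from dataclasses import dataclass
from typing import List, Tuple
import json

@dataclass
class VisibleLightMetadata:
    visible_light_data: str                       # signal received by visible light communication
    server_info: str                              # information fetched from the server with that signal
    position_in_image: Tuple[int, int, int, int]  # x, y, width, height of the transmitter in the frame
    received_time_s: float                        # time within the video at which the signal was received

def metadata_block(entries: List[VisibleLightMetadata]) -> bytes:
    """Serialize one metadata record per transmitter appearing in the image."""
    return json.dumps([e.__dict__ for e in entries]).encode()

entries = [VisibleLightMetadata("ID1000", "product page", (120, 80, 64, 64), 3.2)]
print(metadata_block(entries))
```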
  • When a display or projector configured as a transmitter for visible light communication displays the image indicated by the image file described above, it transmits a signal corresponding to the metadata included in that image file by visible light communication. The display or projector may transmit the metadata itself by visible light communication, or may transmit a key signal associated with the transmitter shown in the image.
  • A portable terminal configured as a receiver for visible light communication receives the signal transmitted by visible light communication from the display or projector by capturing an image of it. If the received signal is the key described above, the portable terminal uses the key to acquire the transmitter metadata associated with it from the display, the projector, or a server. If the received signal is the signal transmitted by the actual transmitter via visible light communication (the visible light reception data or visible light communication information), the portable terminal acquires the information corresponding to that data from the display, the projector, or a server.
  • The information communication method in the present embodiment may be an information communication method for acquiring information from a subject, including: a first exposure time setting step of setting a first exposure time of an image sensor so that a plurality of bright lines corresponding to the exposure lines included in the image sensor appear, according to a change in luminance of a first subject, in an image obtained by photographing the first subject with the image sensor; a first bright line image acquisition step of acquiring a first bright line image, which is an image including the plurality of bright lines, by photographing, at the set first exposure time, the first subject whose luminance changes; an information acquisition step of acquiring first transmission information by demodulating data specified by the pattern of the plurality of bright lines included in the acquired first bright line image; and a door control step of, after the first transmission information is acquired, opening a door by transmitting a control signal to an opening/closing device of the door.
  • The information communication method may further include: a second bright line image acquisition step in which the image sensor acquires a second bright line image including a plurality of bright lines by photographing, at the set first exposure time, a second subject whose luminance changes; a second information acquisition step of acquiring second transmission information by demodulating data specified by the pattern of the plurality of bright lines included in the acquired second bright line image; and an approach determination step of determining, based on the acquired first and second transmission information, whether or not the receiving device including the image sensor is approaching the door. In the door control step, the control signal may be transmitted when it is determined that the receiving device is approaching the door.
  • In this way, the door can be opened only when the receiving device (receiver) approaches the door, that is, only at an appropriate timing.
  • The information communication method may further include a second exposure time setting step of setting a second exposure time longer than the first exposure time, and a step in which the image sensor acquires an image by photographing a third subject at the set second exposure time. In that step, for each exposure line including the optical black of the image sensor, the charge is read after a predetermined time has elapsed from the time the charge was read for the adjacent exposure line. In the first bright line image acquisition step, the optical black is not used for charge readout, and for the exposure lines in the area of the image sensor other than the optical black, the charge may be read after a time longer than the predetermined time has elapsed from the time the charge was read for the adjacent exposure line.
  • In this way, since charge readout (exposure) from the optical black is not performed, the time for charge readout (exposure) from the effective pixel area, which is the area other than the optical black in the image sensor, can be lengthened. As a result, the time during which a signal can be received in the effective pixel area becomes longer, and more of the signal can be acquired.
  • When the length, in the direction perpendicular to the bright lines, of the bright line pattern included in the first bright line image is less than a predetermined length, the frame rate may be reduced and a bright line image may be newly acquired as a third bright line image.
  • In this way, the length of the bright line pattern included in the third bright line image can be increased, and the transmitted signal can be acquired for one whole block.
  • The information communication method may further include a ratio setting step of setting the ratio between the vertical width and the horizontal width of the image obtained by the image sensor. The first bright line image acquisition step may then include: a clipping determination step of determining whether or not, at the set ratio, the ends of the image in the direction perpendicular to the exposure lines are clipped; a step of changing the ratio set in the ratio setting step to a non-clipping ratio, that is, a ratio at which the ends are not clipped, when it is determined that the ends are clipped; and a step in which the image sensor acquires the first bright line image at the non-clipping ratio by photographing the first subject whose luminance changes.
  • For example, when the ratio of the horizontal width to the vertical width of the effective pixel area of the image sensor is 4:3 and the ratio of the horizontal width to the vertical width of the image is set to 16:9, bright lines along the horizontal direction appear; that is, when the exposure lines run horizontally, it is determined that the upper and lower ends of the image are clipped, in other words, that the ends of the first bright line image are missing. In this case, the ratio of the image is changed to 4:3, a ratio at which clipping does not occur.
  • The information communication method may further include a compression step of generating a compressed image by compressing the first bright line image in the direction parallel to the bright lines included in it, and a compressed image transmission step of transmitting the compressed image.
  • The information communication method may further include a gesture determination step of determining whether or not the receiving device including the image sensor has been moved in a predetermined manner, and an activation step of activating the image sensor when it is determined that the receiving device has been moved in the predetermined manner.
  • FIG. 42 is a diagram illustrating an application example of the transmitter and the receiver in the sixth embodiment.
  • the robot 8970 has, for example, a function as a self-propelled cleaner and a function as a receiver in each of the above embodiments.
  • the lighting devices 8971a and 8971b each have a function as a transmitter in each of the above embodiments.
  • the robot 8970 performs cleaning while moving in the room and photographs the lighting device 8971a that illuminates the room.
  • the lighting device 8971a transmits the ID of the lighting device 8971a by changing the luminance.
  • The robot 8970 receives the ID from the lighting device 8971a and, as in the above embodiments, estimates its own position (self-position) based on the ID. That is, the robot 8970 estimates its own position based on the detection result of its 9-axis sensor, the relative position of the lighting device 8971a appearing in the captured image, and the absolute position of the lighting device 8971a identified by the ID.
  • When the robot 8970 moves away from the lighting device 8971a, it transmits a signal instructing the lighting device 8971a to turn off (a turn-off command). For example, the robot 8970 transmits the turn-off command when it has moved a predetermined distance away from the lighting device 8971a, or when the lighting device 8971a no longer appears in the captured image, or when another lighting device appears in the image. When the lighting device 8971a receives the turn-off command from the robot 8970, it turns off in accordance with the command.
  • Next, while moving and cleaning, the robot 8970 detects, based on the estimated self-position, that it has approached the lighting device 8971b. That is, the robot 8970 holds information indicating the position of the lighting device 8971b and, when the distance between its own position and the position of the lighting device 8971b falls to or below a predetermined distance, detects that it is approaching the lighting device 8971b. The robot 8970 then transmits a lighting command, a signal instructing the lighting device 8971b to turn on, and the lighting device 8971b turns on in accordance with the command.
  • In this way, the robot 8970 can brighten only its surroundings while moving and can perform cleaning easily.
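  • A minimal sketch of this distance-based switching (the positions, thresholds, and command names are assumptions):

```python
# Hypothetical sketch: the robot turns lights on and off based on the distance
# between its estimated self-position and each lighting device's known position.
import math

LIGHT_POSITIONS = {"8971a": (0.0, 0.0), "8971b": (8.0, 3.0)}  # metres, room frame
NEAR_M, FAR_M = 2.0, 5.0  # hysteresis: turn on when near, off when far

def commands_for(self_position):
    """Return a list of (lighting_device, command) pairs for the current position."""
    x, y = self_position
    cmds = []
    for device, (lx, ly) in LIGHT_POSITIONS.items():
        d = math.hypot(lx - x, ly - y)
        if d <= NEAR_M:
            cmds.append((device, "LIGHT_ON"))
        elif d >= FAR_M:
            cmds.append((device, "LIGHT_OFF"))
    return cmds

print(commands_for((7.0, 3.0)))  # [('8971a', 'LIGHT_OFF'), ('8971b', 'LIGHT_ON')]
```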
  • FIG. 43 is a diagram illustrating an application example of the transmitter and the receiver in the sixth embodiment.
  • the lighting device 8974 has a function as a transmitter in each of the above embodiments.
  • the lighting device 8974 illuminates a route bulletin board 8975 at a railway station, for example, while changing in luminance.
  • the receiver 8973 pointed to the route bulletin board 8975 by the user photographs the route bulletin board 8975.
  • the receiver 8973 acquires the ID of the route bulletin board 8975, and acquires detailed information about each route described in the route bulletin board 8975, which is information associated with the ID.
  • the receiver 8973 displays a guide image 8973a indicating the detailed information.
  • the guidance image 8973a indicates the distance to the route described on the route bulletin board 8975, the direction toward the route, and the time when the next train arrives on the route.
  • the receiver 8973 displays a supplementary guide image 8973b.
  • This supplementary guide image 8973b is, for example, a user's selection operation of any one of a railway timetable, information on another route different from the route indicated by the guide image 8973a, and detailed information on the station. It is an image for displaying accordingly.
  • FIG. 44 is a diagram illustrating an example of a receiver in Embodiment 7.
  • the receiver 9020a configured as a wristwatch includes a plurality of light receiving units.
  • The receiver 9020a has a light receiving unit 9020b arranged at the upper end of the rotation shaft that supports the long hand and the short hand of the watch, and a light receiving unit 9020c arranged near the character indicating 12 o'clock. The light receiving unit 9020b receives light traveling toward it along the direction of the rotation shaft, and the light receiving unit 9020c receives light traveling toward it along the direction connecting the rotation shaft and the character indicating 12 o'clock.
  • the light receiving unit 9020b can receive light from above.
  • the receiver 9020a can receive a signal from the ceiling lighting.
  • the light receiving unit 9020c can receive light from the front direction.
  • the receiver 9020a can receive a signal from a signage or the like at the front.
  • These light receiving units 9020b and 9020c have directivity so that signals can be received without interference even when there are a plurality of transmitters at close positions.
  • FIG. 45 is a diagram illustrating an example of a reception system in the seventh embodiment.
  • the receiver 9023b configured as a wristwatch is connected to the smartphone 9022a via wireless communication such as Bluetooth (registered trademark).
  • the receiver 9023b has a dial made up of a display such as a liquid crystal display, and can display information other than the time.
  • the smartphone 9022a recognizes the current location from the signal received by the receiver 9023b, and displays the route and distance to the destination on the display surface of the receiver 9023b.
  • FIG. 46 is a diagram illustrating an example of a signal transmission / reception system according to the seventh embodiment.
  • The signal transmission / reception system includes a smartphone, which is a multi-function mobile phone, an LED light emitting device that is a lighting device, home appliances such as a refrigerator, and a server.
  • the LED light emitter performs communication using BTLE (Bluetooth (registered trademark) Low Energy) and visible light communication using LED (Light-Emitting-Diode).
  • the LED light emitter controls a refrigerator or communicates with an air conditioner by BTLE.
  • the LED light emitter controls a power source of a microwave oven, an air purifier, a television (TV), or the like by visible light communication.
  • The television includes, for example, a solar power generation element, and uses it as an optical sensor. That is, when a signal is transmitted by a change in the luminance of the LED light emitter, the television detects the luminance change of the LED light emitter from the change in the power generated by the solar power generation element. The television then acquires the signal transmitted from the LED light emitter by demodulating the signal indicated by the detected luminance change.
  • If the signal is a command instructing power-on, the television switches its main power supply ON; if the signal is a command instructing power-off, the television switches its main power supply OFF.
  • The server can communicate with the air conditioner via a router and a specified low power radio station. Furthermore, since the air conditioner can communicate with the LED light emitter via BTLE, the server can also communicate with the LED light emitter. Therefore, the server can switch the power of the TV ON and OFF via the LED light emitter.
  • the smartphone can control the power supply of the TV via the server by communicating with the server via, for example, Wi-Fi (Wireless Fidelity).
  • In other words, the information communication method includes a transmission step in which the mobile terminal (smartphone) transmits a control signal (a transmission data string or a user command) by wireless communication different from visible light communication (such as BTLE or Wi-Fi), a visible light communication step in which the lighting device performs visible light communication by changing its luminance according to the control signal, and an execution step in which a control target device (such as a microwave oven) detects the luminance change of the lighting device, acquires the control signal by demodulating the signal specified by the detected luminance change, and executes a process according to the control signal.
  • the mobile terminal may be a wristwatch instead of a smartphone.
  • FIG. 47 is a flowchart showing a reception method in which interference is eliminated in the seventh embodiment.
  • The process starts in step 9001a. In step 9001b, it is checked whether there is a periodic change in the intensity of the received light. If YES, the process proceeds to step 9001c; if NO, the process proceeds to step 9001d, where the lens of the light receiving unit is set to a wider angle so that light from a wider range is received, and the process returns to step 9001b. In step 9001c, it is checked whether the signal can be received. If YES, the process proceeds to step 9001e, the signal is received, and the process ends in step 9001g. If NO, the process proceeds to step 9001f, where the lens of the light receiving unit is set toward telephoto so that light from a narrower range is received, and the process returns to step 9001c.
  • This method makes it possible to receive signals from transmitters in a wide direction while eliminating signal interference from a plurality of transmitters.
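  • As an illustration only, the loop of FIG. 47 could be sketched as follows in Python; the lens and decoder interfaces used here (widen, narrow, periodic_change_detected, signal_receivable, receive) are hypothetical placeholders, since the description does not specify a hardware API.

```python
# Minimal sketch of the FIG. 47 reception loop (hypothetical hardware helpers).
# Widen the lens until a periodic luminance change is detected, then narrow it
# toward telephoto until the signal can be decoded without interference.

def receive_without_interference(lens, sensor, decoder):
    # Search phase (steps 9001b / 9001d): widen the field of view until a
    # periodic change in the received light intensity is detected.
    while not sensor.periodic_change_detected():
        lens.widen()

    # Isolation phase (steps 9001c / 9001f): narrow the field of view until
    # the signal can be decoded, excluding light from nearby transmitters.
    while not decoder.signal_receivable():
        lens.narrow()

    return decoder.receive()  # step 9001e
```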
  • FIG. 48 is a flowchart showing a method for estimating the orientation of a transmitter in the seventh embodiment.
  • The process starts in step 9002a. In step 9002b, the lens of the light receiving unit is set to the maximum telephoto position, and in step 9002c it is checked whether there is a periodic change in the intensity of the received light. If YES, the process proceeds to step 9002d; if NO, the process proceeds to step 9002e, where the lens of the light receiving unit is set to a wider angle so that light from a wider range is received, and the process returns to step 9002c.
  • In step 9002d, the signal is received. In step 9002f, the lens of the light receiving unit is set to the maximum telephoto position, the light receiving direction is changed along the boundary of the light receiving range, and the direction in which the received light intensity is maximized is detected; the transmitter is estimated to be in that direction, and the process ends.
  • Alternatively, the lens may first be set to the maximum wide angle and then moved gradually toward telephoto.
  • FIG. 49 is a flowchart showing a reception start method according to the seventh embodiment.
  • The process starts in step 9003a. In step 9003b, it is checked whether a signal from a base station of Wi-Fi, Bluetooth (registered trademark), IMES or the like has been received. If YES, the process proceeds to step 9003c; if NO, the process returns to step 9003b. In step 9003c, it is checked whether that base station is registered in the receiver or in the server as a trigger for starting reception. If YES, the process proceeds to step 9003d, signal reception is started, and the process ends in step 9003e. If NO, the process returns to step 9003b.
  • This method can start reception without the user performing a reception start operation. Further, power consumption can be suppressed as compared with the case where reception is always performed.
  • FIG. 50 is a flowchart showing an ID generation method using information of another medium together in the seventh embodiment.
  • The process starts in step 9004a. In step 9004b, the ID of the connected carrier communication network, Wi-Fi, Bluetooth (registered trademark) or the like, position information obtained from that ID, or position information obtained from GPS or the like is transmitted to the upper-bit ID index server.
  • In step 9004c, the upper bits of the visible light ID are received from the upper-bit ID index server, and in step 9004d, the signal from the transmitter is received as the lower bits of the visible light ID.
  • In step 9004e, the upper and lower bits of the visible light ID are combined and transmitted to the ID resolution server, and the process ends in step 9004f.
  • This method makes it possible to obtain upper bits that are commonly used near the receiver and reduce the amount of data transmitted by the transmitter.
  • the receiving speed of the receiver can be increased.
  • the transmitter may transmit both the upper and lower bits.
  • In that case, a receiver using this method can synthesize the ID as soon as the lower bits are received, while a receiver not using this method obtains the ID by receiving the entire ID from the transmitter.
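  • A minimal sketch of the ID synthesis in FIG. 50, assuming, purely for illustration, a 32-bit visible light ID split into 16 upper bits obtained from the upper-bit ID index server and 16 lower bits received from the transmitter; the actual bit widths are not specified in this description.

```python
# Sketch of the FIG. 50 ID synthesis with assumed bit widths.

UPPER_BITS = 16
LOWER_BITS = 16

def synthesize_visible_light_id(upper: int, lower: int) -> int:
    """Combine the upper bits (from the upper-bit ID index server) with the
    lower bits received over visible light (step 9004e)."""
    assert 0 <= upper < (1 << UPPER_BITS)
    assert 0 <= lower < (1 << LOWER_BITS)
    return (upper << LOWER_BITS) | lower

# Example: upper bits common to the receiver's area, lower bits from the lamp.
full_id = synthesize_visible_light_id(upper=0x12AB, lower=0x34CD)
print(hex(full_id))  # 0x12ab34cd -> sent to the ID resolution server
```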
  • FIG. 51 is a flowchart showing a reception method selection method based on frequency separation in the seventh embodiment.
  • The process starts in step 9005a. In step 9005b, the received optical signal is passed through a frequency filter circuit, or a discrete Fourier series expansion is performed, so that the signal is decomposed by frequency.
  • In step 9005c, it is checked whether a low-frequency component exists. If YES, the process proceeds to step 9005d, where a signal expressed in the low-frequency region, such as a frequency-modulated signal, is decoded, and the process then proceeds to step 9005e; if NO, the process proceeds directly to step 9005e. In step 9005e, it is checked whether a high-frequency component exists. If YES, the process proceeds to step 9005f, where a signal expressed in the high-frequency region, such as a pulse-position-modulated signal, is decoded, and the process then proceeds to step 9005g; if NO, the process proceeds directly to step 9005g. In step 9005g, signal reception is started, and the process ends in step 9005h.
  • This method makes it possible to receive signals modulated by a plurality of modulation schemes.
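  • The frequency separation of FIG. 51 could be sketched as follows with NumPy; the cutoff frequency and power thresholds are illustrative assumptions, not values from this description.

```python
# Split the received optical waveform into low- and high-frequency content
# with a discrete Fourier transform and choose the demodulators accordingly.
import numpy as np

def select_demodulators(samples: np.ndarray, sample_rate: float,
                        cutoff_hz: float = 1000.0):
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    low_power = spectrum[(freqs > 0) & (freqs < cutoff_hz)].sum()
    high_power = spectrum[freqs >= cutoff_hz].sum()
    total = low_power + high_power + 1e-12

    demodulators = []
    if low_power / total > 0.1:    # low-frequency component present (step 9005c)
        demodulators.append("frequency-modulation decoder")
    if high_power / total > 0.1:   # high-frequency component present (step 9005e)
        demodulators.append("pulse-position-modulation decoder")
    return demodulators

# Synthetic example: a 200 Hz tone plus a 5 kHz tone, sampled at 30 kHz.
t = np.arange(0, 0.1, 1 / 30000)
signal = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)
print(select_demodulators(signal, 30000))
```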
  • FIG. 52 is a flowchart showing a signal reception method when the exposure time is long in the seventh embodiment.
  • The process starts in step 9030a. In step 9030b, if the sensitivity can be set, it is set to the maximum. In step 9030c, if the exposure time can be set, it is set shorter than in the normal shooting mode.
  • In step 9030d, two images are captured and the luminance difference between them is obtained. If the position or orientation of the imaging unit changes between the two captures, the change is compensated for so that the images appear to have been captured from the same position and orientation before the difference is taken.
  • In step 9030e, for the difference image or the captured image, the luminance values are averaged along the direction parallel to the exposure lines.
  • In step 9030f, the averaged values are arranged in the direction perpendicular to the exposure lines, and a discrete Fourier transform is performed.
  • In step 9030g, it is determined whether there is a peak near a predetermined frequency, and the process ends in step 9030h.
  • This method allows signals to be received even when the exposure time is long, such as when the exposure time cannot be set or when a normal image is captured simultaneously.
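  • A minimal NumPy sketch of steps 9030d to 9030g, assuming synthetic frames and an illustrative line readout rate; frame alignment for camera movement (part of step 9030d) is omitted.

```python
# Difference of two frames, per-line averaging, then a peak search near the
# expected modulation frequency in the direction perpendicular to the lines.
import numpy as np

def detect_signal_peak(frame_a: np.ndarray, frame_b: np.ndarray,
                       line_rate_hz: float, expected_hz: float,
                       tolerance_hz: float = 50.0) -> bool:
    diff = frame_a.astype(float) - frame_b.astype(float)       # step 9030d
    per_line = diff.mean(axis=1)                                # step 9030e
    spectrum = np.abs(np.fft.rfft(per_line - per_line.mean()))  # step 9030f
    freqs = np.fft.rfftfreq(len(per_line), d=1.0 / line_rate_hz)

    band = np.abs(freqs - expected_hz) < tolerance_hz
    if not band.any():
        return False
    # step 9030g: a peak near the expected frequency well above the noise floor
    return spectrum[band].max() > 5.0 * np.median(spectrum[1:])

# Synthetic example: 480 exposure lines read out at 9600 lines/s,
# transmitter flickering at 1200 Hz.
lines, line_rate, f_tx = 480, 9600.0, 1200.0
t = np.arange(lines) / line_rate
frame1 = 128 + 10 * np.sin(2 * np.pi * f_tx * t)[:, None] * np.ones((1, 64))
frame2 = 128 * np.ones((lines, 64))
print(detect_signal_peak(frame1, frame2, line_rate, f_tx))  # True
```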
  • When the exposure time is set automatically and the camera is directed at a transmitter configured as illumination, the automatic exposure correction function sets the exposure time to about 1/60 second to 1/480 second. When the exposure time cannot be set manually, the signal is received under this condition.
  • When the illumination blinks periodically and the duration of one cycle is about 1/16 of the exposure time or more, stripes can be visually recognized in the direction perpendicular to the exposure lines. At this time, the portion of the image in which the illumination itself appears is too bright for the stripes to be confirmed, so it is preferable to obtain the signal period from a portion where the illumination light is reflected off other surfaces.
  • With a method that periodically turns the light emitting unit on and off, such as a frequency shift keying method or a frequency multiplexing modulation method, it is difficult for a viewer to perceive flicker even if the modulation frequency is the same as that of a pulse position modulation method, and flicker is also less likely to appear in video shot with a video camera. Therefore, a lower frequency can be used as the modulation frequency. Since the temporal resolution of human vision is about 60 Hz, a frequency higher than this can be used as the modulation frequency.
  • When the modulation frequency is an integral multiple of the imaging frame rate of the receiver, reception becomes difficult. Since the imaging frame rate of the receiver is usually 30 fps, reception is easier if the modulation frequency is set to a value other than an integral multiple of 30 Hz.
  • Furthermore, when two disjoint modulation frequencies are assigned to the same signal and the transmitter alternately uses the two modulation frequencies for transmission, the receiver can easily restore the signal by receiving at least one of them.
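  • The guidance above can be expressed as a small helper that screens candidate modulation frequencies; the 2 Hz margin used here is an illustrative assumption.

```python
# Pick modulation frequencies above the ~60 Hz limit of human vision and
# away from integer multiples of the 30 fps imaging frame rate.

def is_suitable_modulation_frequency(f_hz: float, frame_rate: float = 30.0,
                                     vision_limit_hz: float = 60.0,
                                     margin_hz: float = 2.0) -> bool:
    if f_hz <= vision_limit_hz:
        return False                                   # visible flicker
    nearest_multiple = round(f_hz / frame_rate) * frame_rate
    return abs(f_hz - nearest_multiple) > margin_hz    # avoid n x 30 Hz

print([f for f in (90, 100, 105, 120, 145)
       if is_suitable_modulation_frequency(f)])
# -> [100, 105, 145]
```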
  • FIG. 53 is a diagram illustrating an example of a transmitter dimming (adjusting brightness) method.
  • By changing the average luminance, the brightness can be adjusted.
  • By keeping constant the period T1 in which high and low luminance are repeated, the frequency peak can be kept constant.
  • That is, transmission is performed while the time T1 between one luminance change that becomes brighter than the average luminance and the next such luminance change is kept constant.
  • To make the light darker, the time during which the luminance is higher than the average is shortened; to make it brighter, that time is lengthened.
  • FIGS. 53 (b) and 53 (c) are dimmed darker than FIG. 53 (a), and FIG. 53 (c) is dimmed the darkest. In this way, dimming can be performed while signals having the same meaning are transmitted.
  • the average brightness may be changed by changing the brightness value of the section with high brightness, the brightness of the section with low brightness, or both brightness values.
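  • A minimal sketch of the dimming scheme of FIG. 53: the period T1 (and therefore the frequency peak carrying the signal) stays fixed while only the fraction of each period spent at the bright level changes. All numeric values are illustrative.

```python
# Rectangular luminance waveform with a fixed period and a variable duty cycle.
import numpy as np

def dimmable_waveform(t1_s: float, duty: float, n_periods: int = 4,
                      samples_per_period: int = 100,
                      bright: float = 1.0, dark: float = 0.0):
    """Return time and luminance arrays for a waveform of fixed period t1_s
    whose average brightness is set by `duty` (fraction spent at `bright`)."""
    phase = np.linspace(0.0, n_periods, n_periods * samples_per_period,
                        endpoint=False) % 1.0
    luminance = np.where(phase < duty, bright, dark)
    t = np.linspace(0.0, n_periods * t1_s, luminance.size, endpoint=False)
    return t, luminance

# Same signal period, three dimming levels as in FIG. 53 (a)-(c):
for duty in (0.5, 0.3, 0.1):
    _, y = dimmable_waveform(t1_s=1 / 1200, duty=duty)
    print(f"duty={duty:.1f} -> average luminance {y.mean():.2f}")
```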
  • FIG. 54 is a diagram showing an example of a method for configuring the dimming function of the transmitter.
  • the dimming correction unit holds the correction value, and the dimming control unit controls the brightness of the light emitting unit according to the correction value.
  • When the dimming setting value is changed, the dimming control unit controls the brightness of the light emitting unit using the changed dimming setting value and the correction value held in the dimming correction unit.
  • the dimming control unit transmits the dimming setting value to another transmitter through the interlocking dimming unit.
  • The dimming control unit of that other transmitter then controls the brightness of its light emitting unit based on the received dimming setting value and the correction value held in its own dimming correction unit.
  • The control method may be a control method for controlling an information communication device that transmits a signal by a luminance change of a light emitter, the method causing a computer of the information communication device to execute a determining step of determining, for each of a plurality of mutually different signals, a corresponding luminance change pattern, and a transmission step of transmitting a signal to be transmitted by changing the luminance of the light emitter so as to include only the pattern corresponding to that signal.
  • In the determining step, the number of transmissions may be determined such that, within a predetermined time, the number of transmissions for transmitting one signal among the plurality of different signals differs from the number of transmissions for transmitting the other signals.
  • This makes it possible to suppress flicker at the time of transmission, because the number of transmissions for transmitting one signal differs from the number of transmissions for transmitting the other signals.
  • the determining step may increase the number of transmissions of a signal corresponding to a high frequency within a predetermined time, compared to the number of transmissions of other signals.
  • the luminance change pattern may be a pattern in which the waveform of the luminance change over time is any one of a rectangular wave, a triangular wave, and a sawtooth wave.
  • Furthermore, when the average luminance of the light emitter is to be increased, the time during which the luminance of the light emitter is greater than a predetermined value within the period corresponding to a single frequency may be made longer than when the average luminance of the light emitter is to be decreased.
  • The receiver can set the exposure time to a predetermined value by using an API (application programming interface, a means for using functions of the OS) for setting the exposure time.
  • the visible light signal can be received stably.
  • Similarly, the receiver can set the sensitivity to a predetermined value by using an API for setting the sensitivity, and can thereby stably receive a visible light signal even when the transmitted signal is dark or bright.
  • FIG. 55 is a diagram for explaining the EX zoom.
  • Zoom, that is, a method of obtaining a larger image, includes an optical zoom, which adjusts the focal length of the lens to change the size of the image formed on the image sensor, a digital zoom, which enlarges the image captured on the image sensor by interpolation, and the EX zoom described here, which changes which imaging elements of the image sensor are used for imaging.
  • The EX zoom can be used when the number of imaging elements included in the image sensor is larger than the resolution of the captured image.
  • In the image sensor 10080a, 32 × 24 imaging elements are arranged in a matrix; that is, 32 elements are arranged horizontally and 24 elements vertically.
  • For normal wide-range imaging, only 16 × 12 imaging elements distributed uniformly over the whole image sensor 10080a (for example, the elements indicated by black squares in the image sensor 10080a in FIG. 55 (a)) are used. That is, among the imaging elements arranged in the vertical and horizontal directions, only the odd-numbered or only the even-numbered elements are used for imaging.
  • an image 10080b having a desired resolution is obtained.
  • In FIG. 55, a subject is shown on the image sensor 10080a in order to make it easy to understand the correspondence between each imaging element and the image obtained by imaging.
  • When the receiver including the image sensor 10080a captures a wide range in order to search for a transmitter or to receive information from many transmitters, it performs imaging using only the imaging elements distributed over the whole of the image sensor 10080a.
  • When the receiver performs the EX zoom, as shown in FIG. 55 (b), only a locally dense group of imaging elements of the image sensor 10080a (for example, the 16 × 12 elements indicated by black squares in FIG. 55 (b)) is used for imaging. As a result, the part of the image 10080b corresponding to those imaging elements is enlarged, and an image 10080d is obtained.
  • With the EX zoom, the transmitter is captured as a large image, so the visible light signal can be received for a longer time; the reception speed is improved, and a visible light signal can be received from a distance.
  • With the digital zoom, by contrast, the number of exposure lines that receive the visible light signal cannot be increased and the reception time of the visible light signal does not increase, so it is better to use the other zooms as far as possible.
  • the optical zoom requires a physical movement time of the lens and the image sensor.
  • Since the EX zoom is performed only by an electronic setting change, it has the advantage that the time required for zooming is short.
  • the priority order of each zoom is (1) EX zoom, (2) optical zoom, and (3) digital zoom.
  • the receiver may select and use any one or a plurality of zooms according to the priority order and the necessity of the zoom magnification.
  • Image noise can also be suppressed by additionally using the imaging elements that would otherwise be left unused.
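  • The EX zoom selection can be sketched with NumPy index arithmetic, assuming the 32 × 24 sensor and 16 × 12 output of FIG. 55; the position of the zoom window is an arbitrary example.

```python
# Wide-range imaging uses every other imaging element over the whole sensor;
# the EX zoom uses a locally dense block of adjacent elements instead.
import numpy as np

sensor = np.arange(24 * 32).reshape(24, 32)   # stand-in for raw sensor data

# Normal (wide) mode, FIG. 55 (a): elements distributed over the whole sensor.
wide_image = sensor[::2, ::2]                  # 12 x 16

# EX zoom, FIG. 55 (b): a locally dense 12 x 16 block fills the whole output.
top, left = 6, 8                               # illustrative zoom window
zoomed_image = sensor[top:top + 12, left:left + 16]

print(wide_image.shape, zoomed_image.shape)    # (12, 16) (12, 16)
```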
  • In Embodiment 9, the exposure time is set for each exposure line or for each imaging element.
  • FIGS. 56, 57, and 58 are diagrams illustrating examples of the signal reception method according to the ninth embodiment.
  • In the image sensor 10010a shown in FIG. 56, an exposure time is set for each exposure line. That is, a long exposure time for normal imaging is set for predetermined exposure lines (the white exposure lines in FIG. 56), and a short exposure time for visible light imaging is set for the other exposure lines (the black exposure lines in FIG. 56). For example, the long exposure time and the short exposure time are set alternately for the exposure lines arranged in the vertical direction.
  • the two exposure times may be alternately set for each line, may be set for every several lines, or different exposure times may be set for the upper part and the lower part of the image sensor 10010a.
  • By imaging with the image sensor 10010a, a normal captured image 10010b and a visible light captured image 10010c, which is a bright line image showing a plurality of bright line patterns, are obtained. Since the lines used for visible light imaging are missing from the normal captured image 10010b, the receiver interpolates that portion so that a preview image 10010d can be displayed.
  • information obtained by visible light communication can be superimposed on the preview image 10010d.
  • This information is information associated with a visible light signal obtained by decoding a plurality of bright line patterns included in the visible light captured image 10010c.
  • The receiver stores the normal captured image 10010b, or an image obtained by interpolating it, as a captured image, and can add the received visible light signal, or information associated with the visible light signal, to the stored captured image as additional information.
  • an image sensor 10011a may be used instead of the image sensor 10010a.
  • In the image sensor 10011a, the exposure time is set not for each exposure line but for each column of imaging elements arranged along the direction perpendicular to the exposure lines (hereinafter, a vertical line). That is, a long exposure time for normal imaging is set for predetermined vertical lines (the white vertical lines in FIG. 57), and a short exposure time for visible light imaging is set for the other vertical lines (the black vertical lines in FIG. 57).
  • exposure is started at different timings for each exposure line, as in the image sensor 10010a.
  • the receiver obtains a normal captured image 10011b and a visible light captured image 10011c by imaging with the image sensor 10011a. Further, the receiver generates and displays a preview image 10011d based on the normal captured image 10011b and information associated with the visible light signal obtained from the visible light captured image 10011c.
  • In this image sensor 10011a, unlike the image sensor 10010a, all exposure lines can be used for visible light imaging. As a result, the visible light captured image 10011c obtained by the image sensor 10011a includes more bright lines than the visible light captured image 10010c, so the reception accuracy of the visible light signal can be increased.
  • an image sensor 10012a may be used instead of the image sensor 10010a.
  • In the image sensor 10012a, the exposure time is set for each imaging element so that the same exposure time is not set for adjacent elements in either the horizontal or the vertical direction. That is, the exposure times are set so that the imaging elements with the long exposure time and the imaging elements with the short exposure time are distributed in a grid-like or checkered pattern. Also in this case, as with the image sensor 10010a, exposure is started at a different timing for each exposure line, but the imaging elements within one exposure line do not all have the same exposure time.
  • the receiver obtains a normal captured image 10012b and a visible light captured image 10012c by imaging with the image sensor 10012a. Further, the receiver generates and displays a preview image 10012d based on the normal captured image 10012b and information associated with the visible light signal obtained from the visible light captured image 10012c.
  • Since the normal captured image 10012b obtained by the image sensor 10012a has data from imaging elements distributed uniformly in a grid-like pattern, it can be interpolated and resized more accurately than the normal captured image 10010b or the normal captured image 10011b.
  • The visible light captured image 10012c is generated by imaging that uses all exposure lines of the image sensor 10012a. That is, in the image sensor 10012a, unlike the image sensor 10010a, all exposure lines can be used for visible light imaging. As a result, the visible light captured image 10012c includes a larger number of bright lines than the visible light captured image 10010c, as does the visible light captured image 10011c, so reception of the visible light signal can be performed with high accuracy.
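  • The three exposure-time layouts of FIGS. 56 to 58 can be sketched as boolean masks over the imaging elements (True = short, visible light exposure; False = long, normal exposure); the sensor size used here is illustrative.

```python
# Exposure-time layout masks for the three sensors described above.
import numpy as np

rows, cols = 8, 12
r = np.arange(rows)[:, None]
c = np.arange(cols)[None, :]

per_exposure_line = (r % 2 == 1) & np.ones((1, cols), bool)   # FIG. 56: alternate exposure lines
per_vertical_line = np.ones((rows, 1), bool) & (c % 2 == 1)   # FIG. 57: alternate vertical lines
checkered = (r + c) % 2 == 1                                  # FIG. 58: checkered pattern

for name, mask in [("per exposure line", per_exposure_line),
                   ("per vertical line", per_vertical_line),
                   ("checkered", checkered)]:
    print(name, "- short-exposure elements:", int(mask.sum()), "of", mask.size)
```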
  • FIG. 59 is a diagram illustrating an example of a receiver screen display method according to the ninth embodiment.
  • The receiver including the image sensor 10010a shown in FIG. 56 swaps, every predetermined time, the exposure time set for the odd-numbered exposure lines (hereinafter, odd lines) and the exposure time set for the even-numbered exposure lines (hereinafter, even lines). For example, as shown in FIG. 59, at time t1 the receiver sets the long exposure time for each imaging element on the odd lines, sets the short exposure time for each imaging element on the even lines, and performs imaging with these exposure times. At time t2, the receiver sets the short exposure time for each imaging element on the odd lines, sets the long exposure time for each imaging element on the even lines, and performs imaging with these exposure times. At time t3, the receiver performs imaging with the exposure times set as at time t1, and at time t4 with the exposure times set as at time t2.
  • By imaging at time t1, the receiver acquires Image1, which includes the images obtained from the odd lines (hereinafter, odd line images) and the images obtained from the even lines (hereinafter, even line images).
  • At time t1, since the exposure time of the even lines is short, the subject does not appear clearly in the even line images. The receiver therefore generates interpolated line images by interpolating the pixel values of the odd line images, and displays a preview image containing these interpolated line images in place of the even line images. That is, odd line images and interpolated line images are arranged alternately in the preview image.
  • the receiver acquires Image2 including a plurality of odd line images and even line images by imaging at time t2. At this time, since the exposure time is short in each of the plurality of odd lines, the subject is not clearly displayed in each of the odd line images. Therefore, the receiver displays a preview image including the odd line image of Image1 instead of the odd line image of Image2. That is, the odd line image of Image1 and the even line image of Image2 are alternately arranged in the preview image.
  • At time t3, the receiver acquires Image3, which includes a plurality of odd line images and even line images, by imaging.
  • the receiver displays a preview image including an even line image of Image2 instead of an even line image of Image3. That is, in the preview image, the even line image of Image2 and the odd line image of Image3 are alternately arranged.
  • At time t4, the receiver acquires Image4, which includes a plurality of odd line images and even line images, by imaging.
  • the receiver displays a preview image including the odd line image of Image3 instead of the odd line image of Image4. That is, the odd line image of Image3 and the even line image of Image4 are alternately arranged in the preview image.
  • the receiver performs so-called interlaced display, in which an image including even line images and odd line images obtained at different timings is displayed.
  • Such a receiver can display a fine preview image while performing visible light imaging.
  • The imaging elements that share the same exposure time may be a plurality of elements arranged along the horizontal direction of an exposure line as in the image sensor 10010a, a plurality of elements arranged along the direction perpendicular to the exposure lines as in the image sensor 10011a, or a plurality of elements arranged in a checkered pattern as in the image sensor 10012a.
  • the receiver may save the preview image as imaging data.
  • FIG. 60 is a diagram illustrating an example of a signal reception method according to the ninth embodiment.
  • In the image sensors described above, the ratio of the number of imaging elements for which the long exposure time is set to the number of imaging elements for which the short exposure time is set is 1:1.
  • This ratio is a ratio between normal imaging and visible light imaging, and is hereinafter referred to as a spatial ratio.
  • the receiver may include an image sensor 10014a.
  • In the image sensor 10014a, the number of imaging elements with the short exposure time is larger than the number with the long exposure time, and the spatial ratio is 1:N (N > 1).
  • the receiver may include an image sensor 10014c.
  • In the image sensor 10014c, the number of imaging elements with the short exposure time is smaller than the number with the long exposure time, and the spatial ratio is N:1 (N > 1).
  • Instead of the image sensors 10014a to 10014c, the receiver may include any of the image sensors 10015a to 10015c, in which the exposure time is set for each of the above-described vertical lines and whose spatial ratios are 1:N, 1:1, and N:1, respectively.
  • the receiver may perform interlaced display as shown in FIG. 59 using the image sensors 10014a, 10014c, 10015a, and 10015c.
  • FIG. 61 is a diagram illustrating an example of a signal reception method according to the ninth embodiment.
  • the receiver may switch the imaging mode between the normal imaging mode and the visible light imaging mode for each frame, as shown in FIG.
  • the normal imaging mode is an imaging mode in which a long exposure time for normal imaging is set for all imaging elements of the image sensor of the receiver.
  • the visible light imaging mode is an imaging mode in which a short exposure time for visible light imaging is set for all imaging elements of the image sensor of the receiver. In this way, by switching between long and short exposure times, a preview image can be displayed by imaging with a long exposure time while receiving a visible light signal by imaging with a short exposure time.
  • The receiver may perform automatic exposure with reference only to the brightness of the images obtained with the long exposure time, ignoring the images obtained with the short exposure time. In this way, an appropriate long exposure time can be determined.
  • the receiver may switch the imaging mode between the normal imaging mode and the visible light imaging mode for each set of a plurality of frames.
  • When it takes time to switch the exposure time or for the exposure time to stabilize, switching the imaging mode for each set of frames as shown in FIG. 61 (b) makes it possible to perform both visible light imaging (reception of visible light signals) and normal imaging. Furthermore, since the number of exposure time switches decreases as the number of frames included in a set increases, power consumption and heat generation in the receiver can be suppressed.
  • The ratio between the number of frames generated continuously by imaging with the long exposure time in the normal imaging mode and the number of frames generated continuously by imaging with the short exposure time in the visible light imaging mode (the time ratio) is not necessarily 1:1. In the cases shown in FIGS. 61 (a) and (b), the time ratio is 1:1, but it need not be.
  • The receiver may make the number of frames in the visible light imaging mode larger than the number of frames in the normal imaging mode, as illustrated in FIG. 61. In this way, the reception speed of the visible light signal can be increased.
  • When the frame rate of the preview image is equal to or higher than a predetermined rate, differences in the preview image due to the frame rate are not perceived by the human eye.
  • When the imaging frame rate is sufficiently high, for example 120 fps, the receiver sets the visible light imaging mode for three consecutive frames and then the normal imaging mode for one frame. In this way, the receiver can receive a visible light signal at high speed while displaying a preview image at 30 fps, which is sufficiently higher than the above-described predetermined rate. Furthermore, since the number of switches is reduced, the effect described above with reference to FIG. 61 can also be obtained.
  • Conversely, the receiver may make the number of frames in the normal imaging mode larger than the number of frames in the visible light imaging mode. By increasing the number of frames in the normal imaging mode, that is, the frames obtained by imaging with the long exposure time, the preview image can be displayed smoothly. Since the number of times the visible light signal is received is reduced, power is also saved, and since the number of switches is reduced, the effect described above with reference to FIG. 61 can also be obtained.
  • The receiver may first switch the imaging mode for each frame, as in (a) of FIG. 61, and then, once a visible light signal has been received, increase the number of frames in the normal imaging mode as described above. In this way, the search for a new visible light signal can be continued while the preview image is displayed smoothly, and since the number of switches is reduced, the effect described above with reference to FIG. 61 can also be obtained.
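  • A minimal sketch of the per-frame mode switching of FIG. 61, expressed as a repeating schedule of normal-imaging and visible-light-imaging frames; the frame counts are parameters chosen by the receiver.

```python
# Repeating imaging-mode schedule with a configurable time ratio.
from itertools import cycle, islice

def frame_modes(normal_frames: int, visible_frames: int):
    """Yield the imaging mode for successive frames, e.g. (1, 1) for
    per-frame switching, (3, 3) for per-set switching, or (1, 3) to give
    more frames to visible light imaging."""
    pattern = ["normal"] * normal_frames + ["visible"] * visible_frames
    return cycle(pattern)

# 120 fps sensor, one normal frame followed by three visible-light frames:
# the preview still refreshes at 120 / 4 = 30 fps.
print(list(islice(frame_modes(1, 3), 8)))
```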
  • FIG. 62 is a flowchart illustrating an example of a signal reception method according to the ninth embodiment.
  • The receiver starts visible light reception, which is the process of receiving a visible light signal (step S10017a), and sets the ratio between the long and short exposure times (the long/short exposure time ratio) to a value designated by the user (step S10017b).
  • This long/short exposure time ratio is at least one of the above-described spatial ratio and time ratio.
  • The user may specify only the spatial ratio, only the time ratio, or both, or the receiver may set these ratios automatically regardless of any user specification.
  • Next, the receiver determines whether or not the reception performance is equal to or less than a predetermined value (step S10017c). If it is determined to be equal to or less than the predetermined value (Y in step S10017c), the receiver sets the ratio of the short exposure time to a higher value (step S10017d). In this way, the reception performance can be improved.
  • Here, the ratio of the short exposure time is, in the case of the spatial ratio, the ratio of the number of imaging elements set to the short exposure time to the number set to the long exposure time, and, in the case of the time ratio, the ratio of the number of frames generated continuously in the visible light imaging mode to the number of frames generated continuously in the normal imaging mode.
  • Next, the receiver receives at least a part of the visible light signal and determines whether or not a priority is set for the received part (hereinafter, the received signal) (step S10017e). When a priority is set, an identifier indicating the priority is included in the received signal.
  • When the receiver determines that a priority is set (Y in step S10017e), it sets the long/short exposure time ratio according to the priority (step S10017f). That is, the higher the priority, the higher the receiver sets the ratio of the short exposure time.
  • For example, an emergency light configured as a transmitter transmits an identifier indicating a high priority by its luminance change. In this case, the receiver can increase the reception speed by raising the ratio of the short exposure time, and can promptly display the evacuation route and the like.
  • Next, the receiver determines whether or not reception of all visible light signals has been completed (step S10017g). If not, the receiver repeats the processing from step S10017c. If reception has been completed, the receiver sets the ratio of the long exposure time to a high value and shifts to a power saving mode (step S10017h).
  • Here, the ratio of the long exposure time is, in the case of the spatial ratio, the ratio of the number of imaging elements set to the long exposure time to the number set to the short exposure time, and, in the case of the time ratio, the ratio of the number of frames generated continuously in the normal imaging mode to the number of frames generated continuously in the visible light imaging mode. As a result, the preview image can be displayed smoothly without unnecessary visible light reception.
  • In the power saving mode, the receiver determines whether another visible light signal has been found (step S10017i), and when one is found (Y in step S10017i), it resumes the reception processing described above.
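  • As an illustration only, the control loop of FIG. 62 could be sketched as follows; the sensor and decoder helper methods and all numeric thresholds and ratios are hypothetical placeholders, not values from this description.

```python
# Sketch of the FIG. 62 loop: the "ratio" here is the fraction of resources
# (imaging elements or frames) given to the short, visible-light exposure.

def visible_light_reception_loop(sensor, decoder, user_ratio=0.5,
                                 low_performance=0.2):
    sensor.set_short_exposure_ratio(user_ratio)            # step S10017b
    while not decoder.all_packets_received():              # step S10017g
        if decoder.reception_performance() <= low_performance:
            sensor.set_short_exposure_ratio(0.8)           # step S10017d
        priority = decoder.priority_of_received_part()     # step S10017e
        if priority is not None:
            # step S10017f: higher priority -> larger short-exposure share
            sensor.set_short_exposure_ratio(min(1.0, 0.5 + 0.1 * priority))
    sensor.set_short_exposure_ratio(0.1)                   # step S10017h: power saving
```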
  • FIG. 63 is a diagram illustrating an example of a signal reception method according to the ninth embodiment.
  • The receiver may set two or more exposure times for the image sensor. In that case, as shown in FIG. 63 (a), each exposure line of the image sensor is exposed continuously for the longest of the two or more set exposure times. For each exposure line, the receiver reads out the imaging data obtained by the exposure of that line each time one of the set exposure times has elapsed. Here, the imaging data is not reset at read-out until the longest exposure time has elapsed. Therefore, by recording the accumulated values of the read imaging data, the receiver can obtain imaging data for a plurality of exposure times from a single exposure of the longest exposure time. The image sensor itself may or may not record the accumulated values; when it does not, the component of the receiver that reads data from the image sensor performs the accumulation, that is, records the accumulated values of the imaging data.
  • As shown in FIG. 63 (a), the receiver first reads out the visible light imaging data, which contains the visible light signal and is generated by exposure with the short exposure time, and then reads out the normal imaging data generated by exposure with the long exposure time.
  • In this way, visible light imaging, that is, imaging for receiving a visible light signal, and normal imaging can be performed simultaneously, so normal imaging can continue while a visible light signal is being received. Furthermore, by using the data of a plurality of exposure times, a signal frequency higher than the limit given by the sampling theorem can be recognized, so a high-frequency signal or a densely modulated signal can be received.
  • When outputting the imaging data, the receiver outputs a data string that includes the imaging data as an imaging data body, as shown in FIG. 63 (b). That is, the receiver generates and outputs the data string by adding to the imaging data body additional information consisting of an imaging mode identifier indicating the imaging mode (visible light imaging or normal imaging), an imaging element identifier for specifying the imaging element or the exposure line to which it belongs, an imaging data number indicating which of the exposure times the imaging data body corresponds to, and an imaging data length indicating the size of the imaging data body.
  • The respective pieces of imaging data are not always output in the order of the exposure lines. Therefore, by adding the additional information shown in (b) of FIG. 63, it is possible to identify which exposure line each piece of imaging data comes from.
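  • A minimal sketch of the data string of FIG. 63 (b), packing the imaging data body together with its additional information; the field widths and the coding of the imaging mode identifier are illustrative assumptions, since the description lists the fields but not their sizes.

```python
# Pack / unpack one imaging-data record with its additional information.
import struct

HEADER = struct.Struct(">BHBI")   # imaging mode, element/line id, data number, data length

def pack_imaging_data(mode_id: int, element_id: int, data_number: int,
                      body: bytes) -> bytes:
    """mode_id: 0 = normal imaging, 1 = visible light imaging (assumed coding)."""
    return HEADER.pack(mode_id, element_id, data_number, len(body)) + body

def unpack_imaging_data(data: bytes):
    mode_id, element_id, data_number, length = HEADER.unpack_from(data)
    body = data[HEADER.size:HEADER.size + length]
    return mode_id, element_id, data_number, body

record = pack_imaging_data(1, 42, 0, bytes([10, 200, 10, 200]))
print(unpack_imaging_data(record))
```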
  • FIG. 64 is a flowchart showing processing of the reception program in the ninth embodiment.
  • This reception program is a program that causes a computer provided in the receiver to execute the processes shown in FIGS. 56 to 63, for example.
  • this reception program is a reception program for receiving information from a light-emitting body that changes in luminance.
  • this reception program causes the computer to execute step SA31, step SA32, and step SA33.
  • In step SA31, a first exposure time is set for some of the K imaging elements (K is an integer of 4 or more) included in the image sensor, and a second exposure time shorter than the first exposure time is set for the remaining imaging elements among the K imaging elements.
  • In step SA32, the image sensor captures the subject, which is a light emitter changing in luminance, with the set first and second exposure times, whereby a normal image is acquired from the output of the imaging elements for which the first exposure time is set, and a bright line image, containing bright lines corresponding to the exposure lines of the image sensor, is acquired from the output of the imaging elements for which the second exposure time is set.
  • step SA33 information is acquired by decoding a plurality of bright line patterns included in the acquired bright line image.
  • In this way, a normal image and a bright line image can both be obtained by a single imaging operation of the image sensor; that is, capturing a normal image and acquiring information by visible light communication can be performed simultaneously.
  • In the exposure time setting step SA31, the first exposure time may be set for some of the L imaging element rows (L is an integer of 4 or more) included in the image sensor, and the second exposure time for the remaining imaging element rows among the L rows.
  • each of the L image sensor rows is composed of a plurality of image sensors arranged in a row included in the image sensor.
  • For example, each of the L imaging element rows is an exposure line included in the image sensor.
  • each of the L image pickup device arrays includes a plurality of image pickup devices arranged along a direction perpendicular to an exposure line included in the image sensor.
  • In the exposure time setting step SA31, one of the first exposure time and the second exposure time may be set as the common exposure time for each of the odd-numbered imaging element rows among the L rows, and the other may be set as the common exposure time for each of the even-numbered rows.
  • Furthermore, when the exposure time setting step SA31, the image acquisition step SA32, and the information acquisition step SA33 are repeated, the repeated exposure time setting step SA31 may interchange the exposure time that was set for the odd-numbered imaging element rows in the previous exposure time setting step SA31 with the exposure time that was set for the even-numbered imaging element rows.
  • In this way, each time a normal image is acquired, the imaging element rows used for the acquisition are switched between the odd-numbered rows and the even-numbered rows, so the sequentially acquired normal images can be displayed by interlacing, and a new normal image that combines an image from the odd-numbered imaging element rows and an image from the even-numbered imaging element rows can be generated.
  • In the exposure time setting step SA31, the setting mode may be switchable between a normal priority mode and a visible light priority mode. When the setting mode is switched to the normal priority mode, the number of imaging elements for which the first exposure time is set may be made larger than the number of imaging elements for which the second exposure time is set; conversely, when the setting mode is switched to the visible light priority mode, the number of imaging elements for which the first exposure time is set may be made smaller than the number of imaging elements for which the second exposure time is set.
  • In this way, when the setting mode is switched to the normal priority mode, the image quality of the normal image can be improved, and when it is switched to the visible light priority mode, the efficiency of receiving information from the light emitter can be improved.
  • In the exposure time setting step SA31, the exposure time may be set for each imaging element of the image sensor so that the imaging elements for which the first exposure time is set and the imaging elements for which the second exposure time is set are distributed in a checkered pattern.
  • In this case, the imaging elements for which the first exposure time is set and the imaging elements for which the second exposure time is set are uniformly distributed, so normal images and bright line images can both be acquired without bias.
  • FIG. 65 is a block diagram of a receiving apparatus according to the ninth embodiment.
  • the receiving device A30 is the above-described receiver that executes the processing shown in FIGS. 56 to 63, for example.
  • the receiving device A30 is a receiving device that receives information from a light-emitting body that changes in luminance, and includes a multiple exposure time setting unit A31, an imaging unit A32, and a decoding unit A33.
  • The multiple exposure time setting unit A31 sets the first exposure time for some of the K imaging elements (K is an integer of 4 or more) included in the image sensor, and sets a second exposure time shorter than the first exposure time for the remaining imaging elements among the K imaging elements.
  • The imaging unit A32 causes the image sensor to capture the subject, which is a light emitter changing in luminance, with the set first and second exposure times, thereby acquiring a normal image from the output of the imaging elements for which the first exposure time is set, and a bright line image, containing bright lines corresponding to the exposure lines of the image sensor, from the output of the imaging elements for which the second exposure time is set.
  • the decoding unit A33 acquires information by decoding a plurality of bright line patterns included in the acquired bright line image.
  • FIGS. 66 and 67 are diagrams showing examples of the display of the receiver when a visible light signal is received.
  • When the receiver captures an image of the transmitter 10020d, it displays an image 10020a in which the transmitter 10020d appears. The receiver further generates and displays an image 10020b by superimposing the object 10020e on the image 10020a.
  • the object 10020e is an image indicating that the image of the transmitter 10020d is present and that a visible light signal is received from the transmitter 10020d.
  • the object 10020e may be an image that varies depending on the reception state of the visible light signal (the state of reception, the state of searching for a transmitter, the degree of progress of reception, the reception speed, or the error rate).
  • For example, depending on the reception state, the receiver changes the color of the object 10020e, the thickness of its line, the type of line (single line, double line, dotted line, or the like), or the spacing of the dotted line. In this way, the user can be made aware of the reception state.
  • the receiver generates and displays an image 10020c by superimposing an image indicating the content of the acquired data on the image 10020a as an acquired data image 10020f.
  • the acquired data is data associated with the received visible light signal or the ID indicated by the received visible light signal.
  • When displaying the acquired data image 10020f, the receiver displays it like a speech balloon extending from the transmitter 10020d or from near the transmitter 10020d.
  • the receiver may display the acquired data image 10020f so that the acquired data image 10020f gradually approaches the receiver side from the transmitter 10020d. Thereby, the user can recognize which transmitter the received data image 10020f is based on the visible light signal received from.
  • the receiver may display the acquired data image 10020f so that the acquired data image 10020f gradually emerges from the end of the receiver display. This makes it possible for the user to easily recognize that the visible light signal has been acquired at that time.
  • In this way, the receiver can perform an AR (Augmented Reality) display.
  • FIG. 68 is a diagram showing an example of display of the acquired data image 10020f.
  • the receiver moves the acquired data image 10020f in accordance with the movement of the transmitter image. This allows the user to recognize that the acquired data image 10020f corresponds to the transmitter.
  • the receiver may display the acquired data image 10020f in association with another image instead of the image of the transmitter. Thereby, AR display can be performed.
  • FIG. 69 is a diagram illustrating an example of an operation when saving or discarding acquired data.
  • When the user swipes down on the acquired data image 10020f, the receiver stores the acquired data indicated by the acquired data image 10020f.
  • At this time, the receiver places the acquired data image 10020f at the end of the row of acquired data images indicating one or more other acquired data that have already been stored. This allows the user to recognize that the acquired data indicated by the acquired data image 10020f is the data stored most recently. For example, as shown in FIG. 69 (a), the receiver places the acquired data image 10020f at the front of the plurality of acquired data images.
  • When the user swipes the acquired data image 10020f to the right, the receiver discards the acquired data indicated by the acquired data image 10020f.
  • the receiver may discard the acquired data indicated by the acquired data image 10020f when the image of the transmitter is framed out of the display by the user moving the receiver. Note that the same effect as described above can be obtained regardless of whether the swipe direction is up, down, left, or right.
  • the receiver may display the swipe direction corresponding to the save or discard. As a result, the user can recognize that the data can be saved or discarded by the operation.
  • FIG. 70 is a diagram showing a display example when browsing acquired data.
  • The receiver displays the acquired data images of the plurality of stored acquired data small and overlapping one another at the lower end of the display.
  • The receiver can then display each of the plurality of acquired data images in a larger size, as shown in (b) of FIG. 70.
  • When the user taps an acquired data image in the state shown in (b) of FIG. 70, the receiver displays the tapped acquired data image in a larger size, as shown in (c) of FIG. 70, so that more of its information is displayed.
  • When the user taps the back-side display button 10024a, the receiver displays the back side of the acquired data image, on which other data related to the acquired data is displayed.
  • The receiver disables camera shake correction (turns it off), or converts the captured image according to the correction direction and correction amount of the camera shake correction, so that the correct imaging direction is obtained and self-position estimation can be performed accurately.
  • the captured image is an image obtained by imaging by the imaging unit of the receiver.
  • Self-position estimation means that the receiver estimates its own position. Specifically, the receiver identifies the position of the transmitter based on the received visible light signal, identifies the relative positional relationship between the receiver and the transmitter based on the size, position, or shape of the transmitter in the captured image, and then estimates the position of the receiver from the position of the transmitter and this relative positional relationship, as in the sketch given after this passage.
  • When camera shake correction is disabled, however, the transmitter can move out of the frame with only a slight shake of the receiver.
  • the receiver can continuously receive the signal by enabling the camera shake correction.
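  • A minimal two-dimensional sketch of the self-position estimation described above, assuming a pinhole camera model with a known focal length and that the transmitter's absolute position and physical size are known from the visible light signal; all numbers are illustrative.

```python
# Distance from apparent size, then position from distance and bearing.
import math

def estimate_receiver_position(tx_xy, tx_diameter_m, apparent_diameter_px,
                               bearing_rad, focal_length_px):
    """bearing_rad: absolute direction from receiver toward transmitter,
    e.g. from an azimuth sensor combined with the image position."""
    distance = tx_diameter_m * focal_length_px / apparent_diameter_px
    rx_x = tx_xy[0] - distance * math.cos(bearing_rad)
    rx_y = tx_xy[1] - distance * math.sin(bearing_rad)
    return rx_x, rx_y

# Ceiling light at (10 m, 5 m), 0.4 m across, imaged 80 px wide with a
# 1000 px focal length, seen along bearing 0 from the receiver:
print(estimate_receiver_position((10.0, 5.0), 0.4, 80.0, 0.0, 1000.0))
# -> (5.0, 5.0): the receiver is about 5 m from the light.
```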
  • FIG. 71 is a diagram illustrating an example of a transmitter in the ninth embodiment.
  • the above-described transmitter includes a light emitting unit, and transmits a visible light signal by changing the luminance of the light emitting unit.
  • The receiver determines the relative positional relationship between the receiver and the transmitter, including the relative angle to the transmitter, based on the shape of the transmitter (specifically, of its light emitting unit) in the captured image.
  • When the transmitter includes a light emitting unit 10090a having a rotationally symmetric shape, the relative angle between the transmitter and the receiver cannot be determined accurately from the shape of the transmitter in the captured image as described above. It is therefore desirable that the transmitter include a light emitting unit whose shape is not rotationally symmetric.
  • In that case, the receiver can determine the relative angle. For example, the transmitter may include a light emitting unit 10090b whose shape is not completely rotationally symmetric.
  • the shape of the light emitting unit 10090b is symmetric with respect to 90 ° rotation, but is not completely rotationally symmetric.
  • In this case, the receiver obtains a rough angle with its azimuth sensor and then uses the shape of the transmitter in the captured image to uniquely determine the relative angle between the receiver and the transmitter, so that accurate self-position estimation can be performed.
  • The transmitter may include a light emitting unit 10090c shown in FIG. 71.
  • the shape of the light emitting unit 10090c is basically a rotationally symmetric shape. However, since a light guide plate or the like is provided in a part of the light emitting unit 10090c, the shape of the light emitting unit 10090c is not rotationally symmetric.
  • The transmitter may include a light emitting unit 10090d shown in FIG. 71.
  • Each of the lamps making up the light emitting unit 10090d is itself rotationally symmetric.
  • However, the overall shape of the light emitting unit 10090d formed by combining them is not rotationally symmetric, so the receiver can perform accurate self-position estimation by imaging the transmitter.
  • The transmitter may include a light emitting unit 10090e and an object 10090f shown in FIG. 71.
  • the object 10090f is an object (for example, a fire alarm or a pipe) configured so that the positional relationship with the light emitting unit 10090e does not change. Since the shape of the combination of the light emitting unit 10090e and the object 10090f is not a rotationally symmetric shape, the receiver can accurately perform self-position estimation by imaging the light emitting unit 10090e and the object 10090f.
  • The receiver can perform self-position estimation from the position and shape of the transmitter in the captured image, and by doing so while imaging it can also estimate its own direction and distance of movement. The receiver can perform more accurate self-position estimation by triangulation using a plurality of frames or images, and by integrating estimation results obtained from a plurality of images and from a plurality of different combinations of images. In doing so, the receiver can further improve the accuracy of self-position estimation by weighting the results estimated from recent captured images more heavily.
  • FIG. 72 is a diagram illustrating an example of a reception method according to the ninth embodiment.
  • the horizontal axis of the graph shown in FIG. 72 indicates time, and the vertical axis indicates the position of each exposure line in the image sensor. Furthermore, a solid line arrow in the graph indicates a time (exposure timing) at which exposure of each exposure line in the image sensor is started.
  • The receiver normally reads the horizontal optical black signal of the image sensor as shown in FIG. 72 (a), but it may skip the horizontal optical black signal as shown in FIG. 72 (b). In this way, a continuous visible light signal can be received.
  • Horizontal optical black is optical black in the horizontal direction on the exposure line.
  • the vertical optical black is a portion of the optical black other than the horizontal optical black.
  • the black level can be adjusted using the optical black at the start of the visible light imaging similarly to the normal imaging.
  • the receiver can perform continuous reception and black level adjustment by performing black level adjustment using only vertical optical black.
  • the receiver may adjust the black level using horizontal optical black every predetermined time. In the case where the normal imaging and the visible light imaging are alternately performed, the receiver skips the horizontal optical black signal when continuously performing the visible light imaging, and reads the horizontal optical black signal otherwise. Then, the receiver can adjust the black level while continuously receiving the visible light signal by adjusting the black level based on the read signal.
  • the receiver may adjust the black level by setting the darkest part of the visible light captured image as black.
  • the transmitter may add a transmitter identifier indicating the type of transmitter to the visible light signal for transmission.
  • the receiver can perform a receiving operation according to the type of the transmitter.
  • the transmitter may transmit, as a visible light signal, a content ID indicating which content is currently displayed, in addition to the transmitter ID used for individual identification of the transmitter.
  • the receiver can display information matched to the content currently displayed by the transmitter by handling these IDs separately based on the transmitter identifier. For example, when the transmitter identifier indicates digital signage or an emergency light, the receiver can reduce reception errors by imaging with increased sensitivity.
  • FIG. 73 is a flowchart showing an example of the reception method in the present embodiment.
  • the receiver receives a packet (step S10101) and performs error correction (step S10102). Then, the receiver determines whether or not a packet having the same address as the received packet has already been received (step S10103). If such a packet has been received (Y in step S10103), the receiver compares the data, that is, determines whether the data parts are equal (step S10104). If they are not equal (N in step S10104), the receiver further determines whether the difference between the data parts is equal to or greater than a predetermined number, specifically, whether the number of differing bits or the number of slots having different luminance states is equal to or greater than a predetermined number (step S10105).
  • If the difference is equal to or greater than the predetermined number (Y in step S10105), the receiver discards the packets that have already been received (step S10106). Thereby, when packets start to be received from another transmitter, interference with packets received from the previous transmitter can be avoided.
  • Otherwise, the receiver takes the data of the data part shared by the largest number of packets as the data for that address (step S10107). Alternatively, the receiver takes the most common bit value as the value of each bit at that address, or takes the most common luminance state as the luminance state of each slot at that address and demodulates the data of that address from it.
  • In other words, the receiver first acquires a first packet including a data part and an address part from a plurality of bright line patterns.
  • the receiver then determines whether at least one second packet, that is, a packet including the same address part as the address part of the first packet, exists among the packets already acquired before the first packet.
  • If at least one second packet exists, the receiver determines whether the data parts of the at least one second packet and of the first packet are all equal.
  • If they are not all equal, the receiver determines, for each of the at least one second packet, whether the number of parts of the data part of that second packet that differ from the corresponding parts of the data part of the first packet is equal to or greater than a predetermined number.
  • If it is, the receiver discards the at least one second packet.
  • If it is not, the receiver specifies, among the first packet and the at least one second packet, the plurality of packets whose data part is shared by the largest number of packets.
  • the receiver then acquires at least a part of the visible light identifier (ID) by decoding the data part included in each of those specified packets as the data part corresponding to the address part included in the first packet.
  • In this way, even when a plurality of packets having the same address part are received and their data parts differ, an appropriate data part can be decoded, and at least a part of the visible light identifier can be acquired correctly. That is, a plurality of packets having the same address part transmitted from the same transmitter basically have the same data part. However, when the transmitter that is the transmission source of the packets is switched, the receiver may receive a plurality of packets that have the same address part but different data parts. In such a case, in the present embodiment, as in step S10106 in FIG. 73, the already received packets (the second packets) are discarded and the data part of the latest packet (the first packet) is used. A minimal sketch of this procedure is given below.
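  • The following Python sketch illustrates the reception flow of FIG. 73 under assumed data structures (a packet is modeled as an (address, data-bits) pair); the bit-difference threshold and all names are illustrative, not values from the specification.

```python
# A minimal sketch of the FIG. 73 flow under assumed data structures; the
# diff_threshold value is illustrative only.
from collections import Counter
from typing import Dict, List, Tuple

Packet = Tuple[int, Tuple[int, ...]]               # (address, data bits)
received: Dict[int, List[Tuple[int, ...]]] = {}    # address -> data parts seen

def on_packet(packet: Packet, diff_threshold: int = 2) -> Tuple[int, ...]:
    address, data = packet
    old = received.setdefault(address, [])
    # If every already-received data part differs from the new one in at
    # least diff_threshold bits, assume the transmitter changed and discard
    # the old packets (step S10106).
    if old and all(sum(a != b for a, b in zip(d, data)) >= diff_threshold
                   for d in old):
        old.clear()
    old.append(data)
    # Otherwise use the data part shared by the most packets (step S10107).
    return Counter(old).most_common(1)[0][0]

print(on_packet((3, (1, 0, 1, 1))))
print(on_packet((3, (1, 0, 1, 1))))
print(on_packet((3, (1, 0, 0, 1))))   # one differing bit -> majority vote
```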
  • FIG. 74 is a flowchart showing an example of the reception method in this embodiment.
  • the receiver receives a packet (step S10111) and performs error correction of the address part (step S10112). At this time, the receiver does not demodulate the data part and holds the pixel values obtained by imaging as they are. Then, the receiver determines whether or not a predetermined number or more of packets having the same address exist among the packets already received (step S10113). If such packets exist (Y in step S10113), the receiver performs demodulation by combining the pixel values of the portions corresponding to the data parts of the plurality of packets having the same address (step S10114).
  • In other words, a first packet including a data part and an address part is acquired from a plurality of bright line patterns. Then, it is determined whether or not a predetermined number or more of second packets, each including the same address part as the address part of the first packet, exist among the packets already acquired before the first packet. If it is determined that the predetermined number or more of second packets exist, the pixel values of the partial regions of the bright line images corresponding to the data parts of those second packets are combined with the pixel values of the partial region of the bright line image corresponding to the data part of the first packet; that is, the pixel values are added. A composite pixel value is calculated by this addition, and at least a part of the visible light identifier (ID) is obtained by decoding the data part consisting of the composite pixel values.
  • the portion to be demodulated in this way contains a larger amount of data (a larger number of samples) than the data part of a single packet, so the data part can be demodulated more accurately. Further, by increasing the number of samples, a signal modulated at a higher modulation frequency can be demodulated.
  • the data part and its error correction code part are modulated at a higher frequency than the header part, the address part, and the error correction code part of the address part. A minimal sketch of the pixel-value combining is given below.
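  • The sketch below models this combining in Python, assuming each packet's data portion arrives as a row of raw pixel values; the simple threshold demodulator and the required packet count are illustrative assumptions.

```python
# A sketch assuming each packet's data portion arrives as a row of raw
# pixel values; the threshold demodulator and packet count are assumptions.
from typing import Dict, List, Optional

pending: Dict[int, List[List[float]]] = {}   # address -> rows of pixel values

def on_packet_pixels(address: int, pixels: List[float],
                     min_packets: int = 3) -> Optional[List[int]]:
    rows = pending.setdefault(address, [])
    rows.append(pixels)
    if len(rows) < min_packets:
        return None                      # keep accumulating samples
    # Add the pixel values sample by sample (more samples -> less noise).
    combined = [sum(col) for col in zip(*rows)]
    mid = (max(combined) + min(combined)) / 2
    return [1 if v > mid else 0 for v in combined]   # crude demodulation

on_packet_pixels(5, [0.2, 0.9, 0.1, 0.8])
on_packet_pixels(5, [0.3, 0.8, 0.2, 0.9])
print(on_packet_pixels(5, [0.1, 0.7, 0.2, 0.8]))     # -> [0, 1, 0, 1]
```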
  • FIG. 75 is a flowchart showing an example of a reception method in the present embodiment.
  • the receiver receives a packet (step S10121) and determines whether or not a packet in which all the bits of the data part are 0 (hereinafter referred to as a 0-termination packet) has been received (step S10122).
  • If a 0-termination packet has been received (Y in step S10122), the receiver determines whether all packets having addresses below the address of the 0-termination packet are available, that is, whether they have all been received (step S10123).
  • the address is set to a value that increases in accordance with the order of transmission of each packet generated by dividing the transmitted data.
  • If they have all been received (Y in step S10123), the receiver determines that the address of the 0-termination packet is the last address of the packets transmitted from the transmitter. The receiver then restores the data by concatenating the data of the packets of each address up to the 0-termination packet (step S10124), and performs an error check on the restored data (step S10125). This makes it possible to transmit and receive data even when the number of pieces into which the transmission data is divided is not fixed, that is, when the address is not of fixed length but of variable length, so that more IDs can be transmitted and received efficiently.
  • In other words, the receiver acquires a plurality of packets, each including a data part and an address part, from a plurality of bright line patterns. Then, the receiver determines whether or not a 0-termination packet, that is, a packet in which all the bits included in the data part indicate 0, exists among the acquired packets. If a 0-termination packet exists, the receiver determines whether or not all of N related packets (N is an integer equal to or greater than 1), each including an address part associated with the address part of the 0-termination packet, exist among the acquired packets.
  • If all of the N related packets exist, the receiver acquires the visible light identifier (ID) by arranging and decoding the data parts of the N related packets.
  • Here, an address part associated with the address part of the 0-termination packet is an address part indicating an address that is smaller than the address indicated in the address part of the 0-termination packet and is 0 or more. A minimal sketch of this reassembly follows.
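  • The sketch below implements the reassembly rule in Python, assuming packets arrive as (address, data-bits) pairs; the data layout and names are illustrative only.

```python
# A sketch of the 0-termination reassembly; the bit-list data layout and
# the function names are assumptions for the example.
from typing import Dict, List, Optional

buffer: Dict[int, List[int]] = {}       # address -> data bits of that packet

def handle_packet(address: int, data: List[int]) -> Optional[List[int]]:
    buffer[address] = data
    if any(data):                       # not a 0-termination packet yet
        return None
    needed = range(address)             # all addresses below the terminator
    if not all(a in buffer for a in needed):
        return None                     # a related packet is still missing
    restored: List[int] = []
    for a in needed:                    # concatenate in address order
        restored += buffer[a]
    return restored                     # an error check would follow here

handle_packet(0, [1, 0, 1, 1])
handle_packet(1, [0, 1, 1, 0])
print(handle_packet(2, [0, 0, 0, 0]))   # -> [1, 0, 1, 1, 0, 1, 1, 0]
```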
  • FIGS. 76 and 77 are diagrams for explaining a reception method in which the receiver in the present embodiment uses an exposure time longer than the period of the modulation frequency (the modulation period).
  • When the exposure time is not longer than the modulation period, the visible light signal may not be received correctly. Here, the modulation period is the duration of one slot described above. In such a case, there are few exposure lines (the exposure lines shown in black in FIG. 76) that reflect the luminance state of a given slot, so it is difficult to estimate the luminance of the transmitter when the pixel values of those exposure lines happen to contain a lot of noise.
  • When the exposure time is longer than the modulation period, the visible light signal can be received correctly. In such a case, many exposure lines reflect the luminance of a given slot, so the luminance of the transmitter can be estimated from the pixel values of many exposure lines, which makes the reception resistant to noise.
  • The luminance change received by the receiver is the change in the pixel value of each exposure line.
  • When the exposure time is sufficiently short, the luminance change received by the receiver can sufficiently follow the luminance change used for transmission.
  • When the exposure time is much longer than the modulation period, the luminance change received by the receiver cannot follow the luminance change used for transmission at all. That is, the longer the exposure time, the higher the noise resistance, because the luminance can be estimated from many exposure lines; however, the longer the exposure time, the smaller the identification margin becomes. Balancing these, the noise resistance can be maximized by setting the exposure time to about 2 to 5 times the modulation period, as in the small numeric sketch below.
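  • As illustrative arithmetic only, the sketch below computes the 2x-5x exposure-time window suggested above for an assumed modulation frequency; the 9.6 kHz figure is an example, not a value from the specification.

```python
# Illustrative arithmetic only: an exposure time of roughly 2 to 5
# modulation periods, for an assumed 9.6 kHz modulation frequency.
def exposure_time_range(modulation_freq_hz: float):
    period = 1.0 / modulation_freq_hz            # duration of one slot
    return 2 * period, 5 * period                # suggested exposure window

lo, hi = exposure_time_range(9600.0)
print(f"exposure time between {lo * 1e6:.1f} us and {hi * 1e6:.1f} us")
```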
  • FIG. 78 is a diagram showing an efficient number of divisions with respect to the size of transmission data.
  • the transmitter transmits data (transmission data) by a change in luminance.
  • When the transmission data is placed in a single packet, the data size of the packet is large; when the transmission data is divided into a plurality of pieces of partial data and each packet contains one piece of partial data, the data size of each packet is small.
  • the receiver receives a packet by imaging, and the larger the data size of a packet, the more difficult it is for the receiver to receive the packet in a single imaging operation, so imaging must be repeated.
  • As shown in FIGS. 78(a) and 78(b), it is desirable for the transmitter to increase the number of divisions of the transmission data as the data size of the transmission data increases. However, if the number of divisions is too large, the transmission data cannot be restored unless all of the pieces of partial data are received.
  • As shown in FIG. 78, when the data size of the address (the address size) is variable and the data size of the transmission data is 2-16 bits, 16-24 bits, 24-64 bits, 66-78 bits, 78-128 bits, or 128 bits or more, the transmission data can be transmitted efficiently as a visible light signal by dividing it into 1-2, 2-4, 4, 4-6, 6-8, or 7 or more pieces, respectively.
  • Further, as shown in FIG. 78, when the data size of the address is fixed at 4 bits and the data size of the transmission data is 2-8 bits, 8-16 bits, 16-30 bits, 30-64 bits, 66-80 bits, 80-96 bits, 96-132 bits, or 132 bits or more, the transmission data can likewise be transmitted efficiently as a visible light signal by dividing it into 1-2, 2-3, 2-4, 4-5, … pieces, respectively.
  • the transmitter changes the luminance sequentially based on each packet containing one piece of the partial data; for example, the transmitter changes the luminance based on the packets in order of their addresses. Further, the transmitter may perform the luminance change based on the plurality of pieces of partial data again in an order different from the address order. Thereby, each piece of partial data can be received reliably by the receiver. A sketch of the division guideline above follows.
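  • The sketch below encodes the division guideline of FIG. 78 for the variable-address case as a lookup table in Python; the handling of the range boundaries is an assumption made for the example.

```python
# A sketch of the variable-address guideline of FIG. 78; the handling of
# the range boundaries is an assumption made for the example.
def division_count_range(data_bits: int):
    table = [
        (16,  (1, 2)),   # up to 16 bits   -> 1-2 pieces
        (24,  (2, 4)),   # 16-24 bits      -> 2-4 pieces
        (64,  (4, 4)),   # 24-64 bits      -> 4 pieces
        (78,  (4, 6)),   # up to 78 bits   -> 4-6 pieces
        (128, (6, 8)),   # 78-128 bits     -> 6-8 pieces
    ]
    for upper, pieces in table:
        if data_bits <= upper:
            return pieces
    return (7, None)     # 128 bits or more -> 7 or more pieces

print(division_count_range(32))    # -> (4, 4)
print(division_count_range(100))   # -> (6, 8)
```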
  • FIG. 79A is a diagram showing an example of a setting method in the present embodiment.
  • First, the receiver acquires, from a server near the receiver, a notification operation identifier for identifying a notification operation and the priority of that notification operation identifier (specifically, an identifier indicating the priority) (step S10131).
  • The notification operation is an operation of the receiver for notifying the user of the receiver that the packets have been received when each packet containing one piece of the partial data is transmitted by a change in luminance and received by the receiver. For example, the operation is sounding a tone, vibrating, or displaying something on the screen.
  • the receiver receives a packetized visible light signal, that is, each packet including each of a plurality of partial data (step S10132).
  • the receiver acquires the notification operation identifier and the priority of the notification operation identifier (specifically, an identifier indicating the priority) included in the visible light signal (step S10133).
  • Next, the receiver reads out its current notification operation setting, that is, the notification operation identifier preset in the receiver and the priority of that notification operation identifier (specifically, an identifier indicating the priority) (step S10134).
  • the notification operation identifier set in advance in the receiver is set by a user operation, for example.
  • the receiver selects an identifier having the highest priority among the preset notification operation identifiers and the notification operation identifiers acquired in steps S10131 and S10133 (step S10135).
  • the receiver then sets the selected notification operation identifier in itself, performs the operation indicated by the selected notification operation identifier, and thereby notifies the user of the reception of the visible light signal (step S10136).
  • the receiver may select a notification operation identifier having a higher priority from the two notification operation identifiers without performing any one of steps S10131 and S10133.
  • the priority of notification operation identifiers transmitted from servers installed in theaters or museums, or of notification operation identifiers included in visible light signals transmitted within those facilities, may be set high. Thereby, a reception notification sound can be prevented from being emitted in such a facility regardless of the user's setting. In other facilities, by setting the priority of the notification operation identifier low, the receiver can notify the user of reception by the operation corresponding to the user's setting. A sketch of this priority-based selection is given below.
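  • A minimal Python sketch of the priority-based selection of FIG. 79A; the (identifier, priority) values are illustrative assumptions.

```python
# A sketch of choosing, among the preset identifier and those obtained from
# the nearby server and from the visible light signal, the one with the
# highest priority; the concrete values below are assumptions.
from typing import List, Tuple

Notify = Tuple[str, int]   # (notification operation identifier, priority)

def select_notification(candidates: List[Notify]) -> str:
    # Higher numeric priority wins; ties keep the earlier entry.
    return max(candidates, key=lambda c: c[1])[0]

preset      = ("vibrate", 1)   # user setting on the receiver
from_server = ("silent", 5)    # e.g. inside a theater or museum
from_signal = ("sound", 2)     # carried by the visible light signal
print(select_notification([preset, from_server, from_signal]))   # -> silent
```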
  • FIG. 79B is a diagram showing another example of the setting method in the present embodiment.
  • the receiver acquires a notification operation identifier for identifying the notification operation and a priority of the notification operation identifier (specifically, an identifier indicating the priority) from a server near the receiver. (Step S10141).
  • the receiver receives a packetized visible light signal, that is, each packet including each of a plurality of partial data (step S10142).
  • the receiver acquires the notification operation identifier and the priority of the notification operation identifier (specifically, an identifier indicating the priority) included in the visible light signal (step S10143).
  • Next, the receiver reads out its current notification operation setting, that is, the notification operation identifier preset in the receiver and the priority of that notification operation identifier (specifically, an identifier indicating the priority) (step S10144).
  • Then, the receiver determines whether an operation notification identifier indicating an operation that prohibits the emission of a notification sound is included among the preset notification operation identifier and the notification operation identifiers acquired in steps S10141 and S10143 (step S10145). If it is determined that such an identifier is included (Y in step S10145), the receiver sounds a notification sound for notifying completion of reception (step S10146); if it is determined that it is not included (N in step S10145), the receiver notifies the user of the completion of reception by, for example, vibration (step S10147).
  • Note that the receiver may skip either step S10141 or step S10143 and determine whether an operation notification identifier indicating an operation that prohibits the emission of a notification sound is included among the two remaining notification operation identifiers.
  • the receiver may perform self-position estimation based on an image obtained by imaging, and notify the user of reception by an operation associated with the estimated position or a facility at the position.
  • FIG. 80 is a flowchart showing processing of the information processing program in the tenth embodiment.
  • This information processing program is a program for changing the luminance of the light emitter of the transmitter described above according to the number of divisions shown in FIG.
  • this information processing program is an information processing program for causing a computer to process information to be transmitted in order to transmit the information to be transmitted by a change in luminance.
  • the information processing program causes the computer to execute an encoding step SA41 of generating an encoded signal by encoding the information to be transmitted, a division step SA42 of dividing the encoded signal into four partial signals when the number of bits of the generated encoded signal is within a range of 24 to 64 bits, and an output step SA43 of sequentially outputting the four partial signals. These partial signals are output as packets.
  • the information processing program may cause the computer to specify the number of bits of the encoded signal and determine the number of partial signals based on the specified number of bits. In this case, the information processing program causes the computer to generate the determined number of partial signals by dividing the encoded signal.
  • In other words, when the number of bits of the encoded signal is in the range of 24 to 64 bits, the encoded signal is divided into four partial signals and output.
  • the four partial signals are transmitted as visible light signals and received by the receiver.
  • the larger the number of bits of an output signal, the more difficult it is for the receiver to receive the signal properly by imaging, and the reception efficiency decreases; it is therefore desirable to divide the signal into signals having a small number of bits, that is, into small signals.
  • On the other hand, if the number of divisions is too large, the receiver cannot recover the original signal unless it receives every small signal individually, which again lowers the reception efficiency.
  • Therefore, when the number of bits of the encoded signal is in the range of 24 to 64 bits, the information to be transmitted is indicated by dividing the encoded signal into four partial signals and outputting them sequentially, so the encoded signal can be transmitted as a visible light signal with the best reception efficiency. As a result, communication between various devices can be enabled.
  • In the output step, the four partial signals may be output according to a first order, and then the four partial signals may be output again according to a second order different from the first order.
  • In this way, the four partial signals are output repeatedly in different orders, so when each output signal is transmitted to the receiver as a visible light signal, the reception efficiency of the four partial signals can be further increased. That is, if the four partial signals were output repeatedly in the same order, a particular partial signal might never be received by the receiver; by changing the order, such a situation can be suppressed. A minimal sketch of such permuted repetition follows.
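  • The sketch below shows one possible way to repeat the output in two different orders; the concrete second order (a simple rotation) is an assumption, since any order different from the first would satisfy the description.

```python
# A sketch of outputting the four partial signals twice, in two different
# orders; the rotation used for the second order is illustrative only.
from typing import List

def output_twice(partials: List[bytes]) -> List[bytes]:
    first_order = list(partials)                 # e.g. addresses 0, 1, 2, 3
    second_order = partials[1:] + partials[:1]   # e.g. addresses 1, 2, 3, 0
    return first_order + second_order            # emitted sequentially

packets = [b"p0", b"p1", b"p2", b"p3"]
print(output_twice(packets))
```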
  • a notification operation identifier may be further attached to the four partial signals.
  • the notification operation identifier is used to identify the operation of the receiver notifying the user of the receiver that the four partial signals are received when the four partial signals are transmitted by the luminance change and received by the receiver. Identifier.
  • the receiver when the notification operation identifier is transmitted as a visible light signal and received by the receiver, the receiver receives the four partial signals to the user according to the operation identified by the notification operation identifier. You can be notified. That is, the notification operation by the receiver can be set on the side that transmits the information to be transmitted.
  • a priority identifier for identifying the priority of the notification operation identifier may be further output in association with the four partial signals.
  • the receiver can thereby handle the notification operation identifier according to the priority identified by the priority identifier. That is, when the receiver acquires another notification operation identifier, the receiver can select, based on the priority, one of the notification operation identified by the notification operation identifier transmitted as the visible light signal and the notification operation identified by the other notification operation identifier.
  • An information processing program is an information processing program that causes a computer to process information to be transmitted in order to transmit that information by a change in luminance, and causes the computer to execute an encoding step of generating an encoded signal by encoding the information to be transmitted, a division step of dividing the encoded signal into four partial signals when the number of bits of the generated encoded signal is in a range of 24 to 64 bits, and an output step of sequentially outputting the four partial signals.
  • With this program, when the number of bits of the encoded signal is in the range of 24 to 64 bits, the encoded signal is divided into four partial signals and output, and the four partial signals are transmitted as visible light signals and received by the receiver.
  • As noted above, the larger the number of bits of an output signal, the more difficult it is for the receiver to receive the signal properly by imaging, and the reception efficiency decreases; it is therefore desirable to divide the signal into small signals. On the other hand, if the number of divisions is too large, the receiver cannot recover the original signal unless it receives every small signal individually, which again lowers the reception efficiency.
  • Therefore, when the number of bits of the encoded signal is in the range of 24 to 64 bits, the information to be transmitted is indicated by dividing the encoded signal into four partial signals and outputting them sequentially, so the encoded signal can be transmitted as a visible light signal with the best reception efficiency. As a result, communication between various devices can be enabled.
  • the four partial signals may be output according to a first order, and the four partial signals may be output again according to a second order different from the first order.
  • the four partial signals are repeatedly output in a different order. Therefore, when each output signal is transmitted to the receiver as a visible light signal, the reception efficiency of the four partial signals is increased. It can be further increased. That is, even if the four partial signals are repeatedly output in the same order, the same partial signal may not be received by the receiver. However, by changing the order, the occurrence of such a case can be suppressed.
  • In the output step, the four partial signals may further be output with a notification operation identifier attached, the notification operation identifier being an identifier for identifying the operation by which the receiver notifies the user of the receiver that the four partial signals have been received when the four partial signals are transmitted by a change in luminance and received by the receiver.
  • the receiver when the notification operation identifier is transmitted as a visible light signal and received by the receiver, the receiver receives the four partial signals to the user according to the operation identified by the notification operation identifier. You can be notified. That is, the notification operation by the receiver can be set on the side that transmits the information to be transmitted.
  • a priority identifier for identifying a priority of the notification operation identifier may be output along with the four partial signals.
  • the receiver can thereby handle the notification operation identifier according to the priority identified by the priority identifier. That is, when the receiver acquires another notification operation identifier, the receiver can select, based on the priority, one of the notification operation identified by the notification operation identifier transmitted as the visible light signal and the notification operation identified by the other notification operation identifier.
  • FIG. 81 is a diagram for explaining an application example of the transmission / reception system in the present embodiment.
  • This transmission / reception system includes a transmitter 10131b configured as an electronic device such as a washing machine, a receiver 10131a configured as a smartphone, and a communication device 10131c configured as an access point or a router.
  • FIG. 82 is a flowchart showing the processing operation of the transmission / reception system in the present embodiment.
  • the transmitter 10131b transmits information (such as an SSID, a password, an IP address, a MAC address, or an encryption key) for connecting to the transmitter 10131b via Wi-Fi, Bluetooth (registered trademark), Ethernet (registered trademark), or the like (step S10166), and waits for a connection.
  • the transmitter 10131b may transmit these pieces of information directly or indirectly.
  • the transmitter 10131b transmits an ID associated with the information. For example, the receiver 10131a that has received the ID downloads information associated with the ID from a server or the like.
  • the receiver 10131a receives the information (step S10151), connects to the transmitter 10131b, and transmits to the transmitter 10131b the information (an SSID, a password, an IP address, a MAC address, an encryption key, or the like) for connecting to the communication device 10131c configured as an access point or a router (step S10152).
  • the receiver 10131a registers information (MAC address, IP address, encryption key, etc.) for the transmitter 10131b to connect to the communication device 10131c in the communication device 10131c, and makes the communication device 10131c wait for connection. Further, the receiver 10131a notifies the transmitter 10131b that preparation for connection from the transmitter 10131b to the communication device 10131c is completed (step S10153).
  • the transmitter 10131b disconnects from the receiver 10131a (step S10168) and connects to the communication device 10131c (step S10169). If the connection is successful (Y in step S10170), the transmitter 10131b notifies the receiver 10131a of the connection success via the communication device 10131c, and notifies the user of the connection success with a screen display, LED status, voice, or the like. (Step S10171). If the connection fails (N in step S10170), the transmitter 10131b notifies the receiver 10131a of the connection failure through visible light communication, and notifies the user in the same way as when it is successful (step S10172). The connection success may be notified by visible light communication.
  • the receiver 10131a connects to the communication device 10131c (step S10154), and if there is no notification of connection success or failure (N in step S10155 and N in step S10156), it checks whether the transmitter 10131b can be accessed via the communication device 10131c (step S10157). If it cannot (N in step S10157), the receiver 10131a determines whether the connection to the transmitter 10131b using the information received from the transmitter 10131b has already been attempted a predetermined number of times or more (step S10158). If the predetermined number of attempts has not been reached (N in step S10158), the receiver 10131a repeats the process from step S10152.
  • If the predetermined number of attempts has been reached (Y in step S10158), the receiver 10131a notifies the user of the processing failure (step S10159). If the receiver 10131a determines in step S10156 that a notification of connection success has been received (Y in step S10156), it notifies the user of the processing success (step S10160). That is, the receiver 10131a notifies the user, by a screen display, voice, or the like, whether or not the transmitter 10131b was able to connect to the communication device 10131c. Accordingly, the transmitter 10131b can be connected to the communication device 10131c without requiring complicated input by the user. A simplified sketch of this receiver-side flow follows.
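  • The following is a much-simplified, hypothetical Python sketch of the receiver-side retry flow of FIG. 82; every helper function is a placeholder standing in for the numbered steps, not an API from the specification.

```python
# A much-simplified sketch; every helper is a hypothetical placeholder for
# the numbered steps of FIG. 82, not an API from the specification.
import random

def receive_vlc_info():          return {"ssid": "ap", "key": "secret"}   # S10151
def send_ap_credentials(info):   pass                                     # S10152
def register_with_ap(info):      pass                                     # S10153
def transmitter_reachable():     return random.random() < 0.5             # S10157

def pair_transmitter(max_attempts: int = 3) -> bool:
    info = receive_vlc_info()
    for _ in range(max_attempts):            # retry limit of step S10158
        send_ap_credentials(info)
        register_with_ap(info)
        if transmitter_reachable():
            print("processing success")      # step S10160
            return True
    print("processing failure")              # step S10159
    return False

pair_transmitter()
```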
  • FIG. 83 is a diagram for explaining an application example of the transmission / reception system in the present embodiment.
  • This transmission / reception system includes an air conditioner 10133b, a transmitter 10133c configured as an electronic device such as a wireless adapter connected to the air conditioner 10133b, a receiver 10133a configured as, for example, a smartphone, and a communication device configured as an access point or a router. 10133d and another electronic device 10133e configured as, for example, a wireless adapter, a wireless access point, or a router.
  • FIG. 84 is a flowchart showing the processing operation of the transmission / reception system in the present embodiment.
  • In the following, the air conditioner 10133b or the transmitter 10133c is referred to as electronic device A, and the electronic device 10133e is referred to as electronic device B.
  • When the start button is pressed (step S10188), the electronic device A transmits information (an individual ID, a password, an IP address, a MAC address, an encryption key, and the like) for connecting to itself (step S10189) and waits for a connection (step S10190). The electronic device A may transmit these pieces of information directly or indirectly, as described above.
  • the receiver 10133a receives the information from the electronic device A (step S10181) and transmits the information to the electronic device B (step S10182).
  • electronic device B receives the information (step S10196), it connects to electronic device A according to the received information (step S10197). Then, the electronic device B determines whether or not the connection with the electronic device A is established (step S10198), and notifies the receiver 10133a of the success or failure (step S10199 or step S10200).
  • If the electronic device A is connected to the electronic device B within a predetermined time (Y in step S10191), the electronic device A notifies the receiver 10133a of the connection success via the electronic device B (step S10192); if it is not connected (N in step S10191), it notifies the receiver 10133a of the connection failure by visible light communication (step S10193). In addition, the electronic device A notifies the user of the success or failure of the connection through a screen display, a light emission state, sound, or the like. Accordingly, the electronic device A (transmitter 10133c) can be connected to the electronic device B (electronic device 10133e) without requiring complicated input by the user. Note that the air conditioner 10133b and the transmitter 10133c illustrated in FIG. 83 may be configured integrally, and similarly, the communication device 10133d and the electronic device 10133e may be configured integrally.
  • FIG. 85 is a diagram for explaining an application example of the transmission / reception system according to the present embodiment.
  • This transmission / reception system includes a receiver 10135a configured as, for example, a digital still camera or a digital video camera, and a transmitter 10135b configured as, for example, illumination.
  • FIG. 86 is a flowchart showing the processing operation of the transmission / reception system in the present embodiment.
  • the receiver 10135a sends an imaging information transmission command to the transmitter 10135b (step S10211).
  • When the transmitter 10135b receives the imaging information transmission command, or when the imaging information transmission button is pressed, the imaging information transmission switch is turned on, or the power is turned on (Y in step S10221), the transmitter 10135b transmits imaging information (step S10222).
  • the imaging information transmission command is a command for transmitting imaging information, and the imaging information indicates, for example, the color temperature of illumination, spectral distribution, illuminance, or light distribution.
  • the transmitter 10135b may transmit the imaging information directly or indirectly as described above. When transmitting indirectly, the transmitter 10135b transmits an ID associated with the imaging information.
  • the receiver 10135a that has received the ID downloads, for example, imaging information associated with the ID from a server or the like.
  • the transmitter 10135b may also transmit a method for sending a transmission stop command to itself (the frequency of the radio waves, infrared rays, or sound waves used to transmit the transmission stop command, or an SSID, a password, or an IP address for connecting to the transmitter itself).
  • When the receiver 10135a receives the imaging information (step S10212), it transmits a transmission stop command to the transmitter 10135b (step S10213).
  • When the transmitter 10135b receives the transmission stop command from the receiver 10135a (step S10223), it stops the transmission of imaging information and emits light uniformly (step S10224).
  • the receiver 10135a sets the imaging parameter according to the imaging information received in step S10212 (step S10214), or notifies the user of the imaging information.
  • the imaging parameter is, for example, white balance, exposure time, focal length, sensitivity, or scene mode. Thereby, it is possible to take an image with an optimum setting according to the illumination.
  • the receiver 10135a captures an image after the transmission of imaging information from the transmitter 10135b is stopped (Y in step S10215) (step S10216). Thereby, it is possible to perform imaging without changing the brightness of the subject due to signal transmission.
  • Note that the receiver 10135a may transmit, to the transmitter 10135b, a transmission start command prompting it to resume the transmission of imaging information (step S10217). A condensed sketch of this receiver-side flow follows.
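  • The following is a condensed, hypothetical Python sketch of the receiver-side flow of FIG. 86; all helper functions are placeholders for the numbered steps.

```python
# A condensed sketch; all helpers are hypothetical placeholders for the
# numbered steps of FIG. 86.
def send_command(cmd: str):            print("->", cmd)
def receive_imaging_info() -> dict:    return {"color_temp_k": 3000}
def set_imaging_params(info: dict):    print("configure white balance for", info)
def capture():                         print("captured")

def shoot_under_vlc_illumination():
    send_command("transmit imaging information")   # step S10211
    info = receive_imaging_info()                   # step S10212
    send_command("stop transmission")               # step S10213
    set_imaging_params(info)                        # step S10214
    capture()                                       # step S10216 (light now uniform)
    send_command("start transmission")              # step S10217 (optional)

shoot_under_vlc_illumination()
```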
  • FIG. 87 is a diagram for describing an application example of the transmitter in this embodiment.
  • the transmitter 10137b configured as a charger includes a light emitting unit, and transmits a visible light signal indicating a charged state of the battery from the light emitting unit.
  • the state of charge of the battery can be notified without an expensive display device.
  • However, when a small LED is used as the light emitting unit, the visible light signal cannot be received unless the LED is imaged from nearby.
  • In the transmitter 10137c, which has a protrusion near the LED, it is difficult to image the LED close up because of the protrusion. Therefore, the visible light signal from the transmitter 10137b, which has no protrusion near the LED, can be received more easily than the visible light signal from the transmitter 10137c.
  • FIG. 88 is a diagram for explaining an example of the operation of the transmitter in this embodiment.
  • When an error occurs, the transmitter transmits a signal indicating that the error has occurred, or a signal corresponding to the error code, and can thereby convey the details of the error.
  • the receiver can correct the error or appropriately report the error content to the service center by indicating an appropriate response according to the error content.
  • When the transmitter is in demo mode, it transmits a demo code.
  • When a demonstration of a transmitter that is a product is performed at a storefront, a store visitor can receive the demo code and acquire the product description associated with it.
  • Whether or not the demo mode is in effect is determined from, for example, whether the transmitter's operation setting is the demo mode, whether a storefront CAS card is inserted, whether no CAS card is inserted, or whether no recording medium is inserted.
  • FIG. 89 is a diagram for explaining an example of the operation of the transmitter in this embodiment.
  • When a transmitter configured as an air conditioner remote controller receives information about the main body, it transmits that main body information, so that the receiver can receive information about a distant main body from a nearby transmitter.
  • the receiver can also receive information from a main body that exists in a place where visible light communication is impossible, such as over a network.
  • FIG. 90 is a diagram for explaining an example of the operation of the transmitter in this embodiment.
  • the transmitter transmits if the ambient brightness is above a certain level, and stops transmitting if it is below a certain level.
  • Thereby, for example, a transmitter configured as an in-train advertisement can automatically stop operating when the vehicle enters the depot, so battery consumption can be suppressed.
  • FIG. 91 is a diagram for explaining an example of the operation of the transmitter in this embodiment.
  • the transmitter associates the content that the receiver wants to acquire with the transmission ID in accordance with the display timing of the content to be displayed. Each time the display content is changed, the association change is registered with the server.
  • the transmitter sets the server so that another content is delivered to the receiver in accordance with the change timing of the display content.
  • the server transmits the content according to the set schedule to the receiver.
  • the receiver can acquire content that matches the content displayed by the transmitter.
  • FIG. 92 is a diagram for explaining an example of the operation of the transmitter in this embodiment.
  • the transmitter synchronizes the time with the server, adjusts the timing so that a predetermined part is displayed at a predetermined time, and displays the content.
  • the receiver can acquire content that matches the content displayed by the transmitter.
  • FIG. 93 is a diagram for explaining an example of operations of the transmitter and the receiver in this embodiment.
  • the transmitter transmits the display time of the content being displayed in addition to the transmitter ID.
  • the content display time is information that can identify the currently displayed content, and can be expressed by, for example, an elapsed time from the start time of the content.
  • the receiver acquires the content associated with the received ID from the server and displays the content according to the received display time. Thereby, for example, when a transmitter configured as digital signage changes display contents one after another, the receiver can acquire content that matches the content displayed by the transmitter.
  • Further, the receiver changes the content it displays as time passes. As a result, even if the signal is not received again when the display content of the transmitter changes, content that matches the displayed content can be shown. A minimal sketch of selecting content from the received display time is given below.
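  • The sketch below picks the matching content segment from the received display time (elapsed time from the start of the content); the schedule and segment names are illustrative assumptions.

```python
# A sketch of selecting receiver-side content from the transmitter's
# display time; the schedule below is illustrative only.
from bisect import bisect_right

# (start time in seconds, content shown from that time)
schedule = [(0, "intro"), (15, "product A"), (45, "product B"), (90, "coupon")]

def content_for(display_time_s: float) -> str:
    starts = [t for t, _ in schedule]
    # Find the last segment whose start time is not after the display time.
    return schedule[bisect_right(starts, display_time_s) - 1][1]

print(content_for(20.0))    # -> "product A"
print(content_for(120.0))   # -> "coupon"
```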
  • FIG. 94 is a diagram for explaining an example of the operation of the receiver in this embodiment.
  • When the receiver is authorized to do so, information such as the location of the receiver, its phone number, its ID, the installed applications, and information about the user (age, sex, occupation, preferences, and so on) is transmitted to the server together with the received ID.
  • Even when no account is registered, if the user permits the uploading of such information, it is likewise sent to the server; if the user does not permit it, only the received ID is sent to the server.
  • FIG. 95 is a diagram for explaining an example of operation of the receiver in this embodiment.
  • the receiver acquires content associated with the received ID from the server.
  • If the active application can handle (display or play) the acquired content, the acquired content is displayed or played by the active application.
  • If it cannot be handled, the receiver checks whether an application that can handle it is installed. If such an application is installed, that application is activated to display or play the acquired content. If it is not installed, the receiver installs it automatically, or displays a message prompting installation or a download screen, and displays or plays the acquired content after the installation.
  • In this way, the acquired content can be handled appropriately (displayed, played, and so on).
  • FIG. 96 is a diagram for explaining an example of operation of the receiver in this embodiment.
  • the receiver acquires, from the server, the content associated with the received ID and information for specifying the application to be activated (an application ID).
  • If the running application is the designated application, the acquired content is displayed or played.
  • Otherwise, if the designated application is installed in the receiver, the designated application is activated to display or play the acquired content. If it is not installed, the receiver installs it automatically, or displays a message prompting installation or a download screen, and displays or plays the acquired content after the installation.
  • the receiver may acquire only the application ID from the server and start the designated application.
  • the receiver may perform specified settings.
  • the receiver may set the designated parameter and activate the designated application.
  • FIG. 97 is a diagram for explaining an example of operation of the receiver in this embodiment.
  • the receiver determines that a signal is being streamed when the value at a predetermined address of the received data is a predetermined value or when the received data contains a predetermined identifier, and in that case receives the data with a reception method for streaming data. Otherwise, it receives the data with the normal reception method. A minimal sketch of this decision follows.
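  • In the Python sketch below, the address offset, the expected value, and the identifier byte are illustrative assumptions, not values defined in the specification.

```python
# A sketch of the streaming decision; the flag address, flag value, and
# identifier byte are assumptions made for the example.
STREAM_FLAG_ADDR = 0           # "predetermined address" in the received data
STREAM_FLAG_VALUE = 0xF0       # "predetermined value"
STREAM_IDENTIFIER = b"\x7e"    # "predetermined identifier"

def is_streaming(data: bytes) -> bool:
    return (data[STREAM_FLAG_ADDR] == STREAM_FLAG_VALUE
            or STREAM_IDENTIFIER in data)

def receive(data: bytes) -> str:
    return "streaming reception" if is_streaming(data) else "normal reception"

print(receive(bytes([0xF0, 0x12, 0x34])))   # -> streaming reception
print(receive(bytes([0x01, 0x12, 0x34])))   # -> normal reception
```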
  • FIG. 98 is a diagram for explaining an example of operation of the receiver in this embodiment.
  • the receiver refers to a table in the application; if the received ID exists in the table, the receiver acquires the content designated in the table. Otherwise, the receiver acquires the content associated with the received ID from the server.
  • FIG. 99 is a diagram for explaining an example of operation of the receiver in this embodiment.
  • the receiver detects the signal and recognizes the modulation frequency of the signal.
  • the receiver sets the exposure time according to the period of the modulation frequency (the modulation period). For example, the signal can be received easily by setting the exposure time approximately equal to the modulation period. Further, for example, by setting the exposure time to an integral multiple of the modulation period or a value close to it (within approximately ±30%), the signal can be received easily by convolutional decoding.
  • FIG. 100 is a diagram for explaining an example of operation of the receiver in this embodiment.
  • the receiver transmits current location information and information related to the user (address, gender, age, preference, etc.) to the server.
  • the server transmits parameters for optimal operation of the transmitter to the receiver in accordance with the received information.
  • the receiver sets the received parameters in the transmitter if they can be set there; if they cannot, it displays the parameters and prompts the user to set them.
  • FIG. 101 is a diagram for explaining an example of a configuration of transmission data in the present embodiment.
  • the information to be transmitted includes an identifier, and the receiver can know the configuration of the subsequent part by its value. For example, it is possible to specify the data length, the type and length of the error correction code, the data division point, and the like.
  • the transmitter can change the type and length of the data body and error correction code according to the nature of the transmitter and the communication path. Also, the transmitter can cause the receiver to acquire an ID corresponding to the content ID by transmitting the content ID in addition to the ID of the transmitter.
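  • As a hypothetical illustration of the frame layout of FIG. 101, the sketch below parses a frame whose leading identifier selects the data length and the type and length of the error correction code; the two layouts in the table are assumptions, not layouts defined in the specification.

```python
# A sketch of identifier-driven parsing; the layouts below are assumptions.
from typing import Dict, Tuple

# identifier -> (data length in bits, error-correction code name, ECC length in bits)
LAYOUTS: Dict[int, Tuple[int, str, int]] = {
    0b00: (16, "CRC-8", 8),
    0b01: (32, "CRC-16", 16),
}

def parse(bits: str) -> Tuple[str, str]:
    ident = int(bits[:2], 2)                      # leading identifier
    data_len, ecc_name, ecc_len = LAYOUTS[ident]  # layout of the remainder
    body = bits[2:2 + data_len]
    ecc = bits[2 + data_len:2 + data_len + ecc_len]
    return body, f"{ecc_name}:{ecc}"

frame = "00" + "1010101010101010" + "11110000"
print(parse(frame))
```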
  • FIG. 102 is a diagram for explaining the operation of the receiver in this embodiment.
  • the receiver 1210a in the present embodiment switches the shutter speed between high speed and low speed, for example, in units of frames when performing continuous shooting with the image sensor. Furthermore, the receiver 1210a switches the processing applied to each frame between a barcode recognition process and a visible light recognition process, based on the frame obtained by the shooting.
  • the barcode recognition process is a process for decoding a barcode reflected in a frame obtained by a low shutter speed.
  • the visible light recognition process is a process of decoding the above-described bright line pattern reflected in a frame obtained with a high shutter speed.
  • Such a receiver 1210a includes a video input unit 1211, a barcode / visible light identification unit 1212, a barcode recognition unit 1212a, a visible light recognition unit 1212b, and an output unit 1213.
  • the video input unit 1211 includes an image sensor, and switches the shutter speed for shooting by the image sensor. That is, the video input unit 1211 switches the shutter speed alternately between low speed and high speed, for example, in units of frames. More specifically, the video input unit 1211 switches the shutter speed to high speed for odd-numbered frames and switches the shutter speed to low speed for even-numbered frames. Shooting at a low shutter speed is shooting in the above-described normal shooting mode, and shooting at a high shutter speed is shooting in the above-described visible light communication mode. That is, when the shutter speed is low, the exposure time of each exposure line included in the image sensor is long, and a normal captured image on which the subject is projected is obtained as a frame. Further, when the shutter speed is high, the exposure time of each exposure line included in the image sensor is short, and a visible light communication image in which the above-described bright line is projected is obtained as a frame.
  • the barcode / visible light identifying unit 1212 switches processing for the image by determining whether a barcode appears or whether a bright line appears in the image obtained by the video input unit 1211. For example, if a barcode appears in a frame obtained by shooting at a low shutter speed, the barcode / visible light identification unit 1212 causes the barcode recognition unit 1212a to perform processing on the image. On the other hand, if a bright line appears in an image obtained by shooting at a high shutter speed, the barcode / visible light identifying unit 1212 causes the visible light recognizing unit 1212b to execute processing on the image.
  • the barcode recognition unit 1212a decodes a barcode appearing in a frame obtained by shooting at a low shutter speed.
  • the barcode recognition unit 1212a acquires barcode data (for example, a barcode identifier) by decoding, and outputs the barcode identifier to the output unit 1213.
  • the barcode may be a one-dimensional code or a two-dimensional code (for example, a QR code (registered trademark)).
  • the visible light recognizing unit 1212b decodes the bright line pattern appearing in the frame obtained by photographing at a high shutter speed.
  • the visible light recognizing unit 1212b obtains visible light data (for example, a visible light identifier) by the decoding, and outputs the visible light identifier to the output unit 1213.
  • the visible light data is the above-described visible light signal.
  • the output unit 1213 displays only frames obtained by shooting at a low shutter speed. Therefore, when the subject imaged by the video input unit 1211 is a barcode, the output unit 1213 displays the barcode.
  • When the subject photographed by the video input unit 1211 is digital signage that transmits a visible light signal, the output unit 1213 displays the image of the digital signage without displaying the bright line pattern.
  • When the output unit 1213 acquires a barcode identifier, it acquires information associated with the barcode identifier from, for example, a server, and displays that information.
  • Likewise, when the output unit 1213 acquires a visible light identifier, it acquires information associated with the visible light identifier from, for example, a server, and displays that information.
  • In this way, the receiver 1210a as a terminal device includes an image sensor and performs continuous shooting with the image sensor while alternately switching the shutter speed of the image sensor between a first speed and a second speed higher than the first speed.
  • When the subject to be photographed by the image sensor is a barcode, the receiver 1210a obtains an image showing the barcode by photographing while the shutter speed is the first speed, and obtains a barcode identifier by decoding the barcode reflected in that image.
  • When the subject to be photographed by the image sensor is a light source (for example, digital signage), the receiver 1210a acquires, by photographing while the shutter speed is the second speed, a bright line image, that is, an image including bright lines corresponding to the exposure lines included in the image sensor. The receiver 1210a then acquires a visible light signal as a visible light identifier by decoding the pattern of the plurality of bright lines included in the acquired bright line image. Further, the receiver 1210a displays the images obtained by photographing while the shutter speed is the first speed.
  • In this way, by switching between the barcode recognition process and the visible light recognition process, a barcode can be decoded and a visible light signal can be received, and the switching also suppresses power consumption. A minimal sketch of the frame-by-frame switching follows.
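  • A minimal Python sketch of the frame-by-frame switching performed by the receiver 1210a; the decoder functions are hypothetical placeholders, and the 1-based frame numbering is an assumption.

```python
# A sketch of dispatching each frame to barcode or visible light
# recognition; decode_barcode and decode_bright_lines are placeholders.
def decode_barcode(frame):        return "barcode-id" if frame.get("barcode") else None
def decode_bright_lines(frame):   return "vlc-id" if frame.get("bright_lines") else None

def display(frame):
    print("display", frame)              # only low-shutter-speed frames are shown

def process(frame_number: int, frame: dict):
    if frame_number % 2 == 0:            # even-numbered frame: low shutter speed
        display(frame)                   # normal captured image
        return decode_barcode(frame)
    return decode_bright_lines(frame)    # odd-numbered frame: high shutter speed

print(process(2, {"barcode": True}))        # -> barcode-id
print(process(1, {"bright_lines": True}))   # -> vlc-id
```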
  • the receiver in this embodiment may perform image recognition processing simultaneously with visible light processing instead of barcode recognition processing.
  • FIG. 103A is a diagram for explaining another operation of the receiver in this embodiment.
  • the receiver 1210b in the present embodiment switches the shutter speed between high speed and low speed, for example, in units of frames when performing continuous shooting by the image sensor. Further, the receiver 1210b simultaneously performs the image recognition process and the above-described visible light recognition process on the image (frame) obtained by the photographing.
  • the image recognition process is a process for recognizing a subject appearing in a frame obtained with a low shutter speed.
  • Such a receiver 1210b includes a video input unit 1211, an image recognition unit 1212c, a visible light recognition unit 1212b, and an output unit 1215.
  • the video input unit 1211 includes an image sensor, and switches the shutter speed for shooting by the image sensor. That is, the video input unit 1211 switches the shutter speed alternately between a low speed and a high speed, for example, in frame units. More specifically, the video input unit 1211 switches the shutter speed to high speed for odd-numbered frames and switches the shutter speed to low speed for even-numbered frames. Shooting at a low shutter speed is shooting in the above-described normal shooting mode, and shooting at a high shutter speed is shooting in the above-described visible light communication mode. That is, when the shutter speed is low, the exposure time of each exposure line included in the image sensor is long, and a normal captured image on which the subject is projected is obtained as a frame. Further, when the shutter speed is high, the exposure time of each exposure line included in the image sensor is short, and a visible light communication image in which the above-described bright line is projected is obtained as a frame.
  • the image recognition unit 1212c recognizes a subject appearing in a frame obtained by shooting at a low shutter speed and specifies the position of the subject in the frame. As a result of recognition, the image recognition unit 1212c determines whether or not the subject is an AR (Augmented Reality) target (hereinafter referred to as an AR target). When the image recognition unit 1212c determines that the subject is an AR object, the image recognition unit 1212c generates image recognition data that is data for displaying information about the subject (for example, the position of the subject and the AR marker). The AR marker is output to the output unit 1215.
  • an AR target Augmented Reality
  • the output unit 1215 displays only frames obtained by shooting at a low shutter speed, like the output unit 1213 described above. Therefore, when the subject photographed by the video input unit 1211 is digital signage that transmits a visible light signal, the output unit 1215 displays the image of the digital signage without displaying the bright line pattern. Further, when the output unit 1215 acquires the image recognition data from the image recognition unit 1212c, it superimposes on the frame a white frame-shaped indicator surrounding the subject, based on the position of the subject in the frame indicated by the image recognition data.
  • FIG. 103B is a diagram illustrating an example of an indicator displayed by the output unit 1215.
  • the output unit 1215 superimposes a white frame-shaped indicator 1215b surrounding a subject image 1215a configured as digital signage on a frame, for example. That is, the output unit 1215 displays the indicator 1215b indicating the subject whose image has been recognized. Furthermore, when the output unit 1215 acquires the visible light identifier from the visible light recognition unit 1212b, the output unit 1215 changes the color of the indicator 1215b from white to red, for example.
  • FIG. 103C is a diagram showing a display example of AR.
  • the output unit 1215 further acquires information related to the subject associated with the visible light identifier as related information from, for example, a server.
  • the output unit 1215 describes related information in the AR marker 1215c indicated by the image recognition data, and displays the AR marker 1215c in which the related information is described in association with the subject image 1215a in the frame.
  • an AR using visible light communication can be realized by simultaneously performing an image recognition process and a visible light recognition process.
  • the receiver 1210a illustrated in FIG. 103A may also display the indicator 1215b illustrated in FIG. 103B, similarly to the receiver 1210b.
  • For example, when a barcode appears in a frame, the receiver 1210a displays a white frame-shaped indicator 1215b surrounding the barcode, and when the barcode is decoded, the receiver 1210a changes the color of the indicator 1215b from white to red.
  • the receiver 1210a identifies a part in the low-speed frame corresponding to the part where the bright line pattern is located. For example, when the digital signage is transmitting a visible light signal, an image of the digital signage in the low-speed frame is specified. Note that the low speed frame is a frame obtained by shooting at a low shutter speed. Then, the receiver 1210a displays a white frame-shaped indicator 1215b surrounding a specified portion (for example, the above-mentioned digital signage image) in the low-speed frame so as to be superimposed on the low-speed frame. Then, when the bright line pattern is decoded, the receiver 1210a changes the color of the indicator 1215b from white to red.
  • FIG. 104A is a diagram for describing an example of a transmitter in this embodiment.
  • The transmitter 1220a in the present embodiment transmits a visible light signal in synchronization with the transmitter 1230. That is, the transmitter 1220a transmits the same visible light signal as the transmitter 1230 at the timing at which the transmitter 1230 transmits its visible light signal.
  • The transmitter 1230 includes a light emitting unit 1231, and transmits a visible light signal by changing the luminance of the light emitting unit 1231.
  • Such a transmitter 1220a includes a light receiving unit 1221, a signal analyzing unit 1222, a transmission clock adjusting unit 1223a, and a light emitting unit 1224.
  • the light emitting unit 1224 transmits a visible light signal that is the same as the visible light signal transmitted from the transmitter 1230 by changing the luminance.
  • the light receiving unit 1221 receives a visible light signal from the transmitter 1230 by receiving visible light from the transmitter 1230.
  • the signal analysis unit 1222 analyzes the visible light signal received by the light receiving unit 1221 and transmits the analysis result to the transmission clock adjustment unit 1223a.
  • the transmission clock adjustment unit 1223a adjusts the timing of the visible light signal transmitted from the light emitting unit 1224 based on the analysis result.
  • Specifically, the transmission clock adjustment unit 1223a adjusts the luminance change timing of the light emitting unit 1224 so that the timing at which the visible light signal is transmitted from the light emitting unit 1224 matches the timing at which the visible light signal is transmitted from the light emitting unit 1231 of the transmitter 1230.
  • the waveform of the visible light signal transmitted by the transmitter 1220a and the waveform of the visible light signal transmitted by the transmitter 1230 can be matched in timing.
  • FIG. 104B is a diagram for describing another example of the transmitter in this embodiment.
  • The transmitter 1220b in the present embodiment transmits a visible light signal in synchronization with the transmitter 1230, similarly to the transmitter 1220a. That is, the transmitter 1220b transmits the same visible light signal as the transmitter 1230 at the timing at which the transmitter 1230 transmits its visible light signal.
  • Such a transmitter 1220b includes a first light receiving unit 1221a, a second light receiving unit 1221b, a comparison unit 1225, a transmission clock adjusting unit 1223b, and a light emitting unit 1224.
  • The first light receiving unit 1221a receives the visible light signal from the transmitter 1230 by receiving visible light from the transmitter 1230, similarly to the light receiving unit 1221.
  • The second light receiving unit 1221b receives visible light from the light emitting unit 1224.
  • the comparison unit 1225 compares the first timing at which visible light is received by the first light receiving unit 1221a and the second timing at which visible light is received by the second light receiving unit 1221b. Then, the comparison unit 1225 outputs the difference (that is, the delay time) between the first timing and the second timing to the transmission clock adjustment unit 1223b.
  • the transmission clock adjusting unit 1223b adjusts the timing of the visible light signal transmitted from the light emitting unit 1224 so that the delay time is shortened.
  • the waveform of the visible light signal transmitted by the transmitter 1220b and the waveform of the visible light signal transmitted by the transmitter 1230 can be matched more accurately in terms of timing.
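  • The adjustment performed by the transmission clock adjusting unit 1223b can be pictured as a simple feedback loop. The following is a minimal sketch in Python, not taken from the specification; the class name, the gain value, and the microsecond units are assumptions made only for illustration.

    # Minimal sketch (assumed implementation): the comparison unit supplies the delay
    # between the timing at which light from transmitter 1230 is received (first timing)
    # and the timing at which light from the local light emitting unit 1224 is received
    # (second timing); the local clock phase is nudged so that this delay approaches zero.
    class TransmissionClockAdjuster:
        def __init__(self, gain: float = 0.5):
            self.phase_offset_us = 0.0  # phase correction applied to the local clock
            self.gain = gain            # fraction of the measured delay corrected per step

        def update(self, first_timing_us: float, second_timing_us: float) -> float:
            """Return the new phase offset so the local signal aligns with the reference."""
            delay_us = second_timing_us - first_timing_us  # output of the comparison unit
            self.phase_offset_us -= self.gain * delay_us   # shorten the delay step by step
            return self.phase_offset_us

    if __name__ == "__main__":
        adjuster = TransmissionClockAdjuster()
        # Example: local emission observed 120 us after the reference emission.
        print(adjuster.update(first_timing_us=0.0, second_timing_us=120.0))  # -> -60.0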
  • In the above description, the two transmitters transmit the same visible light signal, but different visible light signals may be transmitted. That is, when the two transmitters transmit the same visible light signal, they transmit in synchronization as described above. When the two transmitters transmit different visible light signals, only one of the two transmitters transmits a visible light signal while the other transmitter is uniformly lit or extinguished; thereafter, the one transmitter is uniformly lit or extinguished while only the other transmitter transmits a visible light signal. The two transmitters may also transmit different visible light signals simultaneously.
  • FIG. 105A is a diagram for describing an example of synchronous transmission by a plurality of transmitters in the present embodiment.
  • the plurality of transmitters 1220 in the present embodiment are arranged, for example, in a line as shown in FIG. 105A. Note that these transmitters 1220 have the same configuration as the transmitter 1220a shown in FIG. 104A or the transmitter 1220b shown in FIG. 104B. Each of the plurality of transmitters 1220 transmits a visible light signal in synchronization with one of the transmitters 1220 on both sides.
  • FIG. 105B is a diagram for describing an example of synchronous transmission by a plurality of transmitters in the present embodiment.
  • One transmitter 1220 among the plurality of transmitters 1220 in this embodiment serves as a reference for synchronizing the visible light signal, and the remaining transmitters 1220 transmit their visible light signals so as to match that reference.
  • FIG. 106 is a diagram for describing another example of synchronous transmission by a plurality of transmitters in the present embodiment.
  • Each of the plurality of transmitters 1240 in the present embodiment receives a synchronization signal and transmits a visible light signal in accordance with the synchronization signal. Thereby, a visible light signal is transmitted from each of the plurality of transmitters 1240 in synchronization.
  • each of the plurality of transmitters 1240 includes a control unit 1241, a synchronization control unit 1242, a photocoupler 1243, an LED drive circuit 1244, an LED 1245, and a photodiode 1246.
  • the control unit 1241 receives the synchronization signal and outputs the synchronization signal to the synchronization control unit 1242.
  • the LED 1245 is a light source that emits visible light, and blinks (that is, changes in luminance) in accordance with control by the LED drive circuit 1244. Thus, a visible light signal is transmitted from the LED 1245 to the outside of the transmitter 1240.
  • the photocoupler 1243 transmits a signal between the synchronization control unit 1242 and the LED drive circuit 1244 while being electrically insulated. Specifically, the photocoupler 1243 transmits a transmission start signal described later transmitted from the synchronization control unit 1242 to the LED drive circuit 1244.
  • When the LED drive circuit 1244 receives the transmission start signal from the synchronization control unit 1242 via the photocoupler 1243, the LED drive circuit 1244 causes the LED 1245 to start transmitting the visible light signal at the timing at which the transmission start signal is received.
  • the photodiode 1246 detects visible light emitted from the LED 1245, and outputs a detection signal indicating that the visible light has been detected to the synchronization control unit 1242.
  • When the synchronization control unit 1242 receives the synchronization signal from the control unit 1241, it transmits a transmission start signal to the LED drive circuit 1244 via the photocoupler 1243. By transmitting this transmission start signal, transmission of the visible light signal is started.
  • When the synchronization control unit 1242 receives the detection signal from the photodiode 1246 as a result of the transmission of the visible light signal, it calculates the delay time, which is the difference between the timing at which the detection signal is received and the timing at which the synchronization signal was received from the control unit 1241.
  • When the synchronization control unit 1242 receives the next synchronization signal from the control unit 1241, it adjusts the timing of transmitting the next transmission start signal based on the calculated delay time. That is, the synchronization control unit 1242 adjusts the timing of transmitting the next transmission start signal so that the delay time for the next synchronization signal becomes a predetermined set delay time. The synchronization control unit 1242 then transmits the next transmission start signal at the adjusted timing.
  • FIG. 107 is a diagram for explaining signal processing in the transmitter 1240.
  • When the synchronization control unit 1242 receives the synchronization signal, it generates a delay time setting signal in which a delay time setting pulse is generated at a predetermined timing. Note that receiving the synchronization signal specifically means receiving a synchronization pulse. That is, the synchronization control unit 1242 generates the delay time setting signal so that the delay time setting pulse rises at the timing at which the set delay time has elapsed from the falling edge of the synchronization pulse.
  • the synchronization control unit 1242 transmits a transmission start signal to the LED drive circuit 1244 via the photocoupler 1243 at a timing delayed by the correction value N obtained last time from the falling edge of the synchronization pulse.
  • a visible light signal is transmitted from the LED 1245 by the LED drive circuit 1244.
  • the synchronization control unit 1242 receives the detection signal from the photodiode 1246 at a timing delayed by the sum of the intrinsic delay time and the correction value N from the falling edge of the synchronization pulse. That is, transmission of a visible light signal is started from that timing.
  • this timing is referred to as transmission start timing.
  • The above-described intrinsic delay time is a delay time caused by a circuit such as the photocoupler 1243, and is a delay time that occurs even when the synchronization control unit 1242 receives the synchronization signal and immediately transmits the transmission start signal.
  • Note that the correction value N can be a negative value as well as a positive value.
  • In this way, each of the plurality of transmitters 1240 can transmit the visible light signal after the set delay time has elapsed from the reception of the synchronization signal (synchronization pulse). That is, even if the intrinsic delay time caused by a circuit such as the photocoupler 1243 varies among the plurality of transmitters 1240, the transmission of the visible light signals from the plurality of transmitters 1240 can be accurately synchronized without being affected by that variation.
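  • A minimal sketch of the correction described above, assuming the update rule that makes the measured delay converge to the set delay time (the exact rule is not spelled out in the text); the function name and the units are illustrative.

    # Assumed update rule (not from the specification): the synchronization control unit 1242
    # measures the delay between the falling edge of the synchronization pulse and the
    # detection signal from the photodiode 1246 (intrinsic delay + current correction value N),
    # then chooses the next correction value so that the total delay equals the set delay time.
    def next_correction(correction_n: float,
                        measured_delay: float,
                        set_delay: float) -> float:
        """Return correction value N+1 given the delay measured with correction N."""
        intrinsic_delay = measured_delay - correction_n   # delay caused by photocoupler etc.
        return set_delay - intrinsic_delay                # = correction_n + (set_delay - measured_delay)

    if __name__ == "__main__":
        # Example: set delay 500 us, intrinsic delay 180 us, current correction 0 us.
        n1 = next_correction(correction_n=0.0, measured_delay=180.0, set_delay=500.0)
        print(n1)  # 320.0 -> next emission starts 320 us after the sync pulse, total delay 500 us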
  • In general, the LED drive circuit consumes a large amount of power and is therefore electrically insulated, by a photocoupler or the like, from the control circuit that handles the synchronization signal. When such a photocoupler is used, the variation in the intrinsic delay time described above makes it difficult to synchronize the transmission of visible light signals from a plurality of transmitters.
  • In the present embodiment, however, the light emission timing of the LED 1245 is detected by the photodiode 1246, the delay time from the synchronization signal is measured by the synchronization control unit 1242, and the timing is adjusted so that this delay time becomes the time set in advance (the set delay time described above).
  • As a result, a visible light signal (for example, a visible light ID) can be transmitted from the plurality of LED lighting devices in highly accurate synchronization.
  • the LED illumination may be turned on or off outside the visible light signal transmission period.
  • the first falling edge of the visible light signal may be detected.
  • the first rising edge of the visible light signal may be detected.
  • the transmitter 1240 transmits a visible light signal every time a synchronization signal is received.
  • the transmitter 1240 may transmit a visible light signal without receiving the synchronization signal. That is, if the transmitter 1240 transmits a visible light signal once in response to reception of the synchronization signal, the transmitter 1240 may sequentially transmit the visible light signal without receiving the synchronization signal. Specifically, the transmitter 1240 may sequentially transmit the visible light signal 2 to several thousand times with respect to one reception of the synchronization signal.
  • the transmitter 1240 may transmit a visible light signal corresponding to the synchronization signal at a rate of once every 100 milliseconds or once every few seconds.
  • Alternatively, the transmitter 1240 may transmit a visible light signal corresponding to the synchronization signal at a repetition frequency of 60 Hz or higher. Thereby, the blinking is performed at high speed and is difficult for a person to perceive. As a result, the occurrence of flicker can be suppressed.
  • the transmitter 1240 may transmit a visible light signal corresponding to the synchronization signal at a sufficiently long cycle such as once every few minutes.
  • FIG. 108 is a flowchart illustrating an example of a reception method in this embodiment.
  • FIG. 109 is an explanatory diagram for describing an example of a reception method in this embodiment.
  • The receiver calculates the average value of the pixel values of a plurality of pixels arranged in a direction parallel to the exposure line (step S1211). According to the central limit theorem, when the pixel values of N pixels are averaged, the expected value of the noise amount is multiplied by N^(-1/2) (that is, 1/√N), so the SN ratio is improved.
  • Next, the receiver keeps only the portions where the pixel values change in the same direction (increase or decrease) for all the colors, and removes the changes in which the pixel values of the colors change differently (step S1212).
  • Since the transmission signal (visible light signal) is transmitted by changing the luminance of the illumination of the transmitter or the backlight of the display, the pixel values change in the same direction for all the colors in the portions that carry the signal. In other portions, the pixel values change differently for each color; in these portions, the pixel values fluctuate due to reception noise or the picture on the display or signage. Therefore, by removing these fluctuations, the SN ratio can be improved.
  • Next, the receiver obtains a luminance value (step S1213). Since the luminance is hardly affected by differences in color, the influence of the picture on the display or signage can be eliminated, and the SN ratio can be improved.
  • the receiver applies a low-pass filter to the luminance value (step S1214).
  • The S/N ratio can be improved by using a low-pass filter that cuts the high frequency region. Since most of the signal components lie at frequencies up to the reciprocal of the exposure time, the effect of improving the S/N ratio can be increased by blocking frequencies above this. When the frequency components included in the signal are limited to a finite band, the S/N ratio can be improved by blocking frequencies higher than that band.
  • A filter whose frequency response contains no ripple (such as a Butterworth filter) is suitable as the low-pass filter.
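  • The processing of steps S1211 to S1214 can be sketched as follows in Python. This is only an illustration under assumptions not stated in the text: the frame is laid out as (exposure lines, pixels per line, RGB), ITU-R BT.601 weights are used for the luminance value, and the cutoff is taken from the reciprocal of the exposure time.

    # Minimal sketch of the reception pre-processing (S1211-S1214); names are illustrative.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def preprocess(frame: np.ndarray, exposure_time_s: float, line_rate_hz: float) -> np.ndarray:
        """frame: (lines, pixels, 3) array. Returns a filtered luminance signal per exposure line."""
        # S1211: average pixel values along each exposure line (noise ~ 1/sqrt(N)).
        per_line_rgb = frame.mean(axis=1)                      # (lines, 3)

        # S1212: keep only changes that have the same sign in all colors.
        diff = np.diff(per_line_rgb, axis=0)                   # per-line change of each color
        same_direction = np.all(diff > 0, axis=1) | np.all(diff < 0, axis=1)
        diff[~same_direction] = 0.0                            # remove color-dependent fluctuations

        # S1213: convert the retained changes to a luminance value (BT.601 weights as an example).
        luminance = np.concatenate([[0.0], np.cumsum(diff @ np.array([0.299, 0.587, 0.114]))])

        # S1214: low-pass filter; a Butterworth filter has a ripple-free response.
        cutoff_hz = min(1.0 / exposure_time_s, 0.45 * line_rate_hz)
        b, a = butter(N=4, Wn=cutoff_hz / (line_rate_hz / 2.0))
        return filtfilt(b, a, luminance)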
  • FIG. 110 is a flowchart illustrating another example of the reception method in this embodiment.
  • the reception method when the exposure time is longer than the transmission cycle will be described with reference to FIG.
  • When the exposure time is an integral multiple of the transmission period, reception can be performed with the highest accuracy. Even if it is not an exact integral multiple, reception is possible as long as the exposure time is within (N ± 0.33) times the transmission period (where N is an integer).
  • the receiver sets the transmission / reception offset to 0 (step S1221).
  • the transmission / reception offset is a value for correcting a difference between the transmission timing and the reception timing. Since this deviation is unknown, the receiver gradually changes the value that is a candidate for the transmission / reception offset, and adopts the most suitable value as the transmission / reception offset.
  • the receiver determines whether or not the transmission / reception offset is less than the transmission cycle (step S1222).
  • If the receiver determines in step S1222 that the transmission/reception offset is less than the transmission cycle (Y in step S1222), the receiver obtains the received values (for example, pixel values) for each transmission cycle at the current transmission/reception offset, and then obtains the differences between the received values obtained for each transmission cycle (step S1224). From these differences, the receiver calculates the likelihood of the received signal for the current transmission/reception offset.
  • the receiver adds a predetermined value to the transmission / reception offset (step S1226), and repeatedly executes the processing from step S1222. If the receiver determines in step S1222 that it is not less than the transmission cycle (N in step S1222), the receiver specifies the highest likelihood among the likelihoods of the received signals calculated for each transmission / reception offset. Then, the receiver determines whether or not the highest likelihood is greater than or equal to a predetermined value (step S1227). If it is determined that the value is equal to or greater than the predetermined value (Y in step S1227), the receiver uses the received signal with the highest likelihood as the final estimation result.
  • the receiver uses, as a received signal candidate, a received signal having a likelihood equal to or higher than a value obtained by subtracting a predetermined value from the highest likelihood (step S1228). On the other hand, if it is determined in step S1227 that the highest likelihood is less than the predetermined value (N in step S1227), the receiver discards the estimation result (step S1229).
  • When the received signal contains much noise, the likelihood decreases. Therefore, when the likelihood is low, the reliability of the received signal can be improved by discarding the estimation result. Even when a signal other than the assumed signal is received, maximum likelihood decoding still outputs a seemingly valid signal as the estimation result; however, since the likelihood also decreases in this case, this problem can be avoided by discarding the estimation result when the likelihood is low.
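  • The offset search of steps S1221 to S1229 can be sketched as follows. Since the modulation format and the likelihood model are not given here, a normalized correlation against candidate symbol waveforms is used as a stand-in for the likelihood; all names, thresholds, and the waveform dictionary are assumptions.

    # Minimal sketch of the transmission/reception-offset sweep with likelihood thresholding.
    import numpy as np

    def decode_with_offset_search(samples: np.ndarray,
                                  candidate_waveforms: dict,
                                  tx_period: int,
                                  offset_step: int = 1,
                                  accept_threshold: float = 0.5):
        best = (None, -np.inf)  # (decoded symbols, likelihood)
        offset = 0                                            # S1221
        while offset < tx_period:                             # S1222
            usable = max((len(samples) - offset) // tx_period, 0)
            cycles = samples[offset:offset + usable * tx_period].reshape(usable, tx_period)
            diffs = np.diff(cycles, axis=0)                   # S1224: differences between cycles

            symbols, score = [], 0.0
            for d in diffs:
                corr = {s: float(np.dot(d, w)) / (np.linalg.norm(d) * np.linalg.norm(w) + 1e-9)
                        for s, w in candidate_waveforms.items()}
                s_best = max(corr, key=corr.get)
                symbols.append(s_best)
                score += corr[s_best]
            score /= max(len(diffs), 1)                       # mean correlation as a likelihood proxy

            if score > best[1]:
                best = (symbols, score)
            offset += offset_step                             # S1226

        if best[1] >= accept_threshold:                       # S1227
            return best                                       # adopt the highest-likelihood estimate
        return None, best[1]                                  # S1229: discard a low-likelihood result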
  • (Multi-value amplitude pulse signal) FIGS. 111, 112, and 113 are diagrams illustrating an example of a transmission signal in the present embodiment.
  • a transmission packet is configured using the patterns (a) and (b).
  • Data can be delimited by using a pattern with a specific length as a header of the entire packet and using a pattern with a different length as a separator.
  • signal detection can be facilitated by including this pattern in the middle.
  • In this way, the receiver can connect and decode the data. This also makes it possible to make the packet length variable by adjusting the number of separators.
  • the length of the entire packet may be expressed by the length of the packet header pattern.
  • the receiver can synthesize the partially received data by using the separator as a packet header and the length of the separator as the data address.
  • the transmitter repeatedly transmits the packet configured as described above.
  • the contents of packets 1 to 4 in (c) of FIG. 113 may all be the same, or may be combined as different data on the receiving side.
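  • The framing idea described above (a header pattern for the whole packet, separators whose length doubles as an address, and a variable packet length) can be illustrated with placeholder tokens; the concrete patterns of FIGS. 111 to 113 are not reproduced here, so "HDR" and ("SEP", n) below are purely hypothetical markers.

    # Minimal sketch: split a received symbol stream into addressed data parts.
    def split_packet(symbols):
        """symbols: list of data symbols mixed with "HDR" and ("SEP", n) markers.
        Returns {address n: data chunk}, using the separator length n as the address."""
        chunks, current, address = {}, [], 0
        started = False
        for s in symbols:
            if s == "HDR":                       # header of the entire packet
                started, current, address = True, [], 0
            elif started and isinstance(s, tuple) and s[0] == "SEP":
                chunks[address] = current        # close the current data part
                current, address = [], s[1]      # separator length acts as the next address
            elif started:
                current.append(s)
        if started:
            chunks[address] = current
        return chunks

    if __name__ == "__main__":
        rx = ["HDR", 1, 0, 1, ("SEP", 1), 0, 0, 1, ("SEP", 2), 1, 1, 0]
        print(split_packet(rx))  # {0: [1, 0, 1], 1: [0, 0, 1], 2: [1, 1, 0]}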
  • FIG. 114A is a diagram for describing a transmitter according to the present embodiment.
  • the transmitter in this embodiment is configured as a backlight of a liquid crystal display, for example, and includes a blue LED 2303 and a phosphor 2310 including a green fluorescent component 2304 and a red fluorescent component 2305.
  • Blue LED 2303 emits blue (B) light.
  • the phosphor 2310 emits yellow (Y) light when receiving blue light emitted from the blue LED 2303 as excitation light. That is, the phosphor 2310 emits yellow light.
  • Since the phosphor 2310 includes the green fluorescent component 2304 and the red fluorescent component 2305, yellow light is emitted by the emission of these fluorescent components.
  • the green fluorescent component 2304 emits green light when receiving blue light emitted from the blue LED 2303 as excitation light. That is, the green fluorescent component 2304 emits green (G) light.
  • the red fluorescent component 2305 emits red light when receiving blue light emitted from the blue LED 2303 as excitation light. That is, the red fluorescent component 2305 emits red (R) light.
  • This transmitter transmits a visible light signal of white light by changing the luminance of the blue LED 2303 as in the above embodiments. At this time, a visible light signal having a predetermined carrier frequency is output as the luminance of white light changes.
  • the barcode reader irradiates the barcode with the red laser beam, and reads the barcode based on the luminance change of the red laser beam reflected from the barcode.
  • Here, the barcode reading frequency of the red laser light may coincide with or approximate the carrier frequency of the visible light signal output from a general transmitter currently in practical use. In such a case, when the barcode reader attempts to read a barcode illuminated with white light that is a visible light signal from the general transmitter, the reading may fail depending on the luminance change of the red light contained in that white light. In other words, a barcode reading error occurs due to interference between the carrier frequency of the visible light signal (particularly its red light) and the barcode reading frequency.
  • the red fluorescent component 2305 in the present embodiment changes in luminance at a frequency sufficiently lower than the luminance change frequency of the blue LED 2303 and the green fluorescent component 2304.
  • the red fluorescent component 2305 smoothes the frequency of the red luminance change included in the visible light signal.
  • FIG. 114B is a diagram showing luminance changes of RGB.
  • Blue light from the blue LED 2303 is included in the visible light signal and output as shown in FIG. 114B (a).
  • the green fluorescent component 2304 emits green light when receiving blue light from the blue LED 2303.
  • The duration of afterglow in the green fluorescent component 2304 is short. Therefore, when the blue LED 2303 changes in luminance, the green fluorescent component 2304 emits green light whose luminance changes at substantially the same frequency as the luminance change frequency of the blue LED 2303 (that is, the carrier frequency of the visible light signal).
  • the red fluorescent component 2305 emits red light when receiving blue light from the blue LED 2303 as shown in (c) of FIG. 114B.
  • The duration of the afterglow in the red fluorescent component 2305 is long. Therefore, when the blue LED 2303 changes in luminance, the red fluorescent component 2305 emits red light whose luminance changes at a frequency lower than the frequency of luminance change of the blue LED 2303 (that is, the carrier frequency of the visible light signal).
  • FIG. 115 is a diagram showing the afterglow characteristics of the green fluorescent component 2304 and the red fluorescent component 2305 in the present embodiment.
  • As shown in FIG. 115, in the red fluorescent component 2305, when the frequency f of the luminance change exceeds the threshold value fb, the intensity of the emitted light gradually decreases and becomes smaller than the intensity I0, falling to the intensity I1 at the carrier frequency f1. Here, the carrier frequency f1 is the carrier frequency of the luminance change of the blue LED 2303 provided in the transmitter. Further, the above-described intensity I1 is 1/3 of the intensity I0, or -10 dB relative to the intensity I0. For example, the carrier frequency f1 is 10 kHz, or in the range of 5 to 100 kHz.
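  • The afterglow of the red fluorescent component behaves like a low-pass response. As a rough model, assuming a first-order roll-off (an assumption chosen only to match the qualitative shape of FIG. 115, not a formula given in the text):

    % Assumed first-order low-pass model of the red fluorescent component's response:
    \[
      I(f) \;=\; \frac{I_0}{\sqrt{1 + \left(f/f_b\right)^2}},
      \qquad
      I(f_1) \;=\; I_1 \;\approx\; \frac{I_0}{3} \;\;(\text{about } -10\,\mathrm{dB}).
    \]
    % With I_1 = I_0/3, this gives f_1 \approx \sqrt{8}\, f_b \approx 2.8\, f_b,
    % i.e. the afterglow cutoff f_b lies well below the carrier frequency f_1.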
  • The transmitter according to the present embodiment is a transmitter that transmits a visible light signal, and includes: a blue LED that emits blue light, whose luminance changes, as light included in the visible light signal; a green fluorescent component that receives the blue light and emits green light as light included in the visible light signal; and a red fluorescent component that receives the blue light and emits red light as light included in the visible light signal.
  • the duration of afterglow in the red fluorescent component is longer than the duration of afterglow in the green fluorescent component.
  • The green fluorescent component and the red fluorescent component may be included in a single phosphor that receives the blue light and emits yellow light as light included in the visible light signal.
  • the green fluorescent component may be included in a green phosphor
  • the red fluorescent component may be included in a red phosphor that is separate from the green phosphor.
  • Since the persistence time of the afterglow in the red fluorescent component is long, the luminance of the red light can be made to change at a frequency lower than the frequency of the luminance change of the blue and green light. Therefore, even if the frequency of the luminance change of the blue and green light included in the visible light signal of white light is the same as or close to the barcode reading frequency of the red laser light, the frequency of the red light included in that visible light signal can be made significantly different from the barcode reading frequency. As a result, the occurrence of barcode reading errors can be suppressed.
  • the red fluorescent component may emit red light whose luminance changes at a frequency lower than the frequency of luminance change of the light emitted from the blue LED.
  • the red fluorescent component may include a red fluorescent material that emits red light by receiving blue light and a low-pass filter that transmits only light in a predetermined frequency band.
  • the low-pass filter transmits only light in a low frequency band out of blue light emitted from the blue LED and applies the light to the red fluorescent material.
  • the red fluorescent material may have the same afterglow characteristics as the green fluorescent component.
  • Alternatively, the low-pass filter may transmit only light in a low frequency band out of the red light emitted from the red fluorescent material when the blue light emitted from the blue LED hits the red fluorescent material. Even when such a low-pass filter is used, the occurrence of barcode reading errors can be suppressed as described above.
  • the red fluorescent component may be made of a fluorescent material having a predetermined afterglow characteristic.
  • the carrier frequency f1 may be approximately 10 kHz.
  • Since the carrier frequency currently in practical use for transmitting visible light signals is 9.6 kHz, the occurrence of barcode reading errors in this practical transmission of visible light signals can be effectively suppressed.
  • the carrier frequency f1 may be approximately 5 to 100 kHz.
  • carrier frequencies such as 20 kHz, 40 kHz, 80 kHz, and 100 kHz will be used in future visible light communication due to the advancement of image sensors (imaging devices) of receivers that receive visible light signals. Therefore, by setting the above-mentioned carrier frequency f1 to approximately 5 to 100 kHz, it is possible to effectively suppress the occurrence of barcode reading errors in future visible light communication.
  • Whether the green fluorescent component and the red fluorescent component are contained in a single phosphor, or each of these two fluorescent components is contained in a separate phosphor, the above effects can be achieved. That is, even when a single phosphor is used, the afterglow characteristics, that is, the frequency characteristics, of the red light and the green light emitted from the phosphor are different. Therefore, the above-mentioned effects can also be achieved by using a single phosphor that is inferior in afterglow characteristics or frequency characteristics for red light and superior in afterglow characteristics or frequency characteristics for green light.
  • Here, poor afterglow characteristics or frequency characteristics means that the duration of the afterglow is long or that the intensity of light in a high frequency band is weak, and superior afterglow characteristics or frequency characteristics means that the duration of the afterglow is short or that the intensity of light in a high frequency band is high.
  • In the above description, the occurrence of barcode reading errors is suppressed by smoothing the frequency of the red luminance change included in the visible light signal; however, the occurrence of the reading errors may instead be suppressed by making the carrier frequency of the visible light signal higher.
  • FIG. 116 is a diagram for explaining a problem newly generated in order to suppress occurrence of a barcode reading error.
  • As shown in FIG. 116, the carrier frequency of the visible light signal currently in practical use is about 10 kHz, and the reading frequency of the red laser light used for reading barcodes is also about 10 to 20 kHz, so a barcode reading error occurs due to interference between them.
  • By raising the carrier frequency fc of the visible light signal from about 10 kHz to, for example, 40 kHz, it is possible to suppress the occurrence of barcode reading errors.
  • In that case, however, the sampling frequency fs at which the receiver samples the visible light signal by photographing would need to be 80 kHz or more.
  • the receiver in this embodiment performs downsampling.
  • FIG. 117 is a diagram for explaining the downsampling performed by the receiver in this embodiment.
  • the transmitter 2301 in the present embodiment is configured as, for example, a liquid crystal display, digital signage, or a lighting device.
  • the transmitter 2301 then outputs a frequency-modulated visible light signal.
  • Specifically, the transmitter 2301 switches the carrier frequency fc of the visible light signal between, for example, 40 kHz and 45 kHz.
  • the receiver 2302 in this embodiment photographs the transmitter 2301 at a frame rate of 30 fps, for example.
  • the receiver 2302 performs imaging with a short exposure time so that bright lines are generated in each image (specifically, each frame) obtained by imaging.
  • the receiver 2302 in this embodiment estimates the carrier frequency fc of the visible light signal by observing and analyzing the aliasing.
  • FIG. 118 is a flowchart showing a processing operation of the receiver 2302 in this embodiment.
  • the receiver 2302 observes and analyzes an alias generated in the frame obtained by the downsampling (step S2311).
  • the receiver 2302 specifies the frequency of the alias as, for example, 5.1 kHz or 5.5 kHz.
  • the receiver 2302 estimates the carrier frequency fc of the visible light signal based on the identified alias frequency (step S2311). That is, the receiver 2302 restores the original frequency from the alias. Thereby, the receiver 2302 estimates the carrier frequency fc of the visible light signal as 40 kHz or 45 kHz, for example.
  • the receiver 2302 in this embodiment can appropriately receive a visible light signal having a high carrier frequency by performing downsampling and restoration of a frequency based on an alias.
  • As a result, the carrier frequency of the visible light signal can be increased from the frequency currently in practical use (about 10 kHz) to the range of 30 kHz to 60 kHz.
  • the carrier frequency of the visible light signal and the barcode reading frequency (10 to 20 kHz) can be greatly different, and interference between the frequencies can be suppressed.
  • the occurrence of barcode reading errors can be suppressed.
  • Such a receiving method in the present embodiment is a receiving method for acquiring information from a subject, and includes: an exposure time setting step of setting an exposure time of an image sensor so that, in a frame obtained by photographing the subject with the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor are generated according to a change in luminance of the subject; and an information acquisition step of causing the image sensor to photograph the subject, whose luminance changes, at a predetermined frame rate and with the set exposure time, by sequentially starting the exposure of each of the plurality of exposure lines at different times, and acquiring information by demodulating, for each frame obtained by the photographing, the data specified by the pattern of the plurality of bright lines included in the frame. In the photographing, each of the plurality of exposure lines repeatedly starts exposure at sequentially different times, at a sampling frequency lower than the carrier frequency of the visible light signal transmitted by the luminance change of the subject, so that the visible light signal is down-sampled. In the information acquisition step, for each frame obtained by the photographing, the frequency of the alias indicated by the pattern of the plurality of bright lines included in the frame is specified, the frequency of the visible light signal is estimated from the specified alias frequency, and the information is acquired by demodulating the signal at the estimated frequency.
  • a visible light signal having a high carrier frequency can be appropriately received by performing downsampling and frequency restoration based on alias.
  • a visible light signal having a carrier frequency higher than 30 kHz may be downsampled. Accordingly, interference between the carrier frequency of the visible light signal and the barcode reading frequency (10 to 20 kHz) can be avoided, and barcode reading errors can be more effectively suppressed.
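  • The alias arithmetic itself is straightforward; the sketch below shows how an observed alias frequency can be mapped back to one of a known set of candidate carriers. The effective sampling frequency and the tolerance are assumed values chosen only for illustration.

    # Minimal sketch of alias-based carrier recovery (the estimation following step S2311).
    def alias_frequency(fc_hz: float, fs_hz: float) -> float:
        """Frequency observed after sampling a tone of fc_hz at fs_hz."""
        f = fc_hz % fs_hz
        return min(f, fs_hz - f)

    def estimate_carrier(observed_alias_hz: float, fs_hz: float,
                         candidates=(40_000.0, 45_000.0), tol_hz: float = 200.0):
        """Pick the candidate carrier whose alias matches the observed alias frequency."""
        for fc in candidates:
            if abs(alias_frequency(fc, fs_hz) - observed_alias_hz) <= tol_hz:
                return fc
        return None

    if __name__ == "__main__":
        fs = 34_900.0  # assumed effective sampling frequency of the exposure lines
        for fc in (40_000.0, 45_000.0):
            print(fc, "->", alias_frequency(fc, fs))      # distinct aliases per carrier
        print(estimate_carrier(alias_frequency(40_000.0, fs), fs))  # 40000.0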
  • FIG. 119 is a diagram illustrating processing operations of the reception device (imaging device). Specifically, FIG. 119 is a diagram for describing an example of switching processing between the normal imaging mode and the macro imaging mode when receiving visible light communication.
  • the reception device 1610 receives visible light emitted from a transmission device including a plurality of light sources (four light sources in FIG. 119).
  • When the receiving device 1610 transitions to a mode for performing visible light communication, it activates the imaging unit in the normal imaging mode (S1601). Note that, when transitioning to the mode for performing visible light communication, the receiving device 1610 displays a frame 1611 for imaging a light source on the screen.
  • the receiving device 1610 switches the imaging mode of the imaging unit to the macro imaging mode (S1602).
  • the timing of switching from step S1601 to step S1602 may not be after a predetermined time from step S1601, but may be when the receiving device 1610 determines that the light source is captured within the frame 1611.
  • In this way, the user can easily fit the light source into the frame 1611, because the light source can be framed within the frame 1611 with a clear image in the normal imaging mode before the image becomes blurred in the macro imaging mode.
  • Next, the receiving device 1610 determines whether or not a signal from the light source has been received (S1603). If it is determined that the signal from the light source has been received (Yes in S1603), the process returns to the normal imaging mode of step S1601; if it is determined that the signal from the light source has not been received (No in S1603), the macro imaging mode of step S1602 is continued. In the case of Yes in step S1603, processing based on the received signal (for example, processing for displaying an image indicated by the received signal) may be performed.
  • Alternatively, the user may switch from the normal imaging mode to the macro imaging mode by touching, with a finger, the light source shown in the frame 1611 on the display unit of the smartphone, so that the image is captured in a blurred state.
  • the image captured in the macro imaging mode includes more bright areas than the image captured in the normal imaging mode.
  • In the normal imaging mode, the stripe-like (bright line) images from the divided light sources are separated from one another, as shown in the left diagram, and cannot be received as one signal. In the macro imaging mode, the stripes become continuous, as shown in the right diagram, and can be demodulated as a continuous reception signal; since a long code can be received at once, the response time is shortened.
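  • The switching flow of FIG. 119 (S1601 to S1603) can be sketched as a simple loop; the Camera object and its methods below are hypothetical stand-ins for the platform camera API.

    # Minimal sketch of the normal/macro mode switching loop (assumed camera interface).
    import time

    def receive_with_macro_switch(camera, settle_time_s: float = 1.0, timeout_s: float = 10.0):
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            camera.set_mode("normal")          # S1601: start in the normal imaging mode
            camera.show_guide_frame()          # display the frame 1611 for aiming at the light source
            time.sleep(settle_time_s)          # or: wait until the light source is inside the frame

            camera.set_mode("macro")           # S1602: blur the image to enlarge the bright region
            signal = camera.try_receive()      # S1603: attempt to demodulate the bright-line pattern
            if signal is not None:
                camera.set_mode("normal")      # Yes in S1603: return to the normal imaging mode
                return signal                  # then process the received signal
        return None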
  • FIG. 120 is a diagram illustrating processing operations of the receiving device (imaging device). Specifically, FIG. 120 is a diagram for describing another example of the switching process between the normal imaging mode and the macro imaging mode when receiving visible light communication.
  • the receiving device 1620 receives visible light emitted from a transmitting device including a plurality of light sources (four light sources in FIG. 120).
  • When the receiving device 1620 transitions to a mode for performing visible light communication, the imaging unit is activated in the normal imaging mode, and an image 1623 having a wider range than the image 1622 displayed on the screen of the receiving device 1620 is captured.
  • the image data indicating the captured image 1623 and the posture information indicating the posture of the receiving device 1620 detected by the gyro sensor, the geomagnetic sensor, and the acceleration sensor of the receiving device 1620 when the image 1623 is captured are stored in the memory. (S1611).
  • the captured image 1623 is an image having a wide range by a predetermined width in the vertical direction and the horizontal direction with reference to the image 1622 displayed on the screen of the reception device 1620.
  • the receiving apparatus 1620 displays a frame 1621 for imaging a light source on the screen.
  • Next, the receiving device 1620 switches the imaging mode of the imaging unit to the macro imaging mode (S1612). Note that the timing of switching from step S1611 to step S1612 may be not after a predetermined time from step S1611 but when the image 1623 has been captured and it is determined that the image data indicating the captured image 1623 is held in the memory. At this time, the receiving device 1620 displays an image 1624, having a size corresponding to the screen size of the receiving device 1620, out of the image 1623 based on the image data held in the memory.
  • The image 1624 displayed on the receiving device 1620 at this time is a part of the image 1623, and is the image of the area predicted to be currently captured by the receiving device 1620, determined from the posture of the receiving device 1620 indicated by the posture information acquired in step S1611 (the position indicated by the white broken line) and the current posture of the receiving device 1620. That is, the image 1624 is a partial image of the image 1623 and is the image of the area corresponding to the imaging target of the image 1625 actually captured in the macro imaging mode. In other words, in step S1612, the posture (imaging direction) that has changed since step S1611 is acquired, the imaging target presumed to be currently captured is identified from the acquired current posture (imaging direction), the image 1624 corresponding to the current posture (imaging direction) is specified from the image 1623 captured in advance, and processing for displaying the image 1624 is performed. For this reason, as shown by the image 1623 in FIG. 120, when the receiving device 1620 moves from the position indicated by the white broken line in the direction of the white arrow, the receiving device 1620 can determine the area to be cut out of the image 1623 according to the amount of movement, and can display the image 1624, which is the portion of the image 1623 within the determined area.
  • In this way, even when capturing in the macro imaging mode, the receiving device 1620 does not display the image 1625 captured in the macro imaging mode, but can instead display the image 1624 cut out, in accordance with the current posture of the receiving device 1620, from the image 1623 captured in the clearer normal imaging mode.
  • When the stored normal (still) image is displayed on the display unit while the user takes a picture with the smartphone, camera shake occurs, the direction of the actually captured image and the direction of the still image displayed from the memory shift from each other, and a problem is expected in which the user cannot aim at the target light source. Therefore, the camera shake is detected by image-shake detection means or gyro-based shake detection means, and the target image in the still image is shifted in a predetermined direction so that the user can see the deviation from the intended direction. This display makes it possible for the user to keep the camera pointed at the target light source, so that a light source divided into a plurality of parts can be photographed while the normal image is displayed, and signals can be received continuously.
  • Next, the receiving device 1620 determines whether or not a signal from the light source has been received (S1613). If it is determined that the signal from the light source has been received (Yes in S1613), the process returns to the normal imaging mode of step S1611; if it is determined that the signal from the light source has not been received (No in S1613), the macro imaging mode of step S1612 is continued. In the case of Yes in step S1613, processing based on the received signal (for example, processing for displaying an image indicated by the received signal) may be performed.
  • As described above, the receiving device 1620 can capture an image including a larger bright region in the macro imaging mode. For this reason, in the macro imaging mode, the number of exposure lines on which bright lines are generated for the subject can be increased.
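  • The display processing of FIG. 120 (cropping the clear wide image 1623 according to how much the device orientation has changed) can be sketched as follows. The small-angle mapping from rotation to pixel shift and the parameter names are assumptions made only for illustration.

    # Minimal sketch: derive the cropped region (image 1624) from the orientation change.
    def crop_by_posture(wide_image, capture_yaw_pitch, current_yaw_pitch,
                        focal_px: float, out_w: int, out_h: int):
        """wide_image: array-like with shape (H, W, ...). Returns the cropped view."""
        h, w = wide_image.shape[0], wide_image.shape[1]
        dyaw = current_yaw_pitch[0] - capture_yaw_pitch[0]    # horizontal rotation [rad]
        dpitch = current_yaw_pitch[1] - capture_yaw_pitch[1]  # vertical rotation [rad]

        # Shift of the viewing direction in pixels, clamped to stay inside the wide image 1623.
        cx = min(max(w // 2 + int(round(focal_px * dyaw)), out_w // 2), w - out_w // 2)
        cy = min(max(h // 2 + int(round(focal_px * dpitch)), out_h // 2), h - out_h // 2)

        # The cropped region corresponds to the image 1624 shown on the screen.
        return wide_image[cy - out_h // 2: cy + out_h // 2,
                          cx - out_w // 2: cx + out_w // 2]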
  • FIG. 121 is a diagram showing processing operations of the receiving device (imaging device).
  • The transmission device 1630 is a display device such as a television, for example, and transmits different transmission IDs by visible light communication at a predetermined time interval Δt1630. Specifically, at times t1631, t1632, t1633, and t1634, it transmits ID1631, ID1632, ID1633, and ID1634, which are the transmission IDs associated with the data corresponding to the displayed images 1631, 1632, 1633, and 1634, respectively. That is, ID1631 to ID1634 are transmitted one after another from the transmission device 1630 at the predetermined time interval Δt1630.
  • the receiving device 1640 requests the data associated with each transmission ID to the server 1650 based on the transmission ID received by visible light communication, receives the data from the server, and displays an image corresponding to the data. Specifically, images 1641, 1642, 1643, and 1644 respectively corresponding to ID1631, ID1632, ID1633, and ID1634 are displayed at times t1631, t1632, t1633, and t1634, respectively.
  • Here, the receiving apparatus 1640 may acquire in advance, from the server 1650, ID information indicating the transmission IDs scheduled to be transmitted from the transmitting apparatus 1630 at the subsequent times t1632 to t1634.
  • By using the acquired ID information, the receiving device 1640 can acquire from the server 1650 the data associated with ID1632 to ID1634 for times t1632 to t1634 without receiving a transmission ID from the transmitting device 1630 each time, and can display the received data at times t1632 to t1634, respectively.
  • Alternatively, the receiving device 1640 may request only the data corresponding to ID1631 at time t1631, without acquiring from the server 1650 the information indicating the transmission IDs scheduled to be transmitted from the transmitting device 1630 at times t1632 to t1634, and may then receive from the server 1650 the data associated with the transmission IDs corresponding to the subsequent times t1632 to t1634 and display the received data at each of the times t1632 to t1634. That is, when the server 1650 receives from the receiving device 1640 a request for the data associated with ID1631 transmitted at time t1631, the server 1650 transmits the data associated with the transmission IDs corresponding to the subsequent times t1632 to t1634. In this case, the server 1650 holds association information in which each of the times t1631 to t1634 is associated with the data associated with the transmission ID corresponding to that time, and, based on the association information, transmits the predetermined data associated with a predetermined time at that predetermined time.
  • In this way, if the receiving apparatus 1640 obtains the transmission ID1631 by visible light communication at time t1631, it can receive the data corresponding to each of the times t1632 to t1634 from the server 1650 at those subsequent times without performing visible light communication. For this reason, the user does not need to keep the receiving device 1640 pointed at the transmitting device 1630 in order to acquire each transmission ID by visible light communication, and the data acquired from the server 1650 can easily be displayed on the receiving device 1640. If, instead, the receiving device 1640 obtained the data corresponding to an ID from the server every time, a delay would occur for each server access and the response time would become longer.
  • The receiving device 1640 displays the images 1641, 1642, 1643, and 1644 corresponding to the transmission IDs ID1631, ID1632, ID1633, and ID1634 at times t1631, t1632, t1633, and t1634, respectively.
  • the reception device 1640 may present not only the image but also other information at each time. That is, at time t1631, the receiving device 1640 displays the image 1641 corresponding to the ID 1631 and outputs sound or sound corresponding to the ID 1631. At this time, the receiving device 1640 may further display, for example, a purchase site for the product displayed in the image. Such sound output and purchase site display are performed in the same manner at times t1632, t1633, and t1634 other than time t1631.
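  • The schedule-based prefetch described for FIG. 121 can be sketched as follows; the server interface (fetch_schedule, fetch_content) and the schedule format are hypothetical names introduced only for this sketch.

    # Minimal sketch: receive only the first ID by visible light, then follow the server schedule.
    import time

    def play_schedule(first_id, server, display):
        schedule = server.fetch_schedule(first_id)       # e.g. [(t1631, ID1631), (t1632, ID1632), ...]
        for scheduled_time, content_id in schedule:
            content = server.fetch_content(content_id)   # may also be prefetched in one request
            # Wait until the scheduled presentation time, then show the content.
            time.sleep(max(0.0, scheduled_time - time.time()))
            display.show(content)                        # image, sound, purchase page, etc.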
  • In a receiver equipped with two cameras, such as a left-eye camera and a right-eye camera, an image with normal image quality may be captured and displayed using a normal shutter speed and normal focus for the left eye, while the right-eye camera uses a shutter speed faster than that of the left eye and/or is set to a short-distance focus or a macro setting in order to obtain the stripe-like bright lines of the present invention and demodulate the data. In this way, an image having normal image quality is displayed on the display unit, while the optical communication data of a plurality of light sources divided in distance can be received by the right-eye camera.
  • FIG. 123 is a diagram illustrating an example of an application according to the sixteenth embodiment.
  • a receiver 1800a configured as a smartphone receives a signal (visible light signal) transmitted from a transmitter 1800b configured as, for example, a street digital signage. That is, the receiver 1800a receives the timing of image reproduction by the transmitter 1800b. The receiver 1800a reproduces sound at the same timing as the image reproduction. In other words, the receiver 1800a performs synchronized reproduction of the sound so that the image and sound reproduced by the transmitter 1800b are synchronized. Note that the receiver 1800a may reproduce the same image as the image (reproduced image) reproduced by the transmitter 1800b or a related image related to the reproduced image together with the sound. Further, the receiver 1800a may cause a device connected to the receiver 1800a to reproduce sound and the like. Further, after receiving the visible light signal, the receiver 1800a may download content such as sound or related images associated with the visible light signal from the server. The receiver 1800a performs synchronous reproduction after the download.
  • In this way, the user can hear sound that matches the display of the transmitter 1800b. Further, even at a distance from which the sound from the transmitter takes time to arrive, the user can listen to sound that matches the display.
  • FIG. 124 is a diagram illustrating an example of an application according to the sixteenth embodiment.
  • Each of the receiver 1800a and the receiver 1800c obtains and reproduces audio corresponding to a video such as a movie displayed on the transmitter 1800d from the server, in the language set in the receiver.
  • the transmitter 1800d transmits a visible light signal indicating an ID for identifying the displayed video to the receiver.
  • the receiver transmits a request signal including the ID indicated in the visible light signal and the language set in the receiver to the server.
  • the receiver acquires the audio corresponding to the request signal from the server and reproduces it. Thereby, the user can enjoy the work displayed on the transmitter 1800d in the language set by the user.
  • FIGS. 125 and 126 are diagrams showing an example of a transmission signal and an example of a voice synchronization method in the sixteenth embodiment.
  • Different data are associated with a time every fixed time (N seconds).
  • These data may be, for example, an ID for identifying time, may be time, or may be audio data (for example, 64 Kbps data).
  • the following description is based on the assumption that the data is an ID. Different IDs may have different additional information parts attached to the ID.
  • When the IDs are different, the packets that make up the IDs are also different. Therefore, it is desirable that the IDs not be consecutive.
  • the transmitter 1800d transmits the ID in accordance with the reproduction time of the displayed image, for example.
  • the receiver can recognize the reproduction time (synchronization time) of the image of the transmitter 1800d by detecting the timing when the ID is changed.
  • the synchronization time can be recognized by the following method.
  • By setting N to 0.5 seconds or less, synchronization can be performed accurately.
  • FIG. 126 is a diagram illustrating an example of a transmission signal in the sixteenth embodiment.
  • a time packet is a packet that holds the time of transmission.
  • the time packet is divided into a time packet 1 representing a fine time and a time packet 2 representing a rough time.
  • time packet 2 indicates the hour and minute of the time
  • time packet 1 indicates only the second of the time.
  • a packet indicating the time may be divided into three or more time packets. Since the coarse time is less necessary, the receiver can recognize the synchronization time quickly and accurately by transmitting more fine time packets than coarse time packets.
  • In other words, the visible light signal includes second information (time packet 2) indicating the hour and minute of the time, and first information (time packet 1) indicating the second of the time, and these pieces of information indicate the time at which the visible light signal is transmitted from the transmitter 1800d.
  • The receiver 1800a receives the second information, and receives the first information a greater number of times than it receives the second information.
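  • Reassembling the synchronization time from the coarse and fine time packets is simple arithmetic; the sketch below assumes time packet 2 carries (hour, minute) and time packet 1 carries the second, as described above.

    # Minimal sketch: combine the coarse and fine time packets into a synchronization time.
    def combine_time_packets(coarse, fine):
        """coarse: (hour, minute) from time packet 2; fine: seconds from time packet 1.
        Returns the synchronization time in seconds since midnight."""
        hour, minute = coarse
        return hour * 3600 + minute * 60 + fine

    if __name__ == "__main__":
        # Time packet 2 is received rarely; time packet 1 is received more often, so the
        # fine part can be updated without re-receiving the coarse part.
        coarse = (13, 45)                      # last received time packet 2
        for fine in (7, 8, 9):                 # successive time packet 1 values
            print(combine_time_packets(coarse, fine))  # 49507, 49508, 49509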
  • FIG. 127 is a diagram illustrating an example of a process flow of the receiver 1800a according to the sixteenth embodiment.
  • First, a processing delay time is designated for the receiver 1800a (step S1801). This may be stored in the processing program or specified by the user. When the user performs this correction, more accurate synchronization can be realized for the individual receiver. More accurate synchronization can also be achieved by changing this processing delay time depending on the receiver model, the temperature of the receiver, and the CPU usage rate.
  • the receiver 1800a determines whether or not a time packet has been received or whether or not an ID associated for voice synchronization has been received (step S1802).
  • If the receiver 1800a determines that it has been received (Y in step S1802), it further determines whether there is an image waiting for processing (step S1804). If it is determined that there is an image waiting for processing (Y in step S1804), the receiver 1800a discards the images waiting for processing, or delays their processing, so that reception is performed from the most recently acquired image (step S1805). Thereby, an unexpected delay due to a backlog of images waiting for processing can be avoided.
  • Next, the receiver 1800a measures the position in the image at which the visible light signal (specifically, the bright lines) appears (step S1806). That is, by measuring the position, in the direction perpendicular to the exposure lines, from the first exposure line of the image sensor, the time difference from the image acquisition start time to the signal reception time (the in-image delay time) can be calculated.
  • the receiver 1800a can accurately perform synchronized reproduction by reproducing the sound or moving image at the time obtained by adding the processing delay time and the in-image delay time to the recognized synchronization time (step S1807).
  • If it is determined in step S1802 that the receiver 1800a has not received a time packet or an ID associated for voice synchronization (N in step S1802), the receiver 1800a receives a signal from the image obtained by imaging (step S1803).
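  • Steps S1806 and S1807 can be sketched as follows, assuming a linear rolling-shutter read-out so that the vertical position of the bright lines maps proportionally to time within the frame; the numbers in the example are illustrative.

    # Minimal sketch: playback time = synchronization time + processing delay + in-image delay.
    def in_image_delay_s(bright_line_row: int, total_rows: int, frame_period_s: float) -> float:
        """Delay from the start of image acquisition to the exposure line carrying the signal."""
        return (bright_line_row / total_rows) * frame_period_s

    def playback_time_s(sync_time_s: float, processing_delay_s: float,
                        bright_line_row: int, total_rows: int, frame_period_s: float) -> float:
        return sync_time_s + processing_delay_s + in_image_delay_s(
            bright_line_row, total_rows, frame_period_s)

    if __name__ == "__main__":
        # 1080-line sensor at 30 fps, signal found at row 540 -> about 16.7 ms in-image delay.
        print(playback_time_s(100.0, 0.050, 540, 1080, 1 / 30))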
  • FIG. 128 is a diagram illustrating an example of a user interface of the receiver 1800a in the sixteenth embodiment.
  • the user can adjust the processing delay time described above by pressing any of the buttons Bt1 to Bt4 displayed on the receiver 1800a as shown in FIG. 128 (a).
  • the processing delay time may be set by a swipe operation as shown in FIG. 128 (b). Thereby, synchronous reproduction can be performed more accurately based on the user's sense.
  • FIG. 129 is a diagram illustrating an example of a process flow of the receiver 1800a according to the sixteenth embodiment.
  • the earphone-only playback shown by this processing flow enables audio playback without disturbing the surroundings.
  • the receiver 1800a checks whether or not the setting limited to the earphone is performed (step S1811).
  • The setting limited to the earphone is made, for example, in the receiver 1800a itself, or is included in the received signal (visible light signal). Alternatively, the fact that playback is limited to the earphone is recorded in the server or in the receiver 1800a in association with the received signal.
  • If the setting limited to the earphone is made (Y in step S1811), the receiver 1800a determines whether or not an earphone is connected to the receiver 1800a (step S1813).
  • When the receiver 1800a confirms that playback is not limited to the earphone (N in step S1811), or determines that the earphone is connected (Y in step S1813), the receiver 1800a reproduces the sound (step S1812). When reproducing the sound, the receiver 1800a adjusts the volume so that it falls within the set range. This setting range is set in the same manner as the setting limited to the earphone.
  • If the receiver 1800a determines that the earphone is not connected (N in step S1813), it performs a notification prompting the user to connect the earphone (step S1814).
  • This notification is performed by, for example, screen display, audio output, or vibration.
  • Further, the receiver 1800a prepares an interface for forced reproduction and determines whether or not the user has performed a forced reproduction operation (step S1815). If it is determined that the forced reproduction operation has been performed (Y in step S1815), the receiver 1800a reproduces the sound even when the earphone is not connected (step S1812).
  • The receiver 1800a retains the audio data received in advance and the analyzed synchronization time so that, when the earphone is connected, synchronized audio playback can be started quickly.
  • FIG. 130 is a diagram illustrating another example of the process flow of the receiver 1800a according to the sixteenth embodiment.
  • the receiver 1800a receives an ID from the transmitter 1800d (step S1821). That is, the receiver 1800a receives a visible light signal indicating the ID of the transmitter 1800d or the ID of the content displayed on the transmitter 1800d.
  • the receiver 1800a downloads information (content) associated with the received ID from the server (step S1822). Alternatively, the receiver 1800a reads out the information from the data holding unit in the receiver 1800a. Hereinafter, this information is referred to as related information.
  • the receiver 1800a determines whether or not the synchronous reproduction flag included in the related information indicates ON (step S1823). If it is determined that the synchronous reproduction flag does not indicate ON (N in step S1823), the receiver 1800a outputs the content indicated by the related information (step S1824). That is, when the content is an image, the receiver 1800a displays an image, and when the content is audio, the receiver 1800a outputs audio.
  • If the receiver 1800a determines that the synchronous reproduction flag indicates ON (Y in step S1823), it determines whether the time adjustment mode included in the related information is set to the transmitter reference mode or to the absolute time mode (step S1825). If it is determined that the absolute time mode is set, the receiver 1800a determines whether or not the last time adjustment was performed within a certain time before the current time (step S1826). The time adjustment here is processing for obtaining time information by a predetermined method and using the time information to set the clock provided in the receiver 1800a to the absolute time of the reference clock.
  • the predetermined method is, for example, a method using a GPS (Global Positioning System) radio wave or an NTP (Network Time Protocol) radio wave. Note that the current time described above may be a time when the receiver 1800a, which is a terminal device, receives a visible light signal.
  • If the receiver 1800a determines that the last time adjustment has been performed within the certain time (Y in step S1826), the receiver 1800a outputs the related information based on the time of the clock of the receiver 1800a so that the content displayed on the transmitter 1800d and the related information are synchronized (step S1827).
  • For example, when the content indicated by the related information is a moving image, the receiver 1800a displays the moving image so as to be synchronized with the content displayed on the transmitter 1800d.
  • When the content indicated by the related information is audio, the receiver 1800a outputs the audio so as to be synchronized with the content displayed on the transmitter 1800d.
  • For example, when the related information indicates sound, the related information includes each frame constituting the sound, and these frames are time-stamped. The receiver 1800a outputs sound synchronized with the content of the transmitter 1800d by playing back the frame with the time stamp corresponding to the time of its own clock.
  • the receiver 1800a determines that the last time adjustment has not been performed within a certain time (N in step S1826), the receiver 1800a attempts to obtain the time information by a predetermined method, and whether or not the time information has been obtained. Is determined (step S1828). If it is determined that the time information has been obtained (Y in step S1828), the receiver 1800a updates the time of the clock of the receiver 1800a using the time information (step S1829). Then, the receiver 1800a executes the process of step S1827 described above.
  • If it is determined in step S1825 that the time adjustment mode is the transmitter reference mode, or if it is determined in step S1828 that the time information could not be obtained (N in step S1828), the receiver 1800a acquires the time information from the transmitter 1800d (step S1830). That is, the receiver 1800a acquires time information serving as a synchronization signal from the transmitter 1800d through visible light communication.
  • For example, the synchronization signals are time packet 1 and time packet 2 shown in FIG.
  • Alternatively, the receiver 1800a acquires the time information from the transmitter 1800d by radio waves such as Bluetooth (registered trademark) or Wi-Fi. Then, the receiver 1800a executes the processes of steps S1829 and S1827 described above.
  • In this way, processing is performed for synchronizing the clock of the terminal device, which is the receiver 1800a, with the reference clock by GPS radio waves or NTP radio waves. Further, the time of the terminal device is synchronized with the time of the transmitter according to the time indicated by the visible light signal transmitted from the transmitter 1800d. Accordingly, the terminal device can reproduce the content (moving image or sound) at the timing synchronized with the transmitter-side content reproduced by the transmitter 1800d.
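The branch structure of steps S1823 to S1830 can be summarized in code. The sketch below is illustrative only: the length of the "certain time" window, the dictionary keys, and the helper callables for obtaining GPS/NTP time or transmitter time are assumptions, not values given in the description.

```python
import time

TIME_ADJUST_VALIDITY_SEC = 600.0  # assumed "certain time" window; the text gives no value


class ReceiverClock:
    """Toy clock that tracks an offset to the reference clock and when it was last adjusted."""

    def __init__(self):
        self.offset = 0.0
        self.last_adjustment = None

    def adjust(self, reference_time):
        self.offset = reference_time - time.time()
        self.last_adjustment = time.time()

    def now(self):
        return time.time() + self.offset

    def recently_adjusted(self):
        return (self.last_adjustment is not None
                and time.time() - self.last_adjustment <= TIME_ADJUST_VALIDITY_SEC)


def choose_reference_time(related_info, clock, get_gps_or_ntp_time, get_transmitter_time):
    """Return the clock to use for synchronized output, following steps S1823-S1830."""
    if not related_info["synchronous_reproduction_flag"]:        # S1823: flag OFF
        return None                                              # S1824: plain output, no sync

    if related_info["time_adjustment_mode"] == "absolute_time":  # S1825
        if not clock.recently_adjusted():                        # S1826
            reference = get_gps_or_ntp_time()                    # S1828
            if reference is not None:
                clock.adjust(reference)                          # S1829
                return clock                                     # S1827
        else:
            return clock                                         # S1827 with the current clock

    # transmitter reference mode, or absolute time could not be obtained
    clock.adjust(get_transmitter_time())                         # S1830 + S1829
    return clock                                                 # S1827


if __name__ == "__main__":
    clock = ReceiverClock()
    info = {"synchronous_reproduction_flag": True, "time_adjustment_mode": "transmitter_reference"}
    ref = choose_reference_time(info, clock,
                                get_gps_or_ntp_time=lambda: None,
                                get_transmitter_time=lambda: time.time() + 1.5)
    print("synchronized clock time:", None if ref is None else ref.now())
```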
  • FIG. 131A is a diagram for explaining a specific method of synchronized playback in the sixteenth embodiment. As a method of synchronous reproduction, there are methods a to e shown in FIG. 131A.
  • the transmitter 1800d outputs a visible light signal indicating the content ID and the content playback time by changing the luminance of the display, as in the above embodiments.
  • the content playback time is the playback time of data that is part of the content that is being played back by the transmitter 1800d when the content ID is transmitted from the transmitter 1800d.
  • the data is a picture or a sequence constituting the moving image if the content is a moving image, or a frame constituting the sound if the content is sound.
  • the playback time indicates, for example, the playback time from the beginning of the content as the time. If the content is a moving image, the playback time is included in the content as a PTS (Presentation Time Stamp). That is, the content includes the reproduction time (display time) of the data for each data constituting the content.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the content ID indicated by the visible light signal to the server 1800f. The server 1800f receives the request signal, and transmits the content associated with the content ID included in the request signal to the receiver 1800a.
  • When the receiver 1800a receives the content, the receiver 1800a plays the content from the position of (content playback time + elapsed time since ID reception).
  • the elapsed time from the reception of the ID is an elapsed time from when the content ID is received by the receiver 1800a.
  • the transmitter 1800d outputs a visible light signal indicating the content ID and the content playback time by changing the luminance of the display, as in the above embodiments.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the content ID indicated by the visible light signal and the content playback time to the server 1800f.
  • the server 1800f receives the request signal, and transmits only a part of the content after the content playback time to the receiver 1800a among the content associated with the content ID included in the request signal.
  • When the receiver 1800a receives the part of the content, the receiver 1800a reproduces that part of the content from the position of (elapsed time since ID reception).
  • the transmitter 1800d outputs a visible light signal indicating the transmitter ID and the content reproduction time by changing the luminance of the display, as in the above embodiments.
  • the transmitter ID is information for identifying the transmitter.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the transmitter ID indicated by the visible light signal to the server 1800f.
  • the server 1800f holds, for each transmitter ID, a reproduction schedule that is a timetable of content reproduced by the transmitter with the transmitter ID. Further, the server 1800f includes a clock. When such a server 1800f receives the request signal, the content associated with the transmitter ID included in the request signal and the clock time (server time) of the server 1800f is the content being played back. Identify from the playback schedule. Then, the server 1800f transmits the content to the receiver 1800a.
  • When the receiver 1800a receives the content, the receiver 1800a plays the content from the position of (content playback time + elapsed time since ID reception).
  • the transmitter 1800d outputs a visible light signal indicating the transmitter ID and the transmitter time by changing the luminance of the display as in the above embodiments.
  • the transmitter time is a time indicated by a clock provided in the transmitter 1800d.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the transmitter ID indicated by the visible light signal and the transmitter time to the server 1800f.
  • the server 1800f holds the above reproduction schedule.
  • When the server 1800f receives the request signal, the server 1800f identifies, from the reproduction schedule, the content associated with the transmitter ID and the transmitter time included in the request signal as the content being reproduced.
  • Further, the server 1800f specifies the content playback time from the transmitter time. That is, the server 1800f finds the playback start time of the specified content from the reproduction schedule, and specifies the time between the playback start time and the transmitter time as the content playback time. Then, the server 1800f transmits the content and the content playback time to the receiver 1800a.
  • Upon receiving the content and the content playback time, the receiver 1800a plays the content from the position of (content playback time + elapsed time since ID reception).
  • the visible light signal indicates the time when the visible light signal is transmitted from the transmitter 1800d. Therefore, the receiver 1800a, which is a terminal device, can receive content associated with the time (transmitter time) at which the visible light signal is transmitted from the transmitter 1800d. For example, if the transmitter time is 5:43, content played back at 5:43 can be received.
  • the server 1800f has a plurality of contents each associated with a time.
  • the content associated with the time indicated by the visible light signal may not exist in the server 1800f.
  • In this case, the receiver 1800a as the terminal device may receive, from among the plurality of contents, the content associated with a time that is after the time indicated by the visible light signal and closest to the time indicated by the visible light signal. Thereby, even if content associated with exactly the time indicated by the visible light signal does not exist in the server 1800f, appropriate content can be received from among the plurality of contents in the server 1800f.
  • As described above, the reproduction method of this example includes a signal reception step of receiving, by a sensor of the receiver 1800a (terminal device), a visible light signal from the transmitter 1800d that transmits the visible light signal by a luminance change of the light source, a transmission step of transmitting, from the receiver 1800a to the server 1800f, a request signal for requesting the content associated with the visible light signal, a content reception step in which the receiver 1800a receives the content from the server 1800f, and a reproduction step in which the receiver 1800a reproduces the content.
  • the visible light signal indicates a transmitter ID and a transmitter time.
  • the transmitter ID is ID information.
  • the transmitter time is the time indicated by the clock of the transmitter 1800d, and the time when the visible light signal is transmitted from the transmitter 1800d.
  • the receiver 1800a receives the content associated with the transmitter ID and the transmitter time indicated by the visible light signal. As a result, the receiver 1800a can reproduce appropriate content with respect to the transmitter ID and the transmitter time.
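The selection rule described above, choosing among the contents held by the server 1800f the one whose associated time is closest to, and not earlier than, the time indicated by the visible light signal, can be sketched as follows. The tuple layout and the example playlist are hypothetical.

```python
# Hypothetical sketch of the content selection rule: among contents each associated with a
# time, pick the one whose time is at or after the indicated time and closest to it.

def select_content(contents, signal_time):
    """contents: list of (associated_time, content_id); signal_time: transmitter time (seconds)."""
    candidates = [(t, cid) for (t, cid) in contents if t >= signal_time]
    if not candidates:
        return None  # no content at or after the indicated time
    return min(candidates, key=lambda tc: tc[0] - signal_time)[1]


if __name__ == "__main__":
    playlist = [(17 * 3600, "news"), (17 * 3600 + 2400, "parade"), (18 * 3600, "fireworks")]
    # Transmitter time 5:43 p.m.: no content starts exactly then, so the content associated
    # with the closest later time (6:00 p.m.) is chosen.
    print(select_content(playlist, 17 * 3600 + 43 * 60))  # -> "fireworks"
```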
  • the transmitter 1800d outputs a visible light signal indicating the transmitter ID by changing the luminance of the display as in the above embodiments.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the transmitter ID indicated by the visible light signal to the server 1800f.
  • the server 1800f holds the above-described reproduction schedule and further includes a clock.
  • When the server 1800f receives the request signal, the server 1800f identifies, from the reproduction schedule, the content associated with the transmitter ID included in the request signal and the server time as the content being reproduced.
  • the server time is the time indicated by the clock of the server 1800f.
  • the server 1800f finds the reproduction start time of the specified content from the reproduction schedule table. Then, the server 1800f transmits the content and the content reproduction start time to the receiver 1800a.
  • When the receiver 1800a receives the content and the content playback start time, the receiver 1800a plays the content from the position of (receiver time - content playback start time).
  • the receiver time is a time indicated by a clock provided in the receiver 1800a.
  • As described above, the reproduction method of this example includes a signal reception step of receiving, by a sensor of the receiver 1800a (terminal device), a visible light signal from the transmitter 1800d that transmits the visible light signal by a luminance change of the light source, a transmission step of transmitting, from the receiver 1800a to the server 1800f, a request signal for requesting the content associated with the visible light signal, a content reception step of receiving, from the server 1800f, content that includes each time and the data reproduced at each time, and a reproduction step of reproducing the data corresponding to the time of the clock of the receiver 1800a.
  • Accordingly, if the transmitter 1800d reproduces content related to that content (transmitter-side content), the receiver 1800a can appropriately reproduce the content in synchronization with the transmitter-side content.
  • Note that the server 1800f may transmit only the part of the content after the content playback time to the receiver 1800a.
  • In the above description, the receiver 1800a transmits a request signal to the server 1800f and receives necessary data from the server 1800f; however, the data in the server 1800f may be transmitted to the receiver 1800a in advance and held there without performing such transmission / reception.
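The playback positions used by the methods above reduce to two small calculations. The sketch below is illustrative only; the parameter names are hypothetical and all times are in seconds.

```python
# Hedged sketch of the playback-position arithmetic in the methods described above.

def playback_position_with_playback_time(content_playback_time, elapsed_since_id_reception):
    """Methods using a content playback time: play from
    (content playback time + elapsed time since ID reception)."""
    return content_playback_time + elapsed_since_id_reception


def playback_position_with_start_time(receiver_time, content_playback_start_time):
    """Method using a playback start time: play from
    (receiver time - content playback start time)."""
    return receiver_time - content_playback_start_time


if __name__ == "__main__":
    # Example: the ID was received while the content was at 12.0 s, and 0.4 s passed
    # until the content arrived from the server.
    print(playback_position_with_playback_time(12.0, 0.4))    # 12.4
    # Example: the receiver clock reads 300.0 s past the hour and the playback schedule
    # says the content started at 295.0 s past the hour.
    print(playback_position_with_start_time(300.0, 295.0))    # 5.0
```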
  • FIG. 131B is a block diagram showing a configuration of a playback apparatus that performs synchronized playback by the method e described above.
  • The playback device B10 is the receiver 1800a or a terminal device that performs synchronized playback by the method e described above, and includes a sensor B11, a request signal transmission unit B12, a content reception unit B13, a clock B14, and a playback unit B15.
  • Sensor B11 is, for example, an image sensor, and receives the visible light signal from a transmitter 1800d that transmits a visible light signal according to a change in luminance of the light source.
  • the request signal transmission unit B12 transmits a request signal for requesting content associated with the visible light signal to the server 1800f.
  • the content receiving unit B13 receives content including each time and data reproduced at each time from the server 1800f.
  • the reproduction unit B15 reproduces data corresponding to the time of the clock B14 in the content.
  • FIG. 131C is a flowchart showing the processing operation of the terminal device that performs synchronous reproduction by the method e described above.
  • the playback device B10 is a receiver 1800a or a terminal device that performs synchronized playback by the method e described above, and executes each process of steps SB11 to SB15.
  • step SB11 the visible light signal is received from the transmitter 1800d that transmits the visible light signal according to the luminance change of the light source.
  • step SB12 a request signal for requesting content associated with the visible light signal is transmitted to server 1800f.
  • step SB13 content including each time and data reproduced at each time is received from server 1800f.
  • step SB15 data corresponding to the time of the clock B14 is reproduced from the content.
  • the data in the content can be appropriately played back at the correct time indicated by the content without being played back at the wrong time.
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • the software that realizes the playback apparatus B10 and the like of the present embodiment is a program that causes a computer to execute each step included in the flowchart shown in FIG. 131C.
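A minimal sketch of the playback device B10, with hypothetical Python names: the clock B14 is modeled as an offset from the system clock, and the playback unit B15 picks, from content consisting of (time, data) pairs, the data whose time is the latest one not after the current clock time. The sensor B11 and the request signal transmission unit B12 are omitted here.

```python
import time
from dataclasses import dataclass, field


@dataclass
class PlaybackDeviceB10:
    clock_offset: float = 0.0                     # clock B14: offset from the system clock
    content: list = field(default_factory=list)   # filled by the content receiver B13 with (time, data)

    def clock_b14(self):
        return time.time() + self.clock_offset

    def playback_unit_b15(self):
        """Return the data whose associated time is the latest one not after the clock time."""
        now = self.clock_b14()
        past = [(t, d) for (t, d) in self.content if t <= now]
        return max(past, key=lambda td: td[0])[1] if past else None


if __name__ == "__main__":
    start = time.time()
    device = PlaybackDeviceB10(content=[(start, "frame-0"), (start + 1.0, "frame-1")])
    print(device.playback_unit_b15())  # "frame-0" immediately after start
```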
  • FIG. 132 is a diagram for explaining preparations for synchronized playback in the sixteenth embodiment.
  • the receiver 1800a adjusts the time of the clock provided in the receiver 1800a to the time of the reference clock in order to perform synchronized playback. For this time adjustment, the receiver 1800a performs the following processes (1) to (5).
  • the receiver 1800a receives a signal.
  • This signal may be a visible light signal transmitted by a change in luminance of the display of the transmitter 1800d, or a radio wave signal based on Wi-Fi or Bluetooth (registered trademark) from a wireless device.
  • the receiver 1800a acquires position information indicating the position of the receiver 1800a by, for example, GPS instead of receiving such a signal. Then, the receiver 1800a recognizes that the receiver 1800a has entered a predetermined place or building based on the position information.
  • When the receiver 1800a receives the above signal or recognizes that it has entered the predetermined place, the receiver 1800a transmits a request signal for requesting data (related information) associated with the signal or the place to the server (visible light ID resolution server) 1800f.
  • The server 1800f transmits the above-described data and a time adjustment request for causing the receiver 1800a to adjust its time to the receiver 1800a.
  • When receiving the data and the time adjustment request, the receiver 1800a transmits the time adjustment request to a GPS time server, an NTP server, or a base station of a telecommunications carrier.
  • Upon receiving the time adjustment request, the server or the base station transmits time data (time information) indicating the current time (the time of the reference clock, or the absolute time) to the receiver 1800a.
  • The receiver 1800a adjusts its time by setting the clock provided in the receiver 1800a to the current time indicated by the time data.
  • In this way, the clock provided in the receiver 1800a (terminal device) is synchronized with the reference clock using a GPS (Global Positioning System) radio wave or an NTP (Network Time Protocol) radio wave. Therefore, the receiver 1800a can reproduce the data corresponding to each time at the appropriate time according to the reference clock.
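The time adjustment described above boils down to estimating the offset between the receiver's clock and the reference clock from the time data returned by a time server. The sketch below uses the standard four-timestamp offset formula (as in NTP); the timestamps and the assumption of symmetric network delay are illustrative, and the server query itself is not shown.

```python
# Illustrative clock-offset estimate from one request/response exchange with a time server.

def clock_offset(t0, t1, t2, t3):
    """t0/t3: request sent / reply received (receiver clock); t1/t2: request received /
    reply sent (reference clock). Returns the estimated receiver-clock error."""
    return ((t1 - t0) + (t2 - t3)) / 2.0


if __name__ == "__main__":
    # The receiver clock is 0.250 s behind the reference clock, with 0.040 s delay each way.
    t0, t3 = 100.000, 100.080   # receiver timestamps
    t1, t2 = 100.290, 100.290   # reference-clock timestamps
    print(round(clock_offset(t0, t1, t2, t3), 3))  # 0.25 -> add this to the receiver clock
```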
  • FIG. 133 is a diagram illustrating an example of application of the receiver 1800a according to the sixteenth embodiment.
  • the receiver 1800a is configured as a smartphone as described above, and is used by being held by a holder 1810 formed of, for example, a translucent resin or glass member.
  • the holder 1810 includes a back plate portion 1810a and a locking portion 1810b provided upright on the back plate portion 1810a.
  • the receiver 1800a is inserted between the back plate portion 1810a and the locking portion 1810b so as to be along the back plate portion 1810a.
  • FIG. 134A is a front view of receiver 1800a held by holder 1810 in Embodiment 16.
  • the receiver 1800a is held by the holder 1810 in the inserted state as described above.
  • the locking portion 1810b locks with the lower portion of the receiver 1800a and sandwiches the lower portion with the back plate portion 1810a.
  • the back surface of the receiver 1800a faces the back plate portion 1810a, and the display 1801 of the receiver 1800a is exposed.
  • FIG. 134B is a rear view of receiver 1800a held by holder 1810 in the sixteenth embodiment.
  • a through hole 1811 is formed in the back plate portion 1810a, and a variable filter 1812 is attached in the vicinity of the through hole 1811.
  • The camera 1802 of the receiver 1800a is exposed from the back plate portion 1810a through the through hole 1811.
  • the flashlight 1803 of the receiver 1800a faces the variable filter 1812.
  • the variable filter 1812 is formed in a disk shape, for example, and has three color filters (a red filter, a yellow filter, and a green filter) each having a fan shape and the same size.
  • the variable filter 1812 is attached to the back plate portion 1810a so as to be rotatable about the center of the variable filter 1812.
  • the red filter is a filter having red translucency
  • the yellow filter is a filter having yellow translucency
  • the green filter is a filter having green translucency.
  • variable filter 1812 is rotated, and, for example, the red filter is disposed at a position facing the flashlight 1803a.
  • the light emitted from the flashlight 1803a is diffused inside the holder 1810 as red light by passing through the red filter.
  • substantially the entire holder 1810 emits red light.
  • variable filter 1812 is rotated and, for example, the yellow filter is disposed at a position facing the flashlight 1803a.
  • the light emitted from the flashlight 1803a is diffused inside the holder 1810 as yellow light by passing through the yellow filter.
  • substantially the entire holder 1810 emits yellow light.
  • variable filter 1812 is rotated so that, for example, the green filter is disposed at a position facing the flashlight 1803a.
  • the light emitted from the flashlight 1803a is diffused inside the holder 1810 as green light by passing through the green filter.
  • substantially the entire holder 1810 emits green light.
  • the holder 1810 lights in red, yellow or green like a penlight.
  • FIG. 135 is a diagram for describing a use case of the receiver 1800a held by the holder 1810 in the sixteenth embodiment.
  • a receiver with a holder that is a receiver 1800a held by a holder 1810 is used in an amusement park or the like. That is, the plurality of receivers with holders that are directed to the float moving in the amusement park blink in synchronization with the music flowing from the float.
  • the float is configured as a transmitter in each of the above embodiments, and transmits a visible light signal by a change in luminance of a light source attached to the float.
  • the float transmits a visible light signal indicating the ID of the float.
  • When the receiver with a holder receives the visible light signal, that is, the ID, by imaging, the receiver 1800a that has received the ID acquires a program associated with the ID from, for example, a server.
  • This program includes instructions for turning on the flashlight 1803 of the receiver 1800a at each predetermined time. Each predetermined time is set in accordance with the music flowing from the float (so as to be synchronized). Then, the receiver 1800a blinks the flashlight 1803a according to the program.
  • each receiver 1800a that has received the ID repeats lighting at the same timing according to the music flowing from the float of the ID.
  • each receiver 1800a blinks the flashlight 1803 in accordance with a set color filter (hereinafter referred to as a setting filter).
  • the setting filter is a color filter that faces the flashlight 1803 of the receiver 1800a.
  • Each receiver 1800a recognizes the current setting filter based on an operation by the user. Alternatively, each receiver 1800a recognizes the current setting filter based on the color of an image obtained by photographing with the camera 1802.
  • In this way, in the same manner as the synchronized playback shown in FIGS. 123 to 129 described above, the receiver 1800a held in the holder 1810 blinks the flashlight 1803, that is, the holder 1810, in synchronization with the music of the float and with the receivers 1800a held in the other holders 1810.
  • FIG. 136 is a flowchart showing the processing operation of the receiver 1800a held by the holder 1810 in the sixteenth embodiment.
  • the receiver 1800a receives the float ID indicated by the visible light signal from the float (step S1831). Next, the receiver 1800a acquires a program associated with the ID from the server (step S1832). Next, the receiver 1800a executes the program to turn on the flashlight 1803 at each predetermined time according to the setting filter (step S1833).
  • the receiver 1800a may cause the display 1801 to display an image corresponding to the received ID or the acquired program.
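The program acquired in step S1832 is described as turning on the flashlight 1803 at predetermined times set to match the float's music. The following is a minimal sketch under assumptions not stated in the text: the program is represented as a list of second offsets from the start of the music, and set_flash is a hypothetical stand-in for the flashlight control.

```python
import time


def run_light_program(light_times, duration, set_flash, music_start):
    """Turn the flash on at each programmed time and off again after `duration` seconds."""
    for t in sorted(light_times):
        delay = music_start + t - time.time()
        if delay > 0:
            time.sleep(delay)          # wait until the programmed moment
        set_flash(True)                # holder lights up in the color of the setting filter
        time.sleep(duration)
        set_flash(False)


if __name__ == "__main__":
    start = time.time()
    run_light_program([0.0, 0.5, 1.0], duration=0.2,
                      set_flash=lambda on: print(f"{time.time() - start:4.2f}s flash {'ON' if on else 'off'}"),
                      music_start=start)
```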
  • FIG. 137 is a diagram illustrating an example of an image displayed by the receiver 1800a according to the sixteenth embodiment.
  • For example, when the receiver 1800a receives an ID from a Santa Claus float, the receiver 1800a displays a Santa Claus image as shown in FIG. 137. Further, as illustrated in FIG. 137 (b), the receiver 1800a may change the background color of the Santa Claus image to the color of the setting filter simultaneously with the lighting of the flashlight 1803. For example, when the color of the setting filter is red, the holder 1810 is lit red by turning on the flashlight 1803, and at the same time, a Santa Claus image with a red background is displayed on the display 1801. That is, the blinking of the holder 1810 and the display on the display 1801 are synchronized.
  • FIG. 138 is a diagram showing another example of the holder according to the sixteenth embodiment.
  • the holder 1820 is configured in the same manner as the holder 1810 described above, but does not include the through hole 1811 and the variable filter 1812.
  • a holder 1820 holds the receiver 1800a in a state where the display 1801 of the receiver 1800a is directed to the back plate portion 1820a.
  • the receiver 1800a causes the display 1801 to emit light instead of the flashlight 1803.
  • light from the display 1801 is diffused over substantially the entire holder 1820. Therefore, when the receiver 1800a causes the display 1801 to emit light with red light according to the above-described program, the holder 1820 is lit red. Similarly, when the receiver 1800a causes the display 1801 to emit light with yellow light according to the above-described program, the holder 1820 is lit in yellow.
  • Similarly, when the receiver 1800a causes the display 1801 to emit green light according to the above-described program, the holder 1820 lights up in green. If such a holder 1820 is used, the setting of the variable filter 1812 can be omitted.
  • (Embodiment 17) (Visible light signal) FIGS. 139A to 139D are diagrams illustrating an example of a visible light signal in Embodiment 17.
  • the transmitter generates a 4PPM visible light signal and changes the luminance in accordance with the visible light signal, for example, as shown in FIG. 139A.
  • the transmitter allocates 4 slots to one signal unit, and generates a visible light signal composed of a plurality of signal units.
  • the signal unit indicates High (H) or Low (L) for each slot.
  • the transmitter emits light brightly in the H slot and emits light darkly or extinguishes in the L slot.
  • one slot is a period corresponding to a time of 1/9600 seconds.
  • the transmitter may generate a visible light signal in which the number of slots allocated to one signal unit is variable.
  • the signal unit includes a signal indicating H in one or more consecutive slots and a signal indicating L in one slot following the H signal. Since the number of slots of H is variable, the total number of slots in the signal unit is variable.
  • the transmitter generates a visible light signal including these signal units in the order of a signal unit of 3 slots, a signal unit of 4 slots, and a signal unit of 6 slots. Also in this case, the transmitter emits light brightly in the H slot and emits light darkly or extinguishes in the L slot.
  • the transmitter may assign an arbitrary period (signal unit period) to one signal unit without assigning a plurality of slots to one signal unit.
  • the signal unit period includes an H period and an L period following the H period.
  • the period of H is adjusted according to the signal before modulation.
  • the period L may be fixed and may be a period corresponding to the slot.
  • The H period and the L period are each, for example, a period of 100 μs or more. For example, as shown in FIG. , the transmitter transmits a visible light signal including signal units in the order of a signal unit having a signal unit period of 210 μs, a signal unit having a signal unit period of 220 μs, and a signal unit having a signal unit period of 230 μs.
  • the transmitter emits light brightly during the H period and emits light darkly or extinguishes during the L period.
  • the transmitter may generate a signal indicating L and H alternately as a visible light signal.
  • the L period and the H period in the visible light signal are adjusted according to the signals before modulation.
  • For example, the transmitter transmits a visible light signal that indicates H for a period of 100 μs, then indicates L for a period of 120 μs, then indicates H for a period of 110 μs, and further indicates L for a period of 200 μs.
  • the transmitter emits light brightly during the H period and emits light darkly or extinguishes during the L period.
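The 4PPM case can be sketched as a slot encoder. The text fixes four slots per signal unit and a slot duration of 1/9600 second but not the symbol-to-slot mapping, so the mapping below (each 2-bit symbol selects which one of the four slots is L while the rest are H) is an assumption for illustration only.

```python
# Sketch of a 4PPM slot encoder under an assumed symbol-to-slot mapping.

SLOT_SECONDS = 1 / 9600  # one slot, as described above


def encode_4ppm(bits):
    """bits: string of '0'/'1' with even length. Returns a list of slot levels, 'H' or 'L'."""
    slots = []
    for i in range(0, len(bits), 2):
        dark = int(bits[i:i + 2], 2)              # position of the single L slot in the unit
        slots.extend("L" if s == dark else "H" for s in range(4))
    return slots


if __name__ == "__main__":
    pattern = encode_4ppm("0110")                 # two signal units
    print(pattern)                                 # ['H', 'L', 'H', 'H', 'H', 'H', 'L', 'H']
    print(f"duration: {len(pattern) * SLOT_SECONDS * 1e6:.1f} us")
```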
  • FIG. 140 is a diagram showing a configuration of a visible light signal in the seventeenth embodiment.
  • the visible light signal includes, for example, a signal 1, a brightness adjustment signal corresponding to the signal 1, a signal 2, and a brightness adjustment signal corresponding to the signal 2.
  • When the transmitter generates the signal 1 and the signal 2 by modulating the signals before modulation, the transmitter generates brightness adjustment signals for those signals and generates the above-described visible light signal.
  • the brightness adjustment signal corresponding to signal 1 is a signal that compensates for increase / decrease in brightness due to a luminance change according to signal 1.
  • the brightness adjustment signal corresponding to the signal 2 is a signal that compensates for increase / decrease in brightness due to a luminance change according to the signal 2.
  • Brightness B1 is expressed by the luminance change according to the signal 1 and the brightness adjustment signal of the signal 1, and brightness B2 is expressed by the luminance change according to the signal 2 and the brightness adjustment signal of the signal 2.
  • the transmitter in the present embodiment generates the brightness adjustment signals of signal 1 and signal 2 as part of the visible light signal so that the brightness B1 and brightness B2 are equal. Thereby, the brightness is kept constant and flicker can be suppressed.
  • When the transmitter generates the signal 1, the transmitter generates the signal 1 including the data 1, the preamble (header) following the data 1, and the data 1 following the preamble.
  • the preamble is a signal corresponding to data 1 arranged before and after the preamble.
  • this preamble is a signal serving as an identifier for reading data 1.
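A hedged sketch of what the brightness adjustment signal compensates for, under the assumption (not stated in the text) that perceived brightness is proportional to the fraction of time the light source is bright: an adjustment interval is appended after each signal so that both signal 1 and signal 2 end up with the same average brightness, keeping B1 = B2 and suppressing flicker.

```python
# Illustrative compensation calculation; not the patented encoding itself.

def adjustment_interval(bright_time, total_time, target_ratio):
    """Return (extra_bright, extra_dark) seconds appended after a signal block so that its
    average brightness equals target_ratio (0 < target_ratio < 1)."""
    # Solve (bright_time + extra_bright) / (total_time + extra_bright + extra_dark) = target_ratio
    # using the smallest non-negative padding.
    current = bright_time / total_time
    if current < target_ratio:          # too dark: pad with bright time only
        extra_bright = (target_ratio * total_time - bright_time) / (1 - target_ratio)
        return extra_bright, 0.0
    extra_dark = (bright_time - target_ratio * total_time) / target_ratio  # too bright: pad dark
    return 0.0, extra_dark


if __name__ == "__main__":
    # Signal 1 is bright 60% of the time, signal 2 only 40%; equalize both to 50%.
    for bright, total in [(0.6, 1.0), (0.4, 1.0)]:
        eb, ed = adjustment_interval(bright, total, 0.5)
        print(f"bright={bright}: add {eb:.2f}s bright, {ed:.2f}s dark ->",
              (bright + eb) / (total + eb + ed))
```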
  • FIG. 141 is a diagram illustrating an example of bright line images obtained by imaging of the receiver in Embodiment 17.
  • the receiver captures a bright line image including a visible light signal transmitted from the transmitter as a bright line pattern by capturing an image of the transmitter that changes in luminance. With such imaging, a visible light signal is received by the receiver.
  • Specifically, the receiver captures an image at time t1 using N exposure lines included in the image sensor, thereby acquiring a bright line image that includes a region a and a region b where bright line patterns appear. The region a and the region b are regions in which a bright line pattern appears due to the luminance change of the transmitter, which is the subject.
  • The receiver demodulates the visible light signal from the bright line patterns of the region a and the region b. However, if the receiver determines that the visible light signal demodulated in this way alone is not sufficient, the receiver captures an image at time t2 using only M (M < N) consecutive exposure lines corresponding to the region a among the N exposure lines. Thereby, the receiver acquires a bright line image including only the region a out of the region a and the region b. The receiver repeatedly performs such imaging at times t3 to t5. As a result, a visible light signal having a sufficient amount of data can be received at high speed from the subject corresponding to the region a.
  • the receiver captures an image at time t6 using only L (L ⁇ N) consecutive exposure lines corresponding to the region b among the N exposure lines. Thereby, the receiver acquires a bright line image including only the region b out of the regions a and b.
  • the receiver repeatedly performs such imaging at times t7 to t9. As a result, a visible light signal having a sufficient amount of data from the subject corresponding to the region b can be received at high speed.
  • the receiver may acquire a bright line image including only the region a by performing the same imaging at the times t2 to t5 at the times t10 and t11. Further, the receiver may acquire a bright line image including only the region b by performing imaging similar to that at times t6 to t9 at times t12 and t13.
  • the receiver when the receiver determines that the visible light signal is insufficient, the receiver performs continuous shooting of the bright line image including only the region a from time t2 to t5. If bright lines appear in the image obtained by the above, the above-described continuous shooting may be performed. Similarly, when the receiver determines that the visible light signal is insufficient, the receiver performs continuous shooting of the bright line image including only the region b from time t6 to time t9, which is obtained by imaging at time t1. If bright lines appear in the image, the above-described continuous shooting may be performed. The receiver may alternately perform acquisition of a bright line image including only the region a and acquisition of a bright line image including only the region b.
  • Note that the M consecutive exposure lines corresponding to the region a are exposure lines that contribute to the generation of the region a, and the L consecutive exposure lines corresponding to the region b are exposure lines that contribute to the generation of the region b.
  • FIG. 142 is a diagram illustrating another example of the bright line image obtained by imaging of the receiver in the seventeenth embodiment.
  • In this example, the receiver captures an image at time t1 using N exposure lines included in the image sensor, thereby acquiring a bright line image that includes a region a and a region b where bright line patterns appear.
  • Each of the areas a and b is an area where a bright line pattern appears when the luminance of the transmitter, which is a subject, changes as described above.
  • each of the region a and the region b has a region that overlaps with each other along the direction of the bright line or the exposure line (hereinafter referred to as an overlapping region).
  • If the receiver determines that the visible light signal demodulated from the bright line patterns of the region a and the region b is insufficient, the receiver captures an image at time t2 using only P (P < N) consecutive exposure lines corresponding to the overlapping region among the N exposure lines.
  • the receiver acquires a bright line image including only the overlapping regions of the region a and the region b.
  • the receiver repeatedly performs such imaging at times t3 and t4.
  • a visible light signal having a sufficient amount of data from the subject corresponding to each of the region a and the region b can be received substantially simultaneously and at high speed.
  • FIG. 143 is a diagram illustrating another example of the bright line image obtained by imaging by the receiver in the seventeenth embodiment.
  • In this example, the receiver captures an image at time t1 using the N exposure lines included in the image sensor, thereby acquiring a bright line image that includes a region containing a portion a in which the bright line pattern appears unclearly and a portion b in which the bright line pattern appears clearly. Similar to the above, this region is a region in which a bright line pattern appears due to the luminance change of the transmitter, which is the subject.
  • the receiver may perform continuous shooting of the bright line image including only the part a after the continuous shooting of the bright line image including only the part b.
  • Alternatively, the receiver may assign an order to each region and, in accordance with that order, perform continuous shooting of a bright line image including only the corresponding region.
  • the order may be an order corresponding to the magnitude of the signal (area or area size) or an order corresponding to the clarity of the bright line.
  • the order may be an order corresponding to the color of light from the subject corresponding to these areas. For example, the first continuous shooting is performed on a region corresponding to red light, and the next continuous shooting is performed on a region corresponding to white light. Further, only continuous shooting of the area with respect to red light may be performed.
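The restriction of imaging to the consecutive exposure lines that contribute to one region can be sketched as follows. This is an illustrative stand-in that works on a nested-list "image" with a crude brightness threshold; an actual receiver would program the image sensor's readout window rather than crop frames in software.

```python
# Minimal sketch: find the row range of a bright-line region, then read out only those rows.

def region_row_range(image, threshold):
    """Return (first_row, last_row) of rows whose mean value exceeds `threshold` (a crude
    stand-in for 'rows where the bright line pattern appears')."""
    rows = [r for r, line in enumerate(image) if sum(line) / len(line) > threshold]
    return (min(rows), max(rows)) if rows else None


def crop_to_exposure_lines(image, row_range):
    first, last = row_range
    return image[first:last + 1]


if __name__ == "__main__":
    # 8-line toy frame; rows 2-4 carry a bright line pattern.
    frame = [[10] * 6 for _ in range(8)]
    for r in (2, 3, 4):
        frame[r] = [200 if c % 2 else 30 for c in range(6)]
    rng = region_row_range(frame, threshold=50)
    print(rng, len(crop_to_exposure_lines(frame, rng)), "lines read out")  # (2, 4) 3 lines read out
```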
  • (HDR synthesis) FIG. 144 is a diagram for describing adaptation of the receiver in Embodiment 17 to a camera system that performs HDR synthesis.
  • the vehicle is equipped with a camera system to prevent collisions.
  • This camera system performs HDR (High Dynamic Range) composition using an image obtained by imaging by a camera. By this HDR synthesis, an image with a wide dynamic range of luminance can be obtained.
  • the camera system recognizes surrounding vehicles, obstacles, or people based on this wide dynamic range image.
  • the camera system has a normal setting mode and a communication setting mode as setting modes.
  • When the setting mode is the normal setting mode, for example, as shown in FIG. 144, the camera system performs imaging four times at times t1 to t4 with the same shutter speed of 1/100 second and with different sensitivities.
  • the camera system performs HDR synthesis using the four images obtained by the four imaging operations.
  • When the setting mode is the communication setting mode, the camera system may not perform HDR synthesis. In this mode, the camera system performs imaging three times with a shutter speed of 1/10000 second and with different sensitivities at times t10 to t12. The camera system recognizes surrounding vehicles, obstacles, people, or the like from the image obtained by the first of the four imagings, and receives a visible light signal by the last three of the four imagings, that is, by demodulating the bright line patterns appearing in the images obtained by those imagings.
  • In the above description, imaging is performed with different sensitivities at times t10 to t12, but imaging may be performed with the same sensitivity.
  • Such a camera system can perform HDR synthesis and can also receive a visible light signal.
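One way to think about the two setting modes is as a capture schedule handed to the camera. The sketch below uses the shutter speeds stated above; the ISO values, the split of the communication-mode imagings into one recognition frame plus three signal frames, and the dataclass layout are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class Exposure:
    shutter_s: float   # shutter speed in seconds
    iso: int
    purpose: str       # "hdr" frames are merged; "vlc" frames are decoded as bright lines


def capture_schedule(mode):
    if mode == "normal":
        # four imagings at 1/100 s with different sensitivities, then HDR synthesis
        return [Exposure(1 / 100, iso, "hdr") for iso in (100, 200, 400, 800)]
    # communication mode: one recognition frame plus three 1/10000 s frames for the
    # visible light signal (this split of the four imagings is an assumption)
    return ([Exposure(1 / 100, 100, "recognition")] +
            [Exposure(1 / 10000, iso, "vlc") for iso in (400, 800, 1600)])


if __name__ == "__main__":
    for ex in capture_schedule("communication"):
        print(ex)
```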
  • FIG. 145 is a diagram for explaining the processing operation of the visible light communication system in the seventeenth embodiment.
  • This visible light communication system includes, for example, a transmitter arranged at a cash register, a smartphone as a receiver, and a server. Note that the communication between the smartphone and the server and the communication between the transmitter and the server are each performed via a secure communication line. Communication between the transmitter and the smartphone is performed by visible light communication.
  • the visible light communication system according to the present embodiment ensures security by determining whether a visible light signal from a transmitter is accurately received by a smartphone.
  • the transmitter transmits, for example, a visible light signal indicating the value “100” to the smartphone by changing the luminance at time t1.
  • When the smartphone receives the visible light signal at time t2, the smartphone transmits a radio signal indicating the value “100” to the server.
  • the server receives the radio signal from the smartphone at time t3.
  • the server performs a process for determining whether or not the value “100” indicated by the radio signal is the value of the visible light signal received by the smartphone from the transmitter. That is, the server transmits, for example, a radio signal indicating a value “200” to the transmitter.
  • the transmitter that has received the radio wave signal transmits a visible light signal indicating the value “200” to the smartphone by changing the luminance at time t4.
  • When the smartphone receives the visible light signal at time t5, the smartphone transmits a radio signal indicating the value “200” to the server.
  • The server receives the radio signal from the smartphone at time t6.
  • The server then determines whether or not the value indicated by the received radio signal is the same as the value indicated by the radio signal transmitted at time t3. If they are the same, the server determines that the value “100” indicated by the radio signal received at time t3 is indeed the value of the visible light signal transmitted from the transmitter to the smartphone and received by the smartphone. On the other hand, if they are not the same, the server determines that it is doubtful whether the value “100” indicated by the radio signal received at time t3 is the value of the visible light signal transmitted from the transmitter to the smartphone and received by the smartphone.
  • In the above description, communication using radio wave signals is performed between the smartphone and the server and between the server and the transmitter, but communication using an optical signal other than a visible light signal or communication using an electrical signal may be performed instead.
  • the visible light signal transmitted from the transmitter to the smartphone indicates, for example, a charging value, a coupon value, a monster value, or a bingo value.
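The exchange at times t1 to t6 can be sketched as follows. This is a hedged illustration of the server-side check only; the classes, the in-memory stand-in for the transmitter's display, and the use of a random challenge value are assumptions rather than details given in the text (which uses the fixed example value "200").

```python
import secrets


class Server:
    def __init__(self, send_to_transmitter):
        self.send_to_transmitter = send_to_transmitter
        self.pending = {}   # smartphone id -> (reported value, challenge)

    def on_report(self, phone_id, value):                 # t3: value "100" arrives by radio
        challenge = str(secrets.randbelow(900) + 100)     # e.g. "200"
        self.pending[phone_id] = (value, challenge)
        self.send_to_transmitter(challenge)               # transmitter re-emits it as visible light

    def on_challenge_reply(self, phone_id, value):        # t6: smartphone relays what it decoded
        reported, challenge = self.pending.pop(phone_id)
        return reported if value == challenge else None   # None -> the first value is suspicious


if __name__ == "__main__":
    display = {}
    server = Server(send_to_transmitter=lambda v: display.update(value=v))
    server.on_report("phone-1", "100")
    # the smartphone images the transmitter and sends back the value it decoded
    print(server.on_challenge_reply("phone-1", display["value"]))  # "100" (accepted)
```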
  • (Vehicle related) FIG. 146A is a diagram illustrating an example of vehicle-to-vehicle communication using visible light in Embodiment 17.
  • the head vehicle recognizes that there is an accident in the direction of travel by a sensor (camera etc.) mounted on the vehicle.
  • In this case, the leading vehicle transmits, by changing the luminance of its tail lamps, a visible light signal that prompts the following vehicle to decelerate.
  • When the succeeding vehicle receives the visible light signal by imaging with a camera mounted on that vehicle, the succeeding vehicle decelerates according to the visible light signal and further transmits a visible light signal that prompts the vehicle behind it to decelerate.
  • the visible light signal that prompts deceleration is sequentially transmitted from the head to a plurality of vehicles traveling in a line, and the vehicle that receives the visible light signal decelerates. Since the transmission of the visible light signal to each vehicle is performed quickly, the plurality of vehicles can be decelerated in the same manner at substantially the same time. Therefore, it is possible to reduce traffic congestion due to accidents.
  • FIG. 146B is a diagram illustrating another example of vehicle-to-vehicle communication using visible light according to the seventeenth embodiment.
  • the front vehicle may transmit a visible light signal indicating a message (for example, “thank you”) to the subsequent vehicle by changing the brightness of the tail lamp.
  • This message is generated, for example, by a user operation on a smartphone, and the smartphone transmits a signal indicating the message to the preceding vehicle. Thereby, the preceding vehicle can transmit a visible light signal indicating the message to the subsequent vehicle.
  • FIG. 147 is a diagram illustrating an example of a method for determining the positions of a plurality of LEDs in the seventeenth embodiment.
  • a vehicle headlight has a plurality of LEDs (Light Emitting Diodes).
  • the transmitter of this vehicle transmits a visible light signal from each LED by individually changing the brightness of each of the plurality of LEDs of the headlight.
  • A receiver of another vehicle receives the visible light signals from those LEDs by imaging the vehicle having the headlight.
  • At this time, the receiver determines the position of each of the plurality of LEDs from the image obtained by the imaging, in order to recognize from which LED the received visible light signal was transmitted.
  • Specifically, the receiver uses an acceleration sensor attached to the same vehicle as the receiver, and determines the position of each of the plurality of LEDs with the direction of gravity indicated by the acceleration sensor (for example, the downward arrow in FIG. 147) as a reference.
  • an LED is used as an example of a light-emitting body that changes in luminance, but a light-emitting body other than the LED may be used.
  • FIG. 148 is a diagram illustrating an example of a bright line image obtained by imaging a vehicle in the seventeenth embodiment.
  • a receiver mounted on a traveling vehicle acquires a bright line image shown in FIG. 148 by imaging a subsequent vehicle (following vehicle).
  • the transmitter mounted on the following vehicle transmits a visible light signal to the preceding vehicle by changing the brightness of the two headlights of the vehicle.
  • a camera for imaging the rear is attached to the rear part of the front vehicle or the side mirror.
  • The receiver acquires a bright line image by imaging, with this camera, the following vehicle as a subject, and demodulates the bright line pattern (visible light signal) included in the bright line image. Thereby, the visible light signal transmitted from the transmitter of the following vehicle is received by the receiver of the preceding vehicle.
  • The receiver acquires the ID, speed, and model of the vehicle having the headlights from each of the visible light signals transmitted from the two headlights and demodulated. If the IDs of the two visible light signals are the same, the receiver determines that the two visible light signals are transmitted from the same vehicle. The receiver then specifies, from the vehicle model, the distance between the two headlights of that vehicle (inter-light distance). Further, the receiver measures the distance L1 between the two regions in the bright line image where the bright line patterns appear. Then, the receiver calculates the distance (inter-vehicle distance) from the vehicle on which the receiver is mounted to the following vehicle by triangulation using the distance L1 and the inter-light distance. The receiver determines the risk of collision based on the inter-vehicle distance and the vehicle speed acquired from the visible light signal, and notifies the driver of a warning corresponding to the determination result. Thereby, a collision between the vehicles can be avoided.
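Under a simple pinhole-camera assumption that the text does not spell out, the triangulation described above reduces to a similar-triangles relation between the known inter-light distance, the measured pixel distance L1, and the camera focal length expressed in pixels. The focal length and the numeric values below are illustrative only.

```python
# Sketch of the triangulation step:
#   distance = focal_length_px * inter_light_distance_m / separation_px

def inter_vehicle_distance(inter_light_distance_m, separation_px, focal_length_px):
    """Distance to the following vehicle from the known headlight spacing and the measured
    pixel distance L1 between the two bright line regions."""
    return focal_length_px * inter_light_distance_m / separation_px


if __name__ == "__main__":
    # Headlights 1.5 m apart appear 120 px apart with an (assumed) 1200 px focal length.
    d = inter_vehicle_distance(1.5, 120, 1200)
    print(f"inter-vehicle distance: {d:.1f} m")   # 15.0 m
```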
  • the receiver specifies the inter-light distance from the vehicle type included in the visible light signal, but may specify the inter-light distance from information other than the vehicle type.
  • In the above description, the receiver issues a warning when it is determined that there is a risk of collision, but the receiver may instead output a control signal for causing the vehicle to perform an operation for avoiding the risk.
  • the control signal is a signal for accelerating the vehicle or a signal for causing the vehicle to change lanes.
  • the camera images the following vehicle, but may image the oncoming vehicle.
  • When the receiver determines, from the image obtained by imaging with the camera, that there is fog in the vicinity of the receiver (that is, of the vehicle equipped with the receiver), the receiver may enter the mode for receiving the visible light signal as described above.
  • the receiver of the vehicle can identify the position and speed of the oncoming vehicle by receiving the visible light signal transmitted from the headlight of the oncoming vehicle, even if fog is in the vicinity.
  • FIG. 149 is a diagram illustrating an example of application of the receiver and the transmitter in the seventeenth embodiment.
  • FIG. 149 is a view of the automobile from the back.
  • a transmitter (car) 7006a having two tail lamps (light emitting unit or light) of a car transmits identification information (ID) of the transmitter 7006a to a receiver configured as a smartphone, for example.
  • the receiver acquires information associated with the ID from the server.
  • This information indicates, for example, the ID of the car or the transmitter, the distance between the light emitting units, the size of the light emitting unit, the size of the car, the shape of the car, the weight of the car, the car number, the view ahead, or the presence or absence of danger.
  • the receiver may acquire these pieces of information directly from the transmitter 7006a.
  • FIG. 150 is a flowchart illustrating an example of processing operations of the receiver and the transmitter 7006a in the seventeenth embodiment.
  • the ID of the transmitter 7006a and the information passed to the receiver that has received the ID are associated with each other and stored in the server (Step 7106a).
  • The information to be passed to the receiver may include information such as the size of the light emitting unit serving as the transmitter 7006a, the distance between the light emitting units, the shape, weight, and identification number (such as the body number) of the object having the transmitter 7006a as a component, the state of places that are difficult to observe from the receiver, and the presence or absence of danger.
  • the transmitter 7006a transmits the ID (Step 7106b).
  • the transmission content may include the URL of the server and information stored in the server.
  • the receiver receives information such as the transmitted ID (Step 7106c).
  • the receiver acquires information associated with the received ID from the server (Step 7106d).
  • the receiver displays the received information and the information acquired from the server (Step 7106e).
  • The distance between the receiver and the light emitting unit is calculated by triangulation from the size information of the light emitting unit and the apparent size of the imaged light emitting unit, or from the information on the distance between the light emitting units and the distance between the imaged light emitting units (Step 7106f).
  • the receiver issues a warning of danger based on information such as the state of the place that is difficult to observe from the receiver and the presence or absence of danger (Step 7106g).
  • FIG. 151 is a diagram illustrating an example of application of the receiver and the transmitter in the seventeenth embodiment.
  • a transmitter (car) 7007b having two tail lamps (light emitting unit or light) of a car transmits information of the transmitter 7007b to a receiver 7007a configured as a transmission / reception device of a parking lot, for example.
  • the information of the transmitter 7007b indicates identification information (ID) of the transmitter 7007b, a car number, a car size, a car shape, or a car weight.
  • the receiver 7007a transmits information indicating whether parking is possible, billing information, or a parking position. Note that the receiver 7007a may receive the ID and acquire information other than the ID from the server.
  • FIG. 152 is a flowchart illustrating an example of processing operations of the receiver 7007a and the transmitter 7007b according to the seventeenth embodiment.
  • the transmitter 7007b includes an in-vehicle transmitter and an in-vehicle receiver in order to perform not only transmission but also reception.
  • the ID of the transmitter 7007b and the information passed to the receiver 7007a that has received the ID are associated with each other and stored in the server (parking lot management server) (Step 7107a).
  • The information to be passed to the receiver 7007a may include the shape, weight, and identification number (such as the body number) of the object having the transmitter 7007b as a component, the identification number of the user of the transmitter 7007b, and information for payment.
  • the transmitter 7007b transmits the ID (Step 7107b).
  • the contents of transmission may include the URL of the server and information stored in the server.
  • the parking lot receiver 7007a (parking lot transmission / reception device) transmits the received information to a server (parking lot management server) that manages the parking lot (Step 7107c).
  • the parking lot management server acquires information associated with the ID using the ID of the transmitter 7007b as a key (Step 7107d).
  • the parking lot management server investigates the parking lot availability (Step 7107e).
  • the parking lot receiver 7007a (parking lot transmission / reception device) transmits whether or not parking is possible, parking position information, or the address of a server holding these pieces of information (Step 7107f). Or a parking lot management server transmits such information to another server.
  • the transmitter (on-vehicle receiver) 7007b receives the information transmitted above (Step 7107g). Or an in-vehicle system acquires these information from another server.
  • the parking lot management server controls the parking lot so as to facilitate parking (Step 7107h). For example, control of a multistory parking lot is performed.
  • the transmission / reception device of the parking lot transmits the ID (Step 7107i).
  • the in-vehicle receiver (transmitter 7007b) makes an inquiry to the parking lot management server based on the user information of the in-vehicle receiver and the received ID (Step 7107j).
  • the parking lot management server charges according to the parking time (7107k).
  • the parking lot management server controls the parking lot so that the parked vehicle can be easily accessed (Step 7107m). For example, control of a multistory parking lot is performed.
  • the in-vehicle receiver (transmitter 7007b) displays a map to the parking position and performs navigation from the current location (Step 7107n).
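A hedged sketch of the parking lot management server's bookkeeping in the flow above: resolving a transmitter ID to stored vehicle information, recording the entry time, and charging according to the parking time at exit (Step 7107k). The rate, the stored fields, and the class layout are illustrative assumptions.

```python
import time

RATE_PER_HOUR = 3.0  # illustrative charge


class ParkingManagementServer:
    def __init__(self, registered_vehicles):
        self.registered = registered_vehicles    # transmitter ID -> vehicle info (Step 7107a)
        self.entries = {}                         # transmitter ID -> entry time

    def on_entry(self, transmitter_id):
        info = self.registered[transmitter_id]    # Step 7107d: look up information by ID
        self.entries[transmitter_id] = time.time()
        return info                               # e.g. used to control the parking lot

    def on_exit(self, transmitter_id):
        parked_seconds = time.time() - self.entries.pop(transmitter_id)
        return RATE_PER_HOUR * parked_seconds / 3600.0   # Step 7107k: charge by parking time


if __name__ == "__main__":
    server = ParkingManagementServer({"car-42": {"size": "compact", "payment": "on-file"}})
    print(server.on_entry("car-42"))
    print(f"charge: {server.on_exit('car-42'):.4f}")      # roughly 0 for an instant visit
```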
  • FIG. 153 is a diagram illustrating a configuration of a visible light communication system applied to the inside of a train in Embodiment 17.
  • the visible light communication system includes, for example, a plurality of lighting devices 1905 arranged in a train, a smartphone 1906 held by a user, a server 1904, and a camera 1903 arranged in the train.

Abstract

This transmission method includes a step (S551) for receiving, as a designated light adjustment degree, a light adjustment degree designated with respect to a light source, and a step (S552) in which, when the designated light adjustment degree is no greater than a first value, a light source emits light at the designated light adjustment degree while a signal encoded in a first mode is transmitted via changes in luminance, and, when the designated light adjustment degree is greater than the first value, the light source emits light at the designated light adjustment degree while a signal encoded in a second mode is transmitted via changes in luminance. The peak current value of the light source when the designated light adjustment degree is greater than the first value but no more than a second value is smaller than the peak current value of the light source when the designated light adjustment value is the first value.

Description

送信方法、送信装置、およびプログラムTransmission method, transmission device, and program
 本発明は、可視光信号の送信方法、送信装置およびプログラムなどに関する。 The present invention relates to a visible light signal transmission method, a transmission device, a program, and the like.
 In recent home networks, in addition to the cooperation of AV home appliances through IP (Internet Protocol) connections over Ethernet (registered trademark) and wireless LAN (Local Area Network), home energy management systems (HEMS) that manage power consumption in response to environmental concerns and allow power to be switched on and off from outside the home are driving the adoption of appliance-linkage functions that connect a wide variety of home appliances to the network. However, some home appliances lack the computing power needed for a communication function, and for others it is difficult to add a communication function for cost reasons.
 To solve such problems, Patent Literature 1 describes, for an optical space transmission apparatus that conveys information through free space using light, a technique that efficiently realizes communication between devices with a limited set of transmission devices by performing communication using a plurality of monochromatic light sources of illumination light.
JP 2002-290335 A
 However, the conventional scheme described above is limited to cases where the device to which it is applied has a three-color light source, such as a lighting fixture.
 The present invention solves this problem and provides, among other things, a transmission method that enables communication among a wide variety of devices, including devices other than lighting fixtures having a three-color light source.
 A transmission method according to one aspect of the present invention is a transmission method for transmitting a signal by a change in luminance of a light source, and includes: an accepting step of accepting, as a designated dimming degree, a dimming degree designated for the light source; and a transmitting step of, when the designated dimming degree is less than or equal to a first value, transmitting the signal encoded in a first mode by the change in luminance while causing the light source to emit light at the designated dimming degree, and, when the designated dimming degree is greater than the first value, transmitting the signal encoded in a second mode by the change in luminance while causing the light source to emit light at the designated dimming degree. When the designated dimming degree is greater than the first value and less than or equal to a second value, the value of the peak current of the light source for transmitting the signal encoded in the second mode by the change in luminance is smaller than the value of the peak current of the light source for transmitting the signal encoded in the first mode by the change in luminance when the designated dimming degree is equal to the first value.
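To make the transmitting step above concrete, the sketch below chooses the encoding mode and a peak-current value from the designated dimming degree. It is a minimal illustration under stated assumptions, not the claimed implementation: the thresholds FIRST_VALUE and SECOND_VALUE, the normalized current figures, and the function name select_transmission are hypothetical, chosen only so that the second-mode peak current used between the first and second values is smaller than the first-mode peak current at the first value, as the method requires.

```python
# Illustrative sketch only: the threshold and current values are hypothetical,
# chosen to satisfy the required relationship (the second-mode peak current
# between the first and second values is smaller than the first-mode peak
# current at the first value).

FIRST_VALUE = 50    # dimming-degree threshold between the two modes (%)
SECOND_VALUE = 75   # upper bound of the reduced-peak-current range (%)

PEAK_CURRENT_MODE1_AT_FIRST_VALUE = 1.0   # normalized peak current in mode 1 at the first value
PEAK_CURRENT_MODE2_REDUCED = 0.8          # must be smaller than the value above

def select_transmission(designated_dimming_degree):
    """Return (encoding_mode, peak_current) for the designated dimming degree."""
    if designated_dimming_degree <= FIRST_VALUE:
        # First mode: here the peak current simply scales with the dimming degree,
        # reaching PEAK_CURRENT_MODE1_AT_FIRST_VALUE at the first value.
        peak = PEAK_CURRENT_MODE1_AT_FIRST_VALUE * (designated_dimming_degree / FIRST_VALUE)
        return "mode1", peak
    elif designated_dimming_degree <= SECOND_VALUE:
        # Second mode, with a peak current smaller than the first-mode peak
        # current at the first value, as the transmission method requires.
        return "mode2", PEAK_CURRENT_MODE2_REDUCED
    else:
        # Above the second value the second mode is still used; the peak current
        # here is not constrained by the relationship above, so it may be raised
        # to reach the requested brightness.
        return "mode2", PEAK_CURRENT_MODE2_REDUCED * (designated_dimming_degree / SECOND_VALUE)

for degree in (30, 50, 60, 90):
    print(degree, select_transmission(degree))
```

Running the loop shows the mode switching at the first value and the reduced peak current in the range between the first and second values, which is the behavior the transmitting step describes.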
 Note that these general or specific aspects may be implemented as a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of systems, methods, integrated circuits, computer programs, and recording media. A computer program that executes the method according to one embodiment may also be stored on a recording medium of a server and delivered from the server to a terminal in response to a request from the terminal.
 According to the present invention, it is possible to realize a transmission method that enables communication among a wide variety of devices, including devices other than lighting fixtures having a three-color light source.
図1は、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 1 is a diagram illustrating an example of an observation method of luminance of a light emitting unit in the first embodiment. 図2は、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 2 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in the first embodiment. 図3は、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 3 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in the first embodiment. 図4は、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 4 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in the first embodiment. 図5Aは、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 5A is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1. 図5Bは、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 5B is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1. 図5Cは、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 5C is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1. 図5Dは、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 5D is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1. 図5Eは、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 5E is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1. 図5Fは、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 5F is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1. 図5Gは、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 5G is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1. 図5Hは、実施の形態1における発光部の輝度の観測方法の一例を示す図である。FIG. 5H is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1. 図6Aは、実施の形態1における情報通信方法のフローチャートである。FIG. 6A is a flowchart of the information communication method in Embodiment 1. 図6Bは、実施の形態1における情報通信装置のブロック図である。FIG. 6B is a block diagram of the information communication apparatus according to Embodiment 1. 図7は、実施の形態2における受信機の撮影動作の一例を示す図である。FIG. 7 is a diagram illustrating an example of a photographing operation of the receiver in the second embodiment. 図8は、実施の形態2における受信機の撮影動作の他の例を示す図である。FIG. 8 is a diagram illustrating another example of the photographing operation of the receiver in the second embodiment. 図9は、実施の形態2における受信機の撮影動作の他の例を示す図である。FIG. 9 is a diagram illustrating another example of the photographing operation of the receiver in the second embodiment. 図10は、実施の形態2における受信機の表示動作の一例を示す図である。FIG. 10 is a diagram illustrating an example of display operation of the receiver in Embodiment 2. 図11は、実施の形態2における受信機の表示動作の一例を示す図である。FIG. 11 is a diagram illustrating an example of display operation of the receiver in Embodiment 2. 図12は、実施の形態2における受信機の動作の一例を示す図である。FIG. 12 is a diagram illustrating an example of operation of a receiver in Embodiment 2. 図13は、実施の形態2における受信機の動作の他の例を示す図である。FIG. 13 is a diagram illustrating another example of operation of a receiver in Embodiment 2. 図14は、実施の形態2における受信機の動作の他の例を示す図である。FIG. 14 is a diagram illustrating another example of operation of a receiver in Embodiment 2. 図15は、実施の形態2における受信機の動作の他の例を示す図である。FIG. 15 is a diagram illustrating another example of operation of a receiver in Embodiment 2. 図16は、実施の形態2における受信機の動作の他の例を示す図である。FIG. 
16 is a diagram illustrating another example of operation of a receiver in Embodiment 2. 図17は、実施の形態2における受信機の動作の他の例を示す図である。FIG. 17 is a diagram illustrating another example of operation of a receiver in Embodiment 2. 図18は、実施の形態2における受信機と送信機とサーバとの動作の一例を示す図である。FIG. 18 is a diagram illustrating an example of operations of the receiver, the transmitter, and the server in the second embodiment. 図19は、実施の形態2における受信機の動作の他の例を示す図である。FIG. 19 is a diagram illustrating another example of operation of a receiver in Embodiment 2. 図20は、実施の形態2における受信機の動作の他の例を示す図である。FIG. 20 is a diagram illustrating another example of operation of a receiver in Embodiment 2. 図21は、実施の形態2における受信機の動作の他の例を示す図である。FIG. 21 is a diagram illustrating another example of operation of a receiver in Embodiment 2. 図22は、実施の形態2における送信機の動作の一例を示す図である。FIG. 22 is a diagram illustrating an example of operation of a transmitter in Embodiment 2. 図23は、実施の形態2における送信機の動作の他の例を示す図である。FIG. 23 is a diagram illustrating another example of operation of a transmitter in Embodiment 2. 図24は、実施の形態2における受信機の応用例を示す図である。FIG. 24 is a diagram illustrating an example of application of a receiver in Embodiment 2. 図25は、実施の形態2における受信機の動作の他の例を示す図である。FIG. 25 is a diagram illustrating another example of operation of a receiver in Embodiment 2. 図26は、実施の形態3における受信機、送信機およびサーバの処理動作の一例を示す図である。FIG. 26 is a diagram illustrating an example of processing operations of the receiver, the transmitter, and the server in Embodiment 3. 図27は、実施の形態3における送信機および受信機の動作の一例を示す図である。FIG. 27 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 3. 図28は、実施の形態3における送信機、受信機およびサーバの動作の一例を示す図である。FIG. 28 is a diagram illustrating an example of operations of a transmitter, a receiver, and a server in Embodiment 3. 図29は、実施の形態3における送信機および受信機の動作の一例を示す図である。FIG. 29 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 3. 図30は、実施の形態4における送信機および受信機の動作の一例を示す図である。FIG. 30 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 4. 図31は、実施の形態4における送信機および受信機の動作の一例を示す図である。FIG. 31 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 4. 図32は、実施の形態4における送信機および受信機の動作の一例を示す図である。FIG. 32 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 4. 図33は、実施の形態4における送信機および受信機の動作の一例を示す図である。FIG. 33 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 4. 図34は、実施の形態4における送信機および受信機の動作の一例を示す図である。FIG. 34 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 4. 図35は、実施の形態4における送信機および受信機の動作の一例を示す図である。FIG. 35 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 4. 図36は、実施の形態4における送信機および受信機の動作の一例を示す図である。FIG. 36 is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 4. 図37は、実施の形態5における人間への可視光通信の通知を説明するための図である。FIG. 37 is a diagram for describing notification of visible light communication to a human in the fifth embodiment. 図38は、実施の形態5における道案内への応用例を説明するための図である。FIG. 38 is a diagram for explaining an application example to the route guidance in the fifth embodiment. 図39は、実施の形態5における利用ログ蓄積と解析への応用例を説明するための図である。FIG. 39 is a diagram for explaining an application example to use log accumulation and analysis in the fifth embodiment. 図40は、実施の形態5における画面共有への応用例を説明するための図である。FIG. 40 is a diagram for explaining an application example to screen sharing in the fifth embodiment. 
図41は、実施の形態5における情報通信方法の応用例を示す図である。FIG. 41 is a diagram illustrating an application example of the information communication method according to the fifth embodiment. 図42は、実施の形態6における送信機と受信機の適用例を示す図である。FIG. 42 is a diagram illustrating an example of application of the transmitter and the receiver in the sixth embodiment. 図43は、実施の形態6における送信機および受信機の適用例を示す図である。FIG. 43 is a diagram illustrating an example of application of the transmitter and the receiver in the sixth embodiment. 図44は、実施の形態7における受信機の一例を示す図である。FIG. 44 is a diagram illustrating an example of a receiver in Embodiment 7. 図45は、実施の形態7における受信システムの一例を示す図である。FIG. 45 is a diagram illustrating an example of a reception system in the seventh embodiment. 図46は、実施の形態7における信号送受信システムの一例を示す図である。FIG. 46 is a diagram illustrating an example of a signal transmission / reception system according to the seventh embodiment. 図47は、実施の形態7における干渉を排除した受信方法を示すフローチャートである。FIG. 47 is a flowchart showing a reception method in which interference is eliminated in the seventh embodiment. 図48は、実施の形態7における送信機の方位の推定方法を示すフローチャートである。FIG. 48 is a flowchart showing a method for estimating the orientation of a transmitter in the seventh embodiment. 図49は、実施の形態7における受信の開始方法を示すフローチャートである。FIG. 49 is a flowchart showing a reception start method according to the seventh embodiment. 図50は、実施の形態7における他媒体の情報を併用したIDの生成方法を示すフローチャートである。FIG. 50 is a flowchart showing an ID generation method using information of another medium together in the seventh embodiment. 図51は、実施の形態7における周波数分離による受信方式の選択方法を示すフローチャートである。FIG. 51 is a flowchart showing a reception method selection method based on frequency separation in the seventh embodiment. 図52は、実施の形態7における露光時間が長い場合の信号受信方法を示すフローチャートである。FIG. 52 is a flowchart showing a signal reception method when the exposure time is long in the seventh embodiment. 図53は、実施の形態7における送信機の調光(明るさを調整すること)方法の一例を示す図である。FIG. 53 is a diagram illustrating an example of a transmitter dimming (adjusting brightness) method in Embodiment 7. 図54は、実施の形態7における送信機の調光機能を構成する方法の一例を示す図である。FIG. 54 is a diagram illustrating an example of a method for configuring a dimming function of a transmitter in the seventh embodiment. 図55は、EXズームを説明するための図である。FIG. 55 is a diagram for explaining the EX zoom. 図56は、実施の形態9における信号受信方法の一例を示す図である。FIG. 56 is a diagram illustrating an example of a signal reception method in Embodiment 9. 図57は、実施の形態9における信号受信方法の一例を示す図である。FIG. 57 is a diagram illustrating an example of a signal reception method in Embodiment 9. 図58は、実施の形態9における信号受信方法の一例を示す図である。FIG. 58 is a diagram illustrating an example of a signal reception method in Embodiment 9. 図59は、実施の形態9における受信機の画面表示方法の一例を示す図である。FIG. 59 is a diagram illustrating an example of a screen display method of a receiver in Embodiment 9. 図60は、実施の形態9における信号受信方法の一例を示す図である。FIG. 60 is a diagram illustrating an example of a signal reception method in Embodiment 9. 図61は、実施の形態9における信号受信方法の一例を示す図である。FIG. 61 is a diagram illustrating an example of a signal reception method according to the ninth embodiment. 図62は、実施の形態9における信号受信方法の一例を示すフローチャートである。FIG. 62 is a flowchart illustrating an example of a signal reception method in the ninth embodiment. 図63は、実施の形態9における信号受信方法の一例を示す図である。FIG. 63 is a diagram illustrating an example of a signal reception method in the ninth embodiment. 図64は、実施の形態9における受信プログラムの処理を示すフローチャートである。FIG. 64 is a flowchart showing processing of the reception program in the ninth embodiment. 図65は、実施の形態9における受信装置のブロック図である。FIG. 65 is a block diagram of a receiving apparatus according to the ninth embodiment. 図66は、可視光信号を受信したときの受信機の表示の一例を示す図である。FIG. 
66 is a diagram illustrating an example of display on the receiver when a visible light signal is received. 図67は、可視光信号を受信したときの受信機の表示の一例を示す図である。FIG. 67 is a diagram illustrating an example of display on the receiver when a visible light signal is received. 図68は、取得データ画像の表示の一例を示す図である。FIG. 68 is a diagram illustrating an example of the display of the acquired data image. 図69は、取得データを保存する、または、破棄する場合の操作の一例を示す図である。FIG. 69 is a diagram illustrating an example of an operation when saving or discarding acquired data. 図70は、取得データを閲覧する際の表示例を示す図である。FIG. 70 is a diagram illustrating a display example when browsing acquired data. 図71は、実施の形態9における送信機の一例を示す図である。71 is a diagram illustrating an example of a transmitter in Embodiment 9. FIG. 図72は、実施の形態9における受信方法の一例を示す図である。FIG. 72 is a diagram illustrating an example of a reception method in Embodiment 9. 図73は、実施の形態10における受信方法の一例を示すフローチャートである。FIG. 73 is a flowchart illustrating an example of a reception method in Embodiment 10. 図74は、実施の形態10における受信方法の一例を示すフローチャートである。FIG. 74 is a flowchart illustrating an example of a reception method in Embodiment 10. 図75は、実施の形態10における受信方法の一例を示すフローチャートである。FIG. 75 is a flowchart illustrating an example of a reception method in Embodiment 10. 図76は、実施の形態10における受信機が、変調周波数の周期(変調周期)より長い露光時間を用いた受信方法を説明するための図である。FIG. 76 is a diagram for describing a reception method in which the receiver according to Embodiment 10 uses an exposure time longer than the period of the modulation frequency (modulation period). 図77は、実施の形態10における受信機が、変調周波数の周期(変調周期)より長い露光時間を用いた受信方法を説明するための図である。FIG. 77 is a diagram for describing a reception method in which the receiver according to Embodiment 10 uses an exposure time longer than the period of the modulation frequency (modulation period). 図78は、実施の形態10における送信データのサイズに対する効率的な分割数を示す図である。FIG. 78 is a diagram showing an efficient division number with respect to the size of transmission data in the tenth embodiment. 図79Aは、実施の形態10における設定方法の一例を示す図である。FIG. 79A is a diagram illustrating an example of a setting method in Embodiment 10. 図79Bは、実施の形態10における設定方法の他の例を示す図である。FIG. 79B is a diagram illustrating another example of the setting method according to the tenth embodiment. 図80は、実施の形態10における情報処理プログラムの処理を示すフローチャートである。FIG. 80 is a flowchart showing processing of the information processing program in the tenth embodiment. 図81は、実施の形態10における送受信システムの応用例を説明するための図である。FIG. 81 is a diagram for describing an application example of the transmission and reception system in the tenth embodiment. 図82は、実施の形態10における送受信システムの処理動作を示すフローチャートである。FIG. 82 is a flowchart showing processing operations of the transmission / reception system in the tenth embodiment. 図83は、実施の形態10における送受信システムの応用例を説明するための図である。FIG. 83 is a diagram for describing an example of application of the transmission and reception system in the tenth embodiment. 図84は、実施の形態10における送受信システムの処理動作を示すフローチャートである。FIG. 84 is a flowchart showing processing operations of the transmission / reception system in the tenth embodiment. 図85は、実施の形態10における送受信システムの応用例を説明するための図である。FIG. 85 is a diagram for describing an application example of the transmission and reception system in the tenth embodiment. 図86は、実施の形態10における送受信システムの処理動作を示すフローチャートである。FIG. 86 is a flowchart showing processing operations of the transmission / reception system in the tenth embodiment. 図87は、実施の形態10における送信機の応用例を説明するための図である。FIG. 87 is a diagram for describing an example of application of a transmitter in Embodiment 10. 図88は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 
88 is a diagram for describing an application example of the transmission and reception system in the eleventh embodiment. 図89は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 89 is a diagram for explaining an application example of the transmission and reception system in the eleventh embodiment. 図90は、実施の形態11における送受信システムの応用例を説明するための図である。90 is a diagram for describing an example of application of a transmission and reception system in Embodiment 11. FIG. 図91は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 91 is a diagram for describing an application example of the transmission and reception system in the eleventh embodiment. 図92は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 92 is a diagram for describing an application example of the transmission and reception system in the eleventh embodiment. 図93は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 93 is a diagram for describing an application example of the transmission / reception system in Embodiment 11. In FIG. 図94は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 94 is a diagram for describing an application example of the transmission and reception system in the eleventh embodiment. 図95は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 95 is a diagram for describing an application example of the transmission / reception system in Embodiment 11. In FIG. 図96は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 96 is a diagram for describing an application example of the transmission / reception system in Embodiment 11. In FIG. 図97は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 97 is a diagram for describing an application example of the transmission / reception system in Embodiment 11. In FIG. 図98は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 98 is a diagram for describing an application example of the transmission / reception system in Embodiment 11. In FIG. 図99は、実施の形態11における送受信システムの応用例を説明するための図である。99 is a diagram for describing an application example of the transmission and reception system in Embodiment 11. FIG. 図100は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 100 is a diagram for describing an example of application of the transmission and reception system in Embodiment 11. 図101は、実施の形態11における送受信システムの応用例を説明するための図である。FIG. 101 is a diagram for explaining an application example of the transmission and reception system in the eleventh embodiment. 図102は、実施の形態12における受信機の動作を説明するための図である。FIG. 102 is a diagram for describing operation of a receiver in Embodiment 12. 図103Aは、実施の形態12における受信機の他の動作を説明するための図である。103A is a diagram for describing another operation of the receiver in Embodiment 12. FIG. 図103Bは、実施の形態12における出力部1215によって表示されるインジケータの例を示す図である。FIG. 103B is a diagram illustrating an example of an indicator displayed by the output unit 1215 in the twelfth embodiment. 図103Cは、実施の形態12におけるARの表示例を示す図である。FIG. 103C is a diagram illustrating a display example of AR in the twelfth embodiment. 図104Aは、実施の形態12における送信機の一例を説明するための図である。104A is a diagram for describing an example of a transmitter in Embodiment 12. FIG. 図104Bは、実施の形態12における送信機の他の例を説明するための図である。FIG. 104B is a diagram for describing another example of the transmitter in Embodiment 12. 図105Aは、実施の形態12における複数の送信機による同期送信の一例を説明するための図である。105A is a diagram for describing an example of synchronous transmission by a plurality of transmitters in Embodiment 12. FIG. 図105Bは、実施の形態12における複数の送信機による同期送信の他の例を説明するための図である。105B is a diagram for describing another example of synchronous transmission by a plurality of transmitters in Embodiment 12. FIG. 図106は、実施の形態12における複数の送信機による同期送信の他の例を説明するための図である。FIG. 
106 is a diagram for describing another example of synchronous transmission by a plurality of transmitters in Embodiment 12. 図107は、実施の形態12における送信機の信号処理を説明するための図である。FIG. 107 is a diagram for describing signal processing by a transmitter in Embodiment 12. 図108は、実施の形態12における受信方法の一例を示すフローチャートである。FIG. 108 is a flowchart illustrating an example of a reception method in Embodiment 12. 図109は、実施の形態12における受信方法の一例を説明するための説明図である。FIG. 109 is an explanatory diagram for describing an example of a reception method in Embodiment 12. 図110は、実施の形態12における受信方法の他の例を示すフローチャートである。FIG. 110 is a flowchart illustrating another example of a reception method in Embodiment 12. 図111は、実施の形態13における送信信号の一例を示す図である。111 is a diagram illustrating an example of a transmission signal in Embodiment 13. FIG. 図112は、実施の形態13における送信信号の他の例を示す図である。112 is a diagram illustrating another example of a transmission signal in Embodiment 13. FIG. 図113は、実施の形態13における送信信号の他の例を示す図である。FIG. 113 is a diagram illustrating another example of a transmission signal in Embodiment 13. 図114Aは、実施の形態14における送信機を説明するための図である。114A is a diagram for illustrating a transmitter in Embodiment 14. FIG. 図114Bは、実施の形態14におけるRGBのそれぞれの輝度変化を示す図である。FIG. 114B is a diagram showing each luminance change of RGB in the fourteenth embodiment. 図115は、実施の形態14における緑色蛍光成分および赤色蛍光成分の残光特性を示す図である。FIG. 115 is a diagram illustrating afterglow characteristics of the green fluorescent component and the red fluorescent component in the fourteenth embodiment. 図116は、実施の形態14における、バーコードの読み取りエラーの発生を抑制するために新たに発生する課題を説明するための図である。FIG. 116 is a diagram for describing a problem newly generated in order to suppress occurrence of a barcode reading error in the fourteenth embodiment. 図117は、実施の形態14における受信機で行われるダウンサンプリングを説明するための図である。117 is a diagram for explaining downsampling performed by a receiver in Embodiment 14. FIG. 図118は、実施の形態14における受信機の処理動作を示すフローチャートである。FIG. 118 is a flowchart illustrating a processing operation of the receiver in Embodiment 14. 図119は、実施の形態15における受信装置(撮像装置)の処理動作を示す図である。FIG. 119 is a diagram illustrating processing operations of the reception device (imaging device) in Embodiment 15. 図120は、実施の形態15における受信装置(撮像装置)の処理動作を示す図である。FIG. 120 is a diagram illustrating processing operation of a reception device (imaging device) in Embodiment 15. 図121は、実施の形態15における受信装置(撮像装置)の処理動作を示す図である。FIG. 121 is a diagram illustrating processing operation of a reception device (imaging device) in Embodiment 15. 図122は、実施の形態15における受信装置(撮像装置)の処理動作を示す図である。FIG. 122 is a diagram illustrating processing operation of the reception device (imaging device) in Embodiment 15. 図123は、実施の形態16におけるアプリケーションの一例を示す図である。FIG. 123 is a diagram illustrating an example of an application according to the sixteenth embodiment. 図124は、実施の形態16におけるアプリケーションの一例を示す図である。FIG. 124 is a diagram illustrating an example of an application according to the sixteenth embodiment. 図125は、実施の形態16における送信信号の例と音声同期方法の例とを示す図である。FIG. 125 is a diagram illustrating an example of the transmission signal and an example of the audio synchronization method in Embodiment 16. 図126は、実施の形態16における送信信号の例を示す図である。126 is a diagram illustrating an example of a transmission signal in Embodiment 16. FIG. 図127は、実施の形態16における受信機の処理フローの一例を示す図である。FIG. 127 is a diagram illustrating an example of processing flow of a receiver in Embodiment 16. 図128は、実施の形態16における受信機のユーザインタフェースの一例を示す図である。FIG. 128 is a diagram illustrating an example of a user interface of the receiver in Embodiment 16. 
図129は、実施の形態16における受信機の処理フローの一例を示す図である。129 is a diagram illustrating an example of a process flow of a receiver in Embodiment 16. FIG. 図130は、実施の形態16における受信機の処理フローの他の例を示す図である。FIG. 130 is a diagram illustrating another example of processing flow of a receiver in Embodiment 16. 図131Aは、実施の形態16における同期再生の具体的な方法を説明するための図である。FIG. 131A is a diagram for explaining a specific method of synchronized playback in the sixteenth embodiment. 図131Bは、実施の形態16における同期再生を行う再生装置(受信機)の構成を示すブロック図である。FIG. 131B is a block diagram showing a configuration of a playback device (receiver) that performs synchronized playback in the sixteenth embodiment. 図131Cは、実施の形態16における同期再生を行う再生装置(受信機)の処理動作を示すフローチャートである。FIG. 131C is a flowchart illustrating a processing operation of a playback device (receiver) that performs synchronized playback in the sixteenth embodiment. 図132は、実施の形態16における同期再生の事前準備を説明するための図である。FIG. 132 is a diagram for describing preparation for synchronized playback in the sixteenth embodiment. 図133は、実施の形態16における受信機の応用例を示す図である。133 is a diagram illustrating an example of application of a receiver in Embodiment 16. FIG. 図134Aは、実施の形態16における、ホルダーに保持された受信機の正面図である。FIG. 134A is a front view of a receiver held by a holder in Embodiment 16. FIG. 図134Bは、実施の形態16における、ホルダーに保持された受信機の背面図である。FIG. 134B is a rear view of a receiver held by a holder in Embodiment 16. 図135は、実施の形態16における、ホルダーに保持された受信機のユースケースを説明するための図である。FIG. 135 is a diagram for describing a use case of a receiver held by a holder in Embodiment 16. 図136は、実施の形態16における、ホルダーに保持された受信機の処理動作を示すフローチャートである。136 is a flowchart illustrating processing operation of a receiver held by a holder in Embodiment 16. FIG. 図137は、実施の形態16における受信機によって表示される画像の一例を示す図である。FIG. 137 is a diagram illustrating an example of an image displayed by the receiver in Embodiment 16. In FIG. 図138は、実施の形態16におけるホルダーの他の例を示す図である。FIG. 138 is a diagram showing another example of the holder according to the sixteenth embodiment. 図139Aは、実施の形態17における可視光信号の一例を示す図である。139A is a diagram illustrating an example of a visible light signal in Embodiment 17. FIG. 図139Bは、実施の形態17における可視光信号の一例を示す図である。FIG. 139B is a diagram illustrating an example of a visible light signal in Embodiment 17. 図139Cは、実施の形態17における可視光信号の一例を示す図である。139C is a diagram illustrating an example of a visible light signal in Embodiment 17. FIG. 図139Dは、実施の形態17における可視光信号の一例を示す図である。139D is a diagram illustrating an example of a visible light signal in Embodiment 17. FIG. 図140は、実施の形態17における可視光信号の構成を示す図である。FIG. 140 is a diagram illustrating a configuration of a visible light signal according to the seventeenth embodiment. 図141は、実施の形態17における受信機の撮像によって得られる輝線画像の一例を示す図である。FIG. 141 is a diagram illustrating an example of bright line images obtained by imaging of the receiver in Embodiment 17. 図142は、実施の形態17における受信機の撮像によって得られる輝線画像の他の例を示す図である。FIG. 142 is a diagram illustrating another example of bright line images obtained by imaging by the receiver in Embodiment 17. 図143は、実施の形態17における受信機の撮像によって得られる輝線画像の他の例を示す図である。FIG. 143 is a diagram illustrating another example of the bright line image obtained by imaging by the receiver in Embodiment 17. 図144は、実施の形態17における受信機の、HDR合成を行うカメラシステムへの適応を説明するための図である。144 is a diagram for describing adaptation of the receiver in Embodiment 17 to a camera system that performs HDR synthesis. FIG. 図145は、実施の形態17における可視光通信システムの処理動作を説明するための図である。FIG. 145 is a diagram for explaining the processing operation of the visible light communication system in the seventeenth embodiment. 
図146Aは、実施の形態17における可視光を用いた車車間通信の一例を示す図である。146A is a diagram illustrating an example of vehicle-to-vehicle communication using visible light in Embodiment 17. FIG. 図146Bは、実施の形態17における可視光を用いた車車間通信の他の例を示す図である。146B is a diagram illustrating another example of vehicle-to-vehicle communication using visible light in Embodiment 17. FIG. 図147は、実施の形態17における複数のLEDの位置決定方法の一例を示す図である。147 is a diagram illustrating an example of a method for determining the positions of a plurality of LEDs in Embodiment 17. FIG. 図148は、実施の形態17における、車両を撮像することによって得られる輝線画像の一例を示す図である。FIG. 148 is a diagram illustrating an example of bright line images obtained by capturing an image of the vehicle in the seventeenth embodiment. 図149は、実施の形態17における受信機と送信機の適用例を示す図である。なお、図149は自動車を後ろから見た図である。FIG. 149 is a diagram illustrating an example of application of the receiver and the transmitter in Embodiment 17. FIG. 149 is a view of the automobile from the back. 図150は、実施の形態17における受信機と送信機の処理動作の一例を示すフローチャートである。FIG. 150 is a flowchart illustrating an example of processing operations of a receiver and a transmitter in Embodiment 17. 図151は、実施の形態17における受信機と送信機の適用例を示す図である。FIG. 151 is a diagram illustrating an example of application of the receiver and the transmitter in Embodiment 17. 図152は、実施の形態17における受信機7007aと送信機7007bの処理動作の一例を示すフローチャートである。FIG. 152 is a flowchart illustrating an example of processing operations of the receiver 7007a and the transmitter 7007b in Embodiment 17. 図153は、実施の形態17における、電車の車内に適用される可視光通信システムの構成を示す図である。FIG. 153 is a diagram illustrating a configuration of a visible light communication system applied to the inside of a train in Embodiment 17. 図154は、実施の形態17における、遊園地などの施設に適用される可視光通信システムの構成を示す図である。FIG. 154 is a diagram illustrating a configuration of a visible light communication system applied to a facility such as an amusement park in Embodiment 17. 図155は、実施の形態17における、遊具とスマートフォンとからなる可視光通信システムの一例を示す図である。FIG. 155 is a diagram illustrating an example of a visible light communication system including a playground device and a smartphone according to Embodiment 17. 図156は、実施の形態18における送信信号の一例を示す図である。156 is a diagram illustrating an example of a transmission signal in Embodiment 18. FIG. 図157は、実施の形態18における送信信号の一例を示す図である。157 is a diagram illustrating an example of a transmission signal in Embodiment 18. FIG. 図158は、実施の形態19における送信信号の一例を示す図である。158 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図159は、実施の形態19における送信信号の一例を示す図である。159 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図160は、実施の形態19における送信信号の一例を示す図である。160 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図161は、実施の形態19における送信信号の一例を示す図である。161 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図162は、実施の形態19における送信信号の一例を示す図である。162 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図163は、実施の形態19における送信信号の一例を示す図である。163 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図164は、実施の形態19における送受信システムの一例を示す図である。164 is a diagram illustrating an example of a transmission and reception system in Embodiment 19. FIG. 図165は、実施の形態19における送受信システムの処理の一例を示すフローチャートである。FIG. 165 is a flowchart illustrating an example of processing of the transmission / reception system in the nineteenth embodiment. 図166は、実施の形態19におけるサーバの動作を示すフローチャートである。FIG. 166 is a flowchart showing the operation of the server in the nineteenth embodiment. 図167は、実施の形態19における受信機の動作の一例を示すフローチャートである。FIG. 
167 is a flowchart illustrating an example of operation of a receiver in Embodiment 19. 図168は、実施の形態19における簡易モードでの進捗状況の計算方法を示すフローチャートである。FIG. 168 is a flowchart illustrating a method of calculating the progress status in the simple mode according to the nineteenth embodiment. 図169は、実施の形態19における最尤推定モードでの進捗状況の計算方法を示すフローチャートである。FIG. 169 is a flowchart illustrating a method for calculating the progress in the maximum likelihood estimation mode according to the nineteenth embodiment. 図170は、実施の形態19における進捗状況が減少しない表示方法を示すフローチャートである。FIG. 170 is a flowchart showing a display method in which the progress status does not decrease in the nineteenth embodiment. 図171は、実施の形態19における複数のパケット長がある場合の進捗状況の表示方法を示すフローチャートである。FIG. 171 is a flowchart illustrating a progress status display method when there are a plurality of packet lengths according to the nineteenth embodiment. 図172は、実施の形態19における受信機の動作状態の一例を示す図である。172 is a diagram illustrating an example of an operation state of a receiver in Embodiment 19. FIG. 図173は、実施の形態19における送信信号の一例を示す図である。173 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図174は、実施の形態19における送信信号の一例を示す図である。174 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図175は、実施の形態19における送信信号の一例を示す図である。175 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図176は、実施の形態19における送信機の一例を示すブロック図である。176 is a block diagram illustrating an example of a transmitter in Embodiment 19. FIG. 図177は、実施の形態19におけるLEDディスプレイを本発明の光ID変調信号で駆動する場合のタイミングチャートを示す図である。FIG. 177 is a timing chart when the LED display in Embodiment 19 is driven with the optical ID modulation signal of the present invention. 図178は、実施の形態19におけるLEDディスプレイを本発明の光ID変調信号で駆動する場合のタイミングチャートを示す図である。FIG. 178 is a timing chart when the LED display in Embodiment 19 is driven with the optical ID modulation signal of the present invention. 図179は、実施の形態19におけるLEDディスプレイを本発明の光ID変調信号で駆動する場合のタイミングチャートを示す図である。FIG. 179 is a timing chart when the LED display in Embodiment 19 is driven with the optical ID modulation signal of the present invention. 図180Aは、本発明の一態様に係る送信方法を示すフローチャートである。FIG. 180A is a flowchart illustrating a transmission method according to one embodiment of the present invention. 図180Bは、本発明の一態様に係る送信装置の機能構成を示すブロック図である。FIG. 180B is a block diagram illustrating a functional configuration of the transmission device according to one embodiment of the present invention. 図181は、実施の形態19における送信信号の一例を示す図である。FIG. 181 is a diagram illustrating an example of a transmission signal in Embodiment 19. In FIG. 図182は、実施の形態19における送信信号の一例を示す図である。182 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図183は、実施の形態19における送信信号の一例を示す図である。183 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図184は、実施の形態19における送信信号の一例を示す図である。184 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図185は、実施の形態19における送信信号の一例を示す図である。185 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図186は、実施の形態19における送信信号の一例を示す図である。186 is a diagram illustrating an example of a transmission signal in Embodiment 19. FIG. 図187は、実施の形態20における可視光信号の構成の一例を示す図である。187 is a diagram illustrating an example of a structure of a visible light signal in Embodiment 20. FIG. 図188は、実施の形態20における可視光信号の詳細な構成の一例を示す図である。188 is a diagram illustrating an example of a detailed configuration of a visible light signal in Embodiment 20. FIG. 図189Aは、実施の形態20における可視光信号の他の一例を示す図である。FIG. 
189A is a diagram illustrating another example of a visible light signal in Embodiment 20. FIG. 図189Bは、実施の形態20における可視光信号の他の一例を示す図である。189B is a diagram illustrating another example of a visible light signal in Embodiment 20. FIG. 図189Cは、実施の形態20における可視光信号の信号長を示す図である。189C is a diagram illustrating the signal length of a visible light signal in Embodiment 20. FIG. 図190は、実施の形態20における可視光信号と、規格IECの可視光信号との輝度値の比較結果を示す図である。FIG. 190 is a diagram illustrating a comparison result of luminance values between the visible light signal and the standard IEC visible light signal according to the twentieth embodiment. 図191は、実施の形態20における可視光信号と、規格IECの可視光信号との、画角に対する受信パケット数および信頼度の比較結果を示す図である。FIG. 191 is a diagram illustrating a comparison result of the number of received packets and the reliability with respect to the angle of view between the visible light signal and the standard IEC visible light signal according to the twentieth embodiment. 図192は、実施の形態20における可視光信号と、規格IECの可視光信号との、ノイズに対する受信パケット数および信頼度の比較結果を示す図である。FIG. 192 is a diagram illustrating comparison results of the number of received packets and reliability with respect to noise between the visible light signal and the standard IEC visible light signal according to the twentieth embodiment. 図193は、実施の形態20における可視光信号と、規格IECの可視光信号との、受信側クロック誤差に対する受信パケット数および信頼度の比較結果を示す図である。FIG. 193 is a diagram illustrating a comparison result of the number of received packets and the reliability with respect to the reception-side clock error between the visible light signal and the standard IEC visible light signal according to the twentieth embodiment. 図194は、実施の形態20における送信対象の信号の構成を示す図である。FIG. 194 is a diagram illustrating a structure of a transmission target signal in the twentieth embodiment. 図195Aは、実施の形態20における可視光信号の受信方法を示す図である。195A is a diagram illustrating a visible light signal receiving method in Embodiment 20. FIG. 図195Bは、実施の形態20における可視光信号の並び替えを示す図である。FIG. 195B is a diagram illustrating rearrangement of visible light signals in the twentieth embodiment. 図196は、実施の形態20における可視光信号の他の例を示す図である。196 is a diagram illustrating another example of a visible light signal in Embodiment 20. FIG. 図197は、実施の形態20における可視光信号の詳細な構成の他の例を示す図である。FIG. 197 is a diagram illustrating another example of a detailed configuration of a visible light signal in the twentieth embodiment. 図198は、実施の形態20における可視光信号の詳細な構成の他の例を示す図である。FIG. 198 is a diagram illustrating another example of a detailed configuration of a visible light signal in the twentieth embodiment. 図199は、実施の形態20における可視光信号の詳細な構成の他の例を示す図である。FIG. 199 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 20. In FIG. 図200は、実施の形態20における可視光信号の詳細な構成の他の例を示す図である。200 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 20. FIG. 図201は、実施の形態20における可視光信号の詳細な構成の他の例を示す図である。FIG. 201 is a diagram illustrating another example of a detailed configuration of a visible light signal according to the twentieth embodiment. 図202は、実施の形態20における可視光信号の詳細な構成の他の例を示す図である。FIG. 202 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 20. 図203は、図197のx1~x4の値を決定する方法を説明するための図である。FIG. 203 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197. 図204は、図197のx1~x4の値を決定する方法を説明するための図である。FIG. 204 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197. 図205は、図197のx1~x4の値を決定する方法を説明するための図である。FIG. 
205 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197. 図206は、図197のx1~x4の値を決定する方法を説明するための図である。FIG. 206 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 図207は、図197のx1~x4の値を決定する方法を説明するための図である。FIG. 207 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197. 図208は、図197のx1~x4の値を決定する方法を説明するための図である。FIG. 208 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197. 図209は、図197のx1~x4の値を決定する方法を説明するための図である。FIG. 209 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 図210は、図197のx1~x4の値を決定する方法を説明するための図である。FIG. 210 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 図211は、図197のx1~x4の値を決定する方法を説明するための図である。FIG. 211 is a diagram for explaining a method of determining the values of x1 to x4 in FIG. 197. 図212は、実施の形態20の変形例1に係る可視光信号の詳細な構成の一例を示す図である。FIG. 212 is a diagram illustrating an example of a detailed configuration of a visible light signal according to the first modification of the twentieth embodiment. 図213は、実施の形態20の変形例1に係る可視光信号の他の例を示す図である。213 is a diagram illustrating another example of a visible light signal according to Modification 1 of Embodiment 20. FIG. 図214は、実施の形態20の変形例1に係る可視光信号のさらに他の例を示す図である。214 is a diagram showing still another example of a visible light signal according to Modification 1 of Embodiment 20. FIG. 図215は、実施の形態20の変形例1に係るパケット変調の一例を示す図である。215 is a diagram illustrating an example of packet modulation according to Modification 1 of Embodiment 20. FIG. 図216は、実施の形態20の変形例1に係る、元データを1分割する処理を示す図である。FIG. 216 is a diagram illustrating processing for dividing the original data into one according to the first modification of the twentieth embodiment. 図217は、実施の形態20の変形例1に係る、元データを2分割する処理を示す図である。FIG. 217 is a diagram illustrating a process of dividing the original data into two according to the first modification of the twentieth embodiment. 図218は、実施の形態20の変形例1に係る、元データを3分割にする処理を示す図である。FIG. 218 is a diagram illustrating processing of dividing original data into three according to Modification 1 of Embodiment 20. 図219は、実施の形態20の変形例1に係る、元データを3分割にする処理の他の例を示す図である。FIG. 219 is a diagram illustrating another example of the process of dividing the original data into three according to the first modification of the twentieth embodiment. 図220は、実施の形態20の変形例1に係る、元データを3分割にする処理の他の例を示す図である。FIG. 220 is a diagram illustrating another example of the process of dividing the original data into three according to the first modification of the twentieth embodiment. 図221は、実施の形態20の変形例1に係る、元データを4分割にする処理を示す図である。FIG. 221 is a diagram illustrating a process of dividing the original data into four according to the first modification of the twentieth embodiment. 図222は、実施の形態20の変形例1に係る、元データを5分割にする処理を示す図である。FIG. 222 is a diagram showing processing for dividing original data into five parts according to Modification 1 of Embodiment 20. 図223は、実施の形態20の変形例1に係る、元データを6、7または8分割にする処理を示す図である。223 is a diagram illustrating processing of dividing original data into 6, 7, or 8 portions according to Modification Example 1 of Embodiment 20. FIG. 図224は、実施の形態20の変形例1に係る、元データを6、7または8分割にする処理の他の例を示す図である。FIG. 224 is a diagram illustrating another example of the process of dividing the original data into 6, 7 or 8 according to the first modification of the twentieth embodiment. 図225は、実施の形態20の変形例1に係る、元データを9分割にする処理を示す図である。FIG. 225 is a diagram illustrating a process of dividing the original data into nine according to the first modification of the twentieth embodiment. 
図226は、実施の形態20の変形例1に係る、元データを10~16の何れか数に分割する処理を示す図である。FIG. 226 is a diagram illustrating processing of dividing original data into any number from 10 to 16 according to Modification 1 of Embodiment 20. 図227は、実施の形態20の変形例1に係る、元データの分割数と、データサイズと、誤り訂正符号との関係の一例を示す図である。FIG. 227 is a diagram illustrating an example of a relationship among the number of original data divisions, a data size, and an error correction code according to the first modification of the twentieth embodiment. 図228は、実施の形態20の変形例1に係る、元データの分割数と、データサイズと、誤り訂正符号との関係の他の例を示す図である。FIG. 228 is a diagram illustrating another example of the relationship between the number of original data divisions, the data size, and the error correction code according to the first modification of the twentieth embodiment. 図229は、実施の形態20の変形例1に係る、元データの分割数と、データサイズと、誤り訂正符号との関係のさらに他の例を示す図である。229 is a diagram illustrating still another example of the relationship among the number of original data divisions, the data size, and the error correction code according to Modification 1 of Embodiment 20. FIG. 図230Aは、実施の形態20における可視光信号の生成方法を示すフローチャートである。230A is a flowchart illustrating a visible light signal generation method according to Embodiment 20. FIG. 図230Bは、実施の形態20における信号生成装置の構成を示すブロック図である。FIG. 230B is a block diagram illustrating a configuration of the signal generation device according to Embodiment 20. 図231は、実施の形態21における高周波可視光信号を受信する方法を示す図である。FIG. 231 is a diagram illustrating a method of receiving a high-frequency visible light signal in Embodiment 21. 図232Aは、実施の形態21における高周波可視光信号を受信する他の方法を示す図である。232A is a diagram illustrating another method of receiving a high-frequency visible light signal in Embodiment 21. FIG. 図232Bは、実施の形態21における高周波可視光信号を受信する他の方法を示す図である。FIG. 232B is a diagram illustrating another method of receiving a high-frequency visible light signal in Embodiment 21. 図233は、実施の形態21における高周波信号を出力する方法を示す図である。233 is a diagram illustrating a method of outputting a high-frequency signal in Embodiment 21. FIG. 図234は、実施の形態22における自律飛行装置を説明するための図である。FIG. 234 is a diagram for describing the autonomous flight apparatus according to the twenty-second embodiment. 図235は、実施の形態23における受信機がAR画像を表示する例を示す図である。FIG. 235 is a diagram illustrating an example in which the receiver in Embodiment 23 displays an AR image. 図236は、実施の形態23における表示システムの一例を示す図である。FIG. 236 is a diagram illustrating an example of a display system in Embodiment 23. 図237は、実施の形態23における表示システムの他の例を示す図である。FIG. 237 is a diagram illustrating another example of the display system in Embodiment 23. In FIG. 図238は、実施の形態23における表示システムの他の例を示す図である。FIG. 238 is a diagram illustrating another example of the display system in Embodiment 23. In FIG. 図239は、実施の形態23における受信機の処理動作の一例を示すフローチャートである。FIG. 239 is a flowchart illustrating an example of process operations of a receiver in Embodiment 23. 図240は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 240 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図241は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 241 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図242は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 242 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図243は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 243 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図244は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 
244 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図245は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 245 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図246は、実施の形態23における受信機の処理動作の他の例を示すフローチャートである。FIG. 246 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 23. 図247は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 247 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図248は、実施の形態23における受信機の撮像によって取得される撮像表示画像Ppreおよび復号用画像Pdecを示す図である。248 is a diagram illustrating a captured display image Ppre and a decoding image Pdec acquired by capturing by the receiver in Embodiment 23. FIG. 図249は、実施の形態23における受信機に表示される撮像表示画像Ppreの一例を示す図である。FIG. 249 is a diagram illustrating an example of a captured display image Ppre displayed on the receiver in Embodiment 23. FIG. 図250は、実施の形態23における受信機の処理動作の他の例を示すフローチャートである。FIG. 250 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 23. 図251は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 251 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図252は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 252 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図253は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 253 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図254は、実施の形態23における受信機がAR画像を表示する他の例を示す図である。FIG. 254 is a diagram illustrating another example in which the receiver in Embodiment 23 displays an AR image. 図255は、実施の形態23における認識情報の一例を示す図である。FIG. 255 is a diagram illustrating an example of recognition information according to the twenty-third embodiment. 図256は、実施の形態23における受信機の処理動作の他の例を示すフローチャートである。FIG. 256 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 23. 図257は、実施の形態23における受信機が輝線パターン領域を識別する一例を示す図である。FIG. 257 is a diagram illustrating an example in which the receiver in Embodiment 23 identifies bright line pattern regions. 図258は、実施の形態23における受信機の他の例を示す図である。258 is a diagram illustrating another example of a receiver in Embodiment 23. FIG. 図259は、実施の形態23における受信機の処理動作の他の例を示すフローチャートである。FIG. 259 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 23. 図260は、実施の形態23における複数の送信機を含む送信システムの一例を示す図である。260 is a diagram illustrating an example of a transmission system including a plurality of transmitters in Embodiment 23. FIG. 図261は、実施の形態23における複数の送信機および受信機を含む送信システムの一例を示す図である。FIG. 261 is a diagram illustrating an example of a transmission system including a plurality of transmitters and receivers in Embodiment 23. 図262Aは、実施の形態23における受信機の処理動作の一例を示すフローチャートである。FIG. 262A is a flowchart illustrating an example of process operations of the receiver in Embodiment 23. 図262Bは、実施の形態23における受信機の処理動作の一例を示すフローチャートである。FIG. 262B is a flowchart illustrating an example of process operations of the receiver in Embodiment 23. 図263Aは、実施の形態23における表示方法を示すフローチャートである。FIG. 263A is a flowchart illustrating a display method according to Embodiment 23. 図263Bは、実施の形態23における表示装置の構成を示すブロック図である。FIG. 263B is a block diagram illustrating a structure of the display device in Embodiment 23. 図264は、実施の形態23の変形例1における受信機がAR画像を表示する例を示す図である。FIG. 264 is a diagram illustrating an example in which the receiver in Modification 1 of Embodiment 23 displays an AR image. 
図265は、実施の形態23の変形例1における受信機200がAR画像を表示する他の例を示す図である。FIG. 265 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image. 図266は、実施の形態23の変形例1における受信機200がAR画像を表示する他の例を示す図である。FIG. 266 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image. 図267は、実施の形態23の変形例1における受信機200がAR画像を表示する他の例を示す図である。FIG. 267 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image. 図268は、実施の形態23の変形例1における受信機200の他の例を示す図である。FIG. 268 is a diagram illustrating another example of the receiver 200 in the first modification of the twenty-third embodiment. 図269は、実施の形態23の変形例1における受信機200がAR画像を表示する他の例を示す図である。FIG. 269 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image. 図270は、実施の形態23の変形例1における受信機200がAR画像を表示する他の例を示す図である。FIG. 270 is a diagram illustrating another example in which the receiver 200 in the first modification of the twenty-third embodiment displays an AR image. 図271は、実施の形態23の変形例1における受信機200の処理動作の一例を示すフローチャートである。FIG. 271 is a flowchart illustrating an example of processing operations of the receiver 200 in the first modification of the twenty-third embodiment. 図272は、実施の形態23またはその変形例1における受信機において想定されるAR画像を表示するときの課題の一例を示す図である。FIG. 272 is a diagram illustrating an example of a problem when an AR image assumed in the receiver in Embodiment 23 or the modification 1 thereof is displayed. 図273は、実施の形態23の変形例2における受信機がAR画像を表示する例を示す図である。FIG. 273 is a diagram illustrating an example in which the receiver in Modification 2 of Embodiment 23 displays the AR image. 図274は、実施の形態23の変形例2における受信機の処理動作の一例を示すフローチャートである。FIG. 274 is a flowchart illustrating an example of processing operations of a receiver in Modification 2 of Embodiment 23. 図275は、実施の形態23の変形例2における受信機がAR画像を表示する他の例を示す図である。FIG. 275 is a diagram illustrating another example in which the receiver in the second modification of the twenty-third embodiment displays an AR image. 図276は、実施の形態23の変形例2における受信機の処理動作の他の例を示すフローチャートである。FIG. 276 is a flowchart illustrating another example of processing operations of a receiver in Modification 2 of Embodiment 23. 図277は、実施の形態23の変形例2における受信機がAR画像を表示する他の例を示す図である。277 is a diagram illustrating another example in which a receiver in Modification 2 of Embodiment 23 displays an AR image. FIG. 図278は、実施の形態23の変形例2における受信機がAR画像を表示する他の例を示す図である。FIG. 278 is a diagram illustrating another example in which the receiver in the second modification of the twenty-third embodiment displays an AR image. 図279は、実施の形態23の変形例2における受信機がAR画像を表示する他の例を示す図である。FIG. 279 is a diagram illustrating another example in which the receiver in Modification 2 of Embodiment 23 displays an AR image. 図280は、実施の形態23の変形例2における受信機がAR画像を表示する他の例を示す図である。280 is a diagram illustrating another example in which a receiver in Modification 2 of Embodiment 23 displays an AR image. FIG. 図281Aは、本発明の一態様に係る表示方法を示すフローチャートである。FIG. 281A is a flowchart illustrating a display method according to one embodiment of the present invention. 図281Bは、本発明の一態様に係る表示装置の構成を示すブロック図である。FIG. 281B is a block diagram illustrating a structure of a display device according to one embodiment of the present invention. 図282は、実施の形態23の変形例3におけるAR画像の拡大および移動の一例を示す図である。FIG. 
282 is a diagram illustrating an example of enlargement and movement of an AR image in Modification 3 of Embodiment 23.
FIG. 283 is a diagram illustrating an example of enlargement of an AR image in Modification 3 of Embodiment 23.
FIG. 284 is a flowchart illustrating an example of processing operations relating to enlargement and movement of an AR image by a receiver in Modification 3 of Embodiment 23.
FIG. 285 is a diagram illustrating an example of superimposition of an AR image in Modification 3 of Embodiment 23.
FIG. 286 is a diagram illustrating an example of superimposition of an AR image in Modification 3 of Embodiment 23.
FIG. 287 is a diagram illustrating an example of superimposition of an AR image in Modification 3 of Embodiment 23.
FIG. 288 is a diagram illustrating an example of superimposition of an AR image in Modification 3 of Embodiment 23.
FIG. 289A is a diagram illustrating an example of a captured display image obtained by imaging by the receiver in Modification 3 of Embodiment 23.
FIG. 289B is a diagram illustrating an example of a menu screen displayed on the display of the receiver in Modification 3 of Embodiment 23.
FIG. 290 is a flowchart illustrating an example of processing operations of the receiver and a server in Modification 3 of Embodiment 23.
FIG. 291 is a diagram for describing the volume of audio reproduced by the receiver in Modification 3 of Embodiment 23.
FIG. 292 is a diagram illustrating the relationship between the distance from the receiver to the transmitter and the audio volume in Modification 3 of Embodiment 23.
FIG. 293 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 23.
FIG. 294 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 23.
FIG. 295 is a diagram for describing an example of how the receiver obtains a line scan time in Modification 3 of Embodiment 23.
FIG. 296 is a diagram for describing an example of how the receiver obtains a line scan time in Modification 3 of Embodiment 23.
FIG. 297 is a flowchart illustrating an example of how the receiver obtains a line scan time in Modification 3 of Embodiment 23.
FIG. 298 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 23.
FIG. 299 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 23.
FIG. 300 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 23.
FIG. 301 is a diagram illustrating an example of a decoding image obtained according to the attitude of the receiver in Modification 3 of Embodiment 23.
FIG. 302 is a diagram illustrating another example of a decoding image obtained according to the attitude of the receiver in Modification 3 of Embodiment 23.
FIG. 303 is a flowchart illustrating an example of processing operations of the receiver in Modification 3 of Embodiment 23.
FIG. 304 is a diagram illustrating an example of camera lens switching processing by the receiver in Modification 3 of Embodiment 23.
FIG. 305 is a diagram illustrating an example of camera switching processing by the receiver in Modification 3 of Embodiment 23.
FIG. 306 is a flowchart illustrating an example of processing operations of the receiver and a server in Modification 3 of Embodiment 23.
FIG. 307 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 23.
FIG. 308 is a sequence diagram illustrating processing operations of a system including the receiver, a microwave oven, a relay server, and an electronic payment server in Modification 3 of Embodiment 23.
FIG. 309 is a sequence diagram illustrating processing operations of a system including a POS terminal, a server, a receiver 200, and a microwave oven in Modification 3 of Embodiment 23.
FIG. 310 is a diagram illustrating an example of indoor use in Modification 3 of Embodiment 23.
FIG. 311 is a diagram illustrating an example of display of an augmented reality object in Modification 3 of Embodiment 23.
FIG. 312 is a diagram illustrating a configuration of a display system in Modification 4 of Embodiment 23.
FIG. 313 is a flowchart illustrating processing operations of the display system in Modification 4 of Embodiment 23.
FIG. 314 is a flowchart illustrating a recognition method according to one aspect of the present invention.
FIG. 315 is a diagram illustrating an example of operation modes of a visible light signal according to Embodiment 24.
FIG. 316 is a diagram illustrating an example of the PPDU format in mode 1 of packet PWM according to Embodiment 24.
FIG. 317 is a diagram illustrating an example of the PPDU format in mode 2 of packet PWM according to Embodiment 24.
FIG. 318 is a diagram illustrating an example of the PPDU format in mode 3 of packet PWM according to Embodiment 24.
FIG. 319 is a diagram illustrating an example of pulse width patterns in the SHR of each of modes 1 to 3 of packet PWM according to Embodiment 24.
FIG. 320 is a diagram illustrating an example of the PPDU format in mode 1 of packet PPM according to Embodiment 24.
FIG. 321 is a diagram illustrating an example of the PPDU format in mode 2 of packet PPM according to Embodiment 24.
FIG. 322 is a diagram illustrating an example of the PPDU format in mode 3 of packet PPM according to Embodiment 24.
FIG. 323 is a diagram illustrating an example of interval patterns in the SHR of each of modes 1 to 3 of packet PPM according to Embodiment 24.
FIG. 324 is a diagram illustrating an example of 12-bit data included in the PHY payload according to Embodiment 24.
FIG. 325 is a diagram illustrating processing of storing a PHY frame in one packet according to Embodiment 24.
FIG. 326 is a diagram illustrating processing of dividing a PHY frame into 2 packets according to Embodiment 24.
FIG. 327 is a diagram illustrating processing of dividing a PHY frame into 3 packets according to Embodiment 24.
FIG. 328 is a diagram illustrating processing of dividing a PHY frame into 4 packets according to Embodiment 24.
FIG. 329 is a diagram illustrating processing of dividing a PHY frame into 5 packets according to Embodiment 24.
FIG. 330 is a diagram illustrating processing of dividing a PHY frame into N (N = 6, 7, or 8) packets according to Embodiment 24.
FIG. 331 is a diagram illustrating processing of dividing a PHY frame into 9 packets according to Embodiment 24.
FIG. 332 is a diagram illustrating processing of dividing a PHY frame into N (N = 10 to 16) packets according to Embodiment 24.
FIG. 333A is a flowchart illustrating a visible light signal generation method according to Embodiment 24.
FIG. 333B is a block diagram illustrating a configuration of a signal generation apparatus according to Embodiment 24.
FIG. 334 is a diagram illustrating the format of an MPM MAC frame in Embodiment 25.
FIG. 335 is a flowchart illustrating processing operations of an encoding device that generates an MPM MAC frame in Embodiment 25.
FIG. 336 is a flowchart illustrating processing operations of a decoding device that decodes an MPM MAC frame in Embodiment 25.
FIG. 337 is a diagram illustrating attributes of the MAC PIB in Embodiment 25.
FIG. 338 is a diagram for describing an MPM dimming method in Embodiment 25.
FIG. 339 is a diagram illustrating attributes of the PHY PIB in Embodiment 25.
FIG. 340 is a diagram for describing MPM in Embodiment 25.
FIG. 341 is a diagram illustrating the PLCP header subfield in Embodiment 25.
FIG. 342 is a diagram illustrating the PLCP center subfield in Embodiment 25.
FIG. 343 is a diagram illustrating the PLCP footer subfield in Embodiment 25.
FIG. 344 is a diagram illustrating a waveform of the PHY PWM mode in MPM in Embodiment 25.
FIG. 345 is a diagram illustrating a waveform of the PHY PPM mode in MPM in Embodiment 25.
FIG. 346 is a flowchart illustrating an example of the decoding method of Embodiment 25.
FIG. 347 is a flowchart illustrating an example of the encoding method of Embodiment 25.
FIG. 348 is a diagram illustrating an example in which the receiver in Embodiment 26 displays an AR image.
FIG. 349 is a diagram illustrating an example of a captured display image on which an AR image is superimposed in Embodiment 26.
FIG. 350 is a diagram illustrating another example in which the receiver in Embodiment 26 displays an AR image.
FIG. 351 is a flowchart illustrating operations of the receiver in Embodiment 26.
FIG. 352 is a diagram for describing an operation of a transmitter in Embodiment 26.
FIG. 353 is a diagram for describing another operation of the transmitter in Embodiment 26.
FIG. 354 is a diagram for describing another operation of the transmitter in Embodiment 26.
FIG. 355 is a diagram illustrating a comparative example for describing the ease of receiving a light ID in Embodiment 26.
FIG. 356A is a flowchart illustrating operations of the transmitter in Embodiment 26.
FIG. 356B is a block diagram illustrating a configuration of the transmitter in Embodiment 26.
FIG. 357 is a diagram illustrating another example in which the receiver in Embodiment 26 displays an AR image.
FIG. 358 is a diagram for describing an operation of a transmitter in Embodiment 27.
FIG. 359A is a flowchart illustrating a transmission method according to Embodiment 27.
FIG. 359B is a block diagram illustrating a configuration of a transmitter in Embodiment 27.
FIG. 360 is a diagram illustrating an example of a detailed configuration of a visible light signal in Embodiment 27.
FIG. 361 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 27.
FIG. 362 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 27.
FIG. 363 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 27.
FIG. 364 is a diagram illustrating the relationship between the sum of variables y_0 to y_3, the total time length, and the effective time length in Embodiment 27.
FIG. 365A is a flowchart illustrating a transmission method according to Embodiment 27.
FIG. 365B is a block diagram illustrating a configuration of a transmitter in Embodiment 27.
A transmission method according to one aspect of the present invention is a transmission method for transmitting a signal by a change in luminance of a light source, the method including: a reception step of accepting a dimming degree designated for the light source as a designated dimming degree; and a transmission step of, when the designated dimming degree is less than or equal to a first value, transmitting the signal encoded in a first mode by a change in luminance while causing the light source to emit light at the designated dimming degree, and, when the designated dimming degree is greater than the first value, transmitting the signal encoded in a second mode by a change in luminance while causing the light source to emit light at the designated dimming degree. When the designated dimming degree is greater than the first value and less than or equal to a second value, the value of the peak current of the light source for transmitting the signal encoded in the second mode by the change in luminance is smaller than the value of the peak current of the light source for transmitting the signal encoded in the first mode by the change in luminance when the designated dimming degree is the first value.
With this, as shown in FIG. 354, by switching the mode in which the signal is encoded, the value of the peak current of the light source when the designated dimming degree is greater than the first value and less than or equal to the second value becomes smaller than the value of the peak current of the light source when the designated dimming degree is the first value. Accordingly, a large peak current can be prevented from flowing through the light source as the designated dimming degree increases. As a result, deterioration of the light source can be suppressed. Moreover, because deterioration of the light source is suppressed, communication between various kinds of devices can be carried out over a long period.
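The mode selection described above can be pictured with a short sketch. The following Python fragment is a minimal illustration only, not the claimed implementation: the threshold FIRST_VALUE, the duty ratios, and the current limit are hypothetical numbers chosen for the example, under the simplifying assumption that the average luminance of the light source is roughly the product of the duty ratio and the peak current.

# Minimal sketch of mode selection based on the designated dimming degree.
# All constants are hypothetical example values, not values from the disclosure.
FIRST_VALUE = 0.5        # hypothetical mode-switching threshold (dimming degree, 0..1)
DUTY_MODE1 = 0.35        # duty ratio of the signal encoded in the first mode
DUTY_MODE2 = 0.75        # larger duty ratio of the signal encoded in the second mode
I_MAX = 1.0              # normalized maximum peak current of the light source

def select_mode(dimming):
    """Return the encoding mode for a designated dimming degree in [0, 1]."""
    return 1 if dimming <= FIRST_VALUE else 2

def peak_current(dimming):
    """Approximate peak current so that duty * peak reproduces the dimming degree."""
    duty = DUTY_MODE1 if select_mode(dimming) == 1 else DUTY_MODE2
    return min(I_MAX, dimming / duty)

# Just above the switching point the peak current drops, because the second mode
# uses a larger duty ratio (compare peak_current(0.5) with peak_current(0.51)).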
When the designated dimming degree is smaller than a third value, the signal encoded in the first mode may be transmitted by the change in luminance while the light source is caused to emit light at the designated dimming degree, and the value of the peak current may be kept constant regardless of the change in the designated dimming degree, where the third value is smaller than the first value. Specifically, when the designated dimming degree is smaller than the third value, the light source may be caused to emit light at the decreasing designated dimming degree while the value of the peak current is kept constant, by lengthening the time during which the light source is turned off as the designated dimming degree decreases.
With this, even when the designated dimming degree becomes small, the value of the peak current is kept constant, so that the visible light signal (that is, the light ID) transmitted by the change in luminance can be received by the receiver more easily.
The time during which the light source is turned off may be determined so that one period, obtained by adding the time during which the signal is transmitted by the change in luminance and the time during which the light source is turned off, does not exceed 10 milliseconds.
For example, if the one period exceeds 10 milliseconds, the change in luminance of the light source for transmitting the encoded signal may be perceived as flicker by the human eye. In the present disclosure, since the time during which the light source is turned off is determined so that the one period does not exceed 10 milliseconds, flicker can be prevented from being perceived by a person.
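As a rough illustration of this constraint, the following sketch derives an off time from a transmission time and a target dimming degree and caps the resulting period at 10 ms. The variable names and the simple proportional model (brightness is reduced in proportion to the off fraction of the period) are assumptions made for the example, not part of the disclosure.

MAX_PERIOD_S = 0.010  # one period (transmission time + off time) must not exceed 10 ms

def off_time(transmit_time_s, dimming, dimming_at_full_duty):
    """Off time that scales brightness down from dimming_at_full_duty to dimming.

    Assumes average brightness is reduced by the fraction of the period during
    which the light source is off (illustrative model only).
    """
    if dimming >= dimming_at_full_duty:
        return 0.0
    # dimming = dimming_at_full_duty * transmit_time / (transmit_time + t_off)
    t_off = transmit_time_s * (dimming_at_full_duty / dimming - 1.0)
    # Keep the period within 10 ms so that the blanking is not perceived as flicker.
    t_off = min(t_off, MAX_PERIOD_S - transmit_time_s)
    return max(t_off, 0.0)

# Example: a 2 ms signal block dimmed from 30% down to 10% needs 4 ms off time,
# and the resulting 6 ms period stays below the 10 ms flicker limit.
print(off_time(0.002, 0.10, 0.30))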
When the designated dimming degree is smaller than a fourth value, the signal encoded in the first mode may be transmitted by the change in luminance while the light source is caused to emit light at the designated dimming degree, and the light source may be caused to emit light at the decreasing designated dimming degree by decreasing the value of the peak current as the designated dimming degree decreases, where the fourth value is smaller than the second value.
With this, even when the designated dimming degree is still smaller, the light source can be caused to emit light appropriately at that designated dimming degree.
The value of the peak current of the light source when the designated dimming degree is the first value may be the same as the value of the peak current of the light source when the designated dimming degree is the maximum value. For example, the maximum value of the designated dimming degree is 100%.
With this, a large peak current can be passed through the light source even in the first mode, so that the signal transmitted by the change in luminance of the light source can be received by the receiver more easily.
The duty ratio of the signal encoded in the second mode may be larger than the duty ratio of the signal encoded in the first mode.
The first mode is a mode in which the peak current increases steeply even for a small increase in the dimming degree, and the second mode is a mode in which the increase in the peak current is suppressed even for a large increase in the dimming degree. Accordingly, the second mode prevents a large peak current from flowing through the light source, so that deterioration of the light source can be suppressed. Furthermore, in the first mode a large peak current flows through the light source even when the dimming degree is small, so that the signal transmitted by the change in luminance of the light source can easily be received by the receiver. The present disclosure therefore achieves both suppression of deterioration of the light source and ease of signal reception.
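The relationship between duty ratio and peak current can be stated compactly. Assuming, purely as a simplification, that the average luminance is proportional to the product of the duty ratio and the peak current, the sketch below shows why a small duty ratio (first mode) requires a steeply rising peak current while a large duty ratio (second mode) keeps the peak current low for the same dimming degree; the numbers are illustrative only.

# Illustrative only: average_luminance ≈ duty_ratio * peak_current (normalized units).
def required_peak(dimming, duty_ratio):
    return dimming / duty_ratio

for dimming in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(dimming,
          round(required_peak(dimming, 0.35), 2),   # first mode, small duty ratio
          round(required_peak(dimming, 0.75), 2))   # second mode, large duty ratio
# At the same dimming degree the second mode needs a much smaller peak current,
# which is why the second mode is used on the high-dimming side.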
When the value of the peak current of the light source exceeds a fifth value, transmission of the signal by the change in luminance of the light source may be stopped.
With this, deterioration of the light source can be suppressed further.
The usage time of the light source may be measured, and when the usage time is equal to or longer than a predetermined time, the signal may be transmitted by the change in luminance using a parameter value for causing the light source to emit light at a dimming degree larger than the designated dimming degree.
With this, it is possible to prevent the signal transmitted by the change in luminance from becoming difficult for the receiver to receive because of deterioration of the light source over time.
The usage time of the light source may be measured, and when the usage time is equal to or longer than a predetermined time, the pulse width of the current of the light source may be made larger than when the usage time is shorter than the predetermined time.
With this, even if the light source deteriorates over time, the pulse width of the current of the light source is increased, so that the signal transmitted by the change in luminance of the light source can be prevented from becoming difficult for the receiver to receive.
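A minimal sketch of this aging compensation is given below. The threshold of 10,000 hours, the 10% boosts, and the helper names are hypothetical; the sketch only illustrates the idea of raising the drive parameters once a usage-time threshold is crossed.

AGING_THRESHOLD_H = 10_000   # hypothetical usage-time threshold in hours
DIMMING_BOOST = 1.10         # hypothetical 10% boost of the effective dimming degree
PULSE_WIDTH_BOOST = 1.10     # hypothetical 10% widening of the current pulses

def compensate(designated_dimming, pulse_width_us, usage_hours):
    """Return (effective dimming, pulse width) after optional aging compensation."""
    if usage_hours >= AGING_THRESHOLD_H:
        return (min(1.0, designated_dimming * DIMMING_BOOST),
                pulse_width_us * PULSE_WIDTH_BOOST)
    return designated_dimming, pulse_width_us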
A transmission method according to another aspect of the present invention is a transmission method for transmitting a signal by a change in luminance of a light source, the method including: a reception step of accepting a dimming degree designated for the light source as a designated dimming degree; and a transmission step of transmitting the signal encoded in a first mode or a second mode by a change in luminance while causing the light source to emit light at the designated dimming degree. The duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode. In the transmission step, when the designated dimming degree is changed from a small value to a large value, the mode used for encoding the signal is switched from the first mode to the second mode when the designated dimming degree is a first value, and when the designated dimming degree is changed from a large value to a small value, the mode used for encoding the signal is switched from the second mode to the first mode when the designated dimming degree is a second value, the second value being smaller than the first value.
With this, as shown in FIG. 358, the designated dimming degree at which switching between the first mode and the second mode is performed (that is, the switching point) differs between when the designated dimming degree increases and when it decreases. Frequent switching between these modes can therefore be suppressed; in other words, the occurrence of so-called chattering can be suppressed. As a result, the operation of the transmission device that transmits the signal can be stabilized. Furthermore, the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode. Accordingly, as in the transmission method according to the above aspect of the present invention, a large peak current can be prevented from flowing through the light source as the designated dimming degree increases. As a result, deterioration of the light source can be suppressed, and because deterioration of the light source is suppressed, communication between various kinds of devices can be carried out over a long period. Moreover, when the designated dimming degree is small, the first mode, which has the smaller duty ratio, is used; the peak current described above can therefore be made large, and a signal that is easy for the receiver to receive can be transmitted as the visible light signal.
In the transmission method according to the other aspect of the present invention, in the transmission step, when switching from the first mode to the second mode is performed, the peak current of the light source for transmitting the encoded signal by the change in luminance is changed from a first current value to a second current value smaller than the first current value, and when switching from the second mode to the first mode is performed, the peak current is changed from a third current value to a fourth current value larger than the third current value, where the first current value is larger than the fourth current value and the second current value is larger than the third current value.
With this, the first mode and the second mode can be switched appropriately.
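A small state machine illustrates the hysteresis described above. The threshold values and duty ratios below are hypothetical; the point of the sketch is only that the upward switching point (FIRST_VALUE) is higher than the downward switching point (SECOND_VALUE), so a dimming degree that wanders between the two thresholds does not toggle the mode.

FIRST_VALUE = 0.55    # hypothetical: switch mode 1 -> 2 while the dimming degree is rising
SECOND_VALUE = 0.45   # hypothetical: switch mode 2 -> 1 while the dimming degree is falling
DUTY = {1: 0.35, 2: 0.75}

class ModeSwitcher:
    def __init__(self):
        self.mode = 1

    def update(self, dimming):
        if self.mode == 1 and dimming >= FIRST_VALUE:
            self.mode = 2          # peak current drops at this switch
        elif self.mode == 2 and dimming <= SECOND_VALUE:
            self.mode = 1          # peak current rises at this switch
        peak = min(1.0, dimming / DUTY[self.mode])
        return self.mode, peak

sw = ModeSwitcher()
for d in (0.40, 0.50, 0.56, 0.50, 0.44, 0.50):
    print(d, sw.update(d))
# Between 0.45 and 0.55 the mode keeps whatever value it last had, so small
# fluctuations of the designated dimming degree do not cause chattering.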
A transmission method according to still another aspect of the present invention is a transmission method for transmitting a visible light signal by a change in luminance of a light emitter, the method including: a determination step of determining a pattern of the change in luminance by modulating a signal; and a transmission step of transmitting the visible light signal by changing the luminance of red expressed by a light source included in the light emitter according to the determined pattern. The visible light signal includes data, a preamble, and a payload. In the data, a first luminance value and a second luminance value smaller than the first luminance value appear along the time axis, and the length of time for which at least one of the first luminance value and the second luminance value continues is less than or equal to a first predetermined value. In the preamble, the first and second luminance values each appear alternately along the time axis. In the payload, the first and second luminance values appear alternately along the time axis, and the length of time for which each of the first and second luminance values continues is greater than the first predetermined value and is determined according to the signal and a predetermined scheme.
With this, as shown in FIG. 363, the visible light signal includes one payload (that is, an L data portion or an R data portion) whose waveform is determined according to the signal to be modulated, and does not include two payloads. The visible light signal, that is, a packet of the visible light signal, can therefore be shortened. In other words, the visible light signal can be transmitted in a short time, and communication between various kinds of devices can be performed in a short time. As a result, even if, for example, the emission period of the red light expressed by the light source included in the light emitter is short, a packet of the visible light signal can be transmitted within that emission period.
In the payload, the first luminance value with a first time length, the second luminance value with a second time length, the first luminance value with a third time length, and the second luminance value with a fourth time length may appear in this order. In the transmission step, when the sum of the first time length and the third time length is smaller than a second predetermined value, the value of the current flowing through the light source may be made larger than when the sum of the first time length and the third time length is greater than the second predetermined value, where the second predetermined value is greater than the first predetermined value.
With this, as shown in FIGS. 362 and 363, the current value of the light source is increased when the sum of the first time length and the third time length is small, and is decreased when the sum of the first time length and the third time length is large. The average luminance of the packet consisting of the data, the preamble, and the payload can therefore be kept constant regardless of the signal.
In the payload, the first luminance value with a first time length D_0, the second luminance value with a second time length D_1, the first luminance value with a third time length D_2, and the second luminance value with a fourth time length D_3 may appear in this order. When the sum of four parameters y_k (k = 0, 1, 2, 3) obtained from the signal is less than or equal to a third predetermined value, each of the first to fourth time lengths D_0 to D_3 may be determined in accordance with D_k = W_0 + W_1 × y_k (where W_0 and W_1 are each an integer greater than or equal to 0).
With this, as shown in (b) of FIG. 363, a payload with a waveform that is short in accordance with the signal can be generated while each of the first to fourth time lengths D_0 to D_3 is kept at W_0 or more.
When the sum of the four parameters y_k (k = 0, 1, 2, 3) is less than or equal to the third predetermined value, the data, the preamble, and the payload may be transmitted in the order of the data, the preamble, and the payload in the transmission step.
With this, as shown in (b) of FIG. 363, the data (that is, invalid data) can inform the receiving device that receives the packet of the visible light signal containing the data that the packet does not include an L data portion.
When the sum of the four parameters y_k (k = 0, 1, 2, 3) is greater than the third predetermined value, each of the first to fourth time lengths D_0 to D_3 may be determined in accordance with D_0 = W_0 + W_1 × (A − y_0), D_1 = W_0 + W_1 × (B − y_1), D_2 = W_0 + W_1 × (A − y_2), and D_3 = W_0 + W_1 × (B − y_3) (where A and B are each an integer greater than or equal to 0).
With this, as shown in (a) of FIG. 363, a payload with a waveform that is short in accordance with the signal can be generated even when the above sum is large, while each of the first to fourth time lengths D_0 to D_3 (that is, the first to fourth time lengths D'_0 to D'_3) is kept at W_0 or more.
When the sum of the four parameters y_k (k = 0, 1, 2, 3) is greater than the third predetermined value, the data, the preamble, and the payload may be transmitted in the order of the payload, the preamble, and the data in the transmission step.
With this, as shown in (a) of FIG. 363, the data (that is, invalid data) can inform the receiving device that receives the packet of the visible light signal containing the data that the packet does not include an R data portion.
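The two branches above can be summarized in a short routine. The sketch below merely restates the formulas D_k = W_0 + W_1 × y_k and D_k = W_0 + W_1 × (A − y_k) or W_0 + W_1 × (B − y_k); the concrete values of W_0, W_1, A, B, and the threshold are hypothetical placeholders, since the disclosure leaves them to the predetermined scheme.

# Hypothetical example constants; the disclosure only requires them to be
# non-negative integers determined by the predetermined scheme.
W0, W1 = 2, 1
A, B = 15, 15
THRESHOLD = 30   # "third predetermined value" for the sum y0 + y1 + y2 + y3

def payload_time_lengths(y):
    """Return (D0..D3, transmission order) for parameters y = (y0, y1, y2, y3)."""
    if sum(y) <= THRESHOLD:
        d = [W0 + W1 * yk for yk in y]
        order = ("data", "preamble", "payload")     # packet without an L data portion
    else:
        d = [W0 + W1 * (A - y[0]), W0 + W1 * (B - y[1]),
             W0 + W1 * (A - y[2]), W0 + W1 * (B - y[3])]
        order = ("payload", "preamble", "data")     # packet without an R data portion
    return d, order

print(payload_time_lengths((3, 5, 2, 7)))    # small sum: direct mapping
print(payload_time_lengths((12, 14, 9, 13))) # large sum: complemented mapping keeps D_k short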
The light emitter may have a plurality of light sources including a red light source, a blue light source, and a green light source, and in the transmission step the visible light signal may be transmitted using only the red light source among the plurality of light sources.
With this, the light emitter can display video using the red, blue, and green light sources, and can also transmit a visible light signal at a wavelength that is easy for the receiving device to receive.
Note that these general or specific aspects may be implemented as an apparatus, a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of an apparatus, a system, a method, an integrated circuit, a computer program, and a recording medium.
Hereinafter, embodiments are described in detail with reference to the drawings.
Note that each of the embodiments described below shows a general or specific example. The numerical values, shapes, materials, constituent elements, the arrangement and connection of the constituent elements, steps, the order of steps, and the like shown in the following embodiments are mere examples and are not intended to limit the present invention. Among the constituent elements in the following embodiments, constituent elements not recited in any of the independent claims indicating the broadest concept are described as optional constituent elements.
(Embodiment 1)
Embodiment 1 will be described below.
(Observation of Luminance of Light Emitting Unit)
We propose an imaging method in which, when one image is captured, exposure starts and ends at a different time for each image sensor element instead of exposing all the image sensor elements at the same timing. FIG. 1 shows an example in which the image sensor elements arranged in one line are exposed simultaneously and imaging is performed while the exposure start time is shifted in the order of the lines. Here, a line of image sensor elements exposed simultaneously is referred to as an exposure line, and the line of pixels in the image corresponding to those image sensor elements is referred to as a bright line.
When an image of a blinking light source shown on the entire surface of the image sensor is captured using this imaging method, bright lines (lines of brightness and darkness in the pixel values) along the exposure lines appear in the captured image, as shown in FIG. 2. By recognizing this bright line pattern, a change in light source luminance at a speed exceeding the imaging frame rate can be estimated. Hence, transmitting a signal as a change in light source luminance enables communication at a speed higher than the imaging frame rate. In the case where the light source expresses a signal by taking two luminance values, the lower luminance value is referred to as low (LO), and the higher luminance value is referred to as high (HI). Low may be a state in which the light source emits no light, or a state in which the light source emits light more weakly than high.
By this method, information is transmitted at a speed exceeding the imaging frame rate.
When there are 20 exposure lines whose exposure times do not overlap in one captured image and the imaging frame rate is 30 fps, a luminance change with a period of 1.67 milliseconds can be recognized. When there are 1000 exposure lines whose exposure times do not overlap, a luminance change with a period of 1/30,000 second (about 33 microseconds) can be recognized. Note that the exposure time is set shorter than, for example, 10 milliseconds.
FIG. 2 illustrates the case where the exposure of the next exposure line starts after the exposure of one exposure line ends.
In this case, when the number of frames per second (frame rate) is f and the number of exposure lines constituting one image is l, transmitting information based on whether or not each exposure line receives at least a predetermined amount of light allows information to be transmitted at a rate of up to f × l bits per second.
Note that when the exposure is performed with a time difference for each pixel instead of for each line, communication at an even higher speed is possible.
In this case, when the number of pixels per exposure line is m and information is transmitted based on whether or not each pixel receives at least a predetermined amount of light, the transmission rate is up to f × l × m bits per second.
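These rate expressions are simple products, as the short calculation below shows; the frame rate, line count, and pixel count used here are example numbers for a typical sensor and are not taken from the disclosure.

f = 30      # frames per second (example)
l = 1000    # exposure lines per image (example)
m = 1000    # pixels per exposure line (example)

per_line_rate = f * l            # 1 bit per exposure line: 30,000 bits per second
per_pixel_rate = f * l * m       # 1 bit per pixel: 30,000,000 bits per second
shortest_period_s = 1 / (f * l)  # shortest recognizable luminance period: about 33 microseconds

print(per_line_rate, per_pixel_rate, shortest_period_s)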
As shown in FIG. 3, if the exposure state of each exposure line caused by the light emission of the light emitting unit can be recognized at a plurality of levels, more information can be transmitted by controlling the light emission time of the light emitting unit in units of time shorter than the exposure time of each exposure line.
If the exposure state can be recognized at Elv levels, information can be transmitted at a rate of up to f × l × Elv bits per second.
Moreover, the fundamental period of the transmission can be recognized by causing the light emitting unit to emit light at a timing slightly shifted from the exposure timing of each exposure line.
FIG. 4 illustrates the case where the exposure of the next exposure line starts before the exposure of one exposure line ends, that is, a structure in which the exposure times of adjacent exposure lines partially overlap. With this structure: (1) the number of samples within a predetermined time can be increased compared with the case where the exposure of the next exposure line starts only after the exposure time of one exposure line ends. The increased number of samples within the predetermined time makes it possible to detect the optical signal generated by the optical transmitter, which is the subject, more appropriately; in other words, the error rate when detecting the optical signal can be reduced. Furthermore, (2) the exposure time of each exposure line can be made longer than in the case where the exposure of the next exposure line starts only after the exposure time of one exposure line ends, so that a brighter image can be obtained even when the subject is dark; in other words, the S/N ratio can be improved. Note that it is not necessary for the exposure times of adjacent exposure lines to partially overlap in all of the exposure lines; some of the exposure lines may be configured not to have such a partial temporal overlap. Configuring some of the exposure lines so as not to partially overlap in time suppresses the generation of intermediate colors caused by overlapping exposure times on the imaging screen, which allows bright lines to be detected more appropriately.
In this case, the exposure time is calculated from the brightness of each exposure line, and the light emission state of the light emitting unit is recognized.
Note that in the case where the brightness of each exposure line is determined as a binary value of whether or not the luminance is greater than or equal to a threshold, the light emitting unit needs to continue the state of emitting no light for at least the exposure time of each line so that the non-emitting state can be recognized.
FIG. 5A illustrates the influence of the difference in exposure time in the case where the exposure start times of the exposure lines are equal. In 7500a, the exposure end time of one exposure line is equal to the exposure start time of the next exposure line; in 7500b, the exposure time is longer than that. A structure in which the exposure times of adjacent exposure lines partially overlap, as in 7500b, allows a longer exposure time. That is, the light incident on the image sensor increases, and a bright image can be obtained. In addition, since the imaging sensitivity for capturing an image of the same brightness can be kept low, an image with less noise is obtained, and communication errors are suppressed.
FIG. 5B illustrates the influence of the difference in exposure start time of each exposure line in the case where the exposure times are equal. In 7501a, the exposure end time of one exposure line is equal to the exposure start time of the next exposure line; in 7501b, the exposure of the next exposure line starts before the exposure of the previous exposure line ends. A structure in which the exposure times of adjacent exposure lines partially overlap, as in 7501b, increases the number of lines that can be exposed per unit time. This provides higher resolution and a larger amount of information. Since the sample interval (that is, the difference in exposure start time) becomes shorter, the change in light source luminance can be estimated more accurately, the error rate can be reduced, and a change in light source luminance in a shorter time can be recognized. By providing an overlap in the exposure times, blinking of the light source that is shorter than the exposure time can be recognized using the difference in the amount of exposure between adjacent exposure lines.
When the number of samples described above is small, that is, when the sample interval (the time difference t_D shown in FIG. 5B) is long, there is a higher possibility that the change in light source luminance cannot be detected accurately. In this case, the possibility can be reduced by shortening the exposure time; that is, the change in light source luminance can be detected accurately. It is also desirable that the exposure time satisfy exposure time > (sample interval − pulse width), where the pulse width is the pulse width of the light during the period in which the luminance of the light source is high. This allows the high luminance to be detected appropriately.
As described with reference to FIGS. 5A and 5B, in the structure in which the exposure lines are sequentially exposed so that the exposure times of adjacent exposure lines partially overlap, using, for signal transmission, the bright line pattern generated by setting the exposure time shorter than in the normal imaging mode makes it possible to improve the communication speed dramatically. Setting the exposure time during visible light communication to 1/480 second or less enables an appropriate bright line pattern to be generated. Here, with a frame frequency f, the exposure time needs to be set to satisfy exposure time < 1/(8 × f). Blanking that occurs during imaging is at most half of one frame. That is, since the blanking time is less than or equal to half of the imaging time, the actual imaging time is 1/(2f) at the shortest. Furthermore, since four-valued information needs to be received within the time of 1/(2f), the exposure time needs to be at least shorter than 1/(2f × 4). Since the normal frame rate is 60 frames per second or less, setting the exposure time to 1/480 second or less enables an appropriate bright line pattern to be generated in the image data and achieves high-speed signal transmission.
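The bound quoted above follows directly from the two factors in the text; the few lines below merely rewrite it as arithmetic, assuming a 60 fps sensor.

f = 60                              # normal frame rate in frames per second (example)
shortest_imaging_time = 1 / (2 * f) # blanking can take up to half of a frame
levels_per_window = 4               # four-valued information must fit in that window

max_exposure = shortest_imaging_time / levels_per_window
print(max_exposure)                 # 1 / (8 * f) = 1/480 second at 60 fps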
FIG. 5C illustrates the advantage of a short exposure time in the case where the exposure times of the exposure lines do not overlap. When the exposure time is long, even if the light source changes its luminance in a binary manner as in 7502a, intermediate-color portions appear in the captured image as in 7502e, and it tends to be difficult to recognize the change in light source luminance. However, by providing a predetermined non-exposure idle time (predetermined wait time) t_D2 from the end of the exposure of one exposure line to the start of the exposure of the next exposure line, as in 7502d, the change in light source luminance can be recognized more easily. That is, a more appropriate bright line pattern such as 7502f can be detected. The structure of providing a predetermined non-exposure idle time, as in 7502d, can be realized by making the exposure time t_E smaller than the time difference t_D between the exposure start times of the exposure lines. In the case where the normal imaging mode is such that the exposure times of adjacent exposure lines partially overlap, this can be realized by setting the exposure time shorter than in the normal imaging mode until a predetermined non-exposure idle time occurs. In the case where the normal imaging mode is such that the exposure end time of one exposure line is equal to the exposure start time of the next exposure line, it can likewise be realized by setting the exposure time shorter until a predetermined non-exposure time occurs. Alternatively, a predetermined non-exposure idle time (predetermined wait time) t_D2 from the end of the exposure of one exposure line to the start of the exposure of the next exposure line can also be provided by increasing the interval t_D between the exposure start times of the exposure lines, as in 7502g. This structure allows a longer exposure time, so a bright image can be captured, and since noise is reduced, error tolerance is high. On the other hand, with this structure the number of exposure lines that can be exposed within a given time decreases, so there is a drawback that the number of samples decreases, as in 7502h; it is therefore desirable to use these structures depending on the circumstances. For example, the estimation error of the change in light source luminance can be reduced by using the former structure when the imaging target is bright and the latter structure when the imaging target is dark.
Note that it is not necessary for the exposure times of adjacent exposure lines to partially overlap in all of the exposure lines; some of the exposure lines may be configured not to have such a partial temporal overlap. Likewise, it is not necessary for all of the exposure lines to have a predetermined non-exposure idle time (predetermined wait time) from the end of the exposure of one exposure line to the start of the exposure of the next exposure line; some of the exposure lines may partially overlap in time. Such a configuration makes it possible to take advantage of each of the structures. Furthermore, the same signal readout method or circuit may be used both in the normal imaging mode, in which imaging is performed at a normal frame rate (30 fps, 60 fps), and in the visible light communication mode, in which imaging is performed with an exposure time of 1/480 second or less for visible light communication. Reading the signal with the same readout method or circuit eliminates the need for separate circuits for the normal imaging mode and the visible light communication mode, so that the circuit scale can be reduced.
FIG. 5D illustrates the relationship between the minimum change time t_S of the light source luminance, the exposure time t_E, the time difference t_D between the exposure start times of the exposure lines, and the captured image. When t_E + t_D < t_S, one or more exposure lines are always imaged in a state where the light source does not change from the start to the end of the exposure, so an image with clear luminance is obtained as in 7503d, and the change in light source luminance is easily recognized. When 2t_E > t_S, bright lines in a pattern different from the change in light source luminance may be obtained, making it difficult to recognize the change in light source luminance from the captured image.
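These inequalities are easy to check programmatically; the helper below simply evaluates the two conditions stated in this paragraph for given timing values (all in the same time unit).

def timing_check(t_E, t_D, t_S):
    """Return (clear_image_guaranteed, risk_of_false_pattern) per the conditions above."""
    return (t_E + t_D < t_S, 2 * t_E > t_S)

print(timing_check(t_E=100, t_D=50, t_S=200))   # (True, False): clear bright lines expected
print(timing_check(t_E=150, t_D=100, t_S=200))  # (False, True): pattern may not match the source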
FIG. 5E illustrates the relationship between the transition time t_T of the light source luminance and the time difference t_D between the exposure start times of the exposure lines. The larger t_D is relative to t_T, the fewer exposure lines take intermediate colors, which makes it easier to estimate the light source luminance. When t_D > t_T, the number of consecutive intermediate-color exposure lines is two or fewer, which is desirable. Since t_T is 1 microsecond or less when the light source is an LED and about 5 microseconds when the light source is an organic EL element, setting t_D to 5 microseconds or more facilitates the estimation of the light source luminance.
FIG. 5F illustrates the relationship between the high-frequency noise period t_HT of the light source luminance and the exposure time t_E. The larger t_E is relative to t_HT, the less the captured image is affected by the high-frequency noise, which makes it easier to estimate the light source luminance. When t_E is an integer multiple of t_HT, the influence of the high-frequency noise disappears, and the estimation of the light source luminance becomes easiest. For the estimation of the light source luminance, it is desirable that t_E > t_HT. The main cause of high-frequency noise is the switching power supply circuit, and since t_HT is 20 microseconds or less in many switching power supplies for lamps, setting t_E to 20 microseconds or more facilitates the estimation of the light source luminance.
FIG. 5G is a graph showing the relationship between the exposure time t_E and the magnitude of the high-frequency noise when t_HT is 20 microseconds. Taking into account that t_HT varies depending on the light source, the graph confirms that it is efficient to set t_E to a value equal to a value at which the noise amount takes a local maximum, that is, to 15 microseconds or more, 35 microseconds or more, 54 microseconds or more, or 74 microseconds or more. From the viewpoint of reducing high-frequency noise, a larger t_E is preferable; however, as described above, a smaller t_E also makes the estimation of the light source luminance easier in that intermediate-color portions are less likely to occur. Therefore, t_E may be set to 15 microseconds or more when the period of the change in light source luminance is 15 to 35 microseconds, 35 microseconds or more when the period is 35 to 54 microseconds, 54 microseconds or more when the period is 54 to 74 microseconds, and 74 microseconds or more when the period is 74 microseconds or more.
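The rule at the end of this paragraph can be written as a small lookup; the function below simply picks the lower bound for t_E from the period of the change in light source luminance, following the ranges listed above.

def min_exposure_us(luminance_period_us):
    """Lower bound on the exposure time t_E in microseconds for a given
    period of the light source luminance change, per the ranges above."""
    if luminance_period_us < 15:
        return None          # outside the ranges discussed above
    for bound in (74, 54, 35, 15):
        if luminance_period_us >= bound:
            return bound

print(min_exposure_us(40))   # 35 microseconds or more
print(min_exposure_us(80))   # 74 microseconds or more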
FIG. 5H shows the relationship between the exposure time t_E and the recognition success rate. Since the exposure time t_E has a relative meaning with respect to the time during which the luminance of the light source is constant, the horizontal axis is the value (relative exposure time) obtained by dividing the period t_S in which the light source luminance changes by the exposure time t_E. The graph shows that, to bring the recognition success rate to approximately 100%, the relative exposure time should be 1.2 or less. For example, when the transmission signal is 1 kHz, the exposure time should be about 0.83 milliseconds or less. Likewise, the relative exposure time should be 1.25 or less to achieve a recognition success rate of 95% or more, and 1.4 or less to achieve a recognition success rate of 80% or more. Moreover, since the recognition success rate drops sharply around a relative exposure time of 1.5 and becomes almost 0% at 1.6, the relative exposure time should be set so as not to exceed 1.5. It can also be seen that after the recognition rate becomes 0 at 7507c, it rises again at 7507d, 7507e, and 7507f. Accordingly, when a longer exposure time is desired, for example to capture a bright image, an exposure time with a relative exposure time of 1.9 to 2.2, 2.4 to 2.6, or 2.8 to 3.0 may be used. For example, these exposure times may be used as an intermediate mode.
FIG. 6A is a flowchart of the information communication method in this embodiment.
The information communication method in this embodiment is an information communication method for obtaining information from a subject, and includes steps SK91 to SK93.
That is, this information communication method includes: a first exposure time setting step SK91 of setting a first exposure time of an image sensor so that, in an image obtained by capturing the subject with the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to a change in luminance of the subject; a first image obtainment step SK92 of obtaining a bright line image including the plurality of bright lines by the image sensor capturing the subject changing in luminance with the set first exposure time; and an information obtainment step SK93 of obtaining information by demodulating data specified by the pattern of the plurality of bright lines included in the obtained bright line image, wherein in the first image obtainment step SK92, each of the plurality of exposure lines starts exposure sequentially at a different time, and starts exposure after a predetermined idle time has elapsed since the end of the exposure of the adjacent exposure line adjacent to that exposure line.
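To make the flow of steps SK91 to SK93 concrete, here is a receiver-side sketch in Python. The camera API (set_exposure, capture), the decoder object, and the constants are hypothetical stand-ins; the sketch only mirrors the order of the steps: set a short exposure time, capture a bright line image, then demodulate the bright line pattern.

# Hypothetical camera/decoder interfaces, used only to illustrate steps SK91 to SK93.
def receive_visible_light_id(camera, decoder, exposure_s=1/2000):
    camera.set_exposure(exposure_s)          # SK91: set a short (first) exposure time
    bright_line_image = camera.capture()     # SK92: exposure lines start at staggered times,
                                             #       each after an idle time, yielding bright lines
    rows = bright_line_image.mean(axis=1)    # collapse each exposure line to one brightness value
    bits = (rows > rows.mean()).astype(int)  # threshold into the bright line pattern
    return decoder.demodulate(bits)          # SK93: demodulate the pattern into information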
 図6Bは、本実施の形態における情報通信装置のブロック図である。 FIG. 6B is a block diagram of the information communication apparatus according to the present embodiment.
 本実施の形態における情報通信装置K90は、被写体から情報を取得する情報通信装置であって、構成要素K91~K93を備える。 The information communication device K90 in the present embodiment is an information communication device that acquires information from a subject, and includes constituent elements K91 to K93.
 That is, the information communication device K90 includes: an exposure time setting unit K91 that sets an exposure time of an image sensor so that, in an image obtained by capturing the subject with the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to a change in luminance of the subject; an image obtaining unit K92 having the image sensor, which captures the subject, whose luminance changes, with the set exposure time and thereby obtains a bright line image including the plurality of bright lines; and an information obtaining unit K93 that obtains information by demodulating data specified by the pattern of the plurality of bright lines included in the obtained bright line image. Each of the plurality of exposure lines starts exposure at a sequentially different time, and starts exposure after a predetermined idle time has elapsed since the end of exposure of the adjacent exposure line.
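 A minimal timing model of the exposure-line behavior described here, assuming that line i begins exposing a fixed idle time after line i-1 finishes. The sequential model and all names are assumptions for illustration only.

```python
# Assumed timing model: line i starts a fixed idle time after line i-1 ends its exposure.

def exposure_schedule(num_lines: int, exposure_s: float, idle_s: float):
    """Return (start, end) exposure times in seconds for each exposure line."""
    schedule = []
    start = 0.0
    for _ in range(num_lines):
        end = start + exposure_s
        schedule.append((start, end))
        start = end + idle_s          # the next line waits out the idle time
    return schedule

# Example: 8 lines, 50 us exposure, 10 us idle time between adjacent lines.
for i, (s, e) in enumerate(exposure_schedule(8, 50e-6, 10e-6)):
    print(f"line {i}: {s*1e6:.0f}-{e*1e6:.0f} us")
```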
 With the information communication method and the information communication device K90 shown in FIG. 6A and FIG. 6B, each of the plurality of exposure lines starts exposure after a predetermined idle time has elapsed since the end of exposure of its adjacent exposure line, as shown for example in FIG. 5C, which makes the change in luminance of the subject easier to recognize. As a result, information can be appropriately obtained from the subject.
 なお、上記実施の形態において、各構成要素は、専用のハードウェアで構成されるか、各構成要素に適したソフトウェアプログラムを実行することによって実現されてもよい。各構成要素は、CPUまたはプロセッサなどのプログラム実行部が、ハードディスクまたは半導体メモリなどの記録媒体に記録されたソフトウェアプログラムを読み出して実行することによって実現されてもよい。例えばプログラムは、図6Aのフローチャートによって示される情報通信方法をコンピュータに実行させる。 In the above embodiment, each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component. Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. For example, the program causes the computer to execute the information communication method shown by the flowchart of FIG. 6A.
 (Embodiment 2)
 In this embodiment, application examples are described that use a receiver, such as a smartphone, which is the information communication device K90 of Embodiment 1, and a transmitter that transmits information as a blinking pattern of a light source such as an LED or an organic EL element.
 In the following description, the normal shooting mode, or shooting in the normal shooting mode, is referred to as normal shooting, and the visible light communication mode, or shooting in the visible light communication mode, is referred to as visible light shooting (visible light communication). Shooting in the intermediate mode may be used instead of normal shooting and visible light shooting, and an intermediate image may be used instead of the composite image described later.
 図7は、本実施の形態における受信機の撮影動作の一例を示す図である。 FIG. 7 is a diagram illustrating an example of the photographing operation of the receiver in this embodiment.
 The receiver 8000 switches the shooting mode between normal shooting, visible light communication, normal shooting, and so on. The receiver 8000 then combines the normal captured image and the visible light communication image to generate a composite image in which the bright line pattern, the subject, and its surroundings are clearly shown, and displays this composite image on the display. The composite image is generated by superimposing the bright line pattern of the visible light communication image on the location of the normal captured image from which the signal is transmitted. The bright line pattern, the subject, and its surroundings shown in the composite image are each clear enough to be sufficiently recognized by the user. By displaying such a composite image, the user can know more clearly from where, or from which position, the signal is being transmitted.
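 A minimal sketch of the compositing step, assuming that the two images are aligned grayscale numpy arrays of the same size and that the transmitting region can be located with a crude per-column intensity-variance test. This is only an illustration under those assumptions, not the actual algorithm of the receiver 8000.

```python
import numpy as np

def composite(normal_img: np.ndarray, vlc_img: np.ndarray, var_thresh: float = 100.0):
    """Overlay the bright-line pattern of the visible light communication image
    onto the normal captured image (both assumed aligned, same shape, grayscale)."""
    out = normal_img.copy()
    # Bright lines run along exposure lines (rows), so columns covering the
    # transmitter show strong intensity variation across the rows.
    local_var = vlc_img.astype(np.float32).var(axis=0, keepdims=True)
    mask = np.broadcast_to(local_var > var_thresh, vlc_img.shape)
    out[mask] = vlc_img[mask]          # overlay the bright-line pattern there
    return out
```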
 図8は、本実施の形態における受信機の撮影動作の他の例を示す図である。 FIG. 8 is a diagram illustrating another example of the photographing operation of the receiver in this embodiment.
 受信機8000は、カメラCa1およびカメラCa2を備える。このような受信機8000では、カメラCa1は通常撮影を行い、カメラCa2は可視光撮影を行う。これにより、カメラCa1は、上述のような通常撮影画像を取得し、カメラCa2は、上述のような可視光通信画像を取得する。そして、受信機8000は、通常撮影画像および可視光通信画像を合成することによって、上述の合成画像を生成してディスプレイに表示する。 The receiver 8000 includes a camera Ca1 and a camera Ca2. In such a receiver 8000, the camera Ca1 performs normal photographing, and the camera Ca2 performs visible light photographing. Thereby, the camera Ca1 acquires the normal captured image as described above, and the camera Ca2 acquires the visible light communication image as described above. Then, the receiver 8000 generates the above-described combined image by combining the normal captured image and the visible light communication image, and displays the combined image on the display.
 図9は、本実施の形態における受信機の撮影動作の他の例を示す図である。 FIG. 9 is a diagram illustrating another example of the photographing operation of the receiver in this embodiment.
 In the receiver 8000 having the two cameras, the camera Ca1 switches the shooting mode between normal shooting, visible light communication, normal shooting, and so on, while the camera Ca2 continuously performs normal shooting. When normal shooting is performed by the cameras Ca1 and Ca2 at the same time, the receiver 8000 estimates the distance from the receiver 8000 to the subject (hereinafter referred to as the subject distance) from the normal captured images obtained by these cameras, using stereo vision (the principle of triangulation). By using the subject distance estimated in this way, the receiver 8000 can superimpose the bright line pattern of the visible light communication image at an appropriate position in the normal captured image, that is, it can generate an appropriate composite image.
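 The distance estimate itself is ordinary stereo triangulation. The sketch below shows the standard relation Z = f·B/d; the calibration numbers in the example are made up for illustration and are not taken from the patent.

```python
# Standard stereo triangulation (not specific to the patent): with two parallel
# cameras, subject distance Z = f * B / d, where f is the focal length in pixels,
# B the baseline between Ca1 and Ca2, and d the disparity of the subject between
# the two normal captured images.

def subject_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("subject not matched or too far away")
    return focal_px * baseline_m / disparity_px

print(subject_distance(focal_px=1400.0, baseline_m=0.012, disparity_px=24.0))  # ~0.7 m
```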
 図10は、本実施の形態における受信機の表示動作の一例を示す図である。 FIG. 10 is a diagram illustrating an example of the display operation of the receiver in this embodiment.
 As described above, the receiver 8000 switches the shooting mode between visible light communication, normal shooting, visible light communication, and so on. When the receiver 8000 first performs visible light communication, it starts an application program and estimates its own position based on the signal received by visible light communication. Then, when performing normal shooting, the receiver 8000 displays AR (Augmented Reality) information on the normal captured image obtained by the normal shooting; this AR information is obtained based on the position estimated as described above. Furthermore, the receiver 8000 estimates its own movement and change of direction based on the detection result of the 9-axis sensor, motion detection on the normal captured image, and the like, and moves the display position of the AR information in accordance with the estimated movement and change of direction. This allows the AR information to follow the subject image in the normal captured image.
 When the receiver 8000 switches the shooting mode from normal shooting to visible light communication, during the visible light communication it superimposes the AR information on the latest normal captured image obtained during the immediately preceding normal shooting, and displays that normal captured image with the AR information superimposed. As in normal shooting, the receiver 8000 estimates its own movement and change of direction based on the detection result of the 9-axis sensor, and moves the AR information and the normal captured image in accordance with the estimated movement and change of direction. Thus, also during visible light communication, the AR information can be made to follow the subject image in the normal captured image in accordance with the movement of the receiver 8000, just as in normal shooting, and the normal captured image can be enlarged or reduced in accordance with the movement of the receiver 8000.
 図11は、本実施の形態における受信機の表示動作の一例を示す図である。 FIG. 11 is a diagram showing an example of the display operation of the receiver in this embodiment.
 For example, the receiver 8000 may display the composite image showing the bright line pattern, as shown in (a) of FIG. 11. Alternatively, as shown in (b) of FIG. 11, instead of the bright line pattern, the receiver 8000 may superimpose on the normal captured image a signal explicit object, which is an image having a predetermined color for notifying that a signal is being transmitted, thereby generating a composite image, and may display that composite image.
 Also, as shown in (c) of FIG. 11, the receiver 8000 may display, as the composite image, a normal captured image in which the locations from which signals are transmitted are indicated by dotted frames and identifiers (for example, ID: 101, ID: 102, and so on). Alternatively, as shown in (d) of FIG. 11, instead of the bright line pattern, the receiver 8000 may superimpose on the normal captured image a signal identification object, which is an image having a predetermined color for notifying that a specific type of signal is being transmitted, thereby generating a composite image, and may display that composite image. In this case, the color of the signal identification object differs depending on the type of signal output from the transmitter. For example, a red signal identification object is superimposed when the signal output from the transmitter is position information, and a green signal identification object is superimposed when the signal output from the transmitter is a coupon.
 図12は、本実施の形態における受信機の動作の一例を示す図である。 FIG. 12 is a diagram illustrating an example of the operation of the receiver in this embodiment.
 For example, when the receiver 8000 receives a signal by visible light communication, it may display the normal captured image and output a sound for notifying the user that a transmitter has been found. In this case, the receiver 8000 may vary the type of sound output, the number of times it is output, or the output duration depending on the number of transmitters found, the type of signal received, or the type of information specified by that signal.
 図13は、本実施の形態における受信機の動作の他の例を示す図である。 FIG. 13 is a diagram illustrating another example of the operation of the receiver in this embodiment.
 For example, when the user touches the bright line pattern shown in the composite image, the receiver 8000 generates an information notification image based on the signal transmitted from the subject corresponding to the touched bright line pattern, and displays that information notification image. This information notification image indicates, for example, a store coupon or a location. The bright line pattern may instead be the signal explicit object, the signal identification object, or the dotted frame shown in FIG. 11; the same applies to the bright line patterns described below.
 図14は、本実施の形態における受信機の動作の他の例を示す図である。 FIG. 14 is a diagram illustrating another example of the operation of the receiver in this embodiment.
 For example, when the user touches the bright line pattern shown in the composite image, the receiver 8000 generates an information notification image based on the signal transmitted from the subject corresponding to the touched bright line pattern, and displays that information notification image. This information notification image indicates, for example, the current location of the receiver 8000 on a map or the like.
 図15は、本実施の形態における受信機の動作の他の例を示す図である。 FIG. 15 is a diagram illustrating another example of the operation of the receiver in this embodiment.
 For example, when the user performs a swipe on the receiver 8000 on which the composite image is displayed, the receiver 8000 displays a normal captured image having dotted frames and identifiers, similar to the normal captured image shown in (c) of FIG. 11, and displays a list of information so as to follow the swipe operation. This list shows the information specified by the signal transmitted from the location (transmitter) indicated by each identifier. The swipe may be, for example, an operation of moving a finger into the display of the receiver 8000 from outside its right edge, or an operation of moving a finger in from the top, bottom, or left edge of the display.
 また、その一覧に含まれる情報がユーザによってタップされると、受信機8000は、その情報をより詳細に示す情報通知画像(例えばクーポンを示す画像)を表示してもよい。 In addition, when information included in the list is tapped by the user, the receiver 8000 may display an information notification image (for example, an image showing a coupon) showing the information in more detail.
 図16は、本実施の形態における受信機の動作の他の例を示す図である。 FIG. 16 is a diagram illustrating another example of the operation of the receiver in this embodiment.
 例えば、合成画像が表示されている受信機8000に対してユーザがスワイプを行うと、受信機8000は、スワイプの操作に追随するように情報通知画像を合成画像に重畳して表示する。この情報通知画像は、被写体距離を矢印とともにユーザに分かり易く示すものである。また、スワイプは、例えば、受信機8000におけるディスプレイの下側の外から中に指を動かす操作であってもよい。なお、スワイプは、ディスプレイの左側から、上側から、または右側から中に指を動かす操作であってもよい。 For example, when the user swipes the receiver 8000 on which the composite image is displayed, the receiver 8000 displays the information notification image superimposed on the composite image so as to follow the swipe operation. This information notification image shows the subject distance with an arrow in an easy-to-understand manner for the user. The swipe may be, for example, an operation of moving a finger from outside the lower side of the display in the receiver 8000. The swipe may be an operation of moving a finger from the left side of the display, from the upper side, or from the right side.
 図17は、本実施の形態における受信機の動作の他の例を示す図である。 FIG. 17 is a diagram illustrating another example of the operation of the receiver in this embodiment.
 For example, the receiver 8000 captures, as a subject, a transmitter that is signage indicating a plurality of stores, and displays the normal captured image obtained by the capture. When the user taps the signage image of one store included in the subject shown in the normal captured image, the receiver 8000 generates an information notification image based on the signal transmitted from the signage of that store, and displays that information notification image 8001. The information notification image 8001 is, for example, an image showing the availability of seats at the store.
 図18は、本実施の形態における受信機と送信機とサーバとの動作の一例を示す図である。 FIG. 18 is a diagram illustrating an example of operations of the receiver, the transmitter, and the server in the present embodiment.
 First, the transmitter 8012, configured as a television, transmits a signal to the receiver 8011 by changing in luminance. This signal includes, for example, information for prompting the user to purchase content related to the program being viewed. When the receiver 8011 receives the signal by visible light communication, it displays, based on the signal, an information notification image that prompts the user to purchase the content. When the user performs an operation to purchase the content, the receiver 8011 transmits to the server 8013 at least one of the information contained in the SIM (Subscriber Identity Module) card inserted in the receiver 8011, a user ID, a terminal ID, credit card information, information for billing, a password, and a transmitter ID. The server 8013 manages a user ID and payment information in association with each other for each user. Based on the information transmitted from the receiver 8011, the server 8013 identifies the user ID and checks the payment information associated with that user ID. Through this check, the server 8013 determines whether or not to permit the user to purchase the content. If the server 8013 determines that the purchase is permitted, it transmits permission information to the receiver 8011. When the receiver 8011 receives the permission information, it transmits the permission information to the transmitter 8012. The transmitter 8012 that has received the permission information obtains the content, for example via a network, and plays it back.
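 A hypothetical sketch of this authorization exchange. The field names, the payment table, and all functions below are illustrative assumptions; the patent does not specify message formats.

```python
# Hypothetical sketch of the purchase-authorization flow described above.

PAYMENT_INFO = {"user123": {"card_ok": True}}   # server side: user ID -> payment info

def server_authorize(request: dict) -> bool:
    """Server 8013: look up the user's payment info and decide whether to permit."""
    info = PAYMENT_INFO.get(request.get("user_id"))
    return bool(info and info["card_ok"])

def receiver_purchase(user_id: str, transmitter_id: str) -> bool:
    """Receiver 8011: send identifying info to the server and relay the permission."""
    request = {"user_id": user_id, "transmitter_id": transmitter_id}
    permitted = server_authorize(request)
    if permitted:
        # In the scenario above, the permission would be forwarded to transmitter 8012,
        # which then fetches and plays the purchased content over the network.
        pass
    return permitted

print(receiver_purchase("user123", "tv-8012"))   # True
```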
 また、送信機8012は、輝度変化することによって送信機8012のIDを含む情報を受信機8011に対して送信してもよい。この場合、受信機8011は、その情報をサーバ8013に送信する。サーバ8013は、その情報を取得すると、その送信機8012によって例えばテレビ番組が視聴されていると判断することができ、テレビ番組の視聴率調査を行うことができる。 Further, the transmitter 8012 may transmit information including the ID of the transmitter 8012 to the receiver 8011 by changing the luminance. In this case, the receiver 8011 transmits the information to the server 8013. When the server 8013 obtains the information, the server 8013 can determine that, for example, a television program is being viewed by the transmitter 8012, and can perform a viewing rate survey of the television program.
 Furthermore, the receiver 8011 may include the content of an operation performed by the user (such as a vote) in the above information and transmit it to the server 8013, so that the server 8013 can reflect that content in the television program; that is, a viewer-participation program can be realized. In addition, when the receiver 8011 accepts a written comment from the user, it may include the content of that comment in the above information and transmit it to the server 8013, so that the server 8013 can reflect the comment in the television program, a bulletin board on the network, or the like.
 Furthermore, when the transmitter 8012 transmits the information described above, the server 8013 can charge for viewing of a television program on pay broadcasting or as an on-demand program. The server 8013 can also cause the receiver 8011 to display an advertisement, detailed information about the television program displayed on the transmitter 8012, or the URL of a site showing that detailed information. Moreover, by obtaining the number of times an advertisement has been displayed by the receiver 8011, or the amount paid for products purchased through the advertisement, the server 8013 can charge the advertiser according to that number or amount. Such amount-based charging can be performed even if the user who saw the advertisement does not purchase the product immediately. In addition, when the server 8013 obtains, from the transmitter 8012 via the receiver 8011, information indicating the manufacturer of the transmitter 8012, it can provide a service to the manufacturer indicated by that information (for example, payment of a reward for the sale of the product described above).
 図19は、本実施の形態における受信機の動作の他の例を示す図である。 FIG. 19 is a diagram illustrating another example of the operation of the receiver in this embodiment.
 The receiver 8030 is configured, for example, as a head mounted display including a camera. When the start button is pressed, the receiver 8030 starts shooting in the visible light communication mode, that is, visible light communication. When a signal is received by visible light communication, the receiver 8030 notifies the user of information corresponding to the received signal. This notification is performed, for example, by outputting sound from a speaker provided in the receiver 8030, or by displaying an image. Visible light communication may also be started, besides when the start button is pressed, when the receiver 8030 accepts a voice input instructing it to start, or when the receiver 8030 receives, by wireless communication, a signal instructing it to start. Visible light communication may also be started when the amount of change in the values obtained by the 9-axis sensor provided in the receiver 8030 exceeds a predetermined range, or when even a slight bright line pattern appears in the normal captured image.
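 The start conditions listed here can be combined as a simple OR, as in the sketch below. The threshold value and the helper inputs are assumptions for illustration only.

```python
# Illustrative trigger logic for starting visible light communication on the
# head mounted receiver 8030. The threshold and inputs are assumed values.

def should_start_vlc(start_button: bool,
                     voice_start_command: bool,
                     wireless_start_signal: bool,
                     sensor_change: float,
                     bright_lines_in_normal_image: bool,
                     sensor_change_limit: float = 2.0) -> bool:
    return (start_button
            or voice_start_command
            or wireless_start_signal
            or sensor_change > sensor_change_limit      # 9-axis values changed a lot
            or bright_lines_in_normal_image)            # a pattern is already visible

print(should_start_vlc(False, False, False, sensor_change=3.1,
                       bright_lines_in_normal_image=False))  # True
```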
 図20は、本実施の形態における受信機の動作の他の例を示す図である。 FIG. 20 is a diagram illustrating another example of the operation of the receiver in this embodiment.
 受信機8030は、上述と同様に、合成画像8034を表示する。ここで、ユーザは、合成画像8034中の輝線模様を囲うように指先を動かす操作を行う。受信機8030は、この操作を受け付けると、その操作の対象とされた輝線模様を特定し、その輝線模様に対応する箇所から送信されている信号に基づく情報通知画像8032を表示する。 The receiver 8030 displays the composite image 8034 as described above. Here, the user performs an operation of moving the fingertip so as to surround the bright line pattern in the composite image 8034. Upon receiving this operation, the receiver 8030 identifies the bright line pattern that is the target of the operation, and displays an information notification image 8032 based on a signal transmitted from a location corresponding to the bright line pattern.
 図21は、本実施の形態における受信機の動作の他の例を示す図である。 FIG. 21 is a diagram illustrating another example of the operation of the receiver in this embodiment.
 受信機8030は、上述と同様に、合成画像8034を表示する。ここで、ユーザは、合成画像8034中の輝線模様に指先を予め定められた時間以上あてる操作を行う。受信機8030は、この操作を受け付けると、その操作の対象とされた輝線模様を特定し、その輝線模様に対応する箇所から送信されている信号に基づく情報通知画像8032を表示する。 The receiver 8030 displays the composite image 8034 as described above. Here, the user performs an operation of placing the fingertip on the bright line pattern in the composite image 8034 for a predetermined time or more. Upon receiving this operation, the receiver 8030 identifies the bright line pattern that is the target of the operation, and displays an information notification image 8032 based on a signal transmitted from a location corresponding to the bright line pattern.
 図22は、本実施の形態における送信機の動作の一例を示す図である。 FIG. 22 is a diagram illustrating an example of the operation of the transmitter according to the present embodiment.
 送信機は、例えば予め定められた周期で、信号1と信号2とを交互に送信する。信号1の送信と、信号2の送信とは、それぞれ可視光の点滅などの輝度変化によって行われる。また、信号1を送信するための輝度変化のパターンと、信号2を送信するための輝度変化のパターンとは互いに異なる。 The transmitter transmits the signal 1 and the signal 2 alternately at a predetermined cycle, for example. Transmission of the signal 1 and transmission of the signal 2 are performed by luminance changes such as blinking of visible light. Further, the luminance change pattern for transmitting the signal 1 and the luminance change pattern for transmitting the signal 2 are different from each other.
 図23は、本実施の形態における送信機の動作の他の例を示す図である。 FIG. 23 is a diagram illustrating another example of the operation of the transmitter according to the present embodiment.
 As described above, when the transmitter repeatedly transmits a signal sequence whose structural unit includes block 1, block 2, and block 3, it may change, for each signal sequence, the arrangement of the blocks included in that signal sequence. For example, the blocks are arranged in the order block 1, block 2, block 3 in the first signal sequence, and in the order block 3, block 1, block 2 in the next signal sequence. This avoids the situation in which a receiver that requires periodic blanking periods obtains only the same block.
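 Rotating the block order between repetitions is one simple way to realize such a rearrangement; the sketch below is an assumption for illustration, not the patent's specified ordering rule.

```python
# One simple way (assumed here) to vary the block order between repetitions: rotate it,
# so a receiver that misses part of each sequence during blanking still sees every block.

def rotated_sequences(blocks, repetitions):
    """Yield the block order used for each transmitted signal sequence."""
    for i in range(repetitions):
        k = i % len(blocks)
        yield blocks[-k:] + blocks[:-k] if k else list(blocks)

for seq in rotated_sequences(["block1", "block2", "block3"], 3):
    print(seq)
# ['block1', 'block2', 'block3']
# ['block3', 'block1', 'block2']
# ['block2', 'block3', 'block1']
```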
 図24は、本実施の形態における受信機の応用例を示す図である。 FIG. 24 is a diagram illustrating an application example of the receiver in this embodiment.
 For example, the receiver 7510a, configured as a smartphone, captures an image of the light source 7510b with the back camera (out camera) 7510c, receives a signal transmitted from the light source 7510b, and obtains the position and orientation of the light source 7510b from the received signal. The receiver 7510a estimates its own position and orientation from the way the light source 7510b appears in the captured image and from the sensor values of the 9-axis sensor provided in the receiver 7510a. The receiver 7510a also captures an image of the user 7510e with the front camera (face camera, in-camera) 7510f, and estimates, by image processing, the position and orientation of the head of the user 7510e and the gaze direction (position and orientation of the eyeballs). The receiver 7510a transmits the estimation results to a server, and changes its behavior (the content displayed on the display and the sound played back) according to the gaze direction of the user 7510e. The imaging by the back camera 7510c and the imaging by the front camera 7510f may be performed simultaneously or alternately.
 図25は、本実施の形態における受信機の動作の他の例を示す図である。 FIG. 25 is a diagram illustrating another example of the operation of the receiver in this embodiment.
 The receiver displays a bright line pattern in a composite image or an intermediate image as described above. At this point, the receiver may not yet be able to receive the signal from the transmitter corresponding to that bright line pattern. When the user selects the bright line pattern by performing an operation (for example, a tap) on it, the receiver performs an optical zoom and displays a composite image or an intermediate image in which the area of the bright line pattern is enlarged. By performing such an optical zoom, the receiver can appropriately receive the signal from the transmitter corresponding to the bright line pattern. That is, even when the image obtained by capture is too small for the signal to be obtained, the signal can be received appropriately by performing an optical zoom. Even when an image large enough for the signal to be obtained is displayed, performing an optical zoom enables faster reception.
 (Summary of this embodiment)
 The information communication method in this embodiment is an information communication method for obtaining information from a subject, and includes: a first exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by capturing the subject with the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; a bright line image obtaining step in which the image sensor captures the subject, whose luminance changes, with the set exposure time, thereby obtaining a bright line image that is an image including the bright line; an image display step of displaying, based on the bright line image, a display image in which the subject and its surroundings are shown in such a manner that the spatial position of the part where the bright line appears can be identified; and an information obtaining step of obtaining transmission information by demodulating data specified by the pattern of the bright line included in the obtained bright line image.
 例えば、図7、図8および図11に示すような合成画像または中間画像が表示用画像として表示される。また、被写体と当該被写体の周囲とが映し出された表示用画像において、輝線が現われた部位の空間的な位置は、輝線模様、信号明示オブジェクト、信号識別オブジェクト、または点線枠などによって識別される。したがって、ユーザは、このような表示画像を見ることによって、輝度変化によって信号を送信している被写体を容易に見つけることができる。 For example, a composite image or an intermediate image as shown in FIGS. 7, 8, and 11 is displayed as a display image. In the display image in which the subject and the surroundings of the subject are projected, the spatial position of the part where the bright line appears is identified by a bright line pattern, a signal explicit object, a signal identification object, a dotted line frame, or the like. Therefore, the user can easily find a subject that is transmitting a signal due to a change in luminance by viewing such a display image.
 The information communication method may further include: a second exposure time setting step of setting an exposure time longer than the above exposure time; a normal image obtaining step in which the image sensor captures the subject and its surroundings with the longer exposure time, thereby obtaining a normal captured image; and a combining step of identifying, based on the bright line image, the part of the normal captured image where the bright line appears, and generating a composite image by superimposing on the normal captured image a signal object, which is an image indicating that part. In the image display step, the composite image may be displayed as the display image.
 例えば、信号オブジェクトは、輝線模様、信号明示オブジェクト、信号識別オブジェクト、または点線枠などであって、図7、図8および図11に示すように、合成画像が表示用画像として表示される。これにより、ユーザは、輝度変化によって信号を送信している被写体をさらに容易に見つけることができる。 For example, the signal object is a bright line pattern, a signal explicit object, a signal identification object, a dotted line frame, or the like, and a composite image is displayed as a display image as shown in FIGS. Thus, the user can more easily find the subject that is transmitting the signal due to the luminance change.
 In the first exposure time setting step, the exposure time may be set to 1/3000 second; in the bright line image obtaining step, a bright line image in which the surroundings of the subject are also shown may be obtained; and in the image display step, the bright line image may be displayed as the display image.
 例えば、輝線画像は中間画像として取得されて表示される。したがって、通常撮影画像と可視光通信画像とを取得して合成するなどの処理を行う必要がなく、処理の簡略化を図ることができる。 For example, the bright line image is acquired and displayed as an intermediate image. Therefore, it is not necessary to perform processing such as acquiring and synthesizing the normal captured image and the visible light communication image, and the processing can be simplified.
 The image sensor may include a first image sensor and a second image sensor. In the normal image obtaining step, the normal captured image may be obtained by capture with the first image sensor, and in the bright line image obtaining step, the bright line image may be obtained by the second image sensor capturing at the same time as the capture by the first image sensor.
 例えば、図8に示すように、通常撮影画像と輝線画像である可視光通信画像とがそれぞれのカメラで取得される。したがって、1つのカメラで通常撮影画像と可視光通信画像とを取得する場合と比べて、それらの画像を早く取得することができ、処理を高速化することができる。 For example, as shown in FIG. 8, a normal photographed image and a visible light communication image that is a bright line image are acquired by each camera. Therefore, compared with the case where a normal captured image and a visible light communication image are acquired with one camera, those images can be acquired earlier, and the processing can be speeded up.
 The information communication method may further include an information presenting step of, when the part of the display image where the bright line appears is designated by a user operation, presenting presentation information based on the transmission information obtained from the pattern of the bright line of the designated part. For example, the user operation is a tap, a swipe, an operation of keeping a fingertip on the part for a predetermined time or longer, an operation of keeping the line of sight directed at the part for a predetermined time or longer, an operation of moving a part of the user's body toward an arrow displayed in association with the part, an operation of placing a pen tip that changes in luminance on the part, or an operation of touching a touch sensor to place a pointer displayed in the display image on the part.
 例えば、図13~図17、図20および図21に示すように、提示情報が情報通知画像として表示される。これにより、ユーザに所望の情報を提示することができる。 For example, as shown in FIGS. 13 to 17, 20 and 21, the presentation information is displayed as an information notification image. Thereby, desired information can be presented to the user.
 また、前記イメージセンサはヘッドマウントディスプレイに備えられ、前記画像表示ステップでは、前記ヘッドマウントディスプレイに搭載されたプロジェクタが前記表示用画像を表示してもよい。 Further, the image sensor may be provided in a head mounted display, and in the image display step, a projector mounted on the head mounted display may display the display image.
 これにより、例えば、図19~図21に示すように、簡単に情報をユーザに提示することができる。 Thus, for example, as shown in FIGS. 19 to 21, information can be easily presented to the user.
 Also provided is an information communication method for obtaining information from a subject, including: a first exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by capturing the subject with the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; a bright line image obtaining step in which the image sensor captures the subject, whose luminance changes, with the set exposure time, thereby obtaining a bright line image that is an image including the bright line; and an information obtaining step of obtaining information by demodulating data specified by the pattern of the bright line included in the obtained bright line image. In the bright line image obtaining step, a bright line image including a plurality of parts where bright lines appear is obtained by capturing a plurality of subjects while the image sensor is being moved, and in the information obtaining step, the position of each of the plurality of subjects is obtained by demodulating, for each part, the data specified by the pattern of the bright line of that part. The information communication method may further include a position estimating step of estimating the position of the image sensor based on the obtained positions of the plurality of subjects and the movement state of the image sensor.
 これにより、複数の照明などの被写体による輝度変化によって、イメージセンサを含む受信機の位置を正確に推定することができる。 This makes it possible to accurately estimate the position of the receiver including the image sensor based on luminance changes caused by subjects such as a plurality of lights.
 Also provided is an information communication method for obtaining information from a subject, including: a first exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by capturing the subject with the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; a bright line image obtaining step in which the image sensor captures the subject, whose luminance changes, with the set exposure time, thereby obtaining a bright line image that is an image including the bright line; an information obtaining step of obtaining information by demodulating data specified by the pattern of the bright line included in the obtained bright line image; and an information presenting step of presenting the obtained information. In the information presenting step, an image prompting the user of the image sensor to make a predetermined gesture may be presented as the information.
 これにより、ユーザが、促されたとおりのジェスチャを行うか否かによって、そのユーザに対する認証などを行うことができ、利便性を高めることができる。 Thus, depending on whether or not the user performs the gesture as prompted, the user can be authenticated and the convenience can be improved.
 Also provided is an information communication method for obtaining information from a subject, including: an exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by capturing the subject with the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; an image obtaining step in which the image sensor captures the subject, whose luminance changes, with the set exposure time, thereby obtaining a bright line image including the bright line; and an information obtaining step of obtaining information by demodulating data specified by the pattern of the bright line included in the obtained bright line image. In the image obtaining step, the bright line image may be obtained by capturing a plurality of subjects reflected on a reflecting surface, and in the information obtaining step, the bright lines may be separated, according to the intensity of the bright lines included in the bright line image, into bright lines corresponding to each of the plurality of subjects, and information may be obtained for each subject by demodulating the data specified by the pattern of the bright lines corresponding to that subject.
 これにより、複数の照明などの被写体がそれぞれ輝度変化する場合でも、被写体のそれぞれから適切な情報を取得することができる。 This makes it possible to acquire appropriate information from each of the subjects even when the subject such as a plurality of illuminations changes in luminance.
 Also provided is an information communication method for obtaining information from a subject, including: an exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by capturing the subject with the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; an image obtaining step in which the image sensor captures the subject, whose luminance changes, with the set exposure time, thereby obtaining a bright line image including the bright line; and an information obtaining step of obtaining information by demodulating data specified by the pattern of the bright line included in the obtained bright line image. In the image obtaining step, the bright line image may be obtained by capturing the subject reflected on a reflecting surface, and the information communication method may further include a position estimating step of estimating the position of the subject based on the luminance distribution in the bright line image.
 これにより、輝度分布に基づいて適切な被写体の位置を推定することができる。 This makes it possible to estimate an appropriate subject position based on the luminance distribution.
 Also provided is an information communication method for transmitting a signal by a change in luminance, including: a first determining step of determining a first pattern of luminance change by modulating a first signal to be transmitted; a second determining step of determining a second pattern of luminance change by modulating a second signal to be transmitted; and a transmitting step in which a light emitter transmits the first and second signals by alternately changing in luminance according to the determined first pattern and changing in luminance according to the determined second pattern.
 これにより、例えば、図22に示すように、第1の信号と第2の信号とをそれぞれ遅滞なく送信することができる。 Thereby, for example, as shown in FIG. 22, the first signal and the second signal can be transmitted without delay.
 また、前記送信ステップでは、輝度変化を、前記第1のパターンにしたがった輝度変化と、前記第2のパターンにしたがった輝度変化とで切り替えるときには、緩衝時間を空けて切り替えてもよい。 Further, in the transmission step, when the luminance change is switched between the luminance change according to the first pattern and the luminance change according to the second pattern, it may be switched with a buffer time.
 これにより、第1の信号と第2の信号との混信を抑えることができる。 Thereby, interference between the first signal and the second signal can be suppressed.
 Also provided is an information communication method for transmitting a signal by a change in luminance, including: a determining step of determining a pattern of luminance change by modulating a signal to be transmitted; and a transmitting step in which a light emitter transmits the signal to be transmitted by changing in luminance according to the determined pattern. The signal consists of a plurality of large blocks, each of which includes first data, a preamble for the first data, and a check signal for the first data. The first data consists of a plurality of small blocks, and each small block may include second data, a preamble for the second data, and a check signal for the second data.
 これにより、ブランキング期間を要する受信機でも、ブランキング期間を必要としない受信機でも、適切にデータを取得することができる。 This makes it possible to acquire data appropriately for both receivers that require a blanking period and receivers that do not require a blanking period.
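 A structural sketch of this nested framing, purely for illustration: the field widths, preamble values, and the use of CRC32 as the check signal are assumptions and are not specified by the description above.

```python
# Structural sketch of the nested large-block / small-block framing described above.
# Preamble values, field sizes and the check algorithm (zlib.crc32) are stand-ins.
import zlib
from dataclasses import dataclass
from typing import List

SMALL_PREAMBLE = b"\xAA"
LARGE_PREAMBLE = b"\xAA\x55"

@dataclass
class SmallBlock:
    second_data: bytes
    def encode(self) -> bytes:
        check = zlib.crc32(self.second_data).to_bytes(4, "big")
        return SMALL_PREAMBLE + self.second_data + check

@dataclass
class LargeBlock:
    small_blocks: List[SmallBlock]
    def encode(self) -> bytes:
        first_data = b"".join(b.encode() for b in self.small_blocks)
        check = zlib.crc32(first_data).to_bytes(4, "big")
        return LARGE_PREAMBLE + first_data + check

frame = LargeBlock([SmallBlock(b"\x01\x02"), SmallBlock(b"\x03\x04")]).encode()
print(frame.hex())
```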
 Also provided is an information communication method for transmitting signals by changes in luminance, including: a determining step in which each of a plurality of transmitters determines a pattern of luminance change by modulating a signal to be transmitted; and a transmitting step in which, for each transmitter, a light emitter provided in that transmitter transmits the signal to be transmitted by changing in luminance according to the determined pattern. In the transmitting step, the transmitters may transmit signals that differ from one another in frequency or protocol.
 これにより、複数の送信機からの信号の混信を抑えることができる。 This makes it possible to suppress signal interference from multiple transmitters.
 Also provided is an information communication method for transmitting signals by changes in luminance, including: a determining step in which each of a plurality of transmitters determines a pattern of luminance change by modulating a signal to be transmitted; and a transmitting step in which, for each transmitter, a light emitter provided in that transmitter transmits the signal to be transmitted by changing in luminance according to the determined pattern. In the transmitting step, one of the plurality of transmitters may receive a signal transmitted from another transmitter and transmit its own signal in a manner that does not interfere with the received signal.
 これにより、複数の送信機からの信号の混信を抑えることができる。 This makes it possible to suppress signal interference from multiple transmitters.
 (Embodiment 3)
 In this embodiment, application examples are described that use a receiver, such as a smartphone, as in Embodiment 1 or 2, and a transmitter that transmits information as a blinking pattern of an LED, an organic EL element, or the like.
 図26は、実施の形態3における受信機、送信機およびサーバの処理動作の一例を示す図である。 FIG. 26 is a diagram illustrating an example of processing operations of the receiver, the transmitter, and the server in the third embodiment.
 例えばスマートフォンとして構成される受信機8142は、自らの位置を示す位置情報を取得し、その位置情報をサーバ8141に送信する。なお、受信機8142は、例えばGPSなどを利用したり、他の信号を受信したときにその位置情報を取得する。サーバ8141は、その位置情報によって示される位置に対応付けられたIDリストを受信機8142に送信する。IDリストには、「abcd」などのIDごとに、そのIDと、そのIDに対応付けられた情報とが含まれている。 For example, the receiver 8142 configured as a smartphone acquires position information indicating its own position, and transmits the position information to the server 8141. Note that the receiver 8142 acquires position information when, for example, GPS is used or other signals are received. The server 8141 transmits the ID list associated with the position indicated by the position information to the receiver 8142. The ID list includes, for each ID such as “abcd”, the ID and information associated with the ID.
 The receiver 8142 receives a signal from the transmitter 8143, which is configured, for example, as a lighting device. At this time, the receiver 8142 may be able to receive only a part of the ID (for example, "b") as the above signal. In this case, the receiver 8142 searches the ID list for IDs containing that part of the ID. If a unique ID is not found, the receiver 8142 further receives, from the transmitter 8143, a signal containing another part of the ID, thereby obtaining a larger part of the ID (for example, "bc"). The receiver 8142 then searches the ID list again for IDs containing that part of the ID (for example, "bc"). By performing such searches, the receiver 8142 can identify the entire ID even when only a part of the ID can be obtained. When receiving a signal from the transmitter 8143, the receiver 8142 receives not only a part of the ID but also a check portion such as a CRC (Cyclic Redundancy Check).
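 The narrowing-down of candidates can be pictured with the small sketch below. Treating the received fragment as a substring of the full ID, and the example ID list itself, are simplifying assumptions for illustration only.

```python
# Illustrative narrowing of the ID list with partial receptions. Treating the
# received fragment as a substring of the full ID is a simplification.

ID_LIST = {"abcd": "coupon A", "efgh": "floor map", "xbyz": "coupon B"}

def candidates(fragment: str):
    return [i for i in ID_LIST if fragment in i]

received = "b"                 # first partial reception
print(candidates(received))    # ['abcd', 'xbyz'] -> not unique yet

received = "bc"                # after receiving a further part of the ID
matches = candidates(received)
if len(matches) == 1:
    full_id = matches[0]
    print(full_id, "->", ID_LIST[full_id])   # abcd -> coupon A
```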
 図27は、実施の形態3における送信機および受信機の動作の一例を示す図である。 FIG. 27 is a diagram illustrating an example of operations of the transmitter and the receiver in the third embodiment.
 例えばテレビとして構成される送信機8165は、画像と、その画像に対応付けられたID(ID 1000)とを制御部8166から取得する。そして、送信機8165は、その画像を表示するとともに、輝度変化することによって、そのID(ID 1000)を受信機8167に送信する。受信機8167は、撮像することによって、そのID(ID 1000)を受信するとともに、そのID(ID 1000)に対応付けられた情報を表示する。ここで、制御部8166は、送信機8165に出力される画像を他の画像に変更する。このとき、制御部8166は、送信機8165に出力されるIDも変更する。つまり、制御部8166は、その他の画像とともに、他の画像に対応付けられた他のID(ID 1001)を送信機8165に出力する。これにより、送信機8165は、その他の画像を表示するとともに、輝度変化することによって、他のID(ID 1001)を受信機8167に送信する。受信機8167は、撮像することによって、その他のID(ID 1001)を受信するとともに、その他のID(ID 1001)に対応付けられた情報を表示する。 For example, the transmitter 8165 configured as a television acquires an image and an ID (ID 1000) associated with the image from the control unit 8166. The transmitter 8165 displays the image and transmits the ID (ID 1000) to the receiver 8167 by changing the luminance. The receiver 8167 receives the ID (ID 1000) by imaging and displays information associated with the ID (ID 1000). Here, the control unit 8166 changes the image output to the transmitter 8165 to another image. At this time, the control unit 8166 also changes the ID output to the transmitter 8165. That is, the control unit 8166 outputs the other ID (ID 1001) associated with the other image to the transmitter 8165 together with the other image. Thus, the transmitter 8165 displays another image and transmits another ID (ID 1001) to the receiver 8167 by changing the luminance. The receiver 8167 receives the other ID (ID 1001) by imaging, and displays information associated with the other ID (ID 1001).
 図28は、実施の形態3における送信機、受信機およびサーバの動作の一例を示す図である。 FIG. 28 is a diagram illustrating an example of operations of the transmitter, the receiver, and the server in the third embodiment.
 For example, the transmitter 8185, configured as a smartphone, transmits information indicating, for example, "coupon 100 yen discount" by changing the luminance of the part of the display 8185a other than the barcode portion 8185b, that is, by visible light communication. The transmitter 8185 displays a barcode in the barcode portion 8185b without changing the luminance of that portion; this barcode indicates the same information as the information transmitted by the visible light communication described above. Furthermore, the transmitter 8185 displays, in the part of the display 8185a other than the barcode portion 8185b, characters or a picture indicating the information transmitted by visible light communication, for example the characters "coupon 100 yen discount". By displaying such characters or a picture, the user of the transmitter 8185 can easily grasp what information is being transmitted.
 受信機8186は、撮像することによって、可視光通信によって送信された情報と、バーコードによって示される情報とを取得し、これらの情報をサーバ8187に送信する。サーバ8187は、これらの情報が一致または関連するか否かを判定し、一致または関連すると判定したときには、それらの情報にしたがった処理を実行する。または、サーバ8187は、その判定結果を受信機8186に送信し、受信機8186にそれらの情報にしたがった処理を実行させる。 The receiver 8186 acquires the information transmitted by visible light communication and the information indicated by the barcode by imaging, and transmits the information to the server 8187. The server 8187 determines whether or not these pieces of information match or relate to each other. When it determines that these pieces of information match or relate to each other, the server 8187 executes processing according to the pieces of information. Alternatively, the server 8187 transmits the determination result to the receiver 8186, and causes the receiver 8186 to execute processing according to the information.
 なお、送信機8185は、バーコードによって示される情報のうちの一部を可視光通信によって送信してもよい。また、バーコードには、サーバ8187のURLが示されていてもよい。また、送信機8185は、受信機としてIDを取得して、そのIDをサーバ8187に送信することによって、そのIDに対応付けられている情報を取得してもよい。このIDに対応付けられている情報は、上述の可視光通信によって送信される情報、または、バーコードによって示される情報と同一である。また、サーバ8187は、受信機8186を介して送信機8185から送信される情報(可視光通信の情報またはバーコードの情報)に対応付けられたIDを、送信機8185に送信してもよい。 Note that the transmitter 8185 may transmit a part of the information indicated by the barcode by visible light communication. The barcode may indicate the URL of the server 8187. Further, the transmitter 8185 may acquire information associated with the ID by acquiring the ID as a receiver and transmitting the ID to the server 8187. The information associated with this ID is the same as the information transmitted by the above visible light communication or the information indicated by the barcode. Further, the server 8187 may transmit an ID associated with information (visible light communication information or barcode information) transmitted from the transmitter 8185 via the receiver 8186 to the transmitter 8185.
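A minimal sketch of the server-side check described for FIG. 28 follows; the payload strings, the table of values treated as related, and the response format are assumptions, not taken from the embodiment.

```python
# Hypothetical table of value pairs that are treated as related even when not identical,
# e.g. a barcode URL that points at the same coupon as the visible light payload.
RELATED_PAIRS = {
    ("coupon:100yen-off", "https://example.com/coupon/100yen-off"),
}

def match_or_relate(vlc_value: str, barcode_value: str) -> bool:
    """True when the two values are identical or registered as related."""
    return (vlc_value == barcode_value
            or (vlc_value, barcode_value) in RELATED_PAIRS
            or (barcode_value, vlc_value) in RELATED_PAIRS)

def handle_upload(vlc_value: str, barcode_value: str) -> dict:
    """Server 8187: act on the pair uploaded by the receiver 8186, or just report the verdict."""
    if match_or_relate(vlc_value, barcode_value):
        return {"accepted": True, "action": "apply_coupon"}
    return {"accepted": False}
```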
 図29は、実施の形態3における送信機および受信機の動作の一例を示す図である。 FIG. 29 is a diagram illustrating an example of operations of the transmitter and the receiver in the third embodiment.
 例えば、受信機8183は、複数の人物8197および街灯8195を含む被写体を撮像する。街灯8195は、輝度変化によって情報を送信する送信機8195aを備えている。この撮像によって、受信機8183は、送信機8195aの像が上述の輝線模様として表れた画像を取得する。さらに、受信機8183は、その輝線模様によって示されるIDに関連付けられているARオブジェクト8196aを例えばサーバなどから取得する。そして、受信機8183は、通常撮影によって得られる通常撮影画像8196にそのARオブジェクト8196aを重畳し、そのARオブジェクト8196aが重畳された通常撮影画像8196を表示する。 For example, the receiver 8183 images a subject including a plurality of persons 8197 and street lamps 8195. The streetlight 8195 includes a transmitter 8195a that transmits information according to a change in luminance. By this imaging, the receiver 8183 acquires an image in which the image of the transmitter 8195a appears as the bright line pattern described above. Furthermore, the receiver 8183 acquires the AR object 8196a associated with the ID indicated by the bright line pattern from, for example, a server. Then, the receiver 8183 superimposes the AR object 8196a on a normal captured image 8196 obtained by normal imaging, and displays a normal captured image 8196 on which the AR object 8196a is superimposed.
 (本実施の形態のまとめ)
 本実施の形態における情報通信方法は、輝度変化によって信号を送信する情報通信方法であって、送信対象の信号を変調することによって、輝度変化のパターンを決定する決定ステップと、発光体が、決定された前記パターンにしたがって輝度変化することによって前記送信対象の信号を送信する送信ステップとを含み、前記輝度変化のパターンは、予め定められた時間幅における任意の各位置に、互いに異なる2つの輝度値のうちの一方が出現するパターンであって、前記決定ステップでは、送信対象の互いに異なる信号のそれぞれに対して、前記時間幅における輝度の立ち上がり位置または立ち下がり位置である輝度変化位置が互いに異なり、且つ、前記時間幅における前記発光体の輝度の積分値が、予め設定された明るさに応じた同一の値となるように、前記輝度変化のパターンを決定する。
(Summary of this embodiment)
The information communication method according to this embodiment is an information communication method for transmitting a signal by a change in luminance, and includes a determining step of determining a pattern of the luminance change by modulating a signal to be transmitted, and a transmitting step of transmitting the signal by a light emitter changing in luminance according to the determined pattern. The luminance change pattern is a pattern in which one of two mutually different luminance values appears at each arbitrary position within a predetermined time width. In the determining step, the pattern is determined so that, for each of mutually different signals to be transmitted, the luminance change position, that is, the position of the rise or fall of the luminance within the time width, differs, and so that the integral of the luminance of the light emitter over the time width takes the same value corresponding to a preset brightness.
For example, for each of the mutually different transmission signals “00”, “01”, “10”, and “11”, the luminance change pattern is determined so that the rising position of the luminance (the luminance change position) differs, and so that the integral of the light emitter's luminance over the predetermined time width (unit time width) takes the same value corresponding to a predetermined brightness (for example, 99% or 1%). As a result, the brightness of the light emitter is kept constant regardless of which signal is transmitted, flicker is suppressed, and a receiver imaging the light emitter can correctly demodulate the pattern from the luminance change position. In addition, because the pattern is one in which either of two different luminance values (luminance H (High) or luminance L (Low)) appears at every position within the unit time width, the brightness of the light emitter can be changed continuously.
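One plausible reading of this constant-integral, edge-position modulation is sketched below; the number of slots per unit time width and the two duty settings are illustrative only.

```python
def symbol_pattern(symbol: int, slots: int = 4, bright: bool = True):
    """Per-slot luminance (0 = Low, 1 = High) for a 2-bit symbol.

    The position of the single Low (or High) slot encodes the symbol, so the
    rising/falling edge position differs per symbol while the number of High
    slots, and hence the integral over the unit time width, stays the same.
    """
    assert 0 <= symbol < slots
    if bright:   # bright setting: exactly one Low slot per unit time width
        return [0 if i == symbol else 1 for i in range(slots)]
    else:        # dim setting: exactly one High slot per unit time width
        return [1 if i == symbol else 0 for i in range(slots)]

patterns = [symbol_pattern(s) for s in range(4)]   # symbols "00".."11"
assert len({sum(p) for p in patterns}) == 1        # same brightness for every symbol
```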
The information communication method may further include an image displaying step of displaying each of a plurality of images in turn. In the determining step, each time an image is displayed in the image displaying step, the identification information corresponding to the displayed image is modulated as the signal to be transmitted, thereby determining a luminance change pattern for that identification information; in the transmitting step, each time an image is displayed, the light emitter changes in luminance according to the pattern determined for the identification information corresponding to the displayed image, thereby transmitting that identification information.
In this way, as shown for example in FIG. 27, each time an image is displayed, the identification information corresponding to the displayed image is transmitted, so the user can easily choose, based on the displayed image, which identification information the receiver should receive.
In the transmitting step, each time an image is displayed in the image displaying step, the light emitter may further change in luminance according to the luminance change pattern determined for the identification information corresponding to an image displayed in the past, thereby also transmitting that identification information.
Thus, even if the receiver could not receive the identification information transmitted before the displayed image was switched, the identification information corresponding to the previously displayed image is transmitted together with the identification information corresponding to the currently displayed image, so the identification information transmitted before the switch can still be received properly by the receiver.
In the determining step, each time an image is displayed in the image displaying step, the identification information corresponding to the displayed image and the time at which the image is displayed may be modulated as the signal to be transmitted, thereby determining a luminance change pattern for the identification information and the time. In the transmitting step, each time an image is displayed, the light emitter may change in luminance according to the pattern determined for the identification information and time corresponding to the currently displayed image, thereby transmitting that identification information and time, and may further change in luminance according to the pattern determined for the identification information and time corresponding to an image displayed in the past, thereby transmitting that identification information and time as well.
As a result, each time an image is displayed, a plurality of pieces of ID-time information (information consisting of identification information and a time) are transmitted, so the receiver can easily select, from the received pieces of ID-time information and based on the time contained in each, identification information that was transmitted in the past but could not be received.
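The sketch below shows one way a receiver could reconcile the repeated (ID, time) pairs; the tuple layout and the reconciliation policy are assumptions.

```python
from typing import List, Tuple

def reconcile(received: List[Tuple[str, float]]) -> List[Tuple[float, str]]:
    """Deduplicate the received (ID, display-time) pairs and order them by time.

    Because each transmission also repeats IDs of previously displayed images,
    an ID missed at the moment the image switched can be recovered later, and
    the attached time tells the receiver which display interval it belongs to.
    """
    by_time = {}
    for ident, shown_at in received:
        by_time.setdefault(shown_at, ident)
    return sorted(by_time.items())

# The frame for ID 1001 also repeats (1000, 10.0), so a receiver that missed it recovers it.
history = [("1001", 20.0), ("1000", 10.0), ("1001", 20.0)]
print(reconcile(history))   # [(10.0, '1000'), (20.0, '1001')]
```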
The light emitter may have a plurality of light emitting regions in which the light of adjacent regions interferes with each other. When only one of the regions is to change in luminance according to the determined luminance change pattern, in the transmitting step only a region located at an end of the plurality of regions may change in luminance according to the determined pattern.
In this way, only the region arranged at the end (a light emitting portion) changes in luminance, so compared with the case where only a region other than an end region changes in luminance, the influence of light from the other regions on that luminance change can be suppressed. As a result, the receiver can properly capture the luminance change pattern by imaging.
When only two of the plurality of regions are to change in luminance according to the determined luminance change pattern, in the transmitting step a region located at an end of the plurality of regions and the region adjacent to it may change in luminance according to the determined pattern.
In this way, the region arranged at the end (a light emitting portion) and the region adjacent to it change in luminance, so compared with the case where regions separated from each other change in luminance, a larger spatially continuous area of luminance change can be maintained. As a result, the receiver can properly capture the luminance change pattern by imaging.
The information communication method according to this embodiment is an information communication method for acquiring information from a subject, and includes: a position information transmitting step of transmitting position information indicating the position of an image sensor used to capture the subject; a list receiving step of receiving an ID list containing a plurality of pieces of identification information associated with the position indicated by the position information; an exposure time setting step of setting the exposure time of the image sensor so that, in an image obtained by capturing the subject with the image sensor, bright lines corresponding to the exposure lines included in the image sensor appear according to the change in luminance of the subject; an image acquiring step in which the image sensor captures the subject, which is changing in luminance, with the set exposure time, thereby acquiring a bright line image containing the bright lines; an information acquiring step of acquiring information by demodulating data specified by the pattern of the bright lines contained in the acquired bright line image; and a searching step of searching the ID list for identification information that contains the acquired information.
In this way, as shown for example in FIG. 26, because the ID list has been received in advance, the appropriate identification information “abcd” can be specified based on the ID list even if the acquired information “bc” is only part of the identification information.
When the identification information containing the acquired information cannot be uniquely specified in the searching step, new information may be acquired by repeating the image acquiring step and the information acquiring step, and the information communication method may further include a re-searching step of searching the ID list for identification information that contains both the acquired information and the new information.
In this way, as shown for example in FIG. 26, even when the acquired information “b” is only part of the identification information and the identification information cannot be uniquely specified from that information alone, the new information “c” is additionally acquired, so the appropriate identification information “abcd” can be specified from the new information and the ID list.
The information communication method according to this embodiment is an information communication method for acquiring information from a subject, and includes: an exposure time setting step of setting the exposure time of an image sensor so that, in an image obtained by capturing the subject with the image sensor, bright lines corresponding to the exposure lines included in the image sensor appear according to the change in luminance of the subject; an image acquiring step in which the image sensor captures the subject, which is changing in luminance, with the set exposure time, thereby acquiring a bright line image containing the bright lines; an information acquiring step of acquiring identification information by demodulating data specified by the pattern of the bright lines contained in the acquired bright line image; a transmitting step of transmitting the acquired identification information and position information indicating the position of the image sensor; and an error receiving step of receiving error notification information for notifying an error when the acquired identification information is not present in an ID list that contains a plurality of pieces of identification information associated with the position indicated by the position information.
In this way, when the acquired identification information is not in the ID list, error notification information is received, so the user of the receiver that received the error notification information can easily grasp that information associated with the acquired identification information cannot be obtained.
 (実施の形態4)
 本実施の形態では、上記実施の形態1~4におけるスマートフォンなどの受信機と、LEDや有機ELなどの点滅パターンとして情報を送信する送信機とを用いた適用例について説明する。
(Embodiment 4)
In this embodiment, an application example using a receiver such as a smartphone in Embodiments 1 to 4 and a transmitter that transmits information as a blinking pattern such as an LED or an organic EL will be described.
 図30は、実施の形態4における送信機および受信機の動作の一例を示す図である。 FIG. 30 is a diagram illustrating an example of operations of the transmitter and the receiver in the fourth embodiment.
 送信機は、ID記憶部8361、乱数生成部8362、加算部8363、暗号部8364、および送信部8365を備えている。ID記憶部8361は、送信機のIDを記憶している。乱数生成部8362は、一定時間ごとに異なる乱数を生成する。加算部8363は、ID記憶部8361に記憶されているIDに対して、乱数生成部8362によって生成された最新の乱数を組み合わせ、その結果を編集IDとして出力する。暗号部8364は、その編集IDに対して暗号化を行うことによって暗号化編集IDを生成する。送信部8365は輝度変化することによって、その暗号化編集IDを受信機に送信する。 The transmitter includes an ID storage unit 8361, a random number generation unit 8362, an addition unit 8363, an encryption unit 8364, and a transmission unit 8365. The ID storage unit 8361 stores the ID of the transmitter. The random number generation unit 8362 generates different random numbers every certain time. Adder 8363 combines the latest random number generated by random number generator 8362 with the ID stored in ID storage unit 8361, and outputs the result as an edit ID. The encryption unit 8364 generates an encrypted edit ID by encrypting the edit ID. The transmission unit 8365 transmits the encrypted edit ID to the receiver by changing the luminance.
 受信機は、受信部8366、復号部8367およびID取得部8368を備えている。受信部8366は、送信機を撮像(可視光撮影)することによって、暗号化編集IDを送信機から受信する。復号部8367は、その受信された暗号化編集IDを復号することによって編集IDを復元する。ID取得部8368は、復元された編集IDからIDを抽出することによってそのIDを取得する。 The receiver includes a receiving unit 8366, a decoding unit 8367, and an ID acquisition unit 8368. The receiving unit 8366 receives the encrypted edit ID from the transmitter by imaging the transmitter (visible light imaging). The decryption unit 8367 restores the edit ID by decrypting the received encrypted edit ID. The ID acquisition unit 8368 acquires the ID by extracting the ID from the restored editing ID.
 例えば、ID記憶部8361はID「100」を記憶しており、乱数生成部8362は最新の乱数「817」を生成する(例1)。この場合、加算部8363は、ID「100」に対して乱数「817」を組み合わせることによって、編集ID「100817」を生成して出力する。暗号部8364は、その編集ID「100817」に対して暗号化を行うことによって、暗号化編集ID「abced」を生成する。受信機の復号部8367は、その暗号化編集ID「abced」を復号することによって、編集ID「100817」を復元する。そして、ID取得部8368は、復元された編集ID「100817」からID「100」を抽出する。言い換えれば、ID取得部8368は、編集IDの下3桁を削除することによって、ID「100」を取得する。 For example, the ID storage unit 8361 stores the ID “100”, and the random number generation unit 8362 generates the latest random number “817” (Example 1). In this case, the adding unit 8363 generates and outputs the edit ID “100817” by combining the random number “817” with the ID “100”. The encryption unit 8364 generates an encrypted edit ID “abced” by encrypting the edit ID “100817”. The decryption unit 8367 of the receiver restores the edit ID “100817” by decrypting the encrypted edit ID “abced”. Then, the ID acquisition unit 8368 extracts the ID “100” from the restored editing ID “100817”. In other words, the ID acquisition unit 8368 acquires the ID “100” by deleting the last three digits of the edit ID.
Next, the random number generation unit 8362 generates a new random number “619” (Example 2). In this case, the addition unit 8363 generates and outputs the edit ID “100619” by combining the random number “619” with the ID “100”. The encryption unit 8364 generates the encrypted edit ID “difia” by encrypting the edit ID “100619”. The decryption unit 8367 of the receiver restores the edit ID “100619” by decrypting the encrypted edit ID “difia”. Then, the ID acquisition unit 8368 extracts the ID “100” from the restored edit ID “100619”; in other words, the ID acquisition unit 8368 obtains the ID “100” by deleting the last three digits of the edit ID.
In this way, rather than simply encrypting the ID, the transmitter encrypts the ID combined with a random number that changes at fixed intervals, which prevents the ID from being easily deciphered from the signal transmitted by the transmission unit 8365. That is, if a simply encrypted ID were transmitted from the transmitter to the receiver several times, the signal sent from the transmitter to the receiver would be identical each time as long as the ID is the same, even though the ID is encrypted, and the ID could therefore be deciphered. In the example shown in FIG. 30, however, a random number that changes at fixed intervals is combined with the ID, and the combined value is encrypted. Therefore, even when the same ID is transmitted to the receiver several times, the signals sent from the transmitter to the receiver differ as long as the transmission timings differ. As a result, the ID can be prevented from being easily deciphered.
 なお、図30に示す受信機は、暗号化編集IDを取得すると、その暗号化編集IDをサーバに送信し、そのサーバからIDを取得してもよい。 Note that when the receiver shown in FIG. 30 acquires the encryption edit ID, the receiver may transmit the encryption edit ID to the server and acquire the ID from the server.
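A minimal sketch of the edit-ID flow of FIG. 30 follows. The text does not name a cipher, so Fernet from the `cryptography` package is used purely as a stand-in shared-key cipher; the three-digit random number follows the “817”/“619” example.

```python
import random
from cryptography.fernet import Fernet   # stand-in cipher, not specified by the embodiment

KEY = Fernet.generate_key()              # assumed shared secret between the two sides

def make_encrypted_edit_id(device_id: str, digits: int = 3) -> bytes:
    """Transmitter side: append a fresh random number, then encrypt."""
    nonce = f"{random.randrange(10 ** digits):0{digits}d}"   # e.g. "817", changes periodically
    edit_id = device_id + nonce                              # "100" + "817" -> "100817"
    return Fernet(KEY).encrypt(edit_id.encode())

def recover_id(token: bytes, digits: int = 3) -> str:
    """Receiver side: decrypt, then drop the trailing random digits."""
    edit_id = Fernet(KEY).decrypt(token).decode()
    return edit_id[:-digits]                                 # "100817" -> "100"

token = make_encrypted_edit_id("100")
assert recover_id(token) == "100"
print(token != make_encrypted_edit_id("100"))   # True: the same ID yields different ciphertexts
```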
 (駅での案内)
 図31は、電車のホームにおける本発明の利用形態の一例を示したものである。ユーザが、携帯端末を電子掲示板や照明にかざし、可視光通信により、電子掲示板に表示されている情報、または、電子掲示板の設置されている駅の電車情報・駅の構内情報などを取得する。ここでは、電子掲示板に表示されている情報自体が、可視光通信により、携帯端末に送信されてもよいし、電子掲示板に対応するID情報が携帯端末に送信され、携帯端末が取得したID情報をサーバに問い合わせることにより、電子掲示板に表示されている情報を取得してもよい。サーバは、携帯端末からID情報が送信されてきた場合に、ID情報に基づき、電子掲示板に表示されている内容を携帯端末に送信する。携帯端末のメモリに保存されている電車のチケット情報と、電子掲示板に表示されている情報とを対比し、ユーザのチケットに対応するチケット情報が電子掲示板に表示されている場合に、携帯端末のディスプレイに、ユーザの乗車予定の電車が到着するホームへの行き先を示す矢印を表示する。降車時に出口や乗り換え経路に近い車両までの経路を表示するとしてもよい。座席指定がされている場合は、その座席までの経路を表示するとしてもよい。矢印を表示する際には、地図や、電車案内情報における電車の路線の色と同じ色を用いて矢印を表示することにより、より分かりやすく表示することができる。また、矢印の表示とともに、ユーザの予約情報(ホーム番号、車両番号、発車時刻、座席番号)を表示することもできる。ユーザの予約情報を併せて表示することにより、誤認識を防ぐことが可能となる。チケット情報がサーバに保存されている場合には、携帯端末からサーバに問い合わせてチケット情報を取得し対比するか、または、サーバ側でチケット情報と電子掲示板に表示されている情報とを対比することにより、チケット情報に関連する情報を取得することができる。ユーザが乗換検索を行った履歴から目的の路線を推定し、経路を表示してもよい。また、電子掲示板に表示されている内容だけでなく、電子掲示板が設置されている駅の電車情報・構内情報を取得し、対比を行ってもよい。ディスプレイ上の電子掲示板の表示に対してユーザに関連する情報を強調表示してもよいし、書き換えて表示してもよい。ユーザの乗車予定が不明である場合には、各路線の乗り場への案内の矢印を表示してもよい。駅の構内情報を取得した場合には、売店・お手洗いへなどの案内する矢印をディスプレイに表示してもよい。ユーザの行動特性を予めサーバで管理しておき、ユーザが駅構内で売店・お手洗いに立ち寄ることが多い場合に、売店・お手洗いなどへ案内する矢印をディスプレイに表示する構成にしてもよい。売店・お手洗いに立ち寄る行動特性を有するユーザに対してのみ、売店・お手洗いなどへ案内する矢印を表示し、その他のユーザに対しては表示を行わないため処理量を減らすことが可能となる。売店・お手洗いなどへ案内する矢印の色を、ホームへの行き先を案内する矢印と異なる色としてもよい。両方の矢印を同時に表示する際には、異なる色とすることにより、誤認識を防ぐことが可能となる。尚、図31では電車の例を示したが、飛行機やバスなどでも同様の構成で表示を行うことが可能である。
(Guidance at the station)
FIG. 31 shows an example of how the present invention can be used on a train platform. The user holds a mobile terminal up to an electronic bulletin board or a light and acquires, by visible light communication, the information displayed on the electronic bulletin board, or train information and station premises information for the station where the board is installed. The information displayed on the board may itself be transmitted to the mobile terminal by visible light communication, or ID information corresponding to the board may be transmitted to the mobile terminal, which then obtains the displayed information by querying a server with the acquired ID. When ID information is sent from the mobile terminal, the server transmits the contents displayed on the board to the terminal based on that ID. The train ticket information stored in the memory of the mobile terminal is compared with the information displayed on the board, and when ticket information corresponding to the user's ticket appears on the board, an arrow indicating the way to the platform where the user's train arrives is shown on the terminal's display. When the user gets off, a route to the car closest to the exit or to the transfer route may be displayed; if a seat has been reserved, the route to that seat may be displayed. Drawing the arrow in the same color as the train line shown on the map or in the train guidance information makes the display easier to understand. The user's reservation information (platform number, car number, departure time, seat number) can also be displayed together with the arrow, which helps prevent misrecognition. If the ticket information is stored on a server, the mobile terminal may query the server to obtain and compare the ticket information, or the server may compare the ticket information with the information displayed on the board, so that information related to the ticket can be obtained. The target line may also be estimated from the user's transfer-search history and the route displayed accordingly. In addition to the contents shown on the board, the train information and premises information of the station where the board is installed may be acquired and compared. Information relevant to the user may be highlighted on the on-screen rendering of the bulletin board, or rewritten before being displayed. If the user's travel plan is unknown, arrows guiding the user to the boarding point of each line may be displayed. When station premises information has been acquired, arrows guiding the user to kiosks, restrooms, and so on may be shown. The user's behavioral characteristics may be managed in advance on a server, and when the user often stops at kiosks or restrooms inside stations, guidance arrows to them may be displayed; because such arrows are shown only to users with that behavior pattern and not to other users, the amount of processing can be reduced.
The color of the arrow guiding the user to a kiosk or restroom may differ from the color of the arrow guiding the user to the platform; using different colors when both arrows are displayed at the same time helps prevent misrecognition. Although FIG. 31 shows a train example, the same configuration can be used for airplanes, buses, and the like.
 (クーポンのポップアップ)
 図32は、ユーザが店舗に近づくと、可視光通信により取得したクーポン情報が表示される、または、ポップアップが携帯端末のディスプレイに表示される一例を示したものである。ユーザは、携帯端末を用いて、可視光通信により、電子掲示板などから店舗のクーポン情報を取得する。次に、店舗から所定の範囲内にユーザが入ると、店舗のクーポン情報、または、ポップアップが表示される。ユーザが、店舗から所定の範囲内に入ったか否かは、携帯端末のGPS情報と、クーポン情報に含まれる店舗情報とを用いて判断される。クーポン情報に限らず、チケット情報でもよい。クーポンやチケットが利用できる店舗などが近づくと自動的にアラートしてくれるため、ユーザはクーポンやチケットを適切に利用することが可能となる。
(Coupon pop-up)
FIG. 32 shows an example in which coupon information acquired by visible light communication is displayed or a popup is displayed on the display of the mobile terminal when the user approaches the store. A user acquires coupon information of a store from an electronic bulletin board etc. by visible light communication using a portable terminal. Next, when the user enters the predetermined range from the store, the coupon information of the store or a pop-up is displayed. Whether or not the user has entered the predetermined range from the store is determined using the GPS information of the mobile terminal and the store information included in the coupon information. Not only coupon information but also ticket information may be used. Since it automatically alerts when a store where coupons and tickets can be used approaches, the user can use coupons and tickets appropriately.
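A minimal geofence check in the spirit of FIG. 32 is sketched below; the 200 m radius, the coupon fields, and the helper names are assumptions.

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def should_popup(phone_pos, coupon, radius_m=200):
    """Show the stored coupon when the terminal's GPS position enters the radius
    around the store location that the coupon carries."""
    return distance_m(phone_pos[0], phone_pos[1],
                      coupon["store_lat"], coupon["store_lon"]) <= radius_m

coupon = {"title": "100 yen off", "store_lat": 35.6595, "store_lon": 139.7005}
print(should_popup((35.6600, 139.7010), coupon))   # True -> display the coupon or a pop-up
```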
 (操作用アプリケーションの起動)
 図33は、ユーザが携帯端末を用いて、可視光通信により、家電より情報を取得する一例を示したものである。可視光通信により、家電からID情報、または、当該家電に関する情報を取得した場合に、当該家電を操作するためのアプリケーションが自動的に立ち上がる。図33では、テレビを用いた例を示している。このような構成により、携帯端末を家電にかざすだけで、家電を操作するためのアプリケーションを起動することが可能となる。
(Launch operation application)
FIG. 33 shows an example in which a user acquires information from a home appliance by visible light communication using a mobile terminal. When ID information or information related to the home appliance is acquired from the home appliance by visible light communication, an application for operating the home appliance is automatically started. FIG. 33 shows an example using a television. With such a configuration, it is possible to start an application for operating a home appliance simply by holding the portable terminal over the home appliance.
 (データベース)
 図34は、送信機が送信するIDを管理するサーバの保持するデータベースの構成の一例を示したものである。
(Database)
FIG. 34 shows an example of the configuration of the database held by the server that manages the ID transmitted by the transmitter.
The database has an ID-data table, which holds the data to be returned for a query keyed by an ID, and an access log table, which records queries keyed by an ID. The ID-data table holds the ID transmitted by the transmitter, the data to be returned for a query keyed by that ID, the conditions for returning the data, the number of times the ID has been queried, and the number of times the conditions were satisfied and the data was actually returned. The conditions for returning data include the date and time, the number of accesses, the number of successful accesses, information about the querying terminal (terminal model, the application that made the query, the terminal's current position, and so on), and information about the querying user (age, sex, occupation, nationality, language, religion, and so on). Using the number of successful accesses as a condition makes possible a service model such as “1 yen per access, up to a cap of 100 yen, after which no data is returned.” When an access keyed by an ID occurs, the log table records the ID, the ID of the requesting user, the time, other incidental information, whether the conditions were satisfied and data was returned, and the contents of the returned data.
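The two tables could look roughly as follows; the column names, the SQLite usage, and the query logic are illustrative, with only the listed fields and the “1 yen per access, capped at 100 yen” example taken from the text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE id_data (
    tx_id            TEXT PRIMARY KEY,   -- ID the transmitter sends
    payload          TEXT,               -- data returned for a query keyed by this ID
    conditions       TEXT,               -- e.g. date/time window, terminal/user constraints
    query_count      INTEGER DEFAULT 0,  -- number of queries keyed by this ID
    success_count    INTEGER DEFAULT 0   -- number of times the conditions were met and data returned
);
CREATE TABLE access_log (
    tx_id            TEXT,
    user_id          TEXT,
    accessed_at      TEXT,
    extra            TEXT,               -- other incidental information
    data_returned    INTEGER,            -- whether the conditions were cleared and data was passed
    returned_payload TEXT
);
""")

def query(tx_id, user_id, now):
    """Answer a query keyed by tx_id, enforcing e.g. '1 yen per access, capped at 100'."""
    row = conn.execute("SELECT payload, success_count FROM id_data WHERE tx_id=?",
                       (tx_id,)).fetchone()
    ok = row is not None and row[1] < 100          # the cap is one example condition from the text
    conn.execute("UPDATE id_data SET query_count=query_count+1, "
                 "success_count=success_count+? WHERE tx_id=?", (1 if ok else 0, tx_id))
    conn.execute("INSERT INTO access_log VALUES (?,?,?,?,?,?)",
                 (tx_id, user_id, now, None, int(ok), row[0] if ok else None))
    return row[0] if ok else None
```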
 (ゾーン毎に異なる通信プロトコル)
 図35は、実施の形態4における送信機と受信機の動作の一例を示す図である。
(Different communication protocols for each zone)
FIG. 35 is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 4.
The receiver 8420a receives zone information from the base station 8420h, recognizes which zone it is located in, and selects a reception protocol. The base station 8420h is configured, for example, as a mobile phone base station, a Wi-Fi access point, an IMES transmitter, a speaker, or a wireless transmitter (Bluetooth (registered trademark), ZigBee, a specified low power radio station, or the like). The receiver 8420a may also identify the zone from position information obtained from GPS or the like. As an example, suppose that zone A uses a signal frequency of 9.6 kHz, while in zone B ceiling lights use 15 kHz and signage uses 4.8 kHz. At position 8420j, the receiver 8420a recognizes from the information of the base station 8420h that it is in zone A, receives at a signal frequency of 9.6 kHz, and receives the signals transmitted by the transmitters 8420b and 8420c. At position 8420l, the receiver 8420a recognizes from the information of the base station 8420i that it is in zone B, estimates from the fact that the in-camera is pointed upward that it is trying to receive a signal from ceiling lighting, receives at a signal frequency of 15 kHz, and receives the signals transmitted by the transmitters 8420e and 8420f. At position 8420m, the receiver 8420a recognizes from the information of the base station 8420i that it is in zone B, estimates from the motion of holding out the out-camera that it is trying to receive a signal transmitted by signage, receives at a signal frequency of 4.8 kHz, and receives the signal transmitted by the transmitter 8420g. At position 8420k, the receiver 8420a receives the signals of both base station 8420h and base station 8420i and cannot determine whether it is in zone A or zone B, so it performs reception processing at both 9.6 kHz and 15 kHz. The part of the protocol that differs between zones is not limited to the frequency; the modulation scheme, the signal format, or the server to which IDs are submitted may also differ. The base stations 8420h and 8420i may transmit the protocol used in the zone to the receiver, or may transmit only an ID indicating the zone, in which case the receiver obtains the protocol information from a server using the zone ID as a key.
 送信機8420b~8420fは、基地局8420h・8420iの送信するゾーンIDやプロトコル情報を受信し、信号送信プロトコルを決定する。基地局8420hと基地局8420iの両方の送信する信号を受信可能な送信機8420dは、より信号強度強い基地局のゾーンのプロトコルを利用する、または、両方のプロトコルを交互に用いる。 The transmitters 8420b to 8420f receive the zone ID and protocol information transmitted by the base stations 8420h and 8420i, and determine the signal transmission protocol. A transmitter 8420d capable of receiving signals transmitted by both base station 8420h and base station 8420i utilizes a base station zone protocol with stronger signal strength, or alternately uses both protocols.
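A sketch of the zone-dependent protocol selection on the receiver side follows, using the frequencies from the example above; the camera-orientation labels, the fallback when no zone is known, and the table layout are assumptions.

```python
# Hypothetical protocol table for the example zones in FIG. 35.
ZONE_PROTOCOLS = {
    "A": {"default": 9600},                       # zone A: 9.6 kHz everywhere
    "B": {"ceiling": 15000, "signage": 4800},     # zone B: ceiling lights vs. signage
}

def reception_frequencies(visible_zones, camera):
    """Pick the signal frequencies to try, given the zone IDs heard from base
    stations and which camera is in use ('in_up' = in-camera facing upward,
    'out_forward' = out-camera pointed at signage)."""
    freqs = set()
    for zone in visible_zones:
        protocols = ZONE_PROTOCOLS.get(zone, {})
        if zone == "B":
            if camera == "in_up":
                freqs.add(protocols["ceiling"])
            elif camera == "out_forward":
                freqs.add(protocols["signage"])
            else:
                freqs.update(protocols.values())
        else:
            freqs.update(protocols.values())
    return freqs or {9600, 15000, 4800}           # no recognizable zone: try every protocol

print(reception_frequencies({"A"}, camera=None))           # {9600}
print(reception_frequencies({"A", "B"}, camera="in_up"))   # contains both 9600 and 15000
```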
 (ゾーンの認識とゾーン毎のサービス)
 図36は、実施の形態4における受信機と送信機の動作の一例を示す図である。
(Zone recognition and services for each zone)
FIG. 36 is a diagram illustrating an example of operation of a receiver and a transmitter in Embodiment 4.
 受信機8421aは、受信した信号から、自身の位置の属するゾーンを認識する。受信機8421aは、ゾーン毎に定められたサービス(クーポンの配布、ポイントの付与、道案内等)を提供する。一例として、受信機8421aは、送信機8421bの左側から送信する信号を受信し、ゾーンAに居ることを認識する。ここで、送信機8421bは、送信方向によって異なる信号を送信するとしてもよい。また、送信機8421bは、2217aのような発光パターンの信号を用いることで、受信機までの距離に応じて異なる信号が受信されるように信号を送信してもよい。また、受信機8421aは、送信機8421bの撮像される方向と大きさから、送信機8421bとの位置関係を認識し、自身の位置するゾーンを認識してもよい。 The receiver 8421a recognizes the zone to which it belongs from the received signal. The receiver 8421a provides services (coupon distribution, point assignment, route guidance, etc.) determined for each zone. As an example, the receiver 8421a receives a signal transmitted from the left side of the transmitter 8421b and recognizes that it is in the zone A. Here, the transmitter 8421b may transmit different signals depending on the transmission direction. Further, the transmitter 8421b may transmit a signal such that a different signal is received according to the distance to the receiver by using a signal having a light emission pattern such as 2217a. In addition, the receiver 8421a may recognize the positional relationship with the transmitter 8421b from the direction and size in which the transmitter 8421b is imaged, and may recognize the zone in which the receiver 8421a is located.
A part of the signal may be shared among transmitters located in the same zone. For example, the IDs representing zone A transmitted from the transmitter 8421b and the transmitter 8421c share the same first half. This allows the receiver 8421a to recognize the zone in which it is located just by receiving the first half of the signal.
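A minimal sketch of recognizing the zone from the shared first half of the ID; the two-character prefixes are hypothetical.

```python
ZONE_PREFIXES = {"A0": "zone A", "B7": "zone B"}   # hypothetical shared ID prefixes per zone

def zone_from_partial_id(received_prefix: str):
    """The zone can be recognized from the first half of the ID alone,
    before the rest of the signal has been received."""
    for prefix, zone in ZONE_PREFIXES.items():
        if received_prefix.startswith(prefix):
            return zone
    return None

print(zone_from_partial_id("A0"))   # 'zone A', even though the full ID has not yet arrived
```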
 (本実施の形態のまとめ)
 本実施の形態における情報通信方法は、輝度変化によって信号を送信する情報通信方法であって、複数の送信対象の信号のそれぞれを変調することによって、複数の輝度変化のパターンを決定する決定ステップと、複数の発光体のそれぞれが、決定された複数の輝度変化のパターンのうちの何れか1つのパターンにしたがって輝度変化することによって、前記何れか1つのパターンに対応する送信対象の信号を送信する送信ステップとを含み、前記送信ステップでは、前記複数の発光体のうちの2つ以上の発光体のそれぞれは、当該発光体に対して予め定められた時間単位ごとに、互いに輝度の異なる2種類の光のうちの何れか一方の光が出力されるように、且つ、前記2つ以上の発光体のそれぞれに対して予め定められた前記時間単位が互いに異なるように、互いに異なる周波数で輝度変化する。
(Summary of this embodiment)
The information communication method according to this embodiment is an information communication method for transmitting signals by changes in luminance, and includes a determining step of determining a plurality of luminance change patterns by modulating each of a plurality of signals to be transmitted, and a transmitting step in which each of a plurality of light emitters changes in luminance according to one of the determined patterns, thereby transmitting the signal corresponding to that pattern. In the transmitting step, each of two or more of the light emitters changes in luminance at a mutually different frequency, such that one of two mutually different luminance values is output in each time unit predetermined for that light emitter, and such that the time units predetermined for the two or more light emitters differ from one another.
In this way, each of two or more light emitters (for example, transmitters configured as lighting devices) changes in luminance at a different frequency, so a receiver that receives the signals to be transmitted from those light emitters (for example, the light emitters' IDs) can easily distinguish and acquire those signals.
In the transmitting step, each of the plurality of light emitters may change in luminance at one of at least four frequencies, and two or more of the light emitters may change in luminance at the same frequency. For example, in the transmitting step, when the plurality of light emitters are projected onto the light receiving surface of an image sensor for receiving the plurality of signals to be transmitted, each light emitter changes in luminance so that the frequencies of the luminance changes differ between every pair of light emitters adjacent to each other on the light receiving surface.
Thus, as long as at least four frequencies are available for the luminance change, even if two or more light emitters change in luminance at the same frequency, that is, even if the number of frequencies is smaller than the number of light emitters, the frequencies of luminance change can reliably be made to differ between all light emitters adjacent to each other on the light receiving surface of the image sensor, based on the four color problem or the four color theorem. As a result, the receiver can easily distinguish and acquire each of the signals transmitted from the plurality of light emitters.
 また、前記送信ステップでは、前記複数の発光体のそれぞれは、送信対象の信号のハッシュ値によって特定される周波数で輝度変化することによって、前記送信対象の信号を送信してもよい。 In the transmission step, each of the plurality of light emitters may transmit the signal to be transmitted by changing in luminance at a frequency specified by a hash value of the signal to be transmitted.
 これにより、複数の発光体のそれぞれは、送信対象の信号(例えば、発光体のID)のハッシュ値によって特定される周波数で輝度変化するため、受信機は、送信対象の信号を受信したときには、実際の輝度変化から特定される周波数と、ハッシュ値によって特定される周波数とが一致するか否かを判定することができる。つまり、受信機は、受信された信号(例えば、発光体のID)にエラーがあったか否かを判定することができる。 Thereby, since each of the plurality of light emitters changes in luminance at a frequency specified by a hash value of a signal to be transmitted (for example, the ID of the light emitter), when the receiver receives the signal to be transmitted, It can be determined whether the frequency specified from the actual luminance change matches the frequency specified by the hash value. That is, the receiver can determine whether or not there is an error in the received signal (for example, the ID of the light emitter).
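One way such a hash-to-frequency mapping could work is sketched below; the candidate frequency set, the use of SHA-256, and the digest byte chosen are assumptions.

```python
import hashlib

CANDIDATE_FREQUENCIES = [9600, 10600, 11600, 12600]   # illustrative set of at least four frequencies

def frequency_for_id(tx_id: str) -> int:
    """Derive the modulation frequency from a hash of the ID to be transmitted."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return CANDIDATE_FREQUENCIES[digest[0] % len(CANDIDATE_FREQUENCIES)]

def id_consistent(tx_id: str, measured_frequency: int) -> bool:
    """Receiver-side check: the observed frequency must match the one implied by the ID's hash."""
    return frequency_for_id(tx_id) == measured_frequency
```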
The information communication method may further include: a frequency calculating step of calculating, as a first frequency, the frequency corresponding to the signal to be transmitted that is stored in a signal storage unit, according to a predetermined function; a frequency determining step of determining whether or not a second frequency stored in a frequency storage unit matches the calculated first frequency; and a frequency error notifying step of notifying an error when the first frequency and the second frequency are determined not to match. When the first frequency and the second frequency are determined to match, the determining step determines a luminance change pattern by modulating the signal to be transmitted that is stored in the signal storage unit, and in the transmitting step one of the plurality of light emitters changes in luminance at the first frequency according to the determined pattern, thereby transmitting the signal stored in the signal storage unit.
In this way, whether the frequency stored in the frequency storage unit matches the frequency calculated from the signal to be transmitted stored in the signal storage unit (ID storage unit) is determined, and an error is notified when they are determined not to match, so an abnormality in the light emitter's signal transmission function can easily be detected.
The information communication method may further include: a check value calculating step of calculating a first check value from the signal to be transmitted that is stored in the signal storage unit, according to a predetermined function; a check value determining step of determining whether or not a second check value stored in a check value storage unit matches the calculated first check value; and a check value error notifying step of notifying an error when the first check value and the second check value are determined not to match. When the first check value and the second check value are determined to match, the determining step determines a luminance change pattern by modulating the signal to be transmitted that is stored in the signal storage unit, and in the transmitting step one of the plurality of light emitters changes in luminance according to the determined pattern, thereby transmitting the signal stored in the signal storage unit.
 これにより、チェック値記憶部に記憶されているチェック値と、信号記憶部(ID記憶部)に記憶されている送信対象の信号から算出されたチェック値とが一致するか否かが判定され、一致しないと判定された場合にはエラーが報知されるため、発光体による信号送信機能の異常検出を容易に行うことができる。 Thereby, it is determined whether or not the check value stored in the check value storage unit matches the check value calculated from the transmission target signal stored in the signal storage unit (ID storage unit), If it is determined that they do not coincide with each other, an error is notified, so that it is possible to easily detect abnormality of the signal transmission function by the light emitter.
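A combined self-test in the spirit of the two checks above might look as follows; the "predetermined functions" are not specified in the text, so a hash-based frequency derivation and CRC32 are used only as stand-ins.

```python
import hashlib
import zlib

def derive_frequency(signal: bytes, candidates=(9600, 10600, 11600, 12600)) -> int:
    """Stand-in for the predetermined function mapping the stored signal to its frequency."""
    return candidates[hashlib.sha256(signal).digest()[0] % len(candidates)]

def self_test(stored_signal: bytes, stored_frequency: int, stored_check: int) -> bool:
    """Run before transmitting: recompute the frequency and check value from the stored
    signal and compare them with the stored ones; report an error on any mismatch."""
    if derive_frequency(stored_signal) != stored_frequency:
        raise RuntimeError("frequency mismatch: signal or frequency memory may be corrupted")
    if zlib.crc32(stored_signal) != stored_check:          # CRC32 as a stand-in check value
        raise RuntimeError("check value mismatch: signal memory may be corrupted")
    return True   # safe to modulate the stored signal and transmit at stored_frequency
```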
The information communication method according to this embodiment is an information communication method for acquiring information from a subject, and includes: an exposure time setting step of setting the exposure time of an image sensor so that, in an image obtained by capturing the subject with the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to the change in luminance of the subject; an image acquiring step in which the image sensor captures the subject, which is changing in luminance, with the set exposure time, thereby acquiring a bright line image containing the plurality of bright lines; an information acquiring step of acquiring information by demodulating data specified by the pattern of the plurality of bright lines contained in the acquired bright line image; and a frequency specifying step of specifying the frequency of the luminance change of the subject based on the pattern of the plurality of bright lines contained in the acquired bright line image. For example, in the frequency specifying step, a plurality of header patterns, which are predetermined patterns each indicating a header, are identified within the pattern of the plurality of bright lines, and the frequency corresponding to the number of pixels between the header patterns is specified as the frequency of the luminance change of the subject.
 これにより、被写体の輝度変化の周波数が特定されるため、輝度変化の周波数が異なる複数の被写体が撮影される場合には、それらの被写体からの情報を容易に区別して取得することができる。 Thus, since the frequency of the luminance change of the subject is specified, when a plurality of subjects having different luminance change frequencies are photographed, information from these subjects can be easily distinguished and acquired.
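One plausible way to turn the header spacing into a frequency is sketched below, assuming the receiver knows its sensor's exposure-line readout rate and that one header appears per signal period.

```python
def luminance_frequency(header_rows, line_rate_hz):
    """Estimate the subject's modulation frequency from a bright line image.

    header_rows are the row indices (exposure lines) where header patterns were
    found; line_rate_hz is how many exposure lines are read out per second.
    The pixel distance between consecutive headers corresponds to one signal
    period, so frequency = line rate / rows per period.
    """
    gaps = [b - a for a, b in zip(header_rows, header_rows[1:])]
    if not gaps:
        return None
    rows_per_period = sum(gaps) / len(gaps)
    return line_rate_hz / rows_per_period

# e.g. headers every 3 rows with a 28.8 kHz line readout -> 9.6 kHz signal
print(luminance_frequency([10, 13, 16, 19], line_rate_hz=28_800))   # 9600.0
```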
Further, in the image acquiring step, a bright line image containing a plurality of patterns, each made up of a plurality of bright lines, may be acquired by capturing a plurality of subjects each changing in luminance; in the information acquiring step, when parts of the patterns contained in the acquired bright line image overlap one another, information may be acquired from each of the patterns by demodulating the data specified by the portion of each pattern excluding the overlapping part.
 これにより、複数のパターン(複数の輝線パターン)が重なっている部分からはデータの復調が行われないため、誤った情報を取得してしまうことを防ぐことができる。 Thus, since data is not demodulated from a portion where a plurality of patterns (a plurality of bright line patterns) overlap, it is possible to prevent erroneous information from being acquired.
Further, in the image acquiring step, a plurality of bright line images may be acquired by capturing the plurality of subjects a plurality of times at mutually different timings; in the frequency specifying step, a frequency may be specified for each of the patterns contained in each bright line image; and in the information acquiring step, patterns for which the same frequency was specified may be retrieved from the plurality of bright line images, the retrieved patterns combined, and information acquired by demodulating the data specified by the combined patterns.
 これにより、複数の輝線画像から、同一の周波数が特定された複数のパターン(複数の輝線パターン)が検索され、検索された複数のパターンが結合され、結合された複数のパターンから情報が取得されるため、複数の被写体が移動している場合であっても、それらの複数の被写体からの情報を容易に区別して取得することができる。 Thereby, a plurality of patterns (plural emission line patterns) in which the same frequency is specified are searched from a plurality of emission line images, the plurality of searched patterns are combined, and information is acquired from the combined patterns. Therefore, even when a plurality of subjects are moving, information from the plurality of subjects can be easily distinguished and acquired.
The information communication method may further include: a transmitting step of transmitting, to a server in which a frequency is registered for each piece of identification information, the identification information of the subject contained in the information acquired in the information acquiring step and specified-frequency information indicating the frequency specified in the frequency specifying step; and a related information acquiring step of acquiring, from the server, related information associated with the identification information and the frequency indicated by the specified-frequency information.
In this way, related information associated with both the identification information (ID) obtained from the luminance change of the subject (transmitter) and the frequency of that luminance change is acquired. Therefore, by changing the frequency of the subject's luminance change and updating the frequency registered in the server to the new frequency, a receiver that acquired the identification information before the frequency change can be prevented from acquiring the related information from the server. In other words, by changing the frequency registered in the server in step with changes to the subject's luminance change frequency, it is possible to prevent a receiver that acquired the subject's identification information in the past from being able to obtain the related information from the server indefinitely.
The information communication method may further include: an identification information acquiring step of acquiring the identification information of the subject by extracting a part of the information acquired in the information acquiring step; and a set frequency specifying step of specifying the number indicated by the remaining part of the acquired information as the set frequency of the luminance change set for the subject.
As a result, the information obtained from the pattern of the plurality of bright lines can contain the identification information of the subject and the luminance change frequency set for the subject independently of each other, which increases the degree of freedom in choosing the identification information and the set frequency.
 (実施の形態5)
 本実施の形態では、上記各実施の形態におけるスマートフォンなどの受信機と、LEDや有機ELの点滅パターンとして情報を送信する送信機とを用いた各適用例について説明する。
(Embodiment 5)
In this embodiment, each application example using a receiver such as a smartphone in each of the above embodiments and a transmitter that transmits information as a blinking pattern of an LED or an organic EL will be described.
 (人間への可視光通信の周知)
 図37は、実施の形態5における送信機の動作の一例を示す図である。
(Making people aware of visible light communication)
FIG. 37 is a diagram illustrating an example of operation of a transmitter in Embodiment 5.
As shown in (a) of FIG. 37, the light emitting unit of the transmitter 8921a alternates between blinking that is visible to humans and visible light communication. Blinking that a person can see makes it possible to let people know that visible light communication is available. Noticing that the transmitter 8921a is blinking, the user realizes that visible light communication is possible, points the receiver 8921b at the transmitter 8921a to perform visible light communication, and carries out user registration for the transmitter 8921a.
 つまり、本実施の形態における送信機は、発光体が輝度変化によって信号を送信するステップと、発光体が人の目で視認されるように点滅するステップとを交互に繰り返し行う。 That is, the transmitter in the present embodiment alternately and repeatedly performs a step in which the light emitter transmits a signal due to a change in luminance and a step in which the light emitter blinks so as to be visually recognized by human eyes.
 送信機は、図37の(b)のように、可視光通信部と点滅部(通信状況表示部)とを別に設けてもよい。 The transmitter may separately provide a visible light communication unit and a blinking unit (communication status display unit) as shown in FIG.
 送信機は、図37の(c)のように、動作することで、可視光通信を行いながら、人間には発光部が点滅しているように見せることができる。つまり、送信機は、例えば明るさ75%の高輝度可視光通信と、明るさ1%の低輝度可視光通信とを交互に繰り返し行う。例えば、送信機に異常等が発生して普段とは異なる信号を送信しているときに図37の(c)に示す動作をすることで、可視光通信をやめることなくユーザに注意を促すことができる。 When the transmitter operates as shown in FIG. 37 (c), it can be seen that the light emitting unit is blinking while performing visible light communication. That is, for example, the transmitter repeatedly performs high luminance visible light communication with a brightness of 75% and low luminance visible light communication with a brightness of 1% alternately. For example, when an abnormality occurs in the transmitter and a signal different from usual is transmitted, the operation shown in (c) of FIG. 37 is performed to alert the user without stopping the visible light communication. Can do.
 (Application to route guidance)
 FIG. 38 is a diagram illustrating an application example of the transmission and reception system in Embodiment 5.
 The receiver 8955a receives the ID transmitted by the transmitter 8955b, which is configured, for example, as a guide board, acquires the data of the map displayed on the guide board from the server, and displays it. At this time, the server may also send an advertisement suited to the user of the receiver 8955a, and the receiver 8955a may display this advertisement information as well. The receiver 8955a displays the route from the current location to a place designated by the user.
 (Application to usage-log accumulation and analysis)
 FIG. 39 is a diagram illustrating an application example of the transmission and reception system in Embodiment 5.
 The receiver 8957a receives the ID transmitted by the transmitter 8957b, which is configured, for example, as a signboard, acquires coupon information from the server, and displays it. The receiver 8957a then stores the user's subsequent behavior in the server 8957c, for example saving the coupon, going to the shop shown on the coupon, shopping at that shop, or leaving without saving the coupon. This makes it possible to analyze the subsequent behavior of users who obtained information from the signboard 8957b and to estimate the advertising value of the signboard 8957b.
 (Application to screen sharing)
 FIG. 40 is a diagram illustrating an application example of the transmission and reception system in Embodiment 5.
 The transmitter 8960b, configured for example as a projector or a display, transmits the information needed to connect to it wirelessly (SSID, wireless connection password, IP address, and a password for operating the transmitter), or transmits an ID that serves as a key for accessing this information. The receiver 8960a, configured for example as a smartphone, tablet, laptop computer, or camera, receives the signal transmitted by the transmitter 8960b, obtains this information, and establishes a wireless connection with the transmitter 8960b. The connection may go through a router, or may be a direct connection using Wi-Fi Direct, Bluetooth (registered trademark), Wireless Home Digital Interface, or the like. The receiver 8960a then transmits a screen to be displayed by the transmitter 8960b. In this way, an image on the receiver can easily be shown on the transmitter.
 When connected to the receiver 8960a, the transmitter 8960b may inform the receiver 8960a that a password is required for screen display in addition to the information the transmitter is broadcasting, and may refuse to display the transmitted screen unless the correct password is sent. In that case, the receiver 8960a displays a password input screen such as 8960d and has the user enter the password.
 The information communication method according to one or more aspects has been described above based on the embodiments, but the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceivable to those skilled in the art to the embodiments, and forms constructed by combining constituent elements of different embodiments, may also fall within the scope of the one or more aspects, as long as they do not depart from the gist of the present invention.
 The information communication method according to one aspect of the present invention may also be applied as shown in FIG. 41.
 FIG. 41 is a diagram illustrating an application example of the transmission and reception system in Embodiment 5.
 A camera configured as a visible light communication receiver first captures an image in the normal imaging mode (Step 1). Through this capture the camera obtains an image file in a format such as EXIF (Exchangeable image file format). Next, the camera captures an image in the visible light communication imaging mode (Step 2). Based on the bright-line pattern in the image obtained by this capture, the camera acquires the signal (visible light communication information) transmitted by visible light communication from the transmitter that is the subject (Step 3). Furthermore, by using that signal (received information) as a key and accessing the server, the camera acquires the information corresponding to the key from the server (Step 4). The camera then stores, as metadata in the image file mentioned above, the signal transmitted from the subject by visible light communication (visible light reception data), the information acquired from the server, data indicating the position at which the transmitter appears in the image represented by the image file, and data indicating the time (the time within the video) at which the signal transmitted by visible light communication was received. When several transmitters appear as subjects in the captured image (image file), the camera stores, for each transmitter, the corresponding pieces of metadata in the image file.
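 As a rough illustration of the Step 1 to Step 4 flow described above, the following Python sketch shows one way the decoded visible light data, the server response, and the position and reception time of each transmitter could be attached to a captured image as per-transmitter metadata. The helper names (capture_normal, capture_visible_light, decode_bright_lines, query) are placeholders and not part of the disclosed implementation:

# Minimal sketch of the Step 1-4 flow described above (hypothetical helpers).
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransmitterMetadata:
    visible_light_data: str   # signal received by visible light communication
    server_info: dict         # information fetched from the server using the signal as a key
    position_in_image: tuple  # (x, y) where the transmitter appears in the image
    received_at_sec: float    # time within the video at which the signal was received

@dataclass
class ImageFile:
    pixels: object                                   # normal-mode image (e.g. EXIF-wrapped)
    metadata: List[TransmitterMetadata] = field(default_factory=list)

def capture_and_tag(camera, server):
    image = ImageFile(pixels=camera.capture_normal())       # Step 1: normal imaging mode
    bright_line_image = camera.capture_visible_light()      # Step 2: visible light imaging mode
    for signal, position, t in camera.decode_bright_lines(bright_line_image):  # Step 3
        info = server.query(signal)                          # Step 4: use the signal as a key
        image.metadata.append(TransmitterMetadata(signal, info, position, t))
    return image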
 When a display or projector configured as a visible light communication transmitter displays the image represented by the image file described above, it transmits, by visible light communication, a signal corresponding to the metadata contained in that image file. For example, the display or projector may transmit the metadata itself by visible light communication, or may transmit, as a key, a signal associated with the transmitter shown in the image.
 A mobile terminal (smartphone) configured as a visible light communication receiver receives the signal transmitted from the display or projector by visible light communication by capturing an image of the display or projector. If the received signal is the key described above, the mobile terminal uses that key to obtain, from the display, the projector, or the server, the metadata of the transmitter associated with the key. If the received signal is a signal that was transmitted by visible light communication from a real transmitter (visible light reception data or visible light communication information), the mobile terminal obtains, from the display, the projector, or the server, the information corresponding to that visible light reception data or visible light communication information.
 (Summary of this embodiment and others)
 The information communication method in this embodiment is an information communication method for acquiring information from a subject, and includes: a first exposure time setting step of setting a first exposure time of an image sensor so that, in an image obtained by photographing the first subject (the subject) with the image sensor, a plurality of bright lines corresponding to the exposure lines included in the image sensor appear in accordance with the luminance change of the first subject; a first bright-line image acquisition step in which the image sensor photographs the first subject, whose luminance is changing, with the set first exposure time, thereby acquiring a first bright-line image that is an image containing the plurality of bright lines; a first information acquisition step of acquiring first transmission information by demodulating the data specified by the pattern of the plurality of bright lines contained in the acquired first bright-line image; and a door control step of, after the first transmission information has been acquired, transmitting a control signal to cause a door opening and closing drive device to open the door.
 In this way, a receiver equipped with an image sensor can be used like a door key, making a special electronic lock unnecessary. As a result, communication can be performed among a wide variety of devices, including devices with little computing power.
 The information communication method may further include: a second bright-line image acquisition step in which the image sensor photographs a second subject, whose luminance is changing, with the set first exposure time, thereby acquiring a second bright-line image that is an image containing a plurality of bright lines; a second information acquisition step of acquiring second transmission information by demodulating the data specified by the pattern of the plurality of bright lines contained in the acquired second bright-line image; and an approach determination step of determining, based on the acquired first and second transmission information, whether the receiving device equipped with the image sensor is approaching the door. In the door control step, the control signal may be transmitted when it is determined that the receiving device is approaching the door.
 In this way, the door can be made to open only when the receiving device (receiver) approaches the door, that is, only at an appropriate time.
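 A minimal sketch of one possible approach determination is given below in Python, assuming that a position of the receiving device can be estimated from each of the first and second transmission information; the positions, the margin value, and the send_door_open_signal callback are illustrative assumptions rather than the claimed procedure:

import math

def distance(p, q):
    # Euclidean distance between two 2-D points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_approaching(door_pos, pos_at_t1, pos_at_t2, margin=0.1):
    # True if the receiver moved measurably closer to the door between the
    # first and second bright-line captures (positions estimated from the
    # first and second transmission information).
    return distance(pos_at_t2, door_pos) < distance(pos_at_t1, door_pos) - margin

def control_door(door_pos, pos_at_t1, pos_at_t2, send_door_open_signal):
    # Transmit the control signal only when the receiver is judged to be approaching.
    if is_approaching(door_pos, pos_at_t1, pos_at_t2):
        send_door_open_signal()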
 The information communication method may further include: a second exposure time setting step of setting a second exposure time longer than the first exposure time; and a normal image acquisition step in which the image sensor photographs a third subject with the set second exposure time, thereby acquiring a normal image in which the third subject appears. In the normal image acquisition step, for each of the exposure lines located in the region of the image sensor that includes the optical black, charge is read out after a predetermined time has elapsed from the time at which charge was read out from the adjacent exposure line. In the first bright-line image acquisition step, the optical black is not used for charge readout, and for each of the exposure lines located in the region of the image sensor other than the optical black, charge is read out after a time longer than the predetermined time has elapsed from the time at which charge was read out from the adjacent exposure line.
 In this way, when the first bright-line image is acquired, no charge readout (exposure) is performed for the optical black, so the time spent reading out (exposing) the effective pixel region, the region of the image sensor other than the optical black, can be lengthened. As a result, the time available for receiving a signal in the effective pixel region becomes longer, and more of the signal can be acquired.
 The information communication method may further include: a length determination step of determining whether the length of the pattern of the plurality of bright lines contained in the first bright-line image, measured in the direction perpendicular to the bright lines, is less than a predetermined length; a frame rate changing step of, when the length of the pattern is determined to be less than the predetermined length, changing the frame rate of the image sensor to a second frame rate that is lower than the first frame rate used when the first bright-line image was acquired; a third bright-line image acquisition step in which the image sensor photographs the first subject, whose luminance is changing, at the second frame rate and with the set first exposure time, thereby acquiring a third bright-line image that is an image containing a plurality of bright lines; and a third information acquisition step of acquiring the first transmission information by demodulating the data specified by the pattern of the plurality of bright lines contained in the acquired third bright-line image.
 In this way, when the signal length indicated by the bright-line pattern (bright-line region) contained in the first bright-line image is shorter than, for example, one block of the transmitted signal, the frame rate is lowered and a new bright-line image is acquired as the third bright-line image. As a result, the bright-line pattern contained in the third bright-line image becomes longer, and one full block of the transmitted signal can be acquired.
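 The following Python sketch illustrates this frame-rate fallback, assuming a hypothetical camera object that can report the length of the bright-line pattern in pixels; the concrete frame rates and the threshold are placeholders:

def capture_full_block(camera, min_pattern_px, slow_fps=15, normal_fps=30):
    # Retry bright-line capture at a lower frame rate when the bright-line
    # pattern, measured perpendicular to the lines, is shorter than one
    # signal block. `camera` and its methods are hypothetical stand-ins.
    camera.set_frame_rate(normal_fps)
    image = camera.capture_bright_line_image()
    if camera.pattern_length_px(image) < min_pattern_px:
        # Pattern too short to contain one block: slow the frame rate so each
        # frame exposes more of the transmitted signal, then capture again.
        camera.set_frame_rate(slow_fps)
        image = camera.capture_bright_line_image()
    return camera.demodulate(image)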
 The information communication method may further include a ratio setting step of setting the ratio between the height and the width of the image obtained by the image sensor, and the first bright-line image acquisition step may include: a clipping determination step of determining whether, with the set ratio, the ends of the image in the direction perpendicular to the exposure lines will be clipped; a ratio changing step of, when it is determined that the ends will be clipped, changing the ratio set in the ratio setting step to a non-clipping ratio at which the ends are not clipped; and an acquisition step in which the image sensor photographs the first subject, whose luminance is changing, thereby acquiring the first bright-line image at the non-clipping ratio.
 For example, when the ratio of the width to the height of the effective pixel region of the image sensor is 4:3, the ratio of the width to the height of the image is set to 16:9, and the bright lines appear along the horizontal direction, that is, when the exposure lines run horizontally, it is determined that the top and bottom of the image will be clipped. In other words, it is determined that the ends of the first bright-line image would be lost. In this case, the image ratio is changed to a non-clipping ratio, for example 4:3. As a result, loss of the ends of the first bright-line image can be prevented, and more information can be obtained from the first bright-line image.
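 A simple way to express this clipping check and ratio change, assuming the 4:3 sensor and 16:9 request of the example, is sketched below; the function is illustrative only:

def choose_image_ratio(sensor_ratio=(4, 3), requested_ratio=(16, 9),
                       lines_horizontal=True):
    # Return an aspect ratio that does not clip the image in the direction
    # perpendicular to the exposure lines (a sketch of the ratio-change step).
    # With a 4:3 sensor and a 16:9 requested image, horizontal exposure lines
    # mean the top and bottom of the frame would be clipped, so fall back to
    # the sensor's own ratio.
    sensor_w, sensor_h = sensor_ratio
    req_w, req_h = requested_ratio
    if lines_horizontal:
        # Clipping occurs vertically when the requested frame is relatively
        # wider (hence relatively shorter) than the sensor.
        clipped = req_w * sensor_h > sensor_w * req_h
    else:
        # Exposure lines vertical: clipping would occur horizontally instead.
        clipped = req_h * sensor_w > sensor_h * req_w
    return sensor_ratio if clipped else requested_ratio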
 The information communication method may further include a compression step of generating a compressed image by compressing the first bright-line image in the direction parallel to the plurality of bright lines contained in the first bright-line image, and a compressed image transmission step of transmitting the compressed image.
 In this way, the first bright-line image can be compressed appropriately without losing the information indicated by the plurality of bright lines.
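 As an illustration, the sketch below (Python with NumPy) compresses a bright-line image only along the direction parallel to the bright lines by averaging groups of columns; the assumption that the bright lines run along image rows and the compression factor of 8 are arbitrary choices:

import numpy as np

def compress_along_bright_lines(bright_line_image, factor=8):
    # Compress only in the direction parallel to the bright lines (assumed to
    # run along image rows) by averaging groups of columns; the perpendicular
    # direction, which carries the signal, is left untouched.
    h, w = bright_line_image.shape
    w_trim = (w // factor) * factor            # drop a few columns so w divides evenly
    img = bright_line_image[:, :w_trim].astype(np.float32)
    return img.reshape(h, w_trim // factor, factor).mean(axis=2)

# Example: a 480x640 image shrinks to 480x80, keeping all 480 rows of signal.
demo = compress_along_bright_lines(np.random.rand(480, 640))
assert demo.shape == (480, 80)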
 The information communication method may further include a gesture determination step of determining whether the receiving device equipped with the image sensor has been moved in a predetermined manner, and an activation step of activating the image sensor when it is determined that the device has been moved in the predetermined manner.
 In this way, the image sensor can easily be activated only when it is needed, which improves power efficiency.
 (Embodiment 6)
 In this embodiment, application examples using a receiver such as the smartphone of the above embodiments and a transmitter that transmits information as a blinking pattern of an LED or an organic EL element will be described.
 FIG. 42 is a diagram illustrating an application example of the transmitter and the receiver in Embodiment 6.
 The robot 8970 has, for example, the function of a self-propelled vacuum cleaner and the function of the receiver in the above embodiments. The lighting devices 8971a and 8971b each have the function of the transmitter in the above embodiments.
 For example, while moving around a room and cleaning, the robot 8970 captures images of the lighting device 8971a that illuminates the room. The lighting device 8971a transmits its own ID by changing its luminance. As a result, the robot 8970 receives the ID from the lighting device 8971a, as in the above embodiments, and estimates its own position (self-position) based on the ID. That is, the robot 8970 estimates its own position while moving, based on the detection results of a 9-axis sensor, the relative position of the lighting device 8971a in the captured image, and the absolute position of the lighting device 8971a identified by the ID.
 Furthermore, when the robot 8970 moves away from the lighting device 8971a, it transmits a signal ordering the lighting device 8971a to turn off (a turn-off command). For example, the robot 8970 transmits the turn-off command once it has moved a predetermined distance away from the lighting device 8971a. Alternatively, the robot 8970 transmits the turn-off command to the lighting device 8971a when the lighting device 8971a no longer appears in the captured image, or when another lighting device appears in the image. On receiving the turn-off command from the robot 8970, the lighting device 8971a turns off accordingly.
 Next, while moving and cleaning, the robot 8970 detects, based on its estimated self-position, that it has come close to the lighting device 8971b. That is, the robot 8970 holds information indicating the position of the lighting device 8971b, and when the distance between its own position and the position of the lighting device 8971b falls below a predetermined distance, it detects that it has approached the lighting device 8971b. The robot 8970 then transmits a signal ordering the lighting device 8971b to turn on (a turn-on command). On receiving the turn-on command, the lighting device 8971b turns on accordingly.
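 The proximity-based lighting control described for the robot 8970 could be sketched as follows in Python; the distance threshold and the send_command function are illustrative assumptions, and the self-position estimation itself is omitted:

import math

def distance(a, b):
    # Euclidean distance between two 2-D positions.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def update_lighting(robot_pos, lights, send_command, threshold=3.0):
    # Turn on lights the robot is close to and turn off lights it has moved
    # away from. `lights` maps a light ID to its known absolute position, and
    # `send_command(light_id, command)` is a hypothetical transmit function.
    for light_id, light_pos in lights.items():
        if distance(robot_pos, light_pos) <= threshold:
            send_command(light_id, "turn_on")
        else:
            send_command(light_id, "turn_off")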
 In this way, the robot 8970 can brighten only its own surroundings while moving, and can clean easily.
 FIG. 43 is a diagram illustrating an application example of the transmitter and the receiver in Embodiment 6.
 The lighting device 8974 has the function of the transmitter in the above embodiments. The lighting device 8974 illuminates, for example, a train-line information board 8975 at a railway station while changing its luminance. The receiver 8973, pointed at the information board 8975 by the user, captures an image of the board. In this way the receiver 8973 acquires the ID of the information board 8975 and obtains the information associated with that ID, namely detailed information about each of the lines listed on the board 8975. The receiver 8973 then displays a guide image 8973a showing this detailed information. For example, the guide image 8973a shows the distance to a line listed on the information board 8975, the direction toward that line, and the time at which the next train arrives on that line.
 When the guide image 8973a is touched by the user, the receiver 8973 displays a supplementary guide image 8973b. The supplementary guide image 8973b is an image for displaying, according to the user's selection, any of, for example, a railway timetable, information about a line different from the one shown in the guide image 8973a, or detailed information about the station.
 (Embodiment 7)
 In this embodiment, application examples using a receiver such as the smartphone of the above embodiments and a transmitter that transmits information as a blinking pattern of an LED or an organic EL element will be described.
 (Receiving signals from multiple directions with multiple light receiving units)
 FIG. 44 is a diagram illustrating an example of a receiver in Embodiment 7.
 The receiver 9020a, configured for example as a wristwatch, includes a plurality of light receiving units. For example, as shown in FIG. 44, the receiver 9020a includes a light receiving unit 9020b arranged at the top of the rotation shaft that supports the hour and minute hands of the watch, and a light receiving unit 9020c arranged on the rim of the watch near the numeral indicating 12 o'clock. The light receiving unit 9020b receives light arriving along the direction of the rotation shaft, and the light receiving unit 9020c receives light arriving along the direction connecting the rotation shaft and the numeral indicating 12 o'clock. Thus, when the user holds the receiver 9020a in front of the chest, as when checking the time, the light receiving unit 9020b can receive light from above, so the receiver 9020a can receive signals from ceiling lighting. In the same posture, the light receiving unit 9020c can receive light from the front, so the receiver 9020a can receive signals from signage or the like in front of the user.
 By giving the light receiving units 9020b and 9020c directivity, signals can be received without interference even when several transmitters are located close to one another.
 (Route guidance with a wristwatch-type display)
 FIG. 45 is a diagram illustrating an example of a reception system in Embodiment 7.
 The receiver 9023b, configured for example as a wristwatch, is connected to the smartphone 9022a via wireless communication such as Bluetooth (registered trademark). The dial of the receiver 9023b is a display such as a liquid crystal panel and can show information other than the time. The smartphone 9022a recognizes the current location from the signal received by the receiver 9023b, and displays the route and distance to the destination on the display surface of the receiver 9023b.
 FIG. 46 is a diagram illustrating an example of a signal transmission and reception system in Embodiment 7.
 The signal transmission and reception system includes a smartphone, which is a multifunction mobile phone, an LED light emitter, which is a lighting device, home appliances such as a refrigerator, and a server. The LED light emitter communicates using BTLE (Bluetooth (registered trademark) Low Energy) and also performs visible light communication using LEDs (Light Emitting Diodes). For example, the LED light emitter controls the refrigerator and communicates with the air conditioner via BTLE, and controls the power of a microwave oven, an air purifier, a television (TV), and the like via visible light communication.
 The television includes, for example, a photovoltaic cell and uses it as a light sensor. That is, when the LED light emitter transmits a signal by changing its luminance, the television detects the luminance change of the LED light emitter from the change in the electric power generated by the photovoltaic cell. The television then acquires the signal transmitted from the LED light emitter by demodulating the signal indicated by the detected luminance change. If the signal is a power-on command, the television switches its main power on; if the signal is a power-off command, it switches its main power off.
 The server can communicate with the air conditioner via a router and a specified low-power radio station. Since the air conditioner can communicate with the LED light emitter via BTLE, the server can also communicate with the LED light emitter. The server can therefore switch the TV's power on and off via the LED light emitter. The smartphone can in turn control the TV's power via the server by communicating with the server over, for example, Wi-Fi (Wireless Fidelity).
 As shown in FIG. 46, the information communication method in this embodiment includes: a wireless communication step in which a mobile terminal (smartphone) transmits a control signal (a transmission data string or a user command) to a lighting device (light emitter) by wireless communication different from visible light communication (such as BTLE or Wi-Fi); a visible light communication step in which the lighting device performs visible light communication by changing its luminance according to the control signal; and an execution step in which a device to be controlled (such as a microwave oven) detects the luminance change of the lighting device, acquires the control signal by demodulating the signal specified by the detected luminance change, and executes the processing corresponding to that control signal. In this way, even though the mobile terminal cannot itself change its luminance for visible light communication, it can use wireless communication to have the lighting device change its luminance on its behalf, and can thus control the target device appropriately. The mobile terminal may be a wristwatch instead of a smartphone.
 (Reception with interference eliminated)
 FIG. 47 is a flowchart showing a reception method that eliminates interference in Embodiment 7.
 The process starts in Step 9001a. In Step 9001b it is checked whether there is a periodic change in the intensity of the received light; if YES, the process proceeds to Step 9001c. If NO, the process proceeds to Step 9001d, where the lens of the light receiving unit is widened to receive light from a wider range, and then returns to Step 9001b. In Step 9001c it is checked whether the signal can be received; if YES, the process proceeds to Step 9001e, the signal is received, and the process ends in Step 9001g. If NO, the process proceeds to Step 9001f, where the lens of the light receiving unit is set to telephoto to receive light from a narrower range, and then returns to Step 9001c.
 With this method, signals from transmitters over a wide range of directions can be received while eliminating interference between signals from multiple transmitters.
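 A compact rendering of this flow in Python is shown below; the receiver object, its methods, and the iteration bound are hypothetical stand-ins for the lens control and detection steps of FIG. 47:

def receive_without_interference(receiver, max_steps=20):
    # Sketch of FIG. 47: widen the lens until a periodic change in received
    # light intensity is found, then narrow it until the signal can be decoded
    # without interference from neighbouring transmitters.
    for _ in range(max_steps):                     # bound the search for this sketch
        if receiver.sees_periodic_intensity_change():
            break
        receiver.widen_lens()                      # Steps 9001b / 9001d
    for _ in range(max_steps):
        if receiver.can_decode_signal():
            return receiver.receive_signal()       # Step 9001e
        receiver.narrow_lens()                     # Steps 9001c / 9001f
    return None                                    # gave up within this sketch's bounds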
 (Estimating the direction of a transmitter)
 FIG. 48 is a flowchart showing a method for estimating the direction of a transmitter in Embodiment 7.
 The process starts in Step 9002a. In Step 9002b the lens of the light receiving unit is set to maximum telephoto, and in Step 9002c it is checked whether there is a periodic change in the intensity of the received light; if YES, the process proceeds to Step 9002d. If NO, the process proceeds to Step 9002e, where the lens of the light receiving unit is widened to receive light from a wider range, and then returns to Step 9002c. In Step 9002d the signal is received. In Step 9002f the lens of the light receiving unit is set to maximum telephoto, the receiving direction is varied along the boundary of the receiving range, the direction in which the received intensity is greatest is found, and the transmitter is estimated to lie in that direction; the process then ends.
 With this method, the direction in which the transmitter is located can be estimated. Alternatively, the lens may first be set to the maximum wide angle and then gradually zoomed toward telephoto.
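 One way to write down the sweep in Step 9002f is sketched below; the receiver object, the angular step, and the method names are assumptions for illustration only:

def estimate_transmitter_direction(receiver, step_deg=5.0):
    # Sketch of Step 9002f of FIG. 48: with the lens at maximum telephoto,
    # vary the receiving direction along the boundary of the receiving range
    # and keep the direction with the strongest received intensity.
    receiver.set_max_telephoto()
    best_direction, best_level = None, float("-inf")
    for direction in receiver.directions_along_range_boundary(step_deg):
        level = receiver.intensity_in_direction(direction)
        if level > best_level:
            best_direction, best_level = direction, level
    return best_direction   # the transmitter is estimated to lie in this direction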
 (Starting reception)
 FIG. 49 is a flowchart showing a method of starting reception in Embodiment 7.
 The process starts in Step 9003a. In Step 9003b it is checked whether a signal has been received from a base station such as Wi-Fi, Bluetooth (registered trademark), or IMES; if YES, the process proceeds to Step 9003c, and if NO, it returns to Step 9003b. In Step 9003c it is checked whether that base station is registered in the receiver or in the server as a trigger for starting reception; if YES, the process proceeds to Step 9003d, signal reception is started, and the process ends in Step 9003e. If NO, the process returns to Step 9003b.
 With this method, reception can be started without the user performing any operation to start it, and power consumption can be kept lower than when reception is performed at all times.
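 The trigger condition of FIG. 49 can be summarized by a sketch such as the following; the beacon scan and the registered trigger list are hypothetical helpers:

def maybe_start_reception(receiver, registered_triggers):
    # Start visible light reception only when a radio beacon (Wi-Fi,
    # Bluetooth, IMES, ...) registered as a reception trigger is detected,
    # saving power compared with receiving continuously.
    beacon = receiver.scan_radio_beacons()                          # Step 9003b
    if beacon is not None and beacon.id in registered_triggers:     # Step 9003c
        receiver.start_visible_light_reception()                    # Step 9003d
        return True
    return False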
 (Generating an ID using information from other media)
 FIG. 50 is a flowchart showing a method of generating an ID using information from other media in Embodiment 7.
 The process starts in Step 9004a. In Step 9004b, the ID of the connected carrier network, Wi-Fi, Bluetooth (registered trademark), or the like, or position information derived from that ID or obtained from GPS or the like, is sent to an upper-bit ID index server. In Step 9004c the upper bits of the visible light ID are received from the upper-bit ID index server, and in Step 9004d the signal from the transmitter is received as the lower bits of the visible light ID. In Step 9004e the upper bits and lower bits of the visible light ID are combined and sent to an ID resolution server, and the process ends in Step 9004f.
 With this method, the upper bits that are common to the area around the receiver can be obtained, the amount of data the transmitter has to send can be reduced, and the speed at which the receiver completes reception can be increased.
 The transmitter may also transmit both the upper bits and the lower bits. In that case, a receiver that uses this method can assemble the ID as soon as it has received the lower bits, while a receiver that does not use this method obtains the ID by receiving the entire ID from the transmitter.
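 A minimal sketch of the ID composition of FIG. 50 is shown below in Python; the 16-bit lower part, the server objects, and the radio context argument are assumptions for illustration:

def resolve_visible_light_id(upper_bit_server, id_resolution_server,
                             radio_context, lower_bits, lower_bit_len=16):
    # Fetch the locally common upper bits of the visible light ID from an
    # index server using radio or position context, combine them with the
    # lower bits received over visible light, and resolve the full ID.
    upper_bits = upper_bit_server.lookup(radio_context)      # e.g. Wi-Fi ID or GPS position
    full_id = (upper_bits << lower_bit_len) | lower_bits     # concatenate upper and lower parts
    return id_resolution_server.resolve(full_id)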
 (Selecting a reception method by frequency separation)
 FIG. 51 is a flowchart showing a method of selecting a reception method by frequency separation in Embodiment 7.
 The process starts in Step 9005a. In Step 9005b the received optical signal is passed through a frequency filter circuit, or is frequency-decomposed by a discrete Fourier series expansion. In Step 9005c it is checked whether a low-frequency component is present; if YES, the process proceeds to Step 9005d, where the signal expressed in the low-frequency domain, for example by frequency modulation, is decoded, and then proceeds to Step 9005e; if NO, the process proceeds directly to Step 9005e. In Step 9005e it is checked whether a high-frequency component is present; if YES, the process proceeds to Step 9005f, where the signal expressed in the high-frequency domain, for example by pulse-position modulation, is decoded, and then proceeds to Step 9005g; if NO, the process proceeds directly to Step 9005g. In Step 9005g reception of the signal is started, and the process ends in Step 9005h.
 With this method, signals modulated by several different modulation schemes can be received.
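 The band check of FIG. 51 could be sketched as follows using an FFT; the 1 kHz split point, the power threshold, and the demodulator callbacks passed in as decode_low and decode_high are assumptions, not the disclosed demodulators:

import numpy as np

def decode_by_frequency_band(samples, sample_rate, decode_low, decode_high,
                             split_hz=1000.0, power_threshold=0.1):
    # Inspect the spectrum of the received light samples and run the
    # low-frequency demodulator (e.g. frequency modulation) and/or the
    # high-frequency demodulator (e.g. pulse-position modulation) depending
    # on where significant power is found.
    spectrum = np.abs(np.fft.rfft(samples - np.mean(samples)))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    total = spectrum.sum() + 1e-12
    results = {}
    if spectrum[freqs < split_hz].sum() / total > power_threshold:
        results["low"] = decode_low(samples)     # signal expressed in the low-frequency band
    if spectrum[freqs >= split_hz].sum() / total > power_threshold:
        results["high"] = decode_high(samples)   # signal expressed in the high-frequency band
    return results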
 (Signal reception when the exposure time is long)
 FIG. 52 is a flowchart showing a signal reception method for cases where the exposure time is long in Embodiment 7.
 The process starts in Step 9030a. In Step 9030b, if the sensitivity can be set, it is set to the maximum. In Step 9030c, if the exposure time can be set, it is set shorter than in the normal shooting mode. In Step 9030d two images are captured and the luminance difference between them is computed; if the position or orientation of the imaging unit changed between the two captures, that change is cancelled so that an image is generated as if it had been captured from the same position and orientation, and the difference is then computed. In Step 9030e the luminance values of the difference image, or of the captured image, are averaged in the direction parallel to the exposure lines. In Step 9030f the averaged values are arranged in the direction perpendicular to the exposure lines and a discrete Fourier transform is performed. In Step 9030g it is determined whether there is a peak near a predetermined frequency, and the process ends in Step 9030h.
 With this method, a signal can be received even when the exposure time is long, for example when the exposure time cannot be set or when a normal image is captured at the same time.
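 As a concrete illustration of Steps 9030d to 9030g, the following Python sketch forms the difference of two frames, averages it along the exposure lines, and checks for a spectral peak near the expected modulation frequency; the line readout rate and the tolerance are device-dependent assumptions:

import numpy as np

def detect_modulation_frequency(frame_a, frame_b, line_rate_hz, target_hz, tol_hz=5.0):
    # Take the difference of two frames, average the luminance along each
    # exposure line (image row), arrange the averages in the direction
    # perpendicular to the lines, and look for a spectral peak near the
    # expected modulation frequency. `line_rate_hz` is the assumed rate at
    # which successive exposure lines are read out.
    diff = frame_a.astype(np.float32) - frame_b.astype(np.float32)
    profile = diff.mean(axis=1)                    # average parallel to the exposure lines
    profile -= profile.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size, d=1.0 / line_rate_hz)
    peak_hz = freqs[np.argmax(spectrum)]
    return abs(peak_hz - target_hz) <= tol_hz, peak_hz

# Example use: found, peak = detect_modulation_frequency(img1, img2, 30_000, 120)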
 When the exposure time is set automatically and the camera is pointed at a transmitter configured as a lighting fixture, the automatic exposure correction function sets the exposure time to roughly between 1/60 second and 1/480 second. When the exposure time cannot be set manually, the signal is received under these conditions. In experiments in which the lighting was blinked periodically, stripes were visible in the direction perpendicular to the exposure lines, and the blinking period could be recognized by image processing, as long as the duration of one period was at least about 1/16 of the exposure time. In this situation the part of the image where the lighting itself appears is too bright for the stripes to be seen clearly, so it is better to determine the signal period from a part of the image where the illumination light is reflected.
 When a scheme that turns the light emitting unit on and off periodically is used, such as frequency shift keying or frequency multiplexing modulation, flicker is harder for humans to perceive at the same modulation frequency than with pulse-position modulation, and flicker is also less likely to appear in video recorded with a video camera. A lower frequency can therefore be used as the modulation frequency. Since the temporal resolution of human vision is about 60 Hz, frequencies at or above this value can be used as modulation frequencies.
 When the modulation frequency is an integer multiple of the receiver's imaging frame rate, pixels at the same position in the two images are captured at moments when the transmitter's light pattern has the same phase, so no bright lines appear in the difference image and reception becomes difficult. Since the imaging frame rate of a receiver is usually 30 fps, reception is easier when the modulation frequency is set to something other than an integer multiple of 30 Hz. Moreover, because receivers have a variety of imaging frame rates, two mutually coprime modulation frequencies may be assigned to the same signal; the transmitter then transmits using the two modulation frequencies alternately, and the receiver can easily recover the signal by receiving at least one of them.
 FIG. 53 is a diagram showing an example of a method of dimming a transmitter (adjusting its brightness).
 By adjusting the ratio between the high-luminance intervals and the low-luminance intervals, the average luminance, and hence the brightness, can be adjusted. If the period T over which high and low luminance repeat is kept constant, the frequency peak remains constant. For example, in each of (a), (b), and (c) of FIG. 53, the time T1 between the first luminance change that rises above the average luminance and the second such luminance change is kept constant; to dim the transmitter, the time spent brighter than the average luminance is shortened, and to brighten it, that time is lengthened. In FIG. 53, (b) and (c) are dimmed darker than (a), and (c) is dimmed the darkest. In this way, dimming can be performed while a signal with the same meaning continues to be transmitted.
 The average luminance may also be changed by changing the luminance value of the high-luminance intervals, of the low-luminance intervals, or of both.
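 The constant-period, variable-duty behaviour of FIG. 53 can be sketched as follows; the sample counts and luminance levels are arbitrary illustrative values:

def one_period_pattern(duty, samples_per_period=100, high_level=1.0, low_level=0.0):
    # Sketch of the dimming method of FIG. 53: the repetition period, and hence
    # the frequency peak carrying the signal, stays fixed (the pattern below is
    # replayed at a constant rate), and only the fraction of the period spent
    # at the high luminance level (`duty`) changes the apparent brightness.
    high_samples = int(round(duty * samples_per_period))
    return [high_level] * high_samples + [low_level] * (samples_per_period - high_samples)

bright = one_period_pattern(duty=0.7)   # brighter setting
dim = one_period_pattern(duty=0.2)      # darker setting, same period length
assert len(bright) == len(dim)          # identical period, different average luminance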
 FIG. 54 is a diagram showing an example of a way to configure the dimming function of a transmitter.
 Because the precision of components is limited, even transmitters given the same dimming setting differ slightly in brightness from one another. When transmitters are installed side by side, however, differences in brightness between adjacent transmitters look unnatural. The user therefore adjusts the brightness of each transmitter by operating a dimming correction operation unit. The dimming correction unit holds a correction value, and the dimming control unit controls the brightness of the light emitting unit according to that correction value. When the user changes the degree of dimming by operating the dimming operation unit, the dimming control unit controls the brightness of the light emitting unit based on the changed dimming setting value and the correction value held in the dimming correction unit. The dimming control unit also conveys the dimming setting value to other transmitters through an interlocking dimming unit. When a dimming setting value is conveyed from another device through the interlocking dimming unit, the dimming control unit controls the brightness of the light emitting unit based on that dimming setting value and the correction value held in the dimming correction unit.
 According to one embodiment of the present invention, there may be provided a control method for controlling an information communication device that transmits a signal by changing the luminance of a light emitter, the method causing a computer of the information communication device to execute: a determination step of determining, by modulating a signal to be transmitted that contains a plurality of different signals, a luminance change pattern of a different frequency for each of the different signals; and a transmission step of transmitting the signal to be transmitted by changing the luminance of the light emitter such that, in the time allotted to a single frequency, only the luminance change pattern obtained by modulating a single signal is included.
 For example, if the time allotted to a single frequency contained luminance change patterns obtained by modulating several signals, the waveform of the luminance change over time would become complicated and proper reception would be difficult. By controlling the transmission so that the time allotted to a single frequency contains only the luminance change pattern of a single signal, reception can be performed more reliably.
 According to one embodiment of the present invention, the determination step may determine the transmission counts so that, within a predetermined time, the number of times one of the plurality of different signals is transmitted differs from the number of times the other signals are transmitted.
 Making the number of transmissions of one signal different from that of the other signals makes it possible to prevent flicker during transmission.
 According to one embodiment of the present invention, the determination step may make the number of transmissions of a signal corresponding to a higher frequency, within the predetermined time, larger than the number of transmissions of the other signals.
 When frequency conversion is performed on the receiving side, a signal corresponding to a higher frequency yields a smaller luminance value; by transmitting it more often, the luminance value obtained in the frequency conversion can be increased.
 According to one embodiment of the present invention, the luminance change pattern may be a pattern in which the waveform of the luminance change over time is a rectangular wave, a triangular wave, or a sawtooth wave.
 Using a rectangular wave or the like allows reception to be performed more reliably.
 According to one embodiment of the present invention, when the average luminance of the light emitter is to be increased, the time during which the luminance of the light emitter exceeds a predetermined value within the time allotted to a single frequency may be made longer than when the average luminance of the light emitter is to be decreased.
 By adjusting, within the time allotted to a single frequency, how long the luminance of the light emitter exceeds the predetermined value, the signal can be transmitted while the average luminance of the light emitter is adjusted. For example, when the light emitter is used for illumination, a signal can be transmitted while the overall brightness is made darker or brighter.
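 The control method above can be pictured with the following Python sketch, in which each signal value is assigned its own frequency, every time slot carries the square-wave pattern of exactly one signal, higher-frequency signals are scheduled more often, and the duty cycle of the square wave sets the average luminance; all numeric values are assumptions for illustration:

import numpy as np

def build_transmission(symbol_freqs_hz, slot_s=0.01, sample_rate=100_000, duty=0.5):
    # Each signal (symbol) is assigned its own frequency, each time slot carries
    # the square-wave pattern of exactly one symbol, and symbols with higher
    # frequencies are scheduled more often so their spectral peaks stay
    # comparable on the receiving side.
    base = min(symbol_freqs_hz)
    schedule = []
    for f in symbol_freqs_hz:
        schedule += [f] * max(1, int(round(f / base)))   # more slots for higher frequencies
    t = np.arange(int(slot_s * sample_rate)) / sample_rate
    waveform = []
    for f in schedule:                                   # one symbol (one frequency) per slot
        phase = (t * f) % 1.0
        waveform.append((phase < duty).astype(float))    # square wave; `duty` sets brightness
    return schedule, np.concatenate(waveform)

schedule, wave = build_transmission([1000.0, 2000.0, 3000.0], duty=0.7)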
 By using an API (application programming interface, a means of using functions of the OS) that sets the exposure time, the receiver can set the exposure time to a predetermined value and receive the visible light signal stably. Likewise, by using an API that sets the sensitivity, the receiver can set the sensitivity to a predetermined value and receive the visible light signal stably even when the transmitted signal is dark or bright.
 (Embodiment 8)
 In this embodiment, application examples using a receiver such as the smartphone of the above embodiments and a transmitter that transmits information as a blinking pattern of an LED or an organic EL element will be described.
Here, EX zoom will be described.
FIG. 55 is a diagram for explaining EX zoom.
Zoom, that is, a method of obtaining a larger image, includes optical zoom, which adjusts the focal length of the lens to change the size of the image formed on the image sensor; digital zoom, which obtains a larger image by interpolating the captured image through digital processing; and EX zoom, which obtains a larger image by changing which of the image sensor's imaging elements are used for imaging. EX zoom can be used when the number of imaging elements included in the image sensor is larger than the resolution of the captured image.
For example, in the image sensor 10080a shown in FIG. 55, 32 × 24 imaging elements are arranged in a matrix, that is, 32 elements horizontally and 24 elements vertically. When an image with a resolution of 16 × 12 is obtained by imaging with the image sensor 10080a, as shown in (a) of FIG. 55, only 16 × 12 imaging elements distributed evenly over the whole of the image sensor 10080a (for example, the imaging elements indicated by black squares in the image sensor 10080a in (a) of FIG. 55) are used for imaging, out of the 32 × 24 imaging elements. That is, of the imaging elements arranged in the vertical and horizontal directions, only the odd-numbered or even-numbered elements are used for imaging. As a result, an image 10080b with the desired resolution is obtained. Note that in FIG. 55 a subject appears on the image sensor 10080a; this is only to make the correspondence between each imaging element and the image obtained by imaging easier to understand.
When the receiver including this image sensor 10080a captures a wide range in order to search for transmitters or to receive information from many transmitters, it performs imaging using only a subset of imaging elements distributed evenly over the whole of the image sensor 10080a.
When performing EX zoom, the receiver uses only a subset of imaging elements that are densely arranged in a local region of the image sensor 10080a, as shown in (b) of FIG. 55 (for example, the 16 × 12 imaging elements indicated by black squares in the image sensor 10080a in (b) of FIG. 55). As a result, the portion of the image 10080b corresponding to that subset of imaging elements is enlarged, and an image 10080d is obtained. By capturing the transmitter at a larger size with such EX zoom, the visible light signal can be received over a longer time, the reception speed improves, and the visible light signal can be received from farther away.
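A minimal sketch of the two readout patterns described above, assuming the sensor output is available as a 24 × 32 array of pixel values; the element choices (every second element for the wide view, a dense 12 × 16 block for EX zoom) follow FIG. 55 and are otherwise illustrative.

import numpy as np

def wide_readout(sensor: np.ndarray) -> np.ndarray:
    """Use every second element in both directions (FIG. 55(a)): wide field of view."""
    return sensor[::2, ::2]                          # 24x32 -> 12x16

def ex_zoom_readout(sensor: np.ndarray, top: int, left: int) -> np.ndarray:
    """Use a locally dense 12x16 block of elements (FIG. 55(b)): enlarged image."""
    return sensor[top:top + 12, left:left + 16]

sensor = np.arange(24 * 32).reshape(24, 32)          # stand-in for raw sensor data
image_wide = wide_readout(sensor)                    # corresponds to image 10080b
image_zoom = ex_zoom_readout(sensor, top=6, left=8)  # corresponds to image 10080d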
With digital zoom, the number of exposure lines that receive the visible light signal cannot be increased and the reception time of the visible light signal does not increase, so it is preferable to use the other zoom methods as much as possible. Optical zoom requires physical movement of the lens or image sensor, whereas EX zoom is performed only by changing electronic settings, so it has the advantage that zooming takes little time. From this viewpoint, the priority order of the zoom methods is (1) EX zoom, (2) optical zoom, (3) digital zoom. The receiver may select and use one or more of these zoom methods according to this priority order and the required zoom magnification. Note that, in the imaging methods shown in (a) and (b) of FIG. 55, image noise can be suppressed by also making use of the imaging elements that are not otherwise in use.
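The priority order above can be expressed as a simple selection rule. The sketch below is illustrative only and assumes each zoom method reports the maximum magnification it can provide.

def choose_zoom(required: float, ex_max: float, optical_max: float) -> list:
    """Combine zoom methods in the order EX zoom > optical zoom > digital zoom
    until the required magnification is reached (illustrative values only)."""
    plan, remaining = [], required
    for name, available in (("EX", ex_max), ("optical", optical_max)):
        use = min(remaining, available)
        if use > 1.0:
            plan.append((name, use))
            remaining /= use
    if remaining > 1.0:
        plan.append(("digital", remaining))   # digital zoom only as a last resort
    return plan

print(choose_zoom(required=6.0, ex_max=2.0, optical_max=2.0))
# [('EX', 2.0), ('optical', 2.0), ('digital', 1.5)]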
(Embodiment 9)
In this embodiment, application examples that use a receiver such as the smartphone of each of the above embodiments together with a transmitter that transmits information as a blinking pattern of an LED or an organic EL element will be described.
In this embodiment, an exposure time is set for each exposure line or for each imaging element.
FIG. 56, FIG. 57, and FIG. 58 are diagrams illustrating examples of the signal reception method in Embodiment 9.
As shown in FIG. 56, in the image sensor 10010a, which is the imaging unit of the receiver, an exposure time is set for each exposure line. That is, a long exposure time for normal imaging is set for certain exposure lines (the white exposure lines in FIG. 56), and a short exposure time for visible light imaging is set for the other exposure lines (the black exposure lines in FIG. 56). For example, the long exposure time and the short exposure time are set alternately for the exposure lines arranged in the vertical direction. As a result, when imaging a transmitter that transmits a visible light signal by changing its luminance, normal imaging and visible light imaging (visible light communication) can be performed almost simultaneously. The two exposure times may be set alternately line by line, may be set every several lines, or separate exposure times may be set for the upper part and the lower part of the image sensor 10010a. By using two exposure times in this way and collecting the data obtained from the exposure lines set to the same exposure time, a normal captured image 10010b and a visible light captured image 10010c, which is a bright line image showing a pattern of bright lines, are obtained. In the normal captured image 10010b, the portions not captured with the long exposure time (that is, the image portions corresponding to the exposure lines set to the short exposure time) are missing, so a preview image 10010d can be displayed by interpolating those portions. Information obtained by visible light communication can be superimposed on the preview image 10010d; this information is associated with the visible light signal obtained by decoding the bright line pattern contained in the visible light captured image 10010c. The receiver may also store the normal captured image 10010b, or the image obtained by interpolating it, as the captured image, and may attach the received visible light signal, or the information associated with it, to the stored captured image as additional information.
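A minimal sketch of this line-by-line split, assuming the raw frame is a numpy array whose even-indexed rows were exposed with the long (normal) time and whose odd-indexed rows were exposed with the short (visible light) time, as in image sensor 10010a; which parity is long is an assumption, and the interpolation is a simple average of the neighboring long-exposure rows.

import numpy as np

def split_by_exposure(frame: np.ndarray):
    """Even-indexed rows: long exposure (normal imaging). Odd-indexed rows: short exposure."""
    normal_rows = frame[0::2, :]      # corresponds to normal captured image 10010b
    bright_rows = frame[1::2, :]      # corresponds to visible light captured image 10010c
    return normal_rows, bright_rows

def preview_from_normal(frame: np.ndarray) -> np.ndarray:
    """Fill the short-exposure rows by averaging the long-exposure rows above and below."""
    preview = frame.astype(float)
    for r in range(1, frame.shape[0] - 1, 2):
        preview[r, :] = (preview[r - 1, :] + preview[r + 1, :]) / 2
    if frame.shape[0] % 2 == 0:       # bottom row was short-exposure: copy the row above
        preview[-1, :] = preview[-2, :]
    return preview                     # corresponds to preview image 10010d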
As shown in FIG. 57, an image sensor 10011a may be used instead of the image sensor 10010a. In the image sensor 10011a, the exposure time is set not for each exposure line but for each column of imaging elements arranged along the direction perpendicular to the exposure lines (hereinafter referred to as a vertical line). That is, a long exposure time for normal imaging is set for certain vertical lines (the white vertical lines in FIG. 57), and a short exposure time for visible light imaging is set for the other vertical lines (the black vertical lines in FIG. 57). In this case, as in the image sensor 10010a, exposure starts at a different timing for each exposure line, but within each exposure line the exposure time differs from imaging element to imaging element. The receiver obtains a normal captured image 10011b and a visible light captured image 10011c by imaging with the image sensor 10011a, and generates and displays a preview image 10011d based on the normal captured image 10011b and the information associated with the visible light signal obtained from the visible light captured image 10011c.
In the image sensor 10011a, unlike the image sensor 10010a, all exposure lines can be used for visible light imaging. As a result, the visible light captured image 10011c obtained by the image sensor 10011a contains more bright lines than the visible light captured image 10010c, so the reception accuracy of the visible light signal can be increased.
As shown in FIG. 58, an image sensor 10012a may be used instead of the image sensor 10010a. In the image sensor 10012a, the exposure time is set for each imaging element so that the same exposure time is not set for consecutive imaging elements in either the horizontal or the vertical direction. That is, the exposure times are set so that the imaging elements given the long exposure time and the imaging elements given the short exposure time are distributed in a grid-like, checkered pattern. In this case too, as in the image sensor 10010a, exposure starts at a different timing for each exposure line, but within each exposure line the exposure time differs from imaging element to imaging element. The receiver obtains a normal captured image 10012b and a visible light captured image 10012c by imaging with the image sensor 10012a, and generates and displays a preview image 10012d based on the normal captured image 10012b and the information associated with the visible light signal obtained from the visible light captured image 10012c.
Since the normal captured image 10012b obtained by the image sensor 10012a has data from imaging elements arranged in a grid, that is, distributed uniformly, it can be interpolated and resized more accurately than the normal captured images 10010b and 10011b. The visible light captured image 10012c is generated by imaging using all exposure lines of the image sensor 10012a; in other words, in the image sensor 10012a, unlike the image sensor 10010a, all exposure lines can be used for visible light imaging. As a result, the visible light captured image 10012c, like the visible light captured image 10011c, contains more bright lines than the visible light captured image 10010c, so the visible light signal can be received with high accuracy.
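A short sketch of building the checkered exposure-time assignment of image sensor 10012a and of separating a captured frame into the two images; the array size is illustrative, and the missing pixels are left as NaN so they can be interpolated from their neighbors afterwards.

import numpy as np

H, W = 24, 32                                   # illustrative sensor size
rows, cols = np.indices((H, W))
long_mask = (rows + cols) % 2 == 0              # True: long exposure, False: short exposure

def split_checkerboard(frame: np.ndarray):
    """Return (normal samples, visible light samples) with NaN where the other
    exposure time was used, ready for neighbor interpolation."""
    normal = np.where(long_mask, frame.astype(float), np.nan)    # toward image 10012b
    bright = np.where(~long_mask, frame.astype(float), np.nan)   # toward image 10012c
    return normal, bright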
Here, interlaced display of the preview image will be described.
FIG. 59 is a diagram illustrating an example of the screen display method of the receiver in Embodiment 9.
The receiver including the image sensor 10010a shown in FIG. 56 swaps, at predetermined intervals, the exposure time set for the odd-numbered exposure lines (hereinafter, odd lines) and the exposure time set for the even-numbered exposure lines (hereinafter, even lines). For example, as shown in FIG. 59, at time t1 the receiver sets the long exposure time for the imaging elements of the odd lines and the short exposure time for the imaging elements of the even lines, and performs imaging with these exposure times. At time t2, the receiver sets the short exposure time for the imaging elements of the odd lines and the long exposure time for the imaging elements of the even lines, and performs imaging with these exposure times. At time t3 the receiver performs imaging with the exposure times set as at time t1, and at time t4 with the exposure times set as at time t2.
At time t1, the receiver acquires Image1, which contains the images obtained from the odd lines (hereinafter, odd line images) and the images obtained from the even lines (hereinafter, even line images). At this time, since the exposure time of the even lines is short, the subject does not appear clearly in the even line images. The receiver therefore generates interpolated line images by interpolating the pixel values of the odd line images, and displays a preview image containing the interpolated line images in place of the even line images. That is, in the preview image, odd line images and interpolated line images are arranged alternately.
At time t2, the receiver acquires Image2, which contains odd line images and even line images. At this time, since the exposure time of the odd lines is short, the subject does not appear clearly in the odd line images. The receiver therefore displays a preview image containing the odd line images of Image1 in place of the odd line images of Image2. That is, in the preview image, the odd line images of Image1 and the even line images of Image2 are arranged alternately.
At time t3, the receiver acquires Image3, which contains odd line images and even line images. At this time, as at time t1, the exposure time of the even lines is short, so the subject does not appear clearly in the even line images. The receiver therefore displays a preview image containing the even line images of Image2 in place of the even line images of Image3; that is, the even line images of Image2 and the odd line images of Image3 are arranged alternately in the preview image. At time t4, the receiver acquires Image4, which contains odd line images and even line images. At this time, as at time t2, the exposure time of the odd lines is short, so the subject does not appear clearly in the odd line images. The receiver therefore displays a preview image containing the odd line images of Image3 in place of the odd line images of Image4; that is, the odd line images of Image3 and the even line images of Image4 are arranged alternately in the preview image.
In this way, the receiver performs so-called interlaced display, displaying an image that combines even line images and odd line images acquired at different times.
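A minimal sketch of this interlaced preview, assuming the frames arrive as numpy arrays and that frame 0 exposes the even-indexed rows with the long time, frame 1 the odd-indexed rows, and so on alternately (0-based indexing, unlike the 1-based line numbering above); each preview keeps, for each row parity, the rows from the most recent frame in which that parity received the long exposure.

import numpy as np

def interlaced_previews(frames):
    """Yield interlaced preview images as in FIG. 59."""
    preview = None
    for i, frame in enumerate(frames):
        parity = i % 2                         # 0: even-indexed rows long, 1: odd-indexed rows long
        if preview is None:
            # first frame: interpolation of its short-exposure rows is omitted in this sketch
            preview = frame.astype(float)
        preview[parity::2, :] = frame[parity::2, :]
        yield preview.copy()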
Such a receiver can display a fine preview image while performing visible light imaging. Note that the set of imaging elements sharing the same exposure time may be a set of imaging elements arranged along the horizontal direction of the exposure lines as in the image sensor 10010a, a set of imaging elements arranged along the direction perpendicular to the exposure lines as in the image sensor 10011a, or a set of imaging elements arranged in a checkered pattern as in the image sensor 10012a. The receiver may also store the preview image as captured data.
Next, the spatial ratio between normal imaging and visible light imaging will be described.
FIG. 60 is a diagram illustrating an example of the signal reception method in Embodiment 9.
In the image sensor 10014b provided in the receiver, a long or short exposure time is set for each exposure line, as in the image sensor 10010a described above. In the image sensor 10014b, the ratio of the number of imaging elements given the long exposure time to the number of imaging elements given the short exposure time is 1:1. This ratio is the ratio between normal imaging and visible light imaging and is hereinafter referred to as the spatial ratio.
In this embodiment, however, the spatial ratio need not be 1:1. For example, the receiver may include an image sensor 10014a, in which there are more imaging elements with the short exposure time than with the long exposure time, giving a spatial ratio of 1:N (N > 1). The receiver may instead include an image sensor 10014c, in which there are fewer imaging elements with the short exposure time than with the long exposure time, giving a spatial ratio of N:1 (N > 1). Instead of the image sensors 10014a to 10014c, the receiver may include any of image sensors 10015a to 10015c, in which the exposure time is set for each vertical line described above and which have spatial ratios of 1:N, 1:1, and N:1, respectively.
In image sensors 10014a and 10015a there are many imaging elements with the short exposure time, so the reception accuracy or reception speed of the visible light signal can be increased. In image sensors 10014c and 10015c, on the other hand, there are many imaging elements with the long exposure time, so a fine preview image can be displayed.
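A small sketch of assigning exposure times line by line according to a given spatial ratio; the same mask could equally be applied per vertical line or per imaging element, and the sensor size is illustrative.

import numpy as np

def exposure_mask(num_lines: int, n_long: int, n_short: int) -> np.ndarray:
    """Boolean mask per exposure line: True = long exposure, False = short exposure.
    The pattern repeats every (n_long + n_short) lines, giving the spatial ratio n_long:n_short."""
    pattern = np.array([True] * n_long + [False] * n_short)
    reps = int(np.ceil(num_lines / len(pattern)))
    return np.tile(pattern, reps)[:num_lines]

mask_1_1 = exposure_mask(24, 1, 1)   # image sensor 10014b
mask_1_n = exposure_mask(24, 1, 3)   # more short-exposure lines: better reception (like 10014a)
mask_n_1 = exposure_mask(24, 3, 1)   # more long-exposure lines: finer preview (like 10014c)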
The receiver may also perform interlaced display as shown in FIG. 59 using the image sensors 10014a, 10014c, 10015a, and 10015c.
Next, the time ratio between normal imaging and visible light imaging will be described.
FIG. 61 is a diagram illustrating an example of the signal reception method in Embodiment 9.
As shown in (a) of FIG. 61, the receiver may switch the imaging mode between the normal imaging mode and the visible light imaging mode for every frame. The normal imaging mode is an imaging mode in which the long exposure time for normal imaging is set for all imaging elements of the receiver's image sensor, and the visible light imaging mode is an imaging mode in which the short exposure time for visible light imaging is set for all imaging elements. By switching between the long and short exposure times in this way, the receiver can display a preview image from imaging with the long exposure time while receiving the visible light signal from imaging with the short exposure time.
When the long exposure time is determined by automatic exposure, the receiver may ignore the images obtained with the short exposure time and perform automatic exposure based only on the brightness of the images obtained with the long exposure time. In this way, an appropriate long exposure time can be determined.
As shown in (b) of FIG. 61, the receiver may also switch the imaging mode between the normal imaging mode and the visible light imaging mode for every set of frames. When switching the exposure time takes time, or when it takes time for the exposure time to stabilize, changing the exposure time for each set of frames as shown in (b) of FIG. 61 makes it possible to perform visible light imaging (reception of the visible light signal) and normal imaging together. Moreover, the more frames each set contains, the fewer exposure time switches are needed, so power consumption and heat generation in the receiver can be suppressed.
Here, the ratio between the number of frames generated consecutively by long-exposure imaging in the normal imaging mode and the number of frames generated consecutively by short-exposure imaging in the visible light imaging mode (hereinafter referred to as the time ratio) need not be 1:1. In the cases shown in (a) and (b) of FIG. 61 the time ratio is 1:1, but other ratios are possible.
For example, as shown in (c) of FIG. 61, the receiver may use more visible light imaging mode frames than normal imaging mode frames. This increases the reception speed of the visible light signal. If the frame rate of the preview image is at or above a predetermined rate, differences in the preview image due to the frame rate are not perceived by the human eye. When the imaging frame rate is sufficiently high, for example 120 fps, the receiver sets the visible light imaging mode for three consecutive frames and the normal imaging mode for the next frame. The receiver can thereby receive the visible light signal at high speed while displaying the preview image at a frame rate of 30 fps, which is sufficiently higher than the predetermined rate. Since the number of switches is also reduced, the effect described for (b) of FIG. 61 is obtained as well.
As shown in (d) of FIG. 61, the receiver may instead use more normal imaging mode frames than visible light imaging mode frames. Increasing the number of normal imaging mode frames, that is, frames obtained with the long exposure time, allows the preview image to be displayed smoothly. The number of times the visible light signal reception processing is performed also decreases, which saves power. Since the number of switches is reduced, the effect described for (b) of FIG. 61 is obtained as well.
As shown in (e) of FIG. 61, the receiver may first switch the imaging mode every frame as in (a) of FIG. 61 and then, once reception of the visible light signal is complete, increase the number of normal imaging mode frames as in (d) of FIG. 61. After reception of the visible light signal is complete, the receiver can thereby continue searching for new visible light signals while displaying the preview image smoothly. Since the number of switches is reduced, the effect described for (b) of FIG. 61 is obtained as well.
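The frame-by-frame mode switching of FIG. 61 can be sketched as a simple scheduler; the ratios and the number of frames generated below are illustrative.

from itertools import cycle, islice

def mode_schedule(normal_frames: int, visible_frames: int, total: int):
    """Repeat `visible_frames` frames in visible light mode followed by
    `normal_frames` frames in normal mode (time ratio normal:visible)."""
    pattern = ["visible"] * visible_frames + ["normal"] * normal_frames
    return list(islice(cycle(pattern), total))

print(mode_schedule(1, 1, 8))   # FIG. 61(a): alternate every frame
print(mode_schedule(1, 3, 8))   # FIG. 61(c): e.g. 120 fps capture with a 30 fps normal preview
print(mode_schedule(3, 1, 8))   # FIG. 61(d): smoother preview, lower power consumption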
FIG. 62 is a flowchart illustrating an example of the signal reception method in Embodiment 9.
The receiver starts visible light reception, that is, the process of receiving a visible light signal (step S10017a), and sets the long/short exposure time ratio to a value specified by the user (step S10017b). The long/short exposure time ratio is at least one of the spatial ratio and the time ratio described above. The user may specify only the spatial ratio, only the time ratio, or both, or the receiver may set them automatically regardless of any user specification.
Next, the receiver determines whether the reception performance is at or below a predetermined value (step S10017c). If it determines that the performance is at or below the predetermined value (Y in step S10017c), the receiver raises the proportion of the short exposure time (step S10017d), which improves reception performance. In terms of the spatial ratio, the proportion of the short exposure time is the ratio of the number of imaging elements given the short exposure time to the number of imaging elements given the long exposure time; in terms of the time ratio, it is the ratio of the number of frames generated consecutively in the visible light imaging mode to the number of frames generated consecutively in the normal imaging mode.
Next, the receiver receives at least part of the visible light signal and determines whether a priority is set for the received part (hereinafter referred to as the received signal) (step S10017e). When a priority is set, an identifier indicating the priority is included in the received signal. If the receiver determines that a priority is set (Y in step S10017e), it sets the long/short exposure time ratio according to that priority (step S10017f); that is, the higher the priority, the higher the receiver sets the proportion of the short exposure time. For example, an emergency light configured as a transmitter may emit, through its luminance change, an identifier indicating a high priority. In this case the receiver can raise the proportion of the short exposure time to increase the reception speed and promptly display an evacuation route or the like.
Next, the receiver determines whether reception of the entire visible light signal is complete (step S10017g). If it determines that reception is not complete (N in step S10017g), the receiver repeats the processing from step S10017c. If it determines that reception is complete (Y in step S10017g), the receiver raises the proportion of the long exposure time and shifts to a power saving mode (step S10017h). In terms of the spatial ratio, the proportion of the long exposure time is the ratio of the number of imaging elements given the long exposure time to the number of imaging elements given the short exposure time; in terms of the time ratio, it is the ratio of the number of frames generated consecutively in the normal imaging mode to the number of frames generated consecutively in the visible light imaging mode. This allows the preview image to be displayed smoothly without performing unnecessary visible light reception.
Next, the receiver determines whether it has found another visible light signal (step S10017i). If it determines that it has (Y in step S10017i), the receiver repeats the processing from step S10017b.
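The flow of FIG. 62 can be sketched as the loop below. The callbacks measure_performance and receive_some, the threshold, and the ratio values are hypothetical placeholders standing in for the receiver's actual measurement and reception routines; detection of another signal (step S10017i) would restart the procedure from the user-specified ratio.

def visible_light_reception(user_ratio: float, measure_performance, receive_some) -> float:
    """Sketch of FIG. 62. measure_performance() returns a reception quality figure in [0, 1];
    receive_some() returns (done, priority), where priority is None when none is signaled."""
    short_ratio = user_ratio                     # S10017b: user-specified long/short ratio
    while True:
        if measure_performance() <= 0.5:         # S10017c/S10017d: poor reception
            short_ratio = max(short_ratio, 3.0)  #   -> raise the short-exposure proportion
        done, priority = receive_some()          # S10017e: part of the signal received
        if priority is not None:                 # S10017f: priority encoded in the signal
            short_ratio = max(short_ratio, priority)
        if done:                                 # S10017g: reception complete
            short_ratio = 0.25                   # S10017h: favor long exposure, power saving mode
            return short_ratio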
Next, simultaneous execution of visible light imaging and normal imaging will be described.
FIG. 63 is a diagram illustrating an example of the signal reception method in Embodiment 9.
The receiver may set two or more exposure times for the image sensor. In that case, as shown in (a) of FIG. 63, each exposure line included in the image sensor is exposed continuously for the longest of the set exposure times. For each exposure line, the receiver reads out the imaging data obtained by the exposure of that line at the point when each of the set exposure times has elapsed. The receiver does not reset the read imaging data until the longest exposure time has elapsed. Therefore, by recording the accumulated values of the read imaging data, the receiver can obtain imaging data for multiple exposure times with only a single exposure of the longest duration. The image sensor itself may or may not record the accumulated values of the imaging data; if it does not, the component of the receiver that reads data from the image sensor performs the accumulation, that is, records the accumulated values of the imaging data.
For example, when two exposure times are set, as shown in (a) of FIG. 63, the receiver reads out the visible light imaging data containing the visible light signal generated by the short exposure, and then reads out the normal imaging data generated by the long exposure.
In this way, visible light imaging, that is, imaging for receiving the visible light signal, and normal imaging can be performed at the same time, so normal imaging can be carried out while receiving the visible light signal. Furthermore, by using data from multiple exposure times, signal frequencies above the limit given by the sampling theorem can be recognized, so high-frequency signals and high-density modulated signals can be received.
Furthermore, when outputting the imaging data, the receiver outputs a data string that contains the imaging data as an imaging data body, as shown in (b) of FIG. 63. That is, the receiver generates and outputs the data string by attaching, to the imaging data body, additional information consisting of an imaging mode identifier indicating the imaging mode (visible light imaging or normal imaging), an imaging element identifier for specifying the imaging element or the exposure line to which it belongs, an imaging data number indicating which exposure time the imaging data body corresponds to, and an imaging data length indicating the size of the imaging data body. With the readout method described with reference to (a) of FIG. 63, the pieces of imaging data are not necessarily output in exposure line order, so attaching the additional information shown in (b) of FIG. 63 makes it possible to identify which exposure line each piece of imaging data belongs to.
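The data string of FIG. 63(b) can be sketched as a small packed record. The field widths and byte order are assumptions for illustration, since the description does not specify them.

import struct

def pack_imaging_data(mode_id: int, element_id: int, data_number: int, body: bytes) -> bytes:
    """Prepend the additional information (imaging mode identifier, imaging element identifier,
    imaging data number, imaging data length) to the imaging data body.
    Field widths (1, 2, 1, 4 bytes, big-endian) are illustrative assumptions."""
    header = struct.pack(">BHBI", mode_id, element_id, data_number, len(body))
    return header + body

packet = pack_imaging_data(mode_id=1, element_id=42, data_number=0, body=b"\x10\x80\x3f")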
FIG. 64 is a flowchart showing the processing of the reception program in Embodiment 9.
This reception program is a program that causes a computer provided in the receiver to execute, for example, the processing shown in FIG. 56 to FIG. 63.
That is, this reception program is a reception program for receiving information from a light emitter that changes in luminance. Specifically, the reception program causes the computer to execute step SA31, step SA32, and step SA33. In step SA31, a first exposure time is set for some of the K imaging elements (K is an integer of 4 or more) included in the image sensor, and a second exposure time shorter than the first exposure time is set for the remaining imaging elements. In step SA32, the image sensor images the subject, which is a light emitter changing in luminance, with the set first and second exposure times, thereby acquiring a normal image corresponding to the output of the imaging elements given the first exposure time, and a bright line image, that is, an image corresponding to the output of the imaging elements given the second exposure time and containing a bright line for each of the exposure lines included in the image sensor. In step SA33, information is acquired by decoding the bright line pattern contained in the acquired bright line image.
In this way, imaging is performed by both the imaging elements given the first exposure time and the imaging elements given the second exposure time, so the normal image and the bright line image can be acquired in a single imaging operation of the image sensor. That is, normal image capture and acquisition of information by visible light communication can be performed at the same time.
In the exposure time setting step SA31, the first exposure time may be set for some of the L imaging element lines (L is an integer of 4 or more) included in the image sensor, and the second exposure time may be set for the remaining imaging element lines. Here, each of the L imaging element lines consists of a plurality of imaging elements of the image sensor arranged in a row.
This makes it possible to set the exposure time for each imaging element line, a larger unit, rather than setting it individually for each imaging element, a smaller unit, which reduces the processing load.
For example, each of the L imaging element lines is an exposure line included in the image sensor, as shown in FIG. 56. Alternatively, each of the L imaging element lines consists of a plurality of imaging elements arranged along the direction perpendicular to the exposure lines of the image sensor, as shown in FIG. 57.
As shown in FIG. 59, in the exposure time setting step SA31, one of the first and second exposure times may be set as the common exposure time for the odd-numbered imaging element lines of the L imaging element lines included in the image sensor, and the other of the first and second exposure times may be set as the common exposure time for the even-numbered imaging element lines. When the exposure time setting step SA31, the image acquisition step SA32, and the information acquisition step SA33 are repeated, the repeated exposure time setting step SA31 may swap the exposure time that was set for the odd-numbered imaging element lines in the immediately preceding exposure time setting step SA31 with the exposure time that was set for the even-numbered imaging element lines.
In this way, every time a normal image is acquired, the imaging element lines used for the acquisition can be switched between the odd-numbered lines and the even-numbered lines. As a result, the sequentially acquired normal images can be displayed by interlacing. In addition, by combining two consecutively acquired normal images to complement each other, a new normal image containing both the image from the odd-numbered imaging element lines and the image from the even-numbered imaging element lines can be generated.
As shown in FIG. 60, in the exposure time setting step SA31, the setting mode may be switched between a normal priority mode and a visible light priority mode. When the normal priority mode is selected, the number of imaging elements given the first exposure time may be made larger than the number of imaging elements given the second exposure time. When the visible light priority mode is selected, the number of imaging elements given the first exposure time may be made smaller than the number of imaging elements given the second exposure time.
Thus, when the setting mode is switched to the normal priority mode, the image quality of the normal image can be improved, and when it is switched to the visible light priority mode, the efficiency of receiving information from the light emitter can be improved.
As shown in FIG. 58, in the exposure time setting step SA31, the exposure time may be set for each imaging element of the image sensor so that the imaging elements given the first exposure time and the imaging elements given the second exposure time are distributed in a checkered pattern.
In this way, the imaging elements given the first exposure time and the imaging elements given the second exposure time are each distributed uniformly, so a normal image and a bright line image without bias in image quality in the horizontal and vertical directions can be acquired.
FIG. 65 is a block diagram of a receiving device in Embodiment 9.
The receiving device A30 is the above-described receiver that executes, for example, the processing shown in FIG. 56 to FIG. 63.
That is, the receiving device A30 is a receiving device that receives information from a light emitter changing in luminance, and includes a multiple exposure time setting unit A31, an imaging unit A32, and a decoding unit A33. The multiple exposure time setting unit A31 sets the first exposure time for some of the K imaging elements (K is an integer of 4 or more) included in the image sensor, and sets the second exposure time, which is shorter than the first, for the remaining imaging elements. The imaging unit A32 causes the image sensor to image the subject, a light emitter changing in luminance, with the set first and second exposure times, thereby acquiring a normal image corresponding to the output of the imaging elements given the first exposure time, and a bright line image corresponding to the output of the imaging elements given the second exposure time and containing a bright line for each of the exposure lines included in the image sensor. The decoding unit A33 acquires information by decoding the bright line pattern contained in the acquired bright line image. Such a receiving device A30 achieves the same effects as the reception program described above.
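The structure of receiving device A30 can be sketched as three cooperating methods. The class and method names are assumptions for illustration, and the decoding step is reduced to a thresholding placeholder rather than a real demodulator.

import numpy as np

class ReceiverA30:
    """Sketch of FIG. 65: exposure time setting unit A31, imaging unit A32, decoding unit A33."""

    def set_exposure_times(self, num_lines: int) -> np.ndarray:        # A31
        # Assign half the lines the first (long) exposure time, half the second (short) one.
        return np.arange(num_lines) % 2 == 0                           # True: long exposure

    def capture(self, frame: np.ndarray, long_mask: np.ndarray):       # A32
        normal_image = frame[long_mask, :]        # rows exposed with the long time
        bright_line_image = frame[~long_mask, :]  # rows exposed with the short time
        return normal_image, bright_line_image

    def decode(self, bright_line_image: np.ndarray) -> bytes:          # A33 (placeholder)
        # Real decoding interprets the bright line pattern; here each line is only thresholded.
        bits = bright_line_image.mean(axis=1) > bright_line_image.mean()
        return np.packbits(bits).tobytes()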
Next, display of content related to the received visible light signal will be described.
FIG. 66 and FIG. 67 are diagrams showing an example of the display of the receiver when a visible light signal is received.
As shown in (a) of FIG. 66, when the receiver images the transmitter 10020d, it displays an image 10020a in which the transmitter 10020d appears. The receiver then generates and displays an image 10020b by superimposing an object 10020e on the image 10020a. The object 10020e is an image indicating where the image of the transmitter 10020d is located and that a visible light signal is being received from the transmitter 10020d. The object 10020e may change depending on the reception state of the visible light signal (receiving, searching for a transmitter, degree of progress of reception, reception speed, error rate, and so on); for example, the receiver may change the color of the object 10020e, the thickness of its line, the type of line (single, double, dotted, and so on), or the spacing of the dotted line. This allows the user to recognize the reception state. Next, the receiver generates and displays an image 10020c by superimposing, on the image 10020a, an image showing the content of the acquired data as an acquired data image 10020f. The acquired data is the received visible light signal, or data associated with the ID indicated by the received visible light signal.
When displaying the acquired data image 10020f, the receiver may display it like a speech balloon coming from the transmitter 10020d, or near the transmitter 10020d, as shown in (a) of FIG. 66. As shown in (b) of FIG. 66, the receiver may also display the acquired data image 10020f so that it gradually approaches the receiver side from the transmitter 10020d. This allows the user to recognize from which transmitter the visible light signal on which the acquired data image 10020f is based was received. As shown in FIG. 67, the receiver may also display the acquired data image 10020f so that it gradually emerges from the edge of the receiver's display, which makes it easy for the user to recognize that a visible light signal was acquired at that moment.
Next, AR (Augmented Reality) will be described.
FIG. 68 is a diagram showing an example of the display of the acquired data image 10020f.
When the image of the transmitter moves within the display, the receiver moves the acquired data image 10020f to follow the movement of the transmitter's image. This allows the user to recognize that the acquired data image 10020f corresponds to that transmitter. The receiver may also display the acquired data image 10020f in association with something other than the image of the transmitter. In this way, AR display can be performed.
Next, saving and discarding of the acquired data will be described.
FIG. 69 is a diagram illustrating an example of operations for saving or discarding the acquired data.
For example, as shown in (a) of FIG. 69, when the user swipes the acquired data image 10020f downward, the receiver saves the acquired data indicated by the acquired data image 10020f. The receiver places the acquired data image 10020f representing the newly saved data at the very end of the acquired data images representing the other, already saved, pieces of acquired data. This allows the user to recognize that the acquired data indicated by the acquired data image 10020f is the most recently saved data. For example, as shown in (a) of FIG. 69, the receiver places the acquired data image 10020f at the front of the set of acquired data images.
As shown in (b) of FIG. 69, when the user swipes the acquired data image 10020f to the right, the receiver discards the acquired data indicated by the acquired data image 10020f. Alternatively, the receiver may discard the acquired data indicated by the acquired data image 10020f when the user moves the receiver so that the image of the transmitter moves out of the display frame. The same effect is obtained regardless of whether the swipe direction is up, down, left, or right. The receiver may display the swipe directions corresponding to saving and discarding, so that the user can recognize which operation saves and which discards the data.
Next, browsing of the acquired data will be described.
FIG. 70 is a diagram showing a display example when browsing the acquired data.
As shown in (a) of FIG. 70, the receiver displays the acquired data images of a plurality of saved pieces of acquired data small and overlapping at the bottom edge of the display. When the user taps part of the displayed acquired data images, the receiver displays each of the acquired data images at a larger size, as shown in (b) of FIG. 70. In this way, the acquired data images are displayed large only when the user needs to browse the acquired data, and otherwise the display can be used effectively for other purposes.
In the state shown in (b) of FIG. 70, when the user taps the acquired data image that the user wants to view, the receiver displays the tapped acquired data image at an even larger size and shows more information within it, as shown in (c) of FIG. 70. When the user taps the reverse side display button 10024a, the receiver displays the reverse side of the acquired data image and shows other data related to the acquired data.
Next, turning off camera shake correction during self-position estimation will be described.
The receiver can obtain the correct imaging direction, and thus perform accurate self-position estimation, by disabling (turning off) camera shake correction or by transforming the captured image in accordance with the correction direction and correction amount of the camera shake correction. Here, the captured image is an image obtained by imaging with the imaging unit of the receiver, and self-position estimation means that the receiver estimates its own position. Specifically, in self-position estimation the receiver identifies the position of the transmitter based on the received visible light signal, identifies the relative positional relationship between the receiver and the transmitter based on the size, position, shape, and so on of the transmitter in the captured image, and then estimates the position of the receiver from the position of the transmitter and this relative positional relationship.
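A simplified sketch of this self-position estimation, reduced to a pinhole-camera distance estimate; the focal length, transmitter size, and coordinate handling are illustrative assumptions, and a real implementation would also use the transmitter's shape and the receiver's orientation as described below.

import numpy as np

def estimate_receiver_position(tx_position_m: np.ndarray,
                               tx_diameter_m: float,
                               tx_diameter_px: float,
                               focal_length_px: float,
                               direction_to_tx: np.ndarray) -> np.ndarray:
    """Pinhole-camera estimate: distance = focal_length * real_size / image_size,
    then step back from the known transmitter position along the viewing direction."""
    distance_m = focal_length_px * tx_diameter_m / tx_diameter_px
    unit = direction_to_tx / np.linalg.norm(direction_to_tx)
    return tx_position_m - distance_m * unit

receiver_pos = estimate_receiver_position(
    tx_position_m=np.array([2.0, 3.0, 2.5]),      # obtained from the received visible light signal
    tx_diameter_m=0.30, tx_diameter_px=60.0,
    focal_length_px=1000.0,
    direction_to_tx=np.array([0.0, 0.0, 1.0]))    # from the image position and orientation sensors
# distance = 1000 * 0.30 / 60 = 5.0 m from the transmitter along the viewing direction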
 また、図56などに示す、一部の露光ラインのみを用いて撮像を行う部分読み出し時には、つまり、図56などに示す撮像が行われるときには、受信機の少しのブレで送信機がフレームアウトしてしまう。このような場合、受信機は、手ぶれ補正を有効にすることで、継続して信号を受信することができる。 Also, at the time of partial readout where imaging is performed using only a part of the exposure lines shown in FIG. 56, that is, when imaging shown in FIG. 56 is performed, the transmitter is out of frame with a slight blurring of the receiver. End up. In such a case, the receiver can continuously receive the signal by enabling the camera shake correction.
 次に、非対称形の発光部を用いた自己位置推定について説明する。 Next, self-position estimation using an asymmetrical light emitting unit will be described.
 図71は、実施の形態9における送信機の一例を示す図である。 FIG. 71 is a diagram illustrating an example of a transmitter in the ninth embodiment.
 上述の送信機は発光部を備え、その発光部を輝度変化させることによって可視光信号を送信する。上述の自己位置推定では、受信機は、撮像画像中の送信機(具体的には発光部)の形状に基づいて、受信機と送信機との間の相対的な位置関係として、受信機と送信機との間の相対角度を求める。ここで、例えば図71に示すように、送信機が回転対称の形状の発光部10090aを備えている場合には、上述のように、撮像画像中の送信機の形状に基づいて、送信機と受信機との間の相対角度を正確に求めることができない。そこで、送信機は、回転対称ではない形状の発光部を備えていることが望ましい。これにより、受信機は上述の相対角度を正確に求めることができる。つまり、角度を取得するための方位センサでは計測結果の誤差が大きいため、受信機は、上述の方法で求めた相対角度を用いることで、正確な自己位置推定を行うことができる。 The above-described transmitter includes a light emitting unit, and transmits a visible light signal by changing the luminance of the light emitting unit. In the above-described self-position estimation, the receiver determines the relative positional relationship between the receiver and the transmitter based on the shape of the transmitter (specifically, the light emitting unit) in the captured image. Find the relative angle to the transmitter. Here, for example, as shown in FIG. 71, when the transmitter includes a light-emitting unit 10090a having a rotationally symmetric shape, as described above, based on the shape of the transmitter in the captured image, The relative angle with the receiver cannot be determined accurately. Therefore, it is desirable that the transmitter includes a light emitting unit having a shape that is not rotationally symmetric. Thereby, the receiver can obtain | require correctly the above-mentioned relative angle. That is, since the error of the measurement result is large in the azimuth sensor for acquiring the angle, the receiver can perform accurate self-position estimation by using the relative angle obtained by the above method.
 Here, as shown in FIG. 71, the transmitter may include a light emitting unit 10090b whose shape is not completely rotationally symmetric. The shape of the light emitting unit 10090b is symmetric with respect to a 90° rotation, but it is not completely rotationally symmetric. In this case, the receiver obtains a rough angle with the azimuth sensor and then uses the shape of the transmitter in the captured image to uniquely narrow down the relative angle between the receiver and the transmitter, enabling accurate self-position estimation.
 The transmitter may also include the light emitting unit 10090c shown in FIG. 71. The shape of the light emitting unit 10090c is basically rotationally symmetric, but because a light guide plate or the like is attached to a part of the light emitting unit 10090c, its shape is made non-rotationally symmetric.
 The transmitter may also include the light emitting unit 10090d shown in FIG. 71. The light emitting unit 10090d consists of lamps that are each rotationally symmetric, but the overall shape of the light emitting unit 10090d formed by arranging them in combination is not rotationally symmetric. Therefore, the receiver can perform accurate self-position estimation by imaging this transmitter. Moreover, not all of the lamps included in the light emitting unit 10090d need to be visible light communication lamps whose luminance changes in order to transmit a visible light signal; only some of them may be used for visible light communication.
 The transmitter may also include the light emitting unit 10090e and the object 10090f shown in FIG. 71. Here, the object 10090f is an object (for example, a fire alarm or piping) arranged so that its positional relationship with the light emitting unit 10090e does not change. Since the combined shape of the light emitting unit 10090e and the object 10090f is not rotationally symmetric, the receiver can perform accurate self-position estimation by imaging the light emitting unit 10090e and the object 10090f.
 次に、自己位置推定の時系列処理について説明する。 Next, the time series processing for self-position estimation will be described.
 受信機は、撮像するごとに、撮像画像中の送信機の位置と形状から、自己位置推定を行うことができる。その結果、受信機は、撮像中の受信機の移動方向と距離を推定することができる。また、受信機は、複数のフレームまたは画像を用いた三角測量を行うことで、より正確な自己位置推定を行うことができる。複数の画像を用いた推定結果や、異なる組み合わせの複数の画像を用いた推定結果を総合することで、受信機は、より正確に自己位置推定を行うことができる。この際、受信機は、最近の撮像画像から推定した結果を重要視して総合することで、より正確に自己位置推定を行うことができる。 Every time the receiver takes an image, the receiver can perform self-position estimation from the position and shape of the transmitter in the captured image. As a result, the receiver can estimate the moving direction and distance of the receiver being imaged. Further, the receiver can perform more accurate self-position estimation by performing triangulation using a plurality of frames or images. By integrating estimation results using a plurality of images and estimation results using a plurality of different combinations of images, the receiver can perform self-position estimation more accurately. At this time, the receiver can perform self-position estimation more accurately by emphasizing and summing up the results estimated from recent captured images.
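 As one illustration of combining estimates while favoring recent frames, the following sketch forms a recency-weighted average of per-frame position estimates. The exponential weighting scheme and the decay value are assumptions for illustration and are not specified by this description.

    def fuse_position_estimates(estimates, decay=0.7):
        # estimates: list of (x, y) per-frame self-position estimates, oldest first.
        # Returns a recency-weighted average; the exponential decay is an illustrative choice.
        if not estimates:
            raise ValueError("no estimates to fuse")
        weights = [decay ** (len(estimates) - 1 - i) for i in range(len(estimates))]
        total = sum(weights)
        x = sum(w * e[0] for w, e in zip(weights, estimates)) / total
        y = sum(w * e[1] for w, e in zip(weights, estimates)) / total
        return (x, y)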
 次に、オプティカルブラックの読み飛ばしについて説明する。 Next, optical black skipping will be described.
 図72は、実施の形態9における受信方法の一例を示す図である。なお、図72に示すグラフの横軸は、時刻を示し、縦軸は、イメージセンサ内の各露光ラインの位置を示す。さらに、そのグラフの実線矢印は、イメージセンサ内の各露光ラインの露光が開始される時刻(露光タイミング)を示す。 FIG. 72 is a diagram illustrating an example of a reception method according to the ninth embodiment. The horizontal axis of the graph shown in FIG. 72 indicates time, and the vertical axis indicates the position of each exposure line in the image sensor. Furthermore, a solid line arrow in the graph indicates a time (exposure timing) at which exposure of each exposure line in the image sensor is started.
 During normal imaging, the receiver reads out the horizontal optical black signal of the image sensor as shown in (a) of FIG. 72, but it may skip reading the horizontal optical black signal as shown in (b) of FIG. 72. This makes it possible to receive a continuous visible light signal.
 Horizontal optical black is the optical black that lies in the horizontal direction of the exposure lines. Vertical optical black is the portion of the optical black other than the horizontal optical black.
 Because the receiver adjusts the black level using the signal read out from the optical black, at the start of visible light imaging it can adjust the black level using the optical black in the same way as during normal imaging. When vertical optical black is available, the receiver can achieve both continuous reception and black level adjustment by performing the black level adjustment using only the vertical optical black. While visible light imaging continues, the receiver may adjust the black level using the horizontal optical black at predetermined intervals. When normal imaging and visible light imaging are performed alternately, the receiver skips the horizontal optical black signal while visible light imaging is performed continuously, and reads out the horizontal optical black signal otherwise. By adjusting the black level based on the signal read out in this way, the receiver can adjust the black level while continuously receiving the visible light signal. The receiver may also adjust the black level by treating the darkest part of the visible light captured image as black.
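 A minimal sketch of the mode-dependent choice of black-level reference described above; the mode names, the return values, and the interval constant are illustrative assumptions rather than anything defined in this description.

    def black_level_reference(mode, continuous_visible_light, frames_since_adjust,
                              adjust_interval=30):
        # Return which optical black region to read for black level adjustment.
        # 'mode' is assumed to be either "normal" or "visible_light".
        if mode == "normal":
            return "horizontal_optical_black"
        if continuous_visible_light:
            # Skip horizontal optical black to keep the exposure-line readout
            # uninterrupted; fall back to it only at predetermined intervals.
            if frames_since_adjust >= adjust_interval:
                return "horizontal_optical_black"
            return "vertical_optical_black"
        return "horizontal_optical_black"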
 In this way, by limiting the optical black from which signals are read out to the vertical optical black only, continuous reception of the visible light signal becomes possible. Providing a mode that skips the horizontal optical black signal allows black level adjustment during normal imaging and continuous communication as needed during visible light imaging. Furthermore, skipping the horizontal optical black signal increases the difference between the exposure start timings of the exposure lines, so a visible light signal can be received even from a transmitter that appears only small in the image.
 次に、送信機の種類を示す識別子について説明する。 Next, an identifier indicating the type of transmitter will be described.
 The transmitter may transmit a visible light signal to which a transmitter identifier indicating the type of the transmitter has been added. In this case, the receiver can, upon receiving the transmitter identifier, perform a reception operation suited to that type of transmitter. For example, when the transmitter identifier indicates digital signage, the transmitter transmits, as visible light signals, a content ID indicating which content is currently being displayed in addition to the transmitter ID used to identify the individual transmitter. By handling these IDs separately based on the transmitter identifier, the receiver can display information matched to the content the transmitter is currently displaying. Also, for example, when the transmitter identifier indicates digital signage or an emergency light, the receiver can reduce reception errors by imaging with increased sensitivity.
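 A minimal sketch of dispatching the reception behavior on the transmitter-type identifier; the identifier strings, the returned fields, and the handler shape are illustrative assumptions.

    def handle_visible_light_packet(transmitter_type, transmitter_id, content_id=None):
        # Choose a reception behavior based on the transmitter-type identifier.
        if transmitter_type == "digital_signage":
            # Treat the individual transmitter ID and the content ID separately,
            # so the displayed information can follow the current content.
            return {"action": "show_content_info",
                    "transmitter_id": transmitter_id,
                    "content_id": content_id,
                    "increase_sensitivity": True}
        if transmitter_type == "emergency_light":
            return {"action": "show_evacuation_info",
                    "transmitter_id": transmitter_id,
                    "increase_sensitivity": True}
        return {"action": "default_lookup",
                "transmitter_id": transmitter_id,
                "increase_sensitivity": False}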
 (Embodiment 10)
 In this embodiment, application examples are described that use a receiver such as a smartphone in each of the above embodiments and a transmitter that transmits information as a blinking pattern of an LED or an organic EL device.
 ここで、同じアドレスのデータ部を比較する受信方法について説明する。 Here, a reception method for comparing the data part of the same address will be described.
 図73は、本実施の形態における受信方法の一例を示すフローチャートである。 FIG. 73 is a flowchart showing an example of the reception method in the present embodiment.
 The receiver receives a packet (step S10101) and performs error correction (step S10102). The receiver then determines whether a packet with the same address as the received packet has already been received (step S10103). If it determines that such a packet has been received (Y in step S10103), the receiver compares their data; that is, it determines whether the data parts are equal (step S10104). If it determines that they are not equal (N in step S10104), the receiver further determines whether the difference between the data parts is equal to or greater than a predetermined number, specifically whether the number of differing bits, or the number of slots whose luminance states differ, is equal to or greater than a predetermined number (step S10105). If it determines that the difference is equal to or greater than the predetermined number (Y in step S10105), the receiver discards the packet it had already received (step S10106). This makes it possible, when the receiver starts receiving packets from a different transmitter, to avoid interference with packets received from the previous transmitter. On the other hand, if it determines that the difference is not equal to or greater than the predetermined number (N in step S10105), the receiver takes, as the data for that address, the data of the data part shared by the largest number of packets with equal data parts (step S10107). Alternatively, the receiver takes the most frequent value of each bit as the value of that bit at that address, or takes the most frequent luminance state as the luminance state of that slot at that address, and demodulates the data for that address.
 Thus, in this embodiment, the receiver first acquires, from the pattern of the plurality of bright lines, a first packet including a data part and an address part. Next, the receiver determines whether, among at least one packet already acquired before the first packet, there exists at least one second packet whose address part is identical to the address part of the first packet. If it determines that such at least one second packet exists, the receiver determines whether the data parts of the at least one second packet and of the first packet are all equal. If it determines that the data parts are not all equal, the receiver determines, for each of the at least one second packet, whether the number of portions of the data part of that second packet that differ from the corresponding portions of the data part of the first packet is equal to or greater than a predetermined number. If, among the at least one second packet, there is a second packet for which the number of differing portions is determined to be equal to or greater than the predetermined number, the receiver discards the at least one second packet. On the other hand, if there is no second packet for which the number of differing portions is determined to be equal to or greater than the predetermined number, the receiver identifies, among the first packet and the at least one second packet, the plurality of packets that share the same data part in the largest number. The receiver then obtains at least a part of the visible light identifier (ID) by decoding the data part included in each of those packets as the data part corresponding to the address part included in the first packet.
 As a result, when a plurality of packets having the same address part are received, an appropriate data part can be decoded even if the data parts of those packets differ, and at least a part of the visible light identifier can be obtained correctly. That is, a plurality of packets having the same address part transmitted from the same transmitter basically have the same data part. However, when the transmitter that is the source of the packets changes, the receiver may receive a plurality of packets that have the same address part but mutually different data parts. In such a case, in this embodiment, the already received packet (the second packet) is discarded as in step S10106 of FIG. 73, and the data part of the latest packet (the first packet) can be decoded as the correct data part corresponding to that address part. Furthermore, even when there is no such switching of transmitters, the data parts of a plurality of packets having the same address part may differ slightly depending on the transmission and reception conditions of the visible light signal. In such a case, in this embodiment, an appropriate data part can be decoded by a majority decision, as in step S10107 of FIG. 73.
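 The following is a minimal sketch of the packet comparison and majority decision of FIG. 73, under the assumption that each packet is represented as an address plus a sequence of data bits; the threshold value and the data representation are illustrative.

    from collections import Counter

    def merge_packet(store, address, data_bits, diff_threshold=3):
        # store: dict mapping address -> list of previously received bit tuples.
        # Returns the bit tuple decided for this address, applying the
        # discard-on-large-difference and majority rules of FIG. 73.
        previous = store.setdefault(address, [])
        if previous and any(
                sum(a != b for a, b in zip(p, data_bits)) >= diff_threshold
                for p in previous):
            # Large difference: likely a different transmitter, drop the old packets.
            previous.clear()
        previous.append(tuple(data_bits))
        # Majority decision over whole data parts.
        counts = Counter(previous)
        return max(counts, key=counts.get)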
 ここで、複数のパケットからデータ部のデータを復調する受信方法について説明する。 Here, a reception method for demodulating data in a data portion from a plurality of packets will be described.
 図74は、本実施の形態における受信方法の一例を示すフローチャートである。 FIG. 74 is a flowchart showing an example of the reception method in this embodiment.
 First, the receiver receives a packet (step S10111) and performs error correction of the address part (step S10112). At this time, the receiver does not demodulate the data part and keeps the pixel values obtained by imaging as they are. The receiver then determines whether, among the packets already received, there are a predetermined number or more of packets having the same address (step S10113). If it determines that there are (Y in step S10113), the receiver performs demodulation by combining the pixel values of the portions corresponding to the data parts of the plurality of packets having the same address (step S10114).
 Thus, in the reception method of this embodiment, a first packet including a data part and an address part is acquired from the pattern of the plurality of bright lines. It is then determined whether, among at least one packet already acquired before the first packet, there are a predetermined number or more of second packets whose address part is identical to the address part of the first packet. If it is determined that there are the predetermined number or more of second packets, the pixel values of the regions of the bright line images corresponding to the data parts of those second packets are combined with the pixel values of the region of the bright line image corresponding to the data part of the first packet; that is, the pixel values are added. A composite pixel value is calculated by this addition, and at least a part of the visible light identifier (ID) is obtained by decoding the data part made up of the composite pixel values.
 Since the plurality of packets are received at different times, the pixel values of each data part reflect the luminance of the transmitter at slightly different points in time. Accordingly, the portion demodulated as described above contains a larger amount of data (a larger number of samples) than the data part of a single packet. This allows the data part to be demodulated more accurately. In addition, the increased number of samples makes it possible to demodulate a signal modulated at a higher modulation frequency.
 The data part and its error correction code part are modulated at a higher frequency than the header part, the address part, and the error correction code part of the address part. With the demodulation method above, the data part and the parts that follow it can be demodulated even though they are modulated at a high modulation frequency, so this configuration shortens the transmission time of the whole packet and allows the visible light signal to be received faster, from farther away, and from a smaller light source.
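 A minimal sketch of the pixel-value combination of FIG. 74, assuming each stored packet carries the raw pixel-value samples of its data part as aligned, equal-length arrays; the element-wise addition follows the description above, while the slicing threshold is an illustrative choice.

    import numpy as np

    def demodulate_combined(data_samples_list, threshold=None):
        # data_samples_list: list of 1-D numpy arrays of pixel values, one per
        # received packet with the same address. The samples are added and then
        # sliced against a threshold to recover the bits.
        combined = np.sum(np.stack(data_samples_list, axis=0), axis=0)
        if threshold is None:
            threshold = combined.mean()   # illustrative slicing level
        return (combined > threshold).astype(int)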
 次に、可変長アドレスのデータを受信する受信方法について説明する。 Next, a reception method for receiving variable length address data will be described.
 図75は、本実施の形態における受信方法の一例を示すフローチャートである。 FIG. 75 is a flowchart showing an example of a reception method in the present embodiment.
 The receiver receives a packet (step S10121) and determines whether it has received a packet in which all bits of the data part are 0 (hereinafter referred to as a 0-terminating packet) (step S10122). If it determines that such a packet has been received, that is, that a 0-terminating packet exists (Y in step S10122), the receiver determines whether all packets with addresses less than or equal to the address of the 0-terminating packet have been collected, that is, received (step S10123). Note that the address of each packet generated by dividing the data to be transmitted is set to a value that increases in the order in which the packets are transmitted. If the receiver determines that all such packets have been collected (Y in step S10123), it judges that the address of the 0-terminating packet is the last address of the packets transmitted from the transmitter. The receiver then restores the data by concatenating the data of the packets of each address up to the 0-terminating packet (step S10124). The receiver further performs an error check on the restored data (step S10125). This makes it possible to transmit and receive data with variable-length addresses even when it is not known into how many parts the transmitted data has been divided, that is, even when the address is not fixed-length but variable-length, so more IDs can be transmitted and received with higher efficiency than with fixed-length addresses.
 Thus, in this embodiment, the receiver acquires, from the pattern of the plurality of bright lines, a plurality of packets each including a data part and an address part. The receiver then determines whether, among the acquired packets, there exists a 0-terminating packet, that is, a packet in which all bits included in the data part indicate 0. If it determines that a 0-terminating packet exists, the receiver determines whether all of the N related packets (where N is an integer of 1 or more) are present among the plurality of packets, a related packet being a packet including an address part associated with the address part of the 0-terminating packet. If it determines that all N related packets are present, the receiver obtains the visible light identifier (ID) by arranging and decoding the data parts of the N related packets. Here, an address part associated with the address part of the 0-terminating packet is an address part indicating an address that is smaller than the address indicated in the address part of the 0-terminating packet and is 0 or more.
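 A minimal sketch of the variable-length-address reassembly of FIG. 75, assuming packets are stored in a dict keyed by integer address and data parts are bit strings; whether the terminator's all-zero data part is itself included in the restored data, and the error check, are left as assumptions (the check is a hypothetical callback).

    def try_reassemble(packets, is_zero_terminated, check_error=None):
        # packets: dict mapping address (int) -> data part (str of '0'/'1').
        # Returns the restored data string, or None if reception is not yet complete.
        terminators = [a for a, d in packets.items() if is_zero_terminated(d)]
        if not terminators:
            return None
        last = min(terminators)            # first all-zero data part marks the end
        if any(a not in packets for a in range(last + 1)):
            return None                    # an address below the terminator is missing
        data = "".join(packets[a] for a in range(last))  # terminator excluded here
        if check_error is not None and not check_error(data):
            return None
        return data

    # is_zero_terminated could simply be: lambda d: set(d) == {"0"}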
 次に、変調周波数の周期より長い露光時間を用いた受信方法について説明する。 Next, a reception method using an exposure time longer than the modulation frequency period will be described.
 図76と図77は、本実施の形態における受信機が、変調周波数の周期(変調周期)より長い露光時間を用いた受信方法を説明するための図である。 76 and 77 are diagrams for explaining a reception method in which the receiver according to the present embodiment uses an exposure time longer than the period of the modulation frequency (modulation period).
 For example, as shown in (a) of FIG. 76, when the exposure time is set equal to the modulation period, the visible light signal may not be received correctly. The modulation period is the duration of one slot described above. In such a case, there are few exposure lines (the exposure lines shown in black in FIG. 76) that reflect the luminance state of a given slot. As a result, if the pixel values of those exposure lines happen to contain a lot of noise, it is difficult to estimate the luminance of the transmitter.
 On the other hand, as shown in (b) of FIG. 76, when the exposure time is set longer than the modulation period, the visible light signal can be received correctly. In this case, many exposure lines reflect the luminance of a given slot, so the luminance of the transmitter can be estimated from the pixel values of many exposure lines, which makes the reception robust to noise.
 また、露光時間が長すぎると、逆に、可視光信号を正しく受信することができない。 Also, if the exposure time is too long, the visible light signal cannot be received correctly.
 For example, as shown in (a) of FIG. 77, when the exposure time is equal to the modulation period, the luminance change received by the receiver (that is, the change in the pixel values of the exposure lines) follows the luminance change used for transmission. However, as shown in (b) of FIG. 77, when the exposure time is three times the modulation period, the received luminance change cannot sufficiently follow the luminance change used for transmission. And as shown in (c) of FIG. 77, when the exposure time is ten times the modulation period, the received luminance change cannot follow the transmitted luminance change at all. In other words, a longer exposure time gives higher noise tolerance because the luminance can be estimated from more exposure lines, but as the exposure time grows the discrimination margin shrinks, which lowers the noise tolerance. Balancing these effects, noise tolerance is maximized by setting the exposure time to about 2 to 5 times the modulation period.
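 As a worked example of this guideline (the 9.6 kHz modulation frequency is an assumed figure, not taken from this description): at 9.6 kHz one slot lasts about 104 µs, so an exposure time of 2 to 5 slots corresponds to roughly 208 to 520 µs.

    modulation_frequency_hz = 9_600          # assumed example value
    slot_s = 1.0 / modulation_frequency_hz   # one modulation period (slot), about 104 us
    exposure_range_s = (2 * slot_s, 5 * slot_s)
    print(exposure_range_s)                  # -> approximately (0.000208, 0.000521) seconds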
 次に、パケットの分割数について説明する。 Next, the number of packet divisions will be described.
 図78は、送信データのサイズに対する効率的な分割数を示す図である。 FIG. 78 is a diagram showing an efficient number of divisions with respect to the size of transmission data.
 送信機がデータを輝度変化によって送信する場合、送信される全てのデータ(送信データ)を1つのパケットに含めると、そのパケットのデータサイズは大きい。しかし、その送信データを複数の部分データに分割して、それらの部分データを各パケットに含めると、それぞれのパケットのデータサイズは小さくなる。ここで、受信機は、撮像によって、そのパケットを受信する。しかし、パケットのデータサイズが大きいほど、受信機はそのパケットを1回の撮像によって受信することが難しくなり、撮像を繰り返す必要がある。 When the transmitter transmits data due to a change in luminance, if all data to be transmitted (transmission data) is included in one packet, the data size of the packet is large. However, if the transmission data is divided into a plurality of partial data and the partial data is included in each packet, the data size of each packet is reduced. Here, the receiver receives the packet by imaging. However, the larger the data size of the packet, the more difficult it is for the receiver to receive the packet by one imaging, and it is necessary to repeat imaging.
 したがって、送信機は、図78の(a)および(b)に示すように、送信データのデータサイズが大きいほど、その送信データの分割数を多くする方が望ましい。しかし、分割数が多すぎると、それらの部分データを全て受信しなければ送信データを復元することができないため、逆に、受信効率が低下する。 Therefore, as shown in FIGS. 78 (a) and 78 (b), it is desirable for the transmitter to increase the number of divisions of the transmission data as the data size of the transmission data increases. However, if the number of divisions is too large, the transmission data cannot be restored unless all of the partial data is received.
 Therefore, as shown in (a) of FIG. 78, when the data size of the address (address size) is variable and the data size of the transmission data is 2-16 bits, 16-24 bits, 24-64 bits, 66-78 bits, 78-128 bits, or 128 bits or more, the transmission data can be transmitted efficiently as a visible light signal by dividing it into 1-2, 2-4, 4, 4-6, 6-8, or 7 or more pieces of partial data, respectively. Also, as shown in (b) of FIG. 78, when the data size of the address (address size) is fixed at 4 bits and the data size of the transmission data is 2-8 bits, 8-16 bits, 16-30 bits, 30-64 bits, 66-80 bits, 80-96 bits, 96-132 bits, or 132 bits or more, the transmission data can be transmitted efficiently as a visible light signal by dividing it into 1-2, 2-3, 2-4, 4-5, 4-7, 6, 6-8, or 7 or more pieces of partial data, respectively.
 The transmitter then sequentially performs luminance changes based on the packets that each contain one of the pieces of partial data. For example, the transmitter performs the luminance change for each packet in the order of the packets' addresses. Furthermore, the transmitter may perform the luminance changes based on the plurality of pieces of partial data again in an order different from the address order. This allows the receiver to receive each piece of partial data reliably.
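 A minimal sketch of the fixed-4-bit-address division table of FIG. 78 (b); where the figure gives a range of division counts, the value chosen here is the smallest in the range, which is an illustrative choice.

    def division_count_fixed_address(bits):
        # Return an efficient number of partial-data pieces for a payload of
        # 'bits' bits when the address size is fixed at 4 bits (FIG. 78 (b)).
        table = [
            (8, 1),     # 2-8 bits    -> 1-2 pieces
            (16, 2),    # 8-16 bits   -> 2-3 pieces
            (30, 2),    # 16-30 bits  -> 2-4 pieces
            (64, 4),    # 30-64 bits  -> 4-5 pieces
            (80, 4),    # 66-80 bits  -> 4-7 pieces
            (96, 6),    # 80-96 bits  -> 6 pieces
            (132, 6),   # 96-132 bits -> 6-8 pieces
        ]
        for upper, pieces in table:
            if bits <= upper:
                return pieces
        return 7        # 132 bits or more -> 7 or more pieces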
 次に、受信機による通知動作の設定方法について説明する。 Next, the method for setting the notification operation by the receiver will be described.
 図79Aは、本実施の形態における設定方法の一例を示す図である。 FIG. 79A is a diagram showing an example of a setting method in the present embodiment.
 First, the receiver acquires, from a server located near the receiver, a notification operation identifier for identifying a notification operation and the priority of that notification operation identifier (specifically, an identifier indicating the priority) (step S10131). Here, the notification operation is an operation of the receiver that notifies the user of the receiver that the packets, each containing one of a plurality of pieces of partial data, have been received when those packets are transmitted by luminance changes and received by the receiver. For example, the operation is sounding a tone, vibrating, or showing something on the screen.
 次に、受信機は、パケット化された可視光信号、つまり複数の部分データのそれぞれを含む各パケットを受信する(ステップS10132)。ここで、受信機は、その可視光信号に含まれている、通知動作識別子と、その通知動作識別子の優先度(具体的には、優先度を示す識別子)とを取得する(ステップS10133)。 Next, the receiver receives a packetized visible light signal, that is, each packet including each of a plurality of partial data (step S10132). Here, the receiver acquires the notification operation identifier and the priority of the notification operation identifier (specifically, an identifier indicating the priority) included in the visible light signal (step S10133).
 Further, the receiver reads out the current notification operation settings of the receiver, that is, the notification operation identifier set in advance in the receiver and the priority of that notification operation identifier (specifically, an identifier indicating the priority) (step S10134). The notification operation identifier set in advance in the receiver has been set, for example, by a user operation.
 そして、受信機は、予め設定されている通知動作識別子と、ステップS10131およびステップS10133のそれぞれで取得された通知動作識別子とのうち、優先度が最も高い識別子を選択する(ステップS10135)。次に、受信機は、選択した通知動作識別子を改めて自らに設定し直すことにより、選択した通知動作識別子によって示される動作を行い、可視光信号の受信をユーザに通知する(ステップS10136)。 Then, the receiver selects an identifier having the highest priority among the preset notification operation identifiers and the notification operation identifiers acquired in steps S10131 and S10133 (step S10135). Next, the receiver resets the selected notification operation identifier to itself, thereby performing the operation indicated by the selected notification operation identifier, and notifies the user of reception of the visible light signal (step S10136).
 なお、受信機は、ステップS10131およびステップS10133の何れか一方を行わず、2つの通知動作識別子の中から優先度の高い通知動作識別子を選択してもよい。 Note that the receiver may select a notification operation identifier having a higher priority from the two notification operation identifiers without performing any one of steps S10131 and S10133.
 Note that the priority of a notification operation identifier transmitted from a server installed in a theater, museum, or similar facility, or of a notification operation identifier included in a visible light signal transmitted inside such a facility, may be set high. This makes it possible to keep the reception notification sound from being produced inside the facility regardless of the user's settings. In other facilities, by keeping the priority of the notification operation identifier low, the receiver can notify the user of reception with the operation chosen in the user's settings.
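 A minimal sketch of the priority-based selection in FIG. 79A, assuming each candidate is an (identifier, priority) pair with larger numbers meaning higher priority; the tuple layout and example values are illustrative.

    def select_notification_action(candidates):
        # candidates: list of (notification_action_id, priority) tuples gathered from
        # the nearby server, the visible light signal, and the receiver's own preset.
        # Returns the action id with the highest priority.
        if not candidates:
            raise ValueError("no notification action available")
        action_id, _ = max(candidates, key=lambda c: c[1])
        return action_id

    # Example: preset vibration (priority 1) versus a facility server that forces a
    # silent on-screen notice (priority 9):
    # select_notification_action([("vibrate", 1), ("silent_banner", 9)]) -> "silent_banner"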
 図79Bは、本実施の形態における設定方法の他の例を示す図である。 FIG. 79B is a diagram showing another example of the setting method in the present embodiment.
 First, the receiver acquires, from a server located near the receiver, a notification operation identifier for identifying a notification operation and the priority of that notification operation identifier (specifically, an identifier indicating the priority) (step S10141). Next, the receiver receives a packetized visible light signal, that is, each packet containing one of a plurality of pieces of partial data (step S10142). Here, the receiver acquires the notification operation identifier and the priority of that notification operation identifier (specifically, an identifier indicating the priority) included in the visible light signal (step S10143).
 Further, the receiver reads out the current notification operation settings of the receiver, that is, the notification operation identifier set in advance in the receiver and the priority of that notification operation identifier (specifically, an identifier indicating the priority) (step S10144).
 The receiver then determines whether an operation notification identifier indicating an operation that prohibits producing a notification sound is included among the preset notification operation identifier and the notification operation identifiers acquired in steps S10141 and S10143 (step S10145). If it determines that no such identifier is included (N in step S10145), the receiver produces a notification sound to signal the completion of reception (step S10146). On the other hand, if it determines that such an identifier is included (Y in step S10145), the receiver notifies the user of the completion of reception by, for example, vibration (step S10147).
 Note that the receiver may determine whether an operation notification identifier indicating an operation that prohibits producing a notification sound is included among the two notification operation identifiers, without performing one of steps S10141 and S10143.
 また、受信機は、撮像によって得られる画像に基づいて自己位置推定を行い、推定された位置、またはその位置にある施設に対応付けられた動作によって、受信をユーザに通知してもよい。 Further, the receiver may perform self-position estimation based on an image obtained by imaging, and notify the user of reception by an operation associated with the estimated position or a facility at the position.
 図80は、実施の形態10における情報処理プログラムの処理を示すフローチャートである。 FIG. 80 is a flowchart showing processing of the information processing program in the tenth embodiment.
 この情報処理プログラムは、上述の送信機の発光体を図78に示す分割数にしたがって輝度変化させるためのプログラムである。 This information processing program is a program for changing the luminance of the light emitter of the transmitter described above according to the number of divisions shown in FIG.
 That is, this information processing program is an information processing program that causes a computer to process information to be transmitted so that the information can be transmitted by luminance changes. Specifically, this information processing program causes the computer to execute an encoding step SA41 of generating an encoded signal by encoding the information to be transmitted, a division step SA42 of dividing the encoded signal into four partial signals when the number of bits of the generated encoded signal is in the range of 24 to 64 bits, and an output step SA43 of sequentially outputting the four partial signals. These partial signals are output as packets. The information processing program may also cause the computer to identify the number of bits of the encoded signal and to determine the number of partial signals based on the identified number of bits. In this case, the information processing program causes the computer to generate the determined number of partial signals by dividing the encoded signal.
 With this, when the number of bits of the encoded signal is in the range of 24 to 64 bits, the encoded signal is divided into four partial signals and output. As a result, when the light emitter changes its luminance according to the four output partial signals, each of the four partial signals is transmitted as a visible light signal and received by the receiver. Here, the larger the number of bits of an output signal, the harder it is for the receiver to receive that signal properly by imaging, and the lower the reception efficiency. It is therefore desirable to divide the signal into signals with fewer bits, that is, smaller signals. However, if the signal is divided too finely into many small signals, the receiver cannot recover the original signal unless it receives every one of the small signals individually, which again lowers the reception efficiency. Therefore, as described above, when the number of bits of the encoded signal is in the range of 24 to 64 bits, dividing the encoded signal into four partial signals and outputting them sequentially allows the encoded signal representing the information to be transmitted as a visible light signal with the best reception efficiency. As a result, communication between various kinds of devices becomes possible.
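 A minimal sketch of steps SA41 to SA43, under the assumption that the transmission target is given as a byte string; the encoder is a placeholder (a real encoder would add error correction and so on), the fallback division rule outside the 24-64 bit range is illustrative, and the second output pass uses a simple rotation as its "different order".

    def encode(info: bytes) -> str:
        # Placeholder for encoding step SA41: here simply the bit string of the input.
        return "".join(f"{b:08b}" for b in info)

    def split_into_partial_signals(encoded: str):
        # Division step SA42: four partial signals when the encoded signal is 24-64 bits.
        n = len(encoded)
        pieces = 4 if 24 <= n <= 64 else max(1, (n + 31) // 32)  # fallback is illustrative
        size = -(-n // pieces)  # ceiling division
        return [encoded[i * size:(i + 1) * size] for i in range(pieces)]

    def output_partial_signals(partials, emit):
        # Output step SA43: emit in address order, then again in a second order
        # (here a rotation by one, an illustrative choice).
        indexed = list(enumerate(partials))
        for addr, p in indexed:
            emit(addr, p)
        for addr, p in indexed[1:] + indexed[:1]:
            emit(addr, p)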
 また、出力ステップSA43では、第1の順序にしたがって4つの部分信号を出力し、さらに、第1の順序と異なる第2の順序にしたがって4つの部分信号を再び出力してもよい。 Also, in the output step SA43, four partial signals may be output according to the first order, and further, the four partial signals may be output again according to a second order different from the first order.
 With this, the four partial signals are repeatedly output in a different order, so when each output signal is transmitted to the receiver as a visible light signal, the reception efficiency of the four partial signals can be further increased. That is, if the four partial signals are repeatedly output in the same order, the same partial signal may repeatedly fail to be received by the receiver; changing the order suppresses such cases.
 また、図79Aおよび図79Bに示すように、出力ステップSA43では、さらに、4つの部分信号に通知動作識別子を付随させて出力してもよい。通知動作識別子は、4つの部分信号が輝度変化によって送信されて受信機に受信されたときに、4つの部分信号が受信されたことを受信機のユーザに通知する受信機の動作を識別するための識別子である。 Also, as shown in FIGS. 79A and 79B, in the output step SA43, a notification operation identifier may be further attached to the four partial signals. The notification operation identifier is used to identify the operation of the receiver notifying the user of the receiver that the four partial signals are received when the four partial signals are transmitted by the luminance change and received by the receiver. Identifier.
 With this, when the notification operation identifier is transmitted as a visible light signal and received by the receiver, the receiver can notify the user of the reception of the four partial signals according to the operation identified by the notification operation identifier. That is, the side transmitting the information can set the notification operation performed by the receiver.
 また、図79Aおよび図79Bに示すように、出力ステップSA43では、さらに、通知動作識別子の優先度を識別するための優先度識別子を4つの部分信号に付随させて出力してもよい。 Further, as shown in FIGS. 79A and 79B, in the output step SA43, a priority identifier for identifying the priority of the notification operation identifier may be further output in association with the four partial signals.
 With this, when the priority identifier and the notification operation identifier are transmitted as visible light signals and received by the receiver, the receiver can handle the notification operation identifier according to the priority identified by the priority identifier. That is, when the receiver has acquired another notification operation identifier, the receiver can select, based on the priorities, either the notification operation identified by the notification operation identifier transmitted as the visible light signal or the notification operation identified by the other notification operation identifier.
 An information processing program according to one aspect of the present invention is an information processing program that causes a computer to process information to be transmitted so that the information can be transmitted by luminance changes, the program causing the computer to execute: an encoding step of generating an encoded signal by encoding the information to be transmitted; a division step of dividing the encoded signal into four partial signals when the number of bits of the generated encoded signal is in the range of 24 to 64 bits; and an output step of sequentially outputting the four partial signals.
 With this, as shown in FIGS. 77 to 80, when the number of bits of the encoded signal is in the range of 24 to 64 bits, the encoded signal is divided into four partial signals and output. As a result, when the light emitter changes its luminance according to the four output partial signals, each of the four partial signals is transmitted as a visible light signal and received by the receiver. Here, the larger the number of bits of an output signal, the harder it is for the receiver to receive that signal properly by imaging, and the lower the reception efficiency. It is therefore desirable to divide the signal into signals with fewer bits, that is, smaller signals. However, if the signal is divided too finely into many small signals, the receiver cannot recover the original signal unless it receives every one of the small signals individually, which again lowers the reception efficiency. Therefore, as described above, when the number of bits of the encoded signal is in the range of 24 to 64 bits, dividing the encoded signal into four partial signals and outputting them sequentially allows the encoded signal representing the information to be transmitted as a visible light signal with the best reception efficiency. As a result, communication between various kinds of devices becomes possible.
 また、前記出力ステップでは、第1の順序にしたがって前記4つの部分信号を出力し、さらに、前記第1の順序と異なる第2の順序にしたがって前記4つの部分信号を再び出力してもよい。 In the output step, the four partial signals may be output according to a first order, and the four partial signals may be output again according to a second order different from the first order.
 With this, the four partial signals are repeatedly output in a different order, so when each output signal is transmitted to the receiver as a visible light signal, the reception efficiency of the four partial signals can be further increased. That is, if the four partial signals are repeatedly output in the same order, the same partial signal may repeatedly fail to be received by the receiver; changing the order suppresses such cases.
 In the output step, a notification operation identifier may further be output accompanying the four partial signals, the notification operation identifier being an identifier for identifying an operation of the receiver that notifies the user of the receiver that the four partial signals have been received when the four partial signals are transmitted by luminance changes and received by the receiver.
 With this, when the notification operation identifier is transmitted as a visible light signal and received by the receiver, the receiver can notify the user of the reception of the four partial signals according to the operation identified by the notification operation identifier. That is, the side transmitting the information can set the notification operation performed by the receiver.
 また、前記出力ステップでは、さらに、前記通知動作識別子の優先度を識別するための優先度識別子を前記4つの部分信号に付随させて出力してもよい。 Further, in the output step, a priority identifier for identifying a priority of the notification operation identifier may be output along with the four partial signals.
 With this, when the priority identifier and the notification operation identifier are transmitted as visible light signals and received by the receiver, the receiver can handle the notification operation identifier according to the priority identified by the priority identifier. That is, when the receiver has acquired another notification operation identifier, the receiver can select, based on the priorities, either the notification operation identified by the notification operation identifier transmitted as the visible light signal or the notification operation identified by the other notification operation identifier.
 次に、電子機器のネットワーク接続の登録について説明する。 Next, registration of network connection of electronic devices will be described.
 図81は、本実施の形態における送受信システムの応用例を説明するための図である。 FIG. 81 is a diagram for explaining an application example of the transmission / reception system in the present embodiment.
 この送受信システムは、例えば洗濯機等の電子機器として構成される送信機10131bと、例えばスマートフォンとして構成される受信機10131aと、アクセスポイントまたはルータとして構成される通信装置10131cとを備える。 This transmission / reception system includes a transmitter 10131b configured as an electronic device such as a washing machine, a receiver 10131a configured as a smartphone, and a communication device 10131c configured as an access point or a router.
 図82は、本実施の形態における送受信システムの処理動作を示すフローチャートである。 FIG. 82 is a flowchart showing the processing operation of the transmission / reception system in the present embodiment.
 When the start button is pressed (step S10165), the transmitter 10131b transmits information for connecting to itself via Wi-Fi, Bluetooth (registered trademark), Ethernet (registered trademark), or the like, such as an SSID, a password, an IP address, a MAC address, or an encryption key (step S10166), and waits for a connection. The transmitter 10131b may transmit this information either directly or indirectly. When transmitting it indirectly, the transmitter 10131b transmits an ID associated with the information, and the receiver 10131a that has received the ID downloads the information associated with that ID from a server or the like.
 The receiver 10131a receives this information (step S10151), connects to the transmitter 10131b, and transmits to the transmitter 10131b the information needed to connect to the communication device 10131c configured as an access point or router (SSID, password, IP address, MAC address, encryption key, or the like) (step S10152). The receiver 10131a also registers in the communication device 10131c the information needed for the transmitter 10131b to connect to the communication device 10131c (MAC address, IP address, encryption key, or the like) and has the communication device 10131c wait for the connection. The receiver 10131a then notifies the transmitter 10131b that preparation for the connection from the transmitter 10131b to the communication device 10131c is complete (step S10153).
 The transmitter 10131b disconnects from the receiver 10131a (step S10168) and connects to the communication device 10131c (step S10169). If the connection succeeds (Y in step S10170), the transmitter 10131b notifies the receiver 10131a of the successful connection via the communication device 10131c and notifies the user of the success by a screen display, an LED state, sound, or the like (step S10171). If the connection fails (N in step S10170), the transmitter 10131b notifies the receiver 10131a of the connection failure by visible light communication and notifies the user in the same way as on success (step S10172). The successful connection may also be notified by visible light communication.
 The receiver 10131a connects to the communication device 10131c (step S10154). If there is no notification of connection success or failure (N in step S10155 and N in step S10156), the receiver 10131a checks whether the transmitter 10131b can be accessed via the communication device 10131c (step S10157). If it cannot (N in step S10157), the receiver 10131a determines whether connection to the transmitter 10131b using the information received from the transmitter 10131b has been attempted a predetermined number of times or more (step S10158). If it determines that it has not (N in step S10158), the receiver 10131a repeats the processing from step S10152. If it determines that it has (Y in step S10158), the receiver 10131a notifies the user of the processing failure (step S10159). If the receiver 10131a determines in step S10156 that a connection success notification has been received (Y in step S10156), it notifies the user of the processing success (step S10160). That is, the receiver 10131a notifies the user, by a screen display, sound, or the like, whether the transmitter 10131b was able to connect to the communication device 10131c. This allows the transmitter 10131b to be connected to the communication device 10131c without requiring complicated input from the user.
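 A minimal sketch of the receiver-side flow of FIG. 82, with the transport helpers (receive_visible_light_info, send_credentials, and so on) left as hypothetical callbacks, since the actual interfaces are not specified in this description.

    def provision_transmitter(receive_visible_light_info, send_credentials,
                              register_at_router, connected_via_router,
                              wait_for_result, max_attempts=3):
        # Returns True if the transmitter ends up reachable through the router.
        info = receive_visible_light_info()          # SSID, keys, etc. from the transmitter
        for _ in range(max_attempts):
            send_credentials(info)                   # router credentials to the transmitter
            register_at_router(info)                 # let the router expect the transmitter
            result = wait_for_result()               # "success", "failure", or None (no news)
            if result == "success":
                return True
            if result is None and connected_via_router(info):
                return True
            # otherwise retry from sending the credentials again
        return False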
 Next, registration of an electronic device onto a network when connecting via another electronic device will be described.
 FIG. 83 is a diagram for describing an application example of the transmission/reception system in this embodiment.
 This transmission/reception system includes an air conditioner 10133b; a transmitter 10133c configured as an electronic device, such as a wireless adapter, connected to the air conditioner 10133b; a receiver 10133a configured, for example, as a smartphone; a communication device 10133d configured as an access point or router; and another electronic device 10133e configured, for example, as a wireless adapter, wireless access point, or router.
 FIG. 84 is a flowchart showing the processing operation of the transmission/reception system in this embodiment. In the following, the air conditioner 10133b or the transmitter 10133c is referred to as electronic device A, and the electronic device 10133e as electronic device B.
 First, when its start button is pressed (step S10188), electronic device A transmits the information needed to connect to it (such as an individual ID, password, IP address, MAC address, or encryption key) (step S10189) and waits for a connection (step S10190). Electronic device A may transmit this information either directly or indirectly, as described above.
 The receiver 10133a receives the information from electronic device A (step S10181) and forwards it to electronic device B (step S10182). When electronic device B receives the information (step S10196), it connects to electronic device A in accordance with the received information (step S10197). Electronic device B then determines whether the connection to electronic device A was established (step S10198) and notifies the receiver 10133a of the result (step S10199 or step S10200).
 If electronic device A is connected to electronic device B within a predetermined time (Y in step S10191), it notifies the receiver 10133a of the successful connection via electronic device B (step S10192); if not (N in step S10191), it notifies the receiver 10133a of the connection failure by visible light communication (step S10193). Electronic device A also informs the user of the success or failure of the connection, for example by a screen display, a light-emission state, or sound. In this way electronic device A (the transmitter 10133c) can be connected to electronic device B (the electronic device 10133e) without requiring complicated input from the user. Note that the air conditioner 10133b and the transmitter 10133c shown in FIG. 83 may be formed as a single unit, and likewise the communication device 10133d and the electronic device 10133e may be formed as a single unit.
 Next, transmission of appropriate imaging information will be described.
 FIG. 85 is a diagram for describing an application example of the transmission/reception system in this embodiment.
 This transmission/reception system includes a receiver 10135a configured, for example, as a digital still camera or digital video camera, and a transmitter 10135b configured, for example, as a luminaire.
 FIG. 86 is a flowchart showing the processing operation of the transmission/reception system in this embodiment.
 First, the receiver 10135a sends an imaging-information transmission command to the transmitter 10135b (step S10211). The transmitter 10135b transmits imaging information (step S10222) when it receives the imaging-information transmission command, when its imaging-information transmission button is pressed, when its imaging-information transmission switch is on, or when its power is turned on (Y in step S10221). The imaging-information transmission command is a command that causes the imaging information to be transmitted; the imaging information indicates, for example, the color temperature, spectral distribution, illuminance, or light distribution of the illumination. The transmitter 10135b may transmit the imaging information either directly or indirectly, as described above. When transmitting indirectly, the transmitter 10135b transmits an ID associated with the imaging information, and the receiver 10135a that receives the ID downloads the imaging information associated with that ID, for example from a server. At this time, the transmitter 10135b may also transmit the means by which a transmission stop command can be sent to it (for example, the radio, infrared, or sound-wave frequency on which the stop command is carried, or the SSID, password, or IP address for connecting to the transmitter itself).
 When the receiver 10135a receives the imaging information (step S10212), it transmits a transmission stop command to the transmitter 10135b (step S10213). When the transmitter 10135b receives the transmission stop command from the receiver 10135a (Y in step S10223), it stops transmitting the imaging information and emits light uniformly (step S10224).
 The receiver 10135a then sets its imaging parameters in accordance with the imaging information received in step S10212 (step S10214), or notifies the user of the imaging information. The imaging parameters are, for example, white balance, exposure time, focal length, sensitivity, or scene mode. This makes it possible to capture images with settings optimized for the illumination. Next, after the transmission of imaging information from the transmitter 10135b has stopped (Y in step S10215), the receiver 10135a captures an image (step S10216). Imaging can thus be performed without the change in subject brightness caused by signal transmission. After step S10216, the receiver 10135a may transmit to the transmitter 10135b a transmission start command prompting it to resume transmitting the imaging information (step S10217).
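 As one way to picture step S10214, the received imaging information can be mapped directly onto camera parameters. The sketch below assumes a simple dictionary of imaging information and a hypothetical CameraSettings structure; the field names and the mapping rule are illustrative assumptions, not part of the disclosure.

```python
# Illustrative mapping from received imaging information (S10212) to
# imaging parameters (S10214). Field names and rules are assumptions.
from dataclasses import dataclass

@dataclass
class CameraSettings:
    white_balance_kelvin: int
    exposure_time_s: float
    iso: int

def settings_from_imaging_info(info: dict) -> CameraSettings:
    # Use the lamp's reported color temperature directly as the white balance.
    wb = int(info.get("color_temperature_k", 5500))
    # Pick a shorter exposure / lower ISO for brighter illuminance (rule assumed).
    lux = info.get("illuminance_lux", 300)
    exposure = 1 / 250 if lux >= 500 else 1 / 60
    iso = 100 if lux >= 500 else 400
    return CameraSettings(wb, exposure, iso)

received = {"color_temperature_k": 3000, "illuminance_lux": 120}  # from the luminaire
print(settings_from_imaging_info(received))
```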
 Next, display of the state of charge will be described.
 FIG. 87 is a diagram for describing an application example of the transmitter in this embodiment.
 The transmitter 10137b, configured for example as a charger, includes a light emitting unit and transmits from it a visible light signal indicating the state of charge of the battery. The state of charge of the battery can thus be reported without providing an expensive display device. When a small LED is used as the light emitting unit, the visible light signal can only be received by imaging the LED from close up. In the transmitter 10137c, which has a protrusion near its LED, the protrusion gets in the way and makes close-up imaging of the LED difficult; the visible light signal from the transmitter 10137b, which has no protrusion near its LED, can therefore be received more easily than that from the transmitter 10137c.
 (Embodiment 11)
 In this embodiment, application examples are described that use a receiver such as the smartphone of the above embodiments together with a transmitter that transmits information as a blinking pattern of an LED or an organic EL element.
 First, transmission in demo mode and at the time of a failure will be described.
 FIG. 88 is a diagram for describing an example of the operation of the transmitter in this embodiment.
 When an error has occurred, the transmitter transmits a signal indicating that an error has occurred, or a signal corresponding to an error code, and can thereby inform the receiver that an error has occurred and what the error is. By presenting an appropriate response matching the error, the receiver can help repair the error or report the error content appropriately to a service center.
 When the transmitter is in demo mode, it transmits a demo code. For example, when a transmitter that is a product is being demonstrated in a store, a visitor can receive the demo code and obtain the product description associated with it. Whether the transmitter is in demo mode can be judged from, for example, whether its operation setting is set to demo mode, whether a store-use CAS card is inserted, whether no CAS card is inserted, or whether no recording medium is inserted.
 Next, signal transmission from a remote controller will be described.
 FIG. 89 is a diagram for describing an example of the operation of the transmitter in this embodiment.
 For example, when a transmitter configured as an air-conditioner remote controller receives information from the main unit, the transmitter retransmits that information, so that the receiver can obtain information about a distant main unit from a nearby transmitter. The receiver can also receive information from a main unit located where visible light communication is impossible, for example on the far side of a network.
 Next, processing for transmitting only when the transmitter is in a bright place will be described.
 FIG. 90 is a diagram for describing an example of the operation of the transmitter in this embodiment.
 The transmitter transmits while the ambient brightness is at or above a certain level and stops transmitting when it falls below that level. For example, a transmitter configured as an advertisement in a train can thereby stop operating automatically when the car enters the depot, suppressing battery consumption.
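 The brightness gate amounts to a simple threshold test. The following sketch, with a fabricated threshold and sensor readings, illustrates the idea only; the values are not taken from the disclosure.

```python
# Illustrative brightness gate: transmit only while the ambient light level is
# at or above a threshold. Threshold and readings are made up for the example.
THRESHOLD_LUX = 50

def should_transmit(ambient_lux: float) -> bool:
    return ambient_lux >= THRESHOLD_LUX

for lux in [300, 120, 40, 5, 80]:   # e.g. a train leaving daylight and entering the depot
    state = "transmit" if should_transmit(lux) else "stop (save battery)"
    print(f"{lux:>4} lx -> {state}")
```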
 Next, content delivery matched to the transmitter's display (changing the association and scheduling) will be described.
 FIG. 91 is a diagram for describing an example of the operation of the transmitter in this embodiment.
 The transmitter associates with its transmission ID the content it wants the receiver to obtain, in step with the display timing of the content it is showing. Each time the displayed content changes, the changed association is registered with the server.
 When the display timing of the displayed content is known in advance, the transmitter configures the server so that different content is handed to the receiver in step with the times at which the displayed content changes. When a request for the content associated with the transmission ID arrives from the receiver, the server sends the receiver the content that matches the configured schedule.
 In this way, when a transmitter configured, for example, as digital signage changes its displayed content one after another, the receiver can obtain content that matches what the transmitter is currently displaying; a schedule lookup of this kind is sketched below.
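 The following is a minimal sketch of such a server-side schedule lookup, assuming a fabricated transmission ID, time slots, and content names; it illustrates the idea of returning different content for the same ID according to the registered schedule and is not the disclosed implementation.

```python
# Illustrative server-side lookup: the content returned for a transmission ID
# depends on the schedule registered by the transmitter. All values fabricated.
from datetime import datetime, time

SCHEDULE = {
    "sign_001": [                      # transmission ID of the signage
        (time(0, 0),  "breakfast_ad"),
        (time(11, 0), "lunch_menu"),
        (time(17, 0), "dinner_menu"),
    ],
}

def content_for(tx_id: str, now: datetime) -> str:
    slots = SCHEDULE[tx_id]
    chosen = slots[0][1]
    for start, content in slots:       # pick the latest slot that has started
        if now.time() >= start:
            chosen = content
    return chosen

print(content_for("sign_001", datetime(2017, 11, 7, 12, 30)))   # -> lunch_menu
```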
 Next, content delivery matched to the transmitter's display (synchronization by time of day) will be described.
 FIG. 92 is a diagram for describing an example of the operation of the transmitter in this embodiment.
 The server is set up in advance so that, for a content-acquisition request associated with a given ID, different content is returned depending on the time of day.
 The transmitter synchronizes its clock with the server and displays its content with the timing adjusted so that a given portion is shown at a given time.
 In this way, when a transmitter configured, for example, as digital signage changes its displayed content one after another, the receiver can obtain content that matches what the transmitter is currently displaying.
 Next, content delivery matched to the transmitter's display (transmission of the display time) will be described.
 FIG. 93 is a diagram for describing an example of the operation of the transmitter and the receiver in this embodiment.
 In addition to its own ID, the transmitter transmits the display time of the content currently being shown. The content display time is information that identifies the content currently being displayed, and can be expressed, for example, as the time elapsed since the start of the content.
 The receiver obtains the content associated with the received ID from the server and displays it in accordance with the received display time. In this way, when a transmitter configured, for example, as digital signage changes its displayed content one after another, the receiver can obtain content that matches what the transmitter is displaying.
 The receiver also changes the content it displays as time passes. As a result, content that matches the displayed content continues to be shown without the receiver having to receive the signal again when the transmitter's displayed content changes; a sketch of this follows.
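 The following is a minimal sketch of receiver-side playback kept aligned with the received (ID, elapsed time) pair, assuming a fabricated content ID, duration, and a stand-in rendering loop; it is an illustration of the idea, not the disclosed implementation.

```python
# Illustrative receiver-side synchronization using the received display time:
# playback stays aligned with the signage without re-receiving the signal.
import time

def play_synchronized(content_id: str, received_elapsed_s: float, duration_s: float):
    received_at = time.monotonic()

    def current_position() -> float:
        # Position within the signage content right now, advancing with local time.
        return (received_elapsed_s + (time.monotonic() - received_at)) % duration_s

    for _ in range(3):                       # stand-in for a rendering loop
        print(f"{content_id}: show frame at t={current_position():.2f} s")
        time.sleep(0.1)

play_synchronized("promo_video_42", received_elapsed_s=12.5, duration_s=30.0)
```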
 Next, uploading data in accordance with the user's consent status will be described.
 FIG. 94 is a diagram for describing an example of the operation of the receiver in this embodiment.
 If the user has registered an account, the receiver sends to the server, together with the received ID, the information for which the user granted access permission at account registration or elsewhere (such as the receiver's location, phone number, and ID, the installed applications, and the user's age, sex, occupation, and preferences).
 If no account has been registered, the receiver likewise sends this information to the server when the user has permitted such uploads; if the user has not permitted them, the receiver sends only the received ID to the server.
 The user can thus receive content tailored to the circumstances at the time of reception and to his or her own personality, and the server can use the user information it obtains for data analysis.
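 One way to picture this consent-dependent upload is to filter the profile fields against the granted permissions before attaching them to the received ID. The field names and values below are fabricated for illustration.

```python
# Illustrative payload construction: only data the user has permitted is sent
# along with the received ID. Field names and values are assumptions.
def build_upload(received_id: str, profile: dict, permissions: set) -> dict:
    payload = {"id": received_id}
    for key, value in profile.items():
        if key in permissions:          # include only permitted fields
            payload[key] = value
    return payload

profile = {"location": "35.68,139.76", "age": 34, "gender": "F"}
print(build_upload("ID1234", profile, permissions={"location", "age"}))
print(build_upload("ID1234", profile, permissions=set()))   # no consent: ID only
```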
 Next, launching a content playback application will be described.
 FIG. 95 is a diagram for describing an example of the operation of the receiver in this embodiment.
 The receiver obtains the content associated with the received ID from the server. If the currently running application can handle the obtained content (can display or play it), that application displays or plays it. If it cannot, the receiver checks whether an application that can handle the content is installed; if one is installed, it launches that application and has it display or play the content. If none is installed, the receiver installs one automatically, shows a prompt encouraging installation, or displays a download screen, and displays or plays the obtained content after installation.
 The obtained content can thus be handled appropriately (displayed, played, and so on); a dispatch of this kind is sketched below.
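 The following is a minimal sketch of that dispatch order (running application, then installed application, then installation), assuming a fabricated content-type registry; names and types are illustrative only.

```python
# Illustrative dispatch of obtained content: running app -> installed app ->
# install. The registry and content types are fabricated for the example.
INSTALLED = {"video/mp4": "VideoPlayerApp"}      # content type -> installed app

def handle_content(content_type: str, running_app_types: set) -> str:
    if content_type in running_app_types:
        return "display in the running app"
    app = INSTALLED.get(content_type)
    if app:
        return f"launch {app} and display the content"
    return "install a suitable app (or prompt / show a download screen), then display"

print(handle_content("video/mp4", running_app_types={"text/html"}))
print(handle_content("model/ar", running_app_types={"text/html"}))
```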
 Next, launching a designated application will be described.
 FIG. 96 is a diagram for describing an example of the operation of the receiver in this embodiment.
 The receiver obtains from the server the content associated with the received ID and information designating the application to be launched (an application ID). If the currently running application is the designated application, it displays or plays the obtained content. If the designated application is installed in the receiver, the receiver launches it and has it display or play the content. If it is not installed, the receiver installs it automatically, shows a prompt encouraging installation, or displays a download screen, and displays or plays the obtained content after installation.
 The receiver may also obtain only the application ID from the server and launch the designated application.
 The receiver may apply designated settings, and may set designated parameters before launching the designated application.
 Next, selecting between streaming reception and normal reception will be described.
 FIG. 97 is a diagram for describing an example of the operation of the receiver in this embodiment.
 When the value at a predetermined address in the received data is a predetermined value, or when the received data contains a predetermined identifier, the receiver judges that the signal is being delivered as a stream and receives it using the streaming reception method. Otherwise, it receives the signal using the normal reception method.
 Reception is thus possible whether the signal is transmitted by streaming delivery or by normal delivery.
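 The check described above can be pictured as a single byte test before choosing the reception path. In the sketch below, the address, flag value, and packets are assumptions made for illustration; the disclosure does not fix these values.

```python
# Illustrative check: if the byte at a predetermined address holds a
# predetermined value, treat the signal as a stream. Values are assumptions.
STREAM_FLAG_ADDRESS = 0          # predetermined address (assumed)
STREAM_FLAG_VALUE = 0xA5         # predetermined value (assumed)

def is_streaming(packet: bytes) -> bool:
    return len(packet) > STREAM_FLAG_ADDRESS and packet[STREAM_FLAG_ADDRESS] == STREAM_FLAG_VALUE

for pkt in (bytes([0xA5, 0x01, 0x02]), bytes([0x10, 0x01, 0x02])):
    mode = "streaming reception" if is_streaming(pkt) else "normal reception"
    print(pkt.hex(), "->", mode)
```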
 Next, private data will be described.
 FIG. 98 is a diagram for describing an example of the operation of the receiver in this embodiment.
 When the value of the received ID is within a predetermined range, or when it contains a predetermined identifier, the receiver consults a table held inside the application; if the received ID exists in the table, the receiver obtains the content designated in that table. Otherwise, the receiver obtains the content designated by the received ID from the server.
 Content can thus be received without registering it with the server. In addition, because no communication with the server takes place, a quick response is obtained.
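 A minimal sketch of this private-ID handling follows; the reserved range, table contents, and ID values are assumptions introduced for illustration.

```python
# Illustrative private-ID handling: IDs inside a reserved range are resolved
# from a table bundled with the application; all other IDs go to the server.
PRIVATE_RANGE = range(0xF000, 0xFFFF + 1)
LOCAL_TABLE = {0xF001: "local coupon page", 0xF002: "in-app manual"}

def resolve(received_id: int) -> str:
    if received_id in PRIVATE_RANGE and received_id in LOCAL_TABLE:
        return LOCAL_TABLE[received_id]          # no server round-trip: fast response
    return f"fetch content for {received_id:#x} from the server"

print(resolve(0xF001))   # served from the in-app table
print(resolve(0x1234))   # served from the server
```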
 Next, setting the exposure time in accordance with the frequency will be described.
 FIG. 99 is a diagram for describing an example of the operation of the receiver in this embodiment.
 The receiver detects the signal and recognizes its modulation frequency. The receiver then sets the exposure time in accordance with the period of that modulation frequency (the modulation period). For example, making the exposure time roughly equal to the modulation period makes the signal easier to receive. Alternatively, setting the exposure time to an integer multiple of the modulation period, or to a value close to one (within roughly ±30%), makes the signal easier to receive by convolutional decoding.
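 As a worked example of this rule, the sketch below computes candidate exposure times from a detected modulation frequency: each candidate is an integer multiple of the modulation period with an acceptance window of about ±30%. The 9.6 kHz example frequency is an assumption, not a value from the disclosure.

```python
# Illustrative exposure-time choice from a detected modulation frequency.
def exposure_candidates(modulation_hz: float, multiples=(1, 2, 3)):
    period = 1.0 / modulation_hz
    out = []
    for m in multiples:
        nominal = m * period
        out.append((m, nominal, 0.7 * nominal, 1.3 * nominal))   # +/-30% window
    return out

for m, nominal, low, high in exposure_candidates(9600.0):
    print(f"{m}x period: {nominal*1e6:6.1f} us "
          f"(acceptable {low*1e6:5.1f}-{high*1e6:5.1f} us)")
```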
 Next, setting optimal parameters for the transmitter will be described.
 FIG. 100 is a diagram for describing an example of the operation of the receiver in this embodiment.
 In addition to the data received from the transmitter, the receiver sends the server its current location and information related to the user (such as address, sex, age, and preferences). In accordance with the received information, the server sends the receiver the parameters with which the transmitter will operate optimally. The receiver sets the received parameters in the transmitter if it is able to; if it cannot, it displays the parameters and prompts the user to set them on the transmitter.
 This makes it possible, for example, to operate a washing machine in a way optimized for the water characteristics of the area where the transmitter is used, or to operate a rice cooker so that it cooks rice by the method best suited to the type of rice the user uses.
 Next, an identifier indicating the structure of the data will be described.
 FIG. 101 is a diagram for describing an example of the structure of transmission data in this embodiment.
 The transmitted information contains an identifier, and from its value the receiver can determine the structure of the part that follows. For example, it can determine the data length, the type and length of the error-correcting code, and the points at which the data is divided.
 This allows the transmitter to change the type and length of the data body and of the error-correcting code according to the characteristics of the transmitter and of the communication channel. In addition, by transmitting a content ID together with the transmitter's ID, the transmitter can cause the receiver to obtain an ID corresponding to the content ID.
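 As an illustration of how such a structure identifier can steer parsing, the sketch below uses a fabricated format table in which the first byte selects the payload length and error-correction type; the identifier values and layouts are assumptions and do not represent the disclosed packet format.

```python
# Illustrative parse of a packet whose first byte is a structure identifier
# telling the receiver the payload length and error-correction type that follow.
FORMATS = {
    0x01: {"payload_len": 4, "ecc": ("CRC-8", 1)},
    0x02: {"payload_len": 8, "ecc": ("Reed-Solomon", 4)},
}

def parse(packet: bytes):
    fmt = FORMATS[packet[0]]                       # structure identifier
    n = fmt["payload_len"]
    payload = packet[1:1 + n]
    ecc_name, ecc_len = fmt["ecc"]
    ecc = packet[1 + n:1 + n + ecc_len]
    return {"payload": payload.hex(), "ecc_type": ecc_name, "ecc": ecc.hex()}

print(parse(bytes([0x01, 0xDE, 0xAD, 0xBE, 0xEF, 0x5A])))
```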
 (Embodiment 12)
 In this embodiment, application examples are described that use a receiver such as the smartphone of the above embodiments together with a transmitter that transmits information as a blinking pattern of an LED or an organic EL element.
 FIG. 102 is a diagram for describing the operation of the receiver in this embodiment.
 When performing continuous shooting with its image sensor, the receiver 1210a in this embodiment switches the shutter speed between high and low, for example on a per-frame basis. Based on the frame obtained by the shooting, the receiver 1210a further switches the processing applied to the frame between barcode recognition processing and visible light recognition processing. Here, barcode recognition processing is processing that decodes a barcode appearing in a frame obtained at the low shutter speed, and visible light recognition processing is processing that decodes the bright-line pattern described above appearing in a frame obtained at the high shutter speed.
 Such a receiver 1210a includes a video input unit 1211, a barcode/visible-light identification unit 1212, a barcode recognition unit 1212a, a visible light recognition unit 1212b, and an output unit 1213.
 The video input unit 1211 includes the image sensor and switches the shutter speed used for shooting with it. That is, the video input unit 1211 alternates the shutter speed between low and high, for example on a per-frame basis; more specifically, it switches the shutter speed to high for odd-numbered frames and to low for even-numbered frames. Shooting at the low shutter speed is shooting in the normal shooting mode described above, and shooting at the high shutter speed is shooting in the visible light communication mode described above. When the shutter speed is low, the exposure time of each exposure line in the image sensor is long, and a normal captured image showing the subject is obtained as a frame; when the shutter speed is high, the exposure time of each exposure line is short, and a visible light communication image showing the bright lines described above is obtained as a frame.
 The barcode/visible-light identification unit 1212 switches the processing applied to an image obtained by the video input unit 1211 by determining whether a barcode or bright lines appear in it. For example, if a barcode appears in a frame obtained at the low shutter speed, the barcode/visible-light identification unit 1212 has the barcode recognition unit 1212a process that image; if bright lines appear in an image obtained at the high shutter speed, it has the visible light recognition unit 1212b process that image.
 The barcode recognition unit 1212a decodes a barcode appearing in a frame obtained at the low shutter speed. By this decoding it obtains the barcode data (for example, a barcode identifier) and outputs the barcode identifier to the output unit 1213. The barcode may be a one-dimensional code or a two-dimensional code (for example, a QR code (registered trademark)).
 The visible light recognition unit 1212b decodes the bright-line pattern appearing in a frame obtained at the high shutter speed. By this decoding it obtains the visible light data (for example, a visible light identifier) and outputs the visible light identifier to the output unit 1213. The visible light data is the visible light signal described above.
 The output unit 1213 displays only the frames obtained at the low shutter speed. Accordingly, when the subject being shot by the video input unit 1211 is a barcode, the output unit 1213 displays the barcode; when the subject is, for example, digital signage transmitting a visible light signal, the output unit 1213 displays the image of the digital signage without displaying the bright-line pattern. When the output unit 1213 obtains a barcode identifier, it obtains the information associated with that barcode identifier, for example from a server, and displays it; likewise, when it obtains a visible light identifier, it obtains the information associated with that visible light identifier, for example from a server, and displays it.
 In other words, the receiver 1210a, which is a terminal device, includes an image sensor and performs continuous shooting with it while alternately switching the shutter speed of the image sensor between a first speed and a second speed higher than the first speed. (a) When the subject being shot by the image sensor is a barcode, the receiver 1210a obtains, by shooting while the shutter speed is the first speed, an image in which the barcode appears, and obtains a barcode identifier by decoding the barcode in that image. (b) When the subject being shot by the image sensor is a light source (for example, digital signage), the receiver 1210a obtains, by shooting while the shutter speed is the second speed, a bright-line image, that is, an image containing a bright line corresponding to each of the exposure lines in the image sensor, and obtains the visible light signal as a visible light identifier by decoding the pattern of bright lines contained in the obtained bright-line image. The receiver 1210a also displays the images obtained by shooting while the shutter speed is the first speed.
 In the receiver 1210a of this embodiment, switching between barcode recognition processing and visible light recognition processing in this way makes it possible both to decode barcodes and to receive visible light signals, and the switching also suppresses power consumption.
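 The per-frame alternation and dispatch described above can be sketched as a simple loop: odd-numbered frames (high shutter speed) are checked for bright-line patterns, even-numbered frames (low shutter speed) are displayed and checked for barcodes. The decoder and display callables below are hypothetical stand-ins, not the units disclosed for the receiver 1210a.

```python
# Illustrative per-frame dispatch corresponding to the receiver 1210a.
def process_frames(frames, decode_barcode, decode_bright_lines, display):
    results = []
    for number, frame in enumerate(frames, start=1):
        if number % 2 == 1:                     # odd frame: high shutter speed
            vlc = decode_bright_lines(frame)    # visible light recognition processing
            if vlc is not None:
                results.append(("visible_light", vlc))
        else:                                   # even frame: low shutter speed
            display(frame)                      # only these frames are shown
            bc = decode_barcode(frame)          # barcode recognition processing
            if bc is not None:
                results.append(("barcode", bc))
    return results

# Trivial stubs standing in for real decoders and the display.
frames = ["f1", "f2", "f3", "f4"]
print(process_frames(frames,
                     decode_barcode=lambda f: "BC123" if f == "f2" else None,
                     decode_bright_lines=lambda f: "VLC456" if f == "f1" else None,
                     display=lambda f: None))
```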
 The receiver in this embodiment may perform image recognition processing instead of barcode recognition processing, simultaneously with the visible light processing.
 FIG. 103A is a diagram for describing another operation of the receiver in this embodiment.
 When performing continuous shooting with its image sensor, the receiver 1210b in this embodiment switches the shutter speed between high and low, for example on a per-frame basis. Further, the receiver 1210b performs image recognition processing and the visible light recognition processing described above simultaneously on the images (frames) obtained by the shooting. The image recognition processing is processing that recognizes the subject appearing in a frame obtained at the low shutter speed.
 Such a receiver 1210b includes a video input unit 1211, an image recognition unit 1212c, a visible light recognition unit 1212b, and an output unit 1215.
 The video input unit 1211 includes the image sensor and switches the shutter speed used for shooting with it. That is, it alternates the shutter speed between low and high, for example on a per-frame basis; more specifically, it switches the shutter speed to high for odd-numbered frames and to low for even-numbered frames. Shooting at the low shutter speed is shooting in the normal shooting mode described above, and shooting at the high shutter speed is shooting in the visible light communication mode described above. When the shutter speed is low, the exposure time of each exposure line in the image sensor is long, and a normal captured image showing the subject is obtained as a frame; when the shutter speed is high, the exposure time of each exposure line is short, and a visible light communication image showing the bright lines described above is obtained as a frame.
 The image recognition unit 1212c recognizes the subject appearing in a frame obtained at the low shutter speed and identifies the position of that subject within the frame. From the recognition result, the image recognition unit 1212c determines whether the subject is an AR (Augmented Reality) target (hereinafter, an AR object). When it determines that the subject is an AR object, the image recognition unit 1212c generates image recognition data, that is, data for displaying information about the subject (for example, the position of the subject and an AR marker), and outputs it to the output unit 1215.
 Like the output unit 1213 described above, the output unit 1215 displays only the frames obtained at the low shutter speed. Accordingly, when the subject being shot by the video input unit 1211 is, for example, digital signage transmitting a visible light signal, the output unit 1215 displays the image of the digital signage without displaying the bright-line pattern. Further, when the output unit 1215 obtains the image recognition data from the image recognition unit 1212c, it superimposes on the frame a white frame-shaped indicator surrounding the subject, based on the position of the subject within the frame indicated by the image recognition data.
 FIG. 103B is a diagram showing an example of the indicator displayed by the output unit 1215.
 The output unit 1215 superimposes on the frame a white frame-shaped indicator 1215b surrounding the image 1215a of a subject configured, for example, as digital signage. That is, the output unit 1215 displays the indicator 1215b to mark the subject recognized by image recognition. When the output unit 1215 then obtains the visible light identifier from the visible light recognition unit 1212b, it changes the color of the indicator 1215b, for example from white to red.
 FIG. 103C is a diagram showing a display example of AR.
 The output unit 1215 further obtains, as related information, the information about the subject that is associated with the visible light identifier, for example from a server. The output unit 1215 writes the related information into the AR marker 1215c indicated by the image recognition data and displays the AR marker 1215c containing the related information in association with the image 1215a of the subject in the frame.
 In the receiver 1210b of this embodiment, performing image recognition processing and visible light recognition processing simultaneously in this way makes it possible to realize AR using visible light communication. Note that the receiver 1210a shown in FIG. 102 may also display the indicator 1215b shown in FIG. 103B in the same way as the receiver 1210b. In that case, when a barcode is recognized in a frame obtained at the low shutter speed, the receiver 1210a displays a white frame-shaped indicator 1215b surrounding the barcode, and when the barcode has been decoded, it changes the color of the indicator 1215b from white to red. Similarly, when a bright-line pattern is recognized in a frame obtained at the high shutter speed, the receiver 1210a identifies the region of the low-speed frame corresponding to the region where the bright-line pattern appears; for example, when digital signage is transmitting a visible light signal, the image of the digital signage within the low-speed frame is identified. Here, a low-speed frame is a frame obtained by shooting at the low shutter speed. The receiver 1210a then superimposes on the low-speed frame a white frame-shaped indicator 1215b surrounding the identified region (for example, the image of the digital signage), and when the bright-line pattern has been decoded, it changes the color of the indicator 1215b from white to red.
 FIG. 104A is a diagram for describing an example of the transmitter in this embodiment.
 The transmitter 1220a in this embodiment transmits a visible light signal in synchronization with the transmitter 1230; that is, the transmitter 1220a transmits the same visible light signal at the same timing as the transmitter 1230 transmits its visible light signal. The transmitter 1230 includes a light emitting unit 1231 and transmits its visible light signal by changing the luminance of the light emitting unit 1231.
 Such a transmitter 1220a includes a light receiving unit 1221, a signal analysis unit 1222, a transmission clock adjustment unit 1223a, and a light emitting unit 1224. The light emitting unit 1224 transmits, by luminance change, the same visible light signal as that transmitted from the transmitter 1230. The light receiving unit 1221 receives the visible light signal from the transmitter 1230 by receiving the visible light emitted by the transmitter 1230. The signal analysis unit 1222 analyzes the visible light signal received by the light receiving unit 1221 and passes the analysis result to the transmission clock adjustment unit 1223a. Based on the analysis result, the transmission clock adjustment unit 1223a adjusts the timing of the visible light signal transmitted from the light emitting unit 1224; that is, it adjusts the timing of the luminance changes of the light emitting unit 1224 so that the timing at which the visible light signal is transmitted from the light emitting unit 1231 of the transmitter 1230 coincides with the timing at which the visible light signal is transmitted from the light emitting unit 1224.
 In this way, the waveform of the visible light signal transmitted by the transmitter 1220a and the waveform of the visible light signal transmitted by the transmitter 1230 can be aligned in time.
 FIG. 104B is a diagram for describing another example of the transmitter in this embodiment.
 Like the transmitter 1220a, the transmitter 1220b in this embodiment transmits a visible light signal in synchronization with the transmitter 1230; that is, the transmitter 1220b transmits the same visible light signal at the same timing as the transmitter 1230 transmits its visible light signal.
 Such a transmitter 1220b includes a first light receiving unit 1221a, a second light receiving unit 1221b, a comparison unit 1225, a transmission clock adjustment unit 1223b, and a light emitting unit 1224.
 Like the light receiving unit 1221, the first light receiving unit 1221a receives the visible light signal from the transmitter 1230 by receiving the visible light emitted by the transmitter 1230. The second light receiving unit 1221b receives the visible light emitted by the light emitting unit 1224. The comparison unit 1225 compares the first timing, at which visible light was received by the first light receiving unit 1221a, with the second timing, at which visible light was received by the second light receiving unit 1221b, and outputs the difference between them (that is, the delay time) to the transmission clock adjustment unit 1223b. The transmission clock adjustment unit 1223b adjusts the timing of the visible light signal transmitted from the light emitting unit 1224 so that this delay time is reduced.
 In this way, the waveform of the visible light signal transmitted by the transmitter 1220b and the waveform of the visible light signal transmitted by the transmitter 1230 can be aligned in time with greater accuracy.
 In the examples shown in FIGS. 104A and 104B, the two transmitters transmit the same visible light signal, but they may instead transmit different visible light signals. That is, when the two transmitters transmit the same visible light signal, they transmit it in synchronization as described above. When they transmit different visible light signals, only one of the two transmitters transmits its visible light signal while the other lights or extinguishes uniformly; afterwards, the first transmitter lights or extinguishes uniformly while only the other transmits its visible light signal. The two transmitters may also transmit different visible light signals simultaneously.
 FIG. 105A is a diagram for describing an example of synchronized transmission by a plurality of transmitters in this embodiment.
 As shown in FIG. 105A, the plurality of transmitters 1220 in this embodiment are arranged, for example, in a line. Each of these transmitters 1220 has the same configuration as the transmitter 1220a shown in FIG. 104A or the transmitter 1220b shown in FIG. 104B. Each of the transmitters 1220 transmits its visible light signal in synchronization with one of the two transmitters 1220 adjacent to it.
 This allows many transmitters to transmit visible light signals in synchronization.
 FIG. 105B is a diagram for describing another example of synchronized transmission by a plurality of transmitters in this embodiment.
 One of the plurality of transmitters 1220 in this embodiment serves as the reference for synchronizing the visible light signal, and the remaining transmitters 1220 transmit their visible light signals so as to match that reference.
 This allows many transmitters to transmit visible light signals in more accurate synchronization.
 FIG. 106 is a diagram for describing another example of synchronized transmission by a plurality of transmitters in this embodiment.
 Each of the plurality of transmitters 1240 in this embodiment receives a synchronization signal and transmits a visible light signal in response to it. The visible light signals are thereby transmitted from the transmitters 1240 in synchronization.
 Specifically, each of the transmitters 1240 includes a control unit 1241, a synchronization control unit 1242, a photocoupler 1243, an LED drive circuit 1244, an LED 1245, and a photodiode 1246.
 The control unit 1241 receives the synchronization signal and outputs it to the synchronization control unit 1242.
 The LED 1245 is a light source that emits visible light and blinks (that is, changes in luminance) under the control of the LED drive circuit 1244. The visible light signal is thereby transmitted from the LED 1245 to the outside of the transmitter 1240.
 The photocoupler 1243 conveys signals between the synchronization control unit 1242 and the LED drive circuit 1244 while keeping them electrically isolated from each other. Specifically, the photocoupler 1243 conveys the transmission start signal, described later, sent from the synchronization control unit 1242 to the LED drive circuit 1244.
 When the LED drive circuit 1244 receives the transmission start signal from the synchronization control unit 1242 via the photocoupler 1243, it causes the LED 1245 to start transmitting the visible light signal at the timing at which the transmission start signal was received.
 The photodiode 1246 detects the visible light emitted from the LED 1245 and outputs to the synchronization control unit 1242 a detection signal indicating that visible light has been detected.
 When the synchronization control unit 1242 receives the synchronization signal from the control unit 1241, it sends the transmission start signal to the LED drive circuit 1244 via the photocoupler 1243; transmission of the visible light signal starts as a result of this transmission start signal being sent. When the synchronization control unit 1242 then receives the detection signal from the photodiode 1246 as a result of the visible light signal being transmitted, it calculates the delay time, that is, the difference between the timing at which it received the detection signal and the timing at which it received the synchronization signal from the control unit 1241. When the synchronization control unit 1242 receives the next synchronization signal from the control unit 1241, it adjusts the timing at which it sends the next transmission start signal based on the calculated delay time; that is, it adjusts the timing so that the delay relative to the next synchronization signal becomes a predetermined set delay time, and sends the next transmission start signal at that adjusted timing.
 図107は、送信機1240における信号処理を説明するための図である。 FIG. 107 is a diagram for explaining signal processing in the transmitter 1240.
 同期制御部1242は、同期信号を受信すると、所定のタイミングに遅延時間設定パルスが発生する遅延時間設定信号を生成する。なお、同期信号を受信するとは、具体的には同期パルスを受信することである。つまり、同期制御部1242は、同期パルスの立ち下がりから、上述の設定遅延時間だけ経過したタイミングに遅延時間設定パルスが立ち上がるように遅延時間設定信号を生成する。 When receiving the synchronization signal, the synchronization control unit 1242 generates a delay time setting signal that generates a delay time setting pulse at a predetermined timing. Note that receiving the synchronization signal specifically means receiving a synchronization pulse. That is, the synchronization control unit 1242 generates the delay time setting signal so that the delay time setting pulse rises at the timing when the set delay time has passed since the falling edge of the synchronization pulse.
 そして、同期制御部1242は、同期パルスの立ち下がりから、前回に得られた補正値Nだけ遅れたタイミングで送信開始信号を、フォトカプラ1243を介してLEDドライブ回路1244に送信する。その結果、LEDドライブ回路1244によってLED1245から可視光信号が送信される。ここで、同期制御部1242は、同期パルスの立ち下がりから、固有遅延時間と補正値Nとの和だけ遅れたタイミングで、フォトダイオード1246から検出信号を受信する。つまり、そのタイミングから可視光信号の送信が開始される。以下、そのタイミングを送信開始タイミングという。なお、上述の固有遅延時間は、フォトカプラ1243などの回路に起因する遅延時間であり、同期制御部1242が同期信号を受信してすぐに送信開始信号を送信しても発生する遅延時間である。 The synchronization control unit 1242 transmits a transmission start signal to the LED drive circuit 1244 via the photocoupler 1243 at a timing delayed by the correction value N obtained last time from the falling edge of the synchronization pulse. As a result, a visible light signal is transmitted from the LED 1245 by the LED drive circuit 1244. Here, the synchronization control unit 1242 receives the detection signal from the photodiode 1246 at a timing delayed by the sum of the intrinsic delay time and the correction value N from the falling edge of the synchronization pulse. That is, transmission of a visible light signal is started from that timing. Hereinafter, this timing is referred to as transmission start timing. Note that the above-described intrinsic delay time is a delay time caused by a circuit such as the photocoupler 1243, and is a delay time that occurs even when the synchronization control unit 1242 receives the synchronization signal and immediately transmits the transmission start signal. .
 同期制御部1242は、送信開始タイミングから遅延時間設定パルスの立ち上がりまでの時間差を、修正補正値Nとして特定する。そして、同期制御部1242は、補正値(N+1)を、補正値(N+1)=補正値N+修正補正値Nによって算出して保持しておく。これにより、同期制御部1242は、次の同期信号(同期パルス)を受信したときには、その同期パルスの立ち下がりから、補正値(N+1)だけ遅れたタイミングで送信開始信号をLEDドライブ回路1244に送信する。なお、修正補正値Nは正の値だけでなく負の値にも成り得る。 The synchronization control unit 1242 identifies the time difference from the transmission start timing to the rise of the delay time setting pulse as the correction correction value N. Then, the synchronization control unit 1242 calculates and holds the correction value (N + 1) by the correction value (N + 1) = the correction value N + the correction correction value N. Thus, when the synchronization control unit 1242 receives the next synchronization signal (synchronization pulse), it transmits a transmission start signal to the LED drive circuit 1244 at a timing delayed by the correction value (N + 1) from the falling edge of the synchronization pulse. To do. The correction correction value N can be a negative value as well as a positive value.
Accordingly, each of the plurality of transmitters 1240 transmits the visible light signal after the set delay time has elapsed from the reception of the synchronization signal (synchronization pulse), so the transmitters can transmit their visible light signals in accurate synchronization. That is, even if the intrinsic delay time caused by circuits such as the photocoupler 1243 varies among the plurality of transmitters 1240, the transmission of the visible light signals from the transmitters 1240 can be accurately synchronized without being affected by that variation.
Note that the LED drive circuit consumes a large amount of power and is therefore electrically isolated, by a photocoupler or the like, from the control circuit that handles the synchronization signal. When such a photocoupler is used, the variation in the intrinsic delay time described above makes it difficult to synchronize the transmission of visible light signals from a plurality of transmitters. In the plurality of transmitters 1240 according to this embodiment, however, the photodiode 1246 detects the light emission timing of the LED 1245, the synchronization control unit 1242 detects the delay time from the synchronization signal, and that delay time is adjusted to the preset delay time (the set delay time described above). As a result, even if the photocouplers provided in the transmitters, each configured for example as an LED luminaire, have individual variations, the plurality of LED luminaires can transmit visible light signals (for example, visible light IDs) in a highly accurate synchronized state.
Note that the LED illumination may be kept on or turned off outside the visible light signal transmission period. When it is kept on outside the visible light signal transmission period, the first falling edge of the visible light signal may be detected; when it is turned off outside the visible light signal transmission period, the first rising edge of the visible light signal may be detected.
In the above example, the transmitter 1240 transmits a visible light signal each time it receives a synchronization signal, but it may also transmit visible light signals without receiving a synchronization signal. That is, once the transmitter 1240 has transmitted a visible light signal in response to the reception of a synchronization signal, it may continue to transmit visible light signals sequentially without receiving further synchronization signals. Specifically, the transmitter 1240 may transmit the visible light signal two to several thousand times in succession for a single reception of the synchronization signal. The transmitter 1240 may also perform the synchronization-signal-triggered transmission of the visible light signal once every 100 milliseconds or once every few seconds.
When the transmission of the visible light signal in response to the synchronization signal is repeated, the set delay time described above may break the continuity of the light emission of the LED 1245; that is, a slightly long blanking period may occur. As a result, the blinking of the LED 1245 may become visible to people, causing so-called flicker. The transmitter 1240 may therefore perform the synchronization-signal-triggered transmission of the visible light signal at a repetition frequency of 60 Hz or higher. The blinking then occurs fast enough to be difficult for people to perceive, and the occurrence of flicker can be suppressed. Alternatively, the transmitter 1240 may perform the synchronization-signal-triggered transmission at a sufficiently long interval, for example once every few minutes. The blinking is then visible, but it is prevented from being perceived repeatedly and continuously, which reduces the discomfort that flicker causes.
(Preprocessing for the reception method)
FIG. 108 is a flowchart illustrating an example of the reception method in this embodiment. FIG. 109 is an explanatory diagram for describing an example of the reception method in this embodiment.
First, the receiver calculates the average of the pixel values of a plurality of pixels arranged in the direction parallel to the exposure lines (step S1211). By the central limit theorem, averaging the pixel values of N pixels reduces the expected amount of noise by a factor of N to the power of minus 1/2, which improves the signal-to-noise (S/N) ratio.
Next, the receiver keeps only the portions where the pixel values change in the same way in the vertical direction for all colors, and removes the pixel value changes in the portions where the colors change differently (step S1212). When the transmission signal (visible light signal) is expressed by the luminance of the light emitting unit of the transmitter, the luminance of the illumination or of the display backlight serving as the transmitter changes. In that case the pixel values of all colors change in the same direction, as in portion (b) of FIG. 109. In portions (a) and (c) of FIG. 109, the pixel values change differently for each color. In those portions the pixel values fluctuate because of reception noise or because of the picture shown on the display or signage, so removing these fluctuations improves the S/N ratio.
Next, the receiver obtains a luminance value (step S1213). Since luminance is little affected by color, the influence of the picture on the display or signage can be eliminated, and the S/N ratio can be improved.
Next, the receiver applies a low-pass filter to the luminance values (step S1214). In the reception method of this embodiment, a moving-average filter determined by the length of the exposure time has already been applied, so the high-frequency region contains almost no signal and is dominated by noise. The S/N ratio can therefore be improved by using a low-pass filter that cuts the high-frequency region. Because most of the signal components lie at frequencies up to the reciprocal of the exposure time, blocking frequencies above that value increases the improvement of the S/N ratio. When the frequency components contained in the signal are finite, the S/N ratio can be improved by blocking frequencies higher than those components. A filter without frequency ripple (such as a Butterworth filter) is suitable as the low-pass filter.
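A minimal sketch of this preprocessing chain (steps S1211 to S1214) is shown below in Python with NumPy and SciPy. The function names, the luminance weights, and the cutoff placed at the reciprocal of the exposure time follow the reasoning above rather than a specification in the patent, so they should be read as assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(image_rgb, exposure_time, line_rate):
    """image_rgb: H x W x 3 array whose rows correspond to exposure lines.
    exposure_time: exposure time per line in seconds.
    line_rate: number of exposure lines sampled per second."""
    # S1211: average along the direction parallel to the exposure lines
    # (noise drops roughly as N**-0.5 by the central limit theorem).
    per_line = image_rgb.mean(axis=1)                      # -> H x 3

    # S1212: keep only changes that have the same sign in all colors.
    diff = np.diff(per_line, axis=0)                       # change between adjacent lines
    same_direction = (np.sign(diff) == np.sign(diff[:, :1])).all(axis=1)
    diff[~same_direction] = 0.0                            # remove inconsistent changes
    cleaned = np.vstack([per_line[:1], per_line[:1] + np.cumsum(diff, axis=0)])

    # S1213: convert to a luminance value (ITU-R BT.601 weights as an example).
    luma = cleaned @ np.array([0.299, 0.587, 0.114])

    # S1214: Butterworth low-pass filter; cut above 1 / exposure_time,
    # where almost only noise remains.
    cutoff = min(1.0 / exposure_time, 0.45 * line_rate)    # stay below Nyquist
    b, a = butter(4, cutoff / (line_rate / 2))
    return filtfilt(b, a, luma)
```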
(Reception method using convolutional maximum-likelihood decoding)
FIG. 110 is a flowchart illustrating another example of the reception method in this embodiment. The reception method used when the exposure time is longer than the transmission period is described below with reference to this figure.
Reception can be performed most accurately when the exposure time is an integer multiple of the transmission period. Even when it is not an integer multiple, reception is possible as long as the exposure time is within a range of approximately (N ± 0.33) times the transmission period (N being an integer).
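As a simple illustration of this condition, the following sketch (a hypothetical helper, not part of the patent) checks whether a given exposure time falls within the (N ± 0.33)-times range of the transmission period.

```python
def exposure_time_usable(exposure_time, transmission_period, tolerance=0.33):
    """Return True if exposure_time is within (N +/- tolerance) * transmission_period
    for some positive integer N, the range in which decoding is described as possible."""
    ratio = exposure_time / transmission_period
    nearest = round(ratio)
    return nearest >= 1 and abs(ratio - nearest) <= tolerance

# Example: a 1/8000 s exposure against a 1/9600 s transmission period
# gives a ratio of 1.2, which is within 1 +/- 0.33, so it is usable.
print(exposure_time_usable(1 / 8000, 1 / 9600))  # True
```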
First, the receiver sets the transmission/reception offset to 0 (step S1221). The transmission/reception offset is a value for correcting the misalignment between the transmission timing and the reception timing. Since this misalignment is unknown, the receiver changes the candidate value of the transmission/reception offset little by little and adopts the value that is most consistent with the observations as the transmission/reception offset.
Next, the receiver determines whether the transmission/reception offset is less than the transmission period (step S1222). Because the reception period and the transmission period are not synchronized, reception values aligned with the transmission period are not necessarily obtained. Therefore, when the receiver determines in step S1222 that the offset is less than the transmission period (Y in step S1222), it uses nearby reception values to calculate, by interpolation, a reception value (for example a pixel value) aligned with each transmission period (step S1223). Linear interpolation, nearest-neighbor interpolation, spline interpolation, or the like can be used as the interpolation method. The receiver then obtains the differences between the reception values obtained for each transmission period (step S1224).
The receiver adds a predetermined value to the transmission/reception offset (step S1226) and repeats the processing from step S1222. When the receiver determines in step S1222 that the offset is not less than the transmission period (N in step S1222), it identifies the highest likelihood among the likelihoods of the received signals calculated for the respective transmission/reception offsets, and determines whether that highest likelihood is greater than or equal to a predetermined value (step S1227). If it determines that the likelihood is greater than or equal to the predetermined value (Y in step S1227), the receiver uses the received signal with the highest likelihood as the final estimation result, or uses, as received signal candidates, the received signals whose likelihood is greater than or equal to the highest likelihood minus a predetermined value (step S1228). If, on the other hand, it determines in step S1227 that the highest likelihood is less than the predetermined value (N in step S1227), the receiver discards the estimation result (step S1229).
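A minimal sketch of this offset sweep is given below in Python. The likelihood computation between steps S1224 and S1226 is not spelled out in this excerpt, so the sketch assumes a simple correlation-based maximum-likelihood score over a known set of candidate codewords; all function and parameter names are hypothetical.

```python
import numpy as np

def decode_with_offset_sweep(samples, sample_times, tx_period, codebook,
                             offset_step, min_likelihood):
    """Sweep the transmission/reception offset (steps S1221-S1229, simplified).

    samples, sample_times : received values and the times at which they were taken
    tx_period             : transmission period of the visible light signal
    codebook              : dict mapping candidate signal names to +/-1 symbol arrays
    """
    best = (None, -np.inf)
    offset = 0.0                                       # S1221
    while offset < tx_period:                          # S1222
        # S1223: interpolate reception values aligned with the transmission period.
        n = int((sample_times[-1] - offset) // tx_period)
        grid = offset + tx_period * np.arange(n)
        aligned = np.interp(grid, sample_times, samples)
        # S1224: differences between values one transmission period apart.
        diffs = np.diff(aligned)
        # Assumed likelihood: normalized correlation with each candidate codeword.
        for name, symbols in codebook.items():
            m = min(len(diffs), len(symbols))
            score = np.dot(diffs[:m], symbols[:m]) / (np.linalg.norm(diffs[:m]) + 1e-9)
            if score > best[1]:
                best = (name, score)
        offset += offset_step                          # S1226
    # S1227-S1229: keep the result only if the best likelihood is high enough.
    return best[0] if best[1] >= min_likelihood else None
```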
When there is too much noise, the received signal often cannot be estimated properly, and at the same time the likelihood becomes low. Therefore, discarding the estimation result when the likelihood is low improves the reliability of the received signal. In addition, maximum-likelihood decoding has the problem of outputting a valid-looking signal as the estimation result even when the input image contains no valid signal; in that case, too, the likelihood becomes low, so this problem can also be avoided by discarding the estimation result when the likelihood is low.
(Embodiment 13)
In this embodiment, a protocol transmission scheme for visible light communication is described.
(Multi-level amplitude pulse signal)
FIGS. 111, 112, and 113 are diagrams illustrating examples of transmission signals in this embodiment.
By giving meaning to the amplitude of a pulse, more information can be expressed per unit time. For example, when the amplitude is divided into three levels, three values can be expressed in a transmission time of two slots while the average luminance is kept at 50%, as shown in FIG. 111. However, if (c) of FIG. 111 is transmitted continuously there is no luminance change, so the presence of a signal is difficult to recognize, and ternary values are also somewhat awkward to handle in digital processing.
Therefore, by using the four types of symbols shown in (a) to (d) of FIG. 112, four values can be expressed in an average transmission time of three slots while the average luminance is kept at 50%. Although the transmission time differs from symbol to symbol, the end of a symbol can be recognized because the last state of every symbol is a low-luminance state. The same effect is obtained if the high-luminance and low-luminance states are swapped. Pattern (e) of FIG. 112 is unsuitable because it cannot be distinguished from transmitting (a) twice. Patterns (f) and (g) of FIG. 112 are somewhat harder to recognize because the intermediate luminance continues, but they can still be used.
Consider using the patterns of (a) and (b) of FIG. 113 as headers. Because these patterns have strong specific frequency components in a frequency analysis, using them as headers allows the signal to be detected by frequency analysis.
As shown in (c) of FIG. 113, a transmission packet is constructed using the patterns of (a) and (b). Data can be delimited by using a pattern of a specific length as the header of the entire packet and patterns of different lengths as separators. Including such a pattern partway through the packet also makes signal detection easier. As a result, even when one packet is longer than the capture time of one image frame, the receiver can join the pieces of data together and decode them. Furthermore, the packet length can be made variable by adjusting the number of separators, and the length of the entire packet may be expressed by the length of the packet header pattern. In addition, by using the separator as a packet header and the length of the separator as the address of the data, the receiver can combine partially received data.
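The sketch below illustrates this packet structure in Python. The actual header and separator waveforms are defined only in FIG. 113 and are not reproduced here, so the bit patterns and lengths used below are purely hypothetical placeholders for the idea of a length-coded header and address-coded separators.

```python
# Hypothetical stand-ins for the FIG. 113 patterns: only the idea of
# length-coded headers and separators is illustrated, not the real waveforms.
HEADER_UNIT = [1, 1, 0]      # repeated HEADER_LEN times to mark the packet start
SEPARATOR_UNIT = [1, 0]      # repeated (address + 1) times before each data block
HEADER_LEN = 4

def build_packet(data_blocks):
    """Build one transmission packet: a long header pattern for the whole packet,
    then each data block prefixed by a separator whose length encodes its address."""
    packet = HEADER_UNIT * HEADER_LEN
    for address, block in enumerate(data_blocks):
        packet += SEPARATOR_UNIT * (address + 1)   # separator length = data address
        packet += list(block)
    return packet

# Example: three data blocks; a receiver can reassemble partially received
# blocks because the separator length tells it the address of each block.
print(build_packet([[0, 1, 1], [1, 0, 1], [0, 0, 1]]))
```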
The transmitter repeatedly transmits packets constructed in this way. The contents of packets 1 to 4 in (c) of FIG. 113 may all be the same, or they may be combined as different pieces of data on the receiving side.
(Embodiment 14)
In this embodiment, application examples are described that use a receiver such as a smartphone according to the above embodiments and a transmitter that transmits information as a blinking pattern of an LED or an organic EL element.
FIG. 114A is a diagram for describing a transmitter according to this embodiment.
The transmitter in this embodiment is configured, for example, as the backlight of a liquid crystal display, and includes a blue LED 2303 and a phosphor 2310 consisting of a green fluorescent component 2304 and a red fluorescent component 2305.
The blue LED 2303 emits blue (B) light. The phosphor 2310 emits yellow (Y) light when it receives the blue light emitted from the blue LED 2303 as excitation light. More specifically, since the phosphor 2310 consists of the green fluorescent component 2304 and the red fluorescent component 2305, the yellow light results from the emission of these two fluorescent components. Of the two, the green fluorescent component 2304 emits green (G) light when it receives the blue light from the blue LED 2303 as excitation light, and the red fluorescent component 2305 emits red (R) light when it receives the blue light from the blue LED 2303 as excitation light. Because light of R, G, and B, or equivalently Y(=R+G) and B, is thus emitted, the transmitter outputs white light as a backlight.
This transmitter transmits a visible light signal as white light by changing the luminance of the blue LED 2303 in the same way as in the above embodiments. The change in the luminance of the white light outputs a visible light signal having a predetermined carrier frequency.
Here, a barcode reader irradiates a barcode with red laser light and reads the barcode based on the luminance change of the red laser light reflected from the barcode. The barcode reading frequency of this red laser light may coincide with, or be close to, the carrier frequency of the visible light signal output from typical transmitters currently in practical use. In such a case, when the barcode reader tries to read a barcode illuminated by the white light that is the visible light signal from such a typical transmitter, the reading may fail because of the luminance change of the red light contained in that white light. In other words, a barcode reading error occurs due to interference between the carrier frequency of the visible light signal (in particular of its red light) and the barcode reading frequency.
Therefore, in this embodiment, a fluorescent material whose afterglow lasts longer than that of the green fluorescent component 2304 is used for the red fluorescent component 2305. That is, the red fluorescent component 2305 in this embodiment changes in luminance at a frequency sufficiently lower than the frequency of the luminance changes of the blue LED 2303 and the green fluorescent component 2304. In other words, the red fluorescent component 2305 smooths out the frequency of the red luminance change contained in the visible light signal.
FIG. 114B is a diagram showing the luminance changes of R, G, and B.
The blue light from the blue LED 2303 is output as part of the visible light signal, as shown in (a) of FIG. 114B. The green fluorescent component 2304 emits green light when it receives the blue light from the blue LED 2303, as shown in (b) of FIG. 114B. The afterglow of the green fluorescent component 2304 is short. Therefore, when the luminance of the blue LED 2303 changes, the green fluorescent component 2304 emits green light whose luminance changes at substantially the same frequency as the luminance change of the blue LED 2303 (that is, the carrier frequency of the visible light signal).
The red fluorescent component 2305 emits red light when it receives the blue light from the blue LED 2303, as shown in (c) of FIG. 114B. The afterglow of the red fluorescent component 2305 is long. Therefore, when the luminance of the blue LED 2303 changes, the red fluorescent component 2305 emits red light whose luminance changes at a frequency lower than the frequency of the luminance change of the blue LED 2303 (that is, the carrier frequency of the visible light signal).
FIG. 115 is a diagram showing the afterglow characteristics of the green fluorescent component 2304 and the red fluorescent component 2305 in this embodiment.
For example, when the blue LED 2303 is lit without any luminance change, the green fluorescent component 2304 emits green light of intensity I = I0 without any luminance change (that is, light whose luminance change frequency f is 0). Even when the luminance of the blue LED 2303 changes at a low frequency, the green fluorescent component 2304 emits green light of intensity I = I0 whose luminance changes at substantially that same frequency f. When the luminance of the blue LED 2303 changes at a high frequency, however, the intensity I of the green light emitted from the green fluorescent component 2304, whose luminance changes at substantially that same frequency f, becomes smaller than I0 because of the afterglow of the green fluorescent component 2304. As a result, as shown by the dotted line in FIG. 115, the intensity I of the green light emitted from the green fluorescent component 2304 is maintained at I = I0 while the luminance change frequency f of that light is below a threshold fb, and gradually decreases as the frequency f rises above the threshold fb.
The afterglow of the red fluorescent component 2305 in this embodiment lasts longer than the afterglow of the green fluorescent component 2304. Therefore, as shown by the solid line in FIG. 115, the intensity I of the red light emitted from the red fluorescent component 2305 is maintained at I = I0 only while the luminance change frequency f of that light is below a threshold fa, which is lower than the threshold fb, and gradually decreases as the frequency f rises above the threshold fa. In other words, the red light emitted from the red fluorescent component 2305 is present only in the low-frequency region of the frequency band of the green light emitted from the green fluorescent component 2304, and not in its high-frequency region.
More specifically, the red fluorescent component 2305 in this embodiment uses a fluorescent material for which the intensity I of the red light emitted at the same frequency f as the carrier frequency f1 of the visible light signal is I = I1. The carrier frequency f1 is the carrier frequency of the luminance change produced by the blue LED 2303 of the transmitter. The intensity I1 is 1/3 of the intensity I0, or -10 dB relative to the intensity I0. For example, the carrier frequency f1 is 10 kHz, or 5 to 100 kHz.
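As a rough numerical illustration, the sketch below models the red fluorescent component as a first-order low-pass response (a single-exponential afterglow). This model and the numbers in the example are assumptions for illustration only; the patent specifies only the resulting attenuation at the carrier frequency f1, not a particular response shape.

```python
import math

def relative_intensity(f, f_cutoff):
    """First-order low-pass model of a phosphor's afterglow:
    I(f)/I0 = 1 / sqrt(1 + (f / f_cutoff)**2)."""
    return 1.0 / math.sqrt(1.0 + (f / f_cutoff) ** 2)

def red_component_acceptable(f1, fa):
    """Check the condition from the text: at the carrier frequency f1 the red
    intensity should be at most I0/3 (roughly -10 dB)."""
    return relative_intensity(f1, fa) <= 1.0 / 3.0

# Example (hypothetical values): carrier f1 = 10 kHz, red cutoff fa = 3 kHz.
print(relative_intensity(10e3, 3e3))        # about 0.29, below 1/3
print(red_component_acceptable(10e3, 3e3))  # True
```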
In other words, the transmitter according to this embodiment is a transmitter that transmits a visible light signal, and includes: a blue LED that emits blue light whose luminance changes as light included in the visible light signal; a green fluorescent component that emits green light as light included in the visible light signal upon receiving the blue light; and a red fluorescent component that emits red light as light included in the visible light signal upon receiving the blue light. The afterglow of the red fluorescent component lasts longer than the afterglow of the green fluorescent component. The green fluorescent component and the red fluorescent component may be contained in a single phosphor that emits yellow light as light included in the visible light signal upon receiving the blue light. Alternatively, the green fluorescent component may be contained in a green phosphor and the red fluorescent component may be contained in a red phosphor separate from the green phosphor.
Because the afterglow of the red fluorescent component lasts longer, the luminance of the red light can be made to change at a frequency lower than the frequency of the luminance changes of the blue and green light. Therefore, even if the frequency of the luminance changes of the blue and green light contained in the white visible light signal is the same as, or close to, the barcode reading frequency of the red laser light, the frequency of the red light contained in the white visible light signal can be made to differ greatly from the barcode reading frequency. As a result, the occurrence of barcode reading errors can be suppressed.
Here, the red fluorescent component may emit red light whose luminance changes at a frequency lower than the frequency of the luminance change of the light emitted from the blue LED.
The red fluorescent component may also include a red fluorescent material that emits red light upon receiving blue light, and a low-pass filter that transmits only light in a predetermined frequency band. For example, the low-pass filter transmits, out of the blue light emitted from the blue LED, only the light of the low frequency band and applies it to the red fluorescent material; in this case the red fluorescent material may have the same afterglow characteristics as the green fluorescent component. Alternatively, the low-pass filter transmits, out of the red light emitted from the red fluorescent material when the blue light emitted from the blue LED strikes it, only the light of the low frequency band. Even when such a low-pass filter is used, the occurrence of barcode reading errors can be suppressed in the same way as described above.
The red fluorescent component may also be made of a fluorescent material having a predetermined afterglow characteristic. For example, the predetermined afterglow characteristic is a characteristic in which, (a) where I0 is the intensity of the red light when the luminance change frequency f of the red light emitted from the red fluorescent component is 0, and (b) where f1 is the carrier frequency of the luminance change of the light emitted from the blue LED, the intensity of the red light at f = f1 is 1/3 of I0 or less, or -10 dB or less.
This ensures that the frequency of the red light contained in the visible light signal differs greatly from the barcode reading frequency, so the occurrence of barcode reading errors can be reliably suppressed.
The carrier frequency f1 may be approximately 10 kHz.
Since the carrier frequency currently used in practice for transmitting visible light signals is 9.6 kHz, this effectively suppresses the occurrence of barcode reading errors in the visible light signal transmission already in practical use.
The carrier frequency f1 may also be approximately 5 to 100 kHz.
With the progress of the image sensors (imaging elements) of receivers that receive visible light signals, carrier frequencies such as 20 kHz, 40 kHz, 80 kHz, or 100 kHz are expected to be used in future visible light communication. Therefore, setting the carrier frequency f1 to approximately 5 to 100 kHz also effectively suppresses the occurrence of barcode reading errors in future visible light communication.
In this embodiment, the above effects are obtained regardless of whether the green fluorescent component and the red fluorescent component are contained in a single phosphor or each of the two fluorescent components is contained in a separate phosphor. That is, even when a single phosphor is used, the afterglow characteristics, in other words the frequency characteristics, of the red light and the green light emitted from that phosphor differ. The above effects can therefore also be obtained by using a single phosphor whose afterglow or frequency characteristic is inferior for red light and superior for green light. Here, an inferior afterglow or frequency characteristic means that the afterglow lasts long or that the light intensity in the high-frequency band is weak, and a superior afterglow or frequency characteristic means that the afterglow is short or that the light intensity in the high-frequency band is strong.
In the examples shown in FIGS. 114A to 115, the occurrence of barcode reading errors is suppressed by smoothing out the frequency of the red luminance change contained in the visible light signal, but the reading errors may instead be suppressed by raising the carrier frequency of the visible light signal.
FIG. 116 is a diagram for explaining a problem that newly arises when suppressing the occurrence of barcode reading errors in this way.
As shown in FIG. 116, when the carrier frequency fc of the visible light signal is about 10 kHz, the reading frequency of the red laser light used for reading barcodes is also about 10 to 20 kHz, so the two frequencies interfere with each other and barcode reading errors occur.
Raising the carrier frequency fc of the visible light signal from about 10 kHz to, for example, 40 kHz can therefore suppress the occurrence of barcode reading errors.
However, if the carrier frequency fc of the visible light signal is about 40 kHz, the sampling frequency fs with which the receiver samples the visible light signal by imaging needs to be 80 kHz or higher.
In other words, a new problem arises in that the processing load of the receiver increases because of the high sampling frequency fs required. To solve this new problem, the receiver in this embodiment performs downsampling.
FIG. 117 is a diagram for explaining the downsampling performed by the receiver in this embodiment.
The transmitter 2301 in this embodiment is configured, for example, as a liquid crystal display, a digital signage, or a lighting device, and outputs a frequency-modulated visible light signal. In doing so, the transmitter 2301 switches the carrier frequency fc of the visible light signal between, for example, 40 kHz and 45 kHz.
The receiver 2302 in this embodiment captures images of the transmitter 2301 at a frame rate of, for example, 30 fps. At this time, like the receivers in the above embodiments, the receiver 2302 captures the images with a short exposure time so that bright lines appear in each image (specifically, each frame) obtained by the imaging. The image sensor used by the receiver 2302 has, for example, 1000 exposure lines. In capturing one frame, the 1000 exposure lines start their exposures at different times, and the visible light signal is thereby sampled. As a result, 30 fps × 1000 lines = 30,000 samples are taken per second (30 ks/s); in other words, the sampling frequency fs of the visible light signal is 30 kHz.
According to the general sampling theorem, at a sampling frequency fs = 30 kHz only visible light signals with a carrier frequency of 15 kHz or less can be demodulated.
However, the receiver 2302 in this embodiment downsamples the visible light signal with the carrier frequency fc = 40 kHz or 45 kHz at the sampling frequency fs = 30 kHz. This downsampling produces aliases in the frame, and the receiver 2302 in this embodiment estimates the carrier frequency fc of the visible light signal by observing and analyzing those aliases.
FIG. 118 is a flowchart showing the processing operation of the receiver 2302 in this embodiment.
First, by capturing images of the subject, the receiver 2302 downsamples the visible light signal with the carrier frequency fc = 40 kHz or 45 kHz at the sampling frequency fs = 30 kHz (step S2310).
Next, the receiver 2302 observes and analyzes the aliases that appear in the frames obtained by the downsampling (step S2311). The receiver 2302 thereby identifies the alias frequency as, for example, 5.1 kHz or 5.5 kHz.
The receiver 2302 then estimates the carrier frequency fc of the visible light signal based on the identified alias frequency (step S2311). That is, the receiver 2302 restores the original frequency from the alias, and thereby estimates the carrier frequency fc of the visible light signal as, for example, 40 kHz or 45 kHz.
In this way, the receiver 2302 in this embodiment can appropriately receive a visible light signal with a high carrier frequency by performing downsampling and alias-based frequency restoration. For example, even with a sampling frequency of fs = 30 kHz, the receiver 2302 can receive visible light signals with carrier frequencies of 30 kHz to 60 kHz. The carrier frequency of the visible light signal can therefore be raised from the approximately 10 kHz currently in practical use to 30 kHz to 60 kHz. As a result, the carrier frequency of the visible light signal and the barcode reading frequency (10 to 20 kHz) can be made to differ greatly, interference between the two frequencies can be suppressed, and the occurrence of barcode reading errors can be suppressed.
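A minimal sketch of this alias-based estimation is shown below in Python. It assumes simple spectral folding at multiples of the sampling frequency and a known candidate band for the carrier (30 to 60 kHz in this example); the patent does not specify how the folding is inverted, so the candidate-selection rule here is an assumption.

```python
def estimate_carrier(alias_freq, fs=30e3, band=(30e3, 60e3)):
    """Invert spectral folding: frequencies that alias to alias_freq when sampled
    at fs are k*fs +/- alias_freq. Return those falling inside the expected band."""
    candidates = []
    k = 0
    while k * fs <= band[1] + fs:
        for f in (k * fs + alias_freq, k * fs - alias_freq):
            if band[0] <= f <= band[1]:
                candidates.append(f)
        k += 1
    return sorted(set(candidates))

# Example with the effective sampling frequency of the text
# (30 fps x 1000 exposure lines = 30 kHz): an observed 10 kHz alias
# is consistent with a 40 kHz carrier in the 30-60 kHz band.
print(estimate_carrier(10e3))  # [40000.0, 50000.0] -> 40 kHz if 50 kHz is not in use
```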
The reception method according to this embodiment is thus a reception method for obtaining information from a subject, and includes: an exposure time setting step of setting the exposure time of an image sensor so that, in a frame obtained by imaging the subject with the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to the luminance change of the subject; an imaging step in which the image sensor images the subject, whose luminance changes, at a predetermined frame rate and with the set exposure time, by repeatedly having each of the plurality of exposure lines included in the image sensor start its exposure sequentially at a different time; and an information obtaining step of obtaining information, for each frame obtained by the imaging, by demodulating the data identified by the pattern of the plurality of bright lines included in that frame. In the imaging step, by repeatedly having each of the plurality of exposure lines start its exposure sequentially at a different time, the visible light signal transmitted by the luminance change of the subject is downsampled at a sampling frequency lower than the carrier frequency of that visible light signal. In the information obtaining step, for each frame obtained by the imaging, the frequency of the alias identified by the pattern of the plurality of bright lines included in that frame is identified, the frequency of the visible light signal is estimated from the identified alias frequency, and the information is obtained by demodulating the estimated frequency of the visible light signal.
With such a reception method, a visible light signal with a high carrier frequency can be appropriately received by performing downsampling and alias-based frequency restoration.
In the downsampling, a visible light signal with a carrier frequency higher than 30 kHz may be downsampled. This avoids interference between the carrier frequency of the visible light signal and the barcode reading frequency (10 to 20 kHz), and barcode reading errors can be suppressed more effectively.
(Embodiment 15)
FIG. 119 is a diagram showing the processing operation of a reception device (imaging device). Specifically, FIG. 119 is a diagram for describing an example of the switching process between the normal imaging mode and the macro imaging mode when receiving visible light communication.
Here, the reception device 1610 receives visible light emitted by a transmission device made up of a plurality of light sources (four light sources in FIG. 119).
First, when the reception device 1610 transitions to the mode for performing visible light communication, it starts its imaging unit in the normal imaging mode (S1601). When it transitions to the mode for performing visible light communication, the reception device 1610 also displays, on its screen, a frame 1611 within which the light sources are to be captured.
After a predetermined time, the reception device 1610 switches the imaging mode of the imaging unit to the macro imaging mode (S1602). The timing of the switch from step S1601 to step S1602 does not have to be a predetermined time after step S1601; it may instead be when the reception device 1610 determines that the light sources have been captured so that they fit within the frame 1611. By switching to the macro imaging mode in this way, the user only has to fit the light sources into the frame 1611 while viewing the clear image of the normal imaging mode, before the image is blurred by the macro imaging mode, and can therefore fit the light sources into the frame 1611 easily.
Next, the reception device 1610 determines whether a signal from the light sources has been received (S1603). If it determines that a signal from the light sources has been received (Yes in S1603), it returns to the normal imaging mode of step S1601; if it determines that no signal from the light sources has been received (No in S1603), it continues the macro imaging mode of step S1602. In the case of Yes in step S1603, processing based on the received signal (for example, processing for displaying an image indicated by the received signal) may be performed.
With this reception device 1610, the user can also switch from the normal imaging mode to the macro imaging mode by touching, with a finger, the area of the smartphone's display where the light sources appear in the frame 1611, so that the plurality of light sources are imaged in a blurred state. An image captured in the macro imaging mode therefore contains more bright regions than an image captured in the normal imaging mode. In particular, between two adjacent light sources among the plurality of light sources, the light from the two sources overlaps. As shown in the left part of (a) of FIG. 119, the striped images are separated and cannot be received as a continuous signal; with the blur they become the continuous stripes shown in the right part and can be demodulated as a continuous reception signal. Because a long code can be received at once, the response time is shortened. As shown in (b) of FIG. 119, if the image is first captured with the normal shutter and normal focus, a beautiful normal image is obtained, but when the light sources are separated, as with characters, continuous data cannot be obtained and demodulated even if the shutter is made faster. If the shutter is then made faster and the focus drive unit of the lens is set to a short distance (macro), the light sources blur and spread, the four light sources become connected, and the data can be received. When the focus is then restored and the shutter speed is returned to normal, the original beautiful image is obtained again. As shown in (c), by recording the beautiful image in memory and displaying it, only the beautiful image is shown on the display unit. Since an image captured in the macro imaging mode contains more regions brighter than a predetermined brightness than an image captured in the normal imaging mode, the macro imaging mode can increase the number of exposure lines capable of generating bright lines for the subject.
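The sketch below illustrates the mode-switching flow of FIG. 119 (S1601 to S1603) as a simple loop in Python. The camera and decoder interfaces and their method names are hypothetical placeholders; the patent describes the behavior, not an API.

```python
import time

def visible_light_reception_loop(camera, decode, aim_time=1.0):
    """Simplified flow of FIG. 119 (S1601-S1603). 'camera' and 'decode' are
    hypothetical interfaces supplied by the caller.

    camera.set_mode(mode) : switch between "normal" and "macro" imaging
    camera.capture()      : return one frame
    decode(frame)         : return the demodulated signal, or None
    """
    camera.set_mode("normal")      # S1601: clear preview so the user can aim
    time.sleep(aim_time)           # or: wait until the sources fit in frame 1611

    camera.set_mode("macro")       # S1602: near focus blurs the separated
    while True:                    # light sources into one continuous stripe
        signal = decode(camera.capture())
        if signal is not None:     # S1603: signal received?
            camera.set_mode("normal")   # Yes: restore the normal imaging mode
            return signal
```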
FIG. 120 is a diagram showing the processing operation of a reception device (imaging device). Specifically, FIG. 120 is a diagram for describing another example of the switching process between the normal imaging mode and the macro imaging mode when receiving visible light communication.
Here, the reception device 1620 receives visible light emitted by a transmission device made up of a plurality of light sources (four light sources in FIG. 120).
First, when the reception device 1620 transitions to the mode for performing visible light communication, it starts its imaging unit in the normal imaging mode and captures an image 1623 covering a wider range than the image 1622 displayed on the screen of the reception device 1620. It then holds in memory the image data representing the captured image 1623 and attitude information indicating the attitude of the reception device 1620 detected by its gyro sensor, geomagnetic sensor, and acceleration sensor at the time the image 1623 was captured (S1611). The captured image 1623 is an image that extends beyond the image 1622 displayed on the screen of the reception device 1620 by a predetermined width in the vertical and horizontal directions. In addition, when it transitions to the mode for performing visible light communication, the reception device 1620 displays, on its screen, a frame 1621 within which the light sources are to be captured.
After a predetermined time, the reception device 1620 switches the imaging mode of the imaging unit to the macro imaging mode (S1612). The timing of the switch from step S1611 to step S1612 does not have to be a predetermined time after step S1611; it may instead be when the image 1623 has been captured and the reception device determines that the image data representing the captured image 1623 has been stored in memory. At this time, based on the image data held in memory, the reception device 1620 displays an image 1624, which is the portion of the image 1623 whose size corresponds to the screen size of the reception device 1620.
The image 1624 displayed on the reception device 1620 at this time is a part of the image 1623, namely the image of the region that is predicted to be currently captured by the reception device 1620, obtained from the difference between the attitude of the reception device 1620 indicated by the attitude information acquired in step S1611 (the position shown by the white broken line) and the current attitude of the reception device 1620. That is, the image 1624 is the part of the image 1623 corresponding to the imaging target of the image 1625 actually being captured in the macro imaging mode. In other words, in step S1612 the reception device acquires the attitude (imaging direction) that has changed since step S1611, identifies from the acquired current attitude (imaging direction) the imaging target presumed to be currently captured, identifies from the previously captured image 1623 the image 1624 corresponding to the current attitude (imaging direction), and displays the image 1624. Therefore, as shown by the image 1623 in FIG. 120, when the reception device 1620 moves from the position shown by the white broken line in the direction of the white arrow, it can determine the region to be cut out of the image 1623 according to the amount of movement and display the image 1624 cut from the image 1623 at the determined region.
As a result, even while capturing in the macro imaging mode, the reception device 1620 can display, instead of the image 1625 being captured in the macro imaging mode, the image 1624 cut out, according to the current attitude of the reception device 1620, from the clearer image 1623 captured in the normal imaging mode. In this scheme, which obtains continuous visible light information from a plurality of mutually separated light sources in a defocused image while displaying the stored normal image on the display unit, hand shake is expected to occur when the user captures images with a smartphone; the direction of the actually captured image and the direction of the still image displayed from memory then shift relative to each other, and the user can no longer aim at the target light sources. In that case the data from the light sources cannot be received, so a countermeasure is necessary. With this improvement of the present invention, however, even if hand shake occurs, it is detected by image-shake detection means or by shake detection means such as a vibration gyro, the target image in the still image is shifted in the corresponding direction, and the user can see the deviation from the camera's direction. This display allows the user to point the camera at the target light sources, so the divided light sources can be captured in an optically connected state while a normal image is displayed, and the signal can be received continuously. In this way, since the normal image is displayed, light sources divided into a plurality of parts can be received, and the attitude of the reception device 1620 can easily be adjusted so that the plurality of light sources fits in the frame 1621. Note that when the focus is defocused, the light from each source is spread out and its luminance is equivalently reduced, so raising the sensitivity of the camera, such as its ISO setting, has the effect of making the visible light data receivable more reliably.
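A minimal sketch of this attitude-compensated display is given below in Python with NumPy. The mapping from the attitude change to a pixel offset uses a simple pinhole approximation (focal length in pixels times the tangent of the angle change); the scale factor and the names are assumptions, since the patent only states that the cropped region follows the change in attitude.

```python
import numpy as np

def crop_for_current_attitude(wide_image, screen_size, ref_attitude, cur_attitude,
                              focal_px):
    """Cut the region of the wide normal-mode image 1623 that corresponds to the
    current camera attitude, to be shown instead of the blurred macro frame 1625.

    wide_image   : H x W (x channels) image captured in normal mode
    screen_size  : (height, width) of the displayed image 1624
    ref_attitude : (yaw, pitch) in radians when wide_image was captured
    cur_attitude : (yaw, pitch) in radians now (from gyro/magnetometer/accelerometer)
    focal_px     : focal length of the camera expressed in pixels
    """
    h, w = wide_image.shape[:2]
    sh, sw = screen_size
    # Pinhole approximation: an attitude change of d radians moves the scene by
    # roughly focal_px * tan(d) pixels on the sensor.
    dx = int(round(focal_px * np.tan(cur_attitude[0] - ref_attitude[0])))
    dy = int(round(focal_px * np.tan(cur_attitude[1] - ref_attitude[1])))
    # Start from the centered crop and shift it by the estimated motion,
    # clamping so the crop stays inside the wide image.
    top = np.clip((h - sh) // 2 + dy, 0, h - sh)
    left = np.clip((w - sw) // 2 + dx, 0, w - sw)
    return wide_image[top:top + sh, left:left + sw]
```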
 Next, the receiving device 1620 determines whether or not a signal from the light source has been received (S1613). If it determines that a signal from the light source has been received (Yes in S1613), it returns to the normal imaging mode of step S1611; if it determines that no signal from the light source has been received (No in S1613), it continues the macro imaging mode of step S1612. In the case of Yes in step S1613, processing based on the received signal (for example, processing for displaying an image indicated by the received signal) may be performed.
 In the receiving device 1620 as well, as with the receiving device 1610, an image including a brighter region can be captured in the macro imaging mode. Therefore, in the macro imaging mode, the number of exposure lines capable of generating bright lines for the subject can be increased.
 FIG. 121 is a diagram showing the processing operations of a receiving device (imaging device).
 Here, the transmission device 1630 is a display device such as a television, for example, and transmits different transmission IDs by visible light communication at a predetermined time interval Δt1630. Specifically, at times t1631, t1632, t1633, and t1634, it transmits ID1631, ID1632, ID1633, and ID1634, which are transmission IDs respectively associated with the data corresponding to the displayed images 1631, 1632, 1633, and 1634. That is, ID1631 to ID1634 are transmitted one after another from the transmission device 1630 at the predetermined time interval Δt1630.
 The receiving device 1640 requests from the server 1650 the data associated with each transmission ID received by visible light communication, receives the data from the server, and displays an image corresponding to the data. Specifically, it displays images 1641, 1642, 1643, and 1644, corresponding to ID1631, ID1632, ID1633, and ID1634, at times t1631, t1632, t1633, and t1634, respectively.
 When the receiving device 1640 acquires ID1631 received at time t1631, it may acquire from the server 1650 ID information indicating the transmission IDs scheduled to be transmitted from the transmission device 1630 at the subsequent times t1632 to t1634. In this case, by using the acquired ID information, the receiving device 1640 can request from the server 1650 the data associated with ID1632 to ID1634 for times t1632 to t1634 and display the received data at each of times t1632 to t1634, without receiving the transmission ID from the transmission device 1630 each time.
 Also, even without acquiring from the server 1650 the information indicating the transmission IDs scheduled to be transmitted from the transmission device 1630 at the subsequent times t1632 to t1634, the receiving device 1640 may, by requesting the data corresponding to ID1631 at time t1631, receive from the server 1650 the data associated with the transmission IDs corresponding to the subsequent times t1632 to t1634, and display the received data at each of times t1632 to t1634. That is, when the server 1650 receives the request for the data associated with ID1631 transmitted from the receiving device 1640 at time t1631, it transmits to the receiving device 1640, at each of times t1632 to t1634, the data associated with the transmission IDs corresponding to those times, even without a further request from the receiving device 1640. In other words, in this case the server 1650 holds association information in which each of times t1631 to t1634 is associated with the data linked to the transmission ID corresponding to that time, and, based on the association information, transmits the predetermined data associated with a predetermined time at that time.
 In this way, if the receiving device 1640 can acquire the transmission ID1631 by visible light communication at time t1631, it can receive from the server 1650 the data corresponding to each of times t1632 to t1634 at those subsequent times without performing visible light communication. Therefore, the user does not need to keep pointing the receiving device 1640 at the transmission device 1630 to acquire the transmission IDs by visible light communication, and can easily have the receiving device 1640 display the data acquired from the server 1650. In this case, if the receiving device 1640 fetches the data corresponding to an ID from the server every time, a delay from the server occurs and the response time becomes longer. Therefore, to shorten the response, the data corresponding to the IDs can be stored in advance in the storage unit of the receiver from the server or the like, and the data corresponding to an ID can then be displayed from the storage unit, which shortens the response time. In this scheme, if the time at which the next ID will be output is included in the transmission signal from the visible light transmitter, the receiver can know the transmission time of the next ID without continuously receiving the visible light signal, which has the effect of removing the need to keep the receiving device pointed at the light source. With this scheme, the receiver only needs to synchronize its time information (clock) with the time information (clock) of the transmitter when the visible light is received; after synchronization, a screen synchronized with the transmitter can be displayed continuously even without receiving further data from the transmitter.
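 The caching idea mentioned above might be realized roughly as in the sketch below, under the assumption of a hypothetical local cache keyed by transmission ID that is pre-filled from the server; the class and function names are illustrative only.

    # Pre-fetch the data for the IDs the transmitter is scheduled to send,
    # then serve each display request from the local cache to avoid server latency.
    class IdDataCache:
        def __init__(self, fetch_from_server):
            self._fetch = fetch_from_server   # assumed callable: tx_id -> data
            self._cache = {}

        def prefetch(self, scheduled_ids):
            for tx_id in scheduled_ids:       # e.g. the IDs for t1632 to t1634
                self._cache[tx_id] = self._fetch(tx_id)

        def data_for(self, tx_id):
            # Served locally if prefetched; falls back to the server otherwise.
            if tx_id not in self._cache:
                self._cache[tx_id] = self._fetch(tx_id)
            return self._cache[tx_id]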
 In the example described above, the receiving device 1640 displays the images 1641, 1642, 1643, and 1644 corresponding to the transmission IDs ID1631, ID1632, ID1633, and ID1634 at times t1631, t1632, t1633, and t1634, respectively. Here, as shown in FIG. 122, the receiving device 1640 may present not only the image but also other information at each of these times. That is, at time t1631 the receiving device 1640 displays the image 1641 corresponding to ID1631 and also outputs a sound or voice corresponding to ID1631. At this time, the receiving device 1640 may further display, for example, a purchase site for a product shown in the image. Such sound output and purchase-site display are performed in the same manner at times t1632, t1633, and t1634 as well.
 Next, in the case of a smartphone equipped with two left and right cameras for stereoscopic imaging as shown in (b) of FIG. 119, the left-eye camera displays an image of normal image quality at a normal shutter speed and normal focus. At the same time, the right-eye camera uses a shutter faster than that of the left eye and/or is set to a short-distance focus or macro, obtains the stripe-shaped bright lines of the present invention, and demodulates the data. As a result, an image of normal image quality is displayed on the display unit, while the right-eye camera can receive the optical communication data of a plurality of light sources separated in distance.
 (Embodiment 16)
 Here, an application example of synchronized audio playback will be described below.
 FIG. 123 is a diagram illustrating an example of an application according to Embodiment 16.
 A receiver 1800a configured, for example, as a smartphone receives a signal (visible light signal) transmitted from a transmitter 1800b configured, for example, as street digital signage. That is, the receiver 1800a receives the timing of image playback by the transmitter 1800b. The receiver 1800a plays back audio at the same timing as that image playback. In other words, the receiver 1800a performs synchronized playback of the audio so that the image played back by the transmitter 1800b and the audio are synchronized. Note that the receiver 1800a may play back, together with the audio, the same image as the image played back by the transmitter 1800b (the played-back image), or a related image associated with that played-back image. The receiver 1800a may also cause a device connected to the receiver 1800a to play back the audio and the like. Further, after receiving the visible light signal, the receiver 1800a may download from a server content such as the audio or the related image associated with the visible light signal. The receiver 1800a performs the synchronized playback after the download.
 As a result, even when the audio from the transmitter 1800b cannot be heard, or when the audio from the transmitter 1800b is not being played because street audio playback is prohibited, the user can hear audio that matches the display of the transmitter 1800b. In addition, even at a distance at which the sound takes time to arrive, audio that matches the display can be heard.
 Here, multilingual support by synchronized audio playback will be described below.
 FIG. 124 is a diagram illustrating an example of an application according to Embodiment 16.
 Each of the receiver 1800a and the receiver 1800c obtains from a server, and plays back, audio in the language set on that receiver, corresponding to the video, such as a movie, displayed on the transmitter 1800d. Specifically, the transmitter 1800d transmits to the receiver a visible light signal indicating an ID for identifying the displayed video. Upon receiving the visible light signal, the receiver transmits to the server a request signal including the ID indicated by the visible light signal and the language set on the receiver. The receiver obtains the audio corresponding to the request signal from the server and plays it back. This allows the user to enjoy the work displayed on the transmitter 1800d in the language the user has set.
 Here, an audio synchronization method will be described below.
 FIG. 125 and FIG. 126 are diagrams showing examples of transmission signals and examples of audio synchronization methods in Embodiment 16.
 Different pieces of data (for example, data 1 to 6 shown in FIG. 125) are each associated with a time at fixed intervals (every N seconds). These data may be, for example, IDs for identifying times, may be times themselves, or may be audio data (for example, 64 Kbps data). The following description assumes that the data are IDs. Different IDs may differ in the additional-information portion attached to the ID.
 It is desirable that the packets constituting an ID be different from one another. For this reason, it is desirable that the IDs not be consecutive. Alternatively, when packetizing an ID, a packetization method in which the non-consecutive portion is configured as one packet is desirable. Since error correction signals tend to form different patterns even for consecutive IDs, the error correction signal may be distributed over a plurality of packets rather than being gathered into a single packet.
 The transmitter 1800d transmits the ID in accordance with, for example, the playback time of the image being displayed. The receiver can recognize the image playback time (synchronization time) of the transmitter 1800d by detecting the timing at which the ID changes.
 In case (a), the point of change between ID:1 and ID:2 is received, so the synchronization time can be recognized accurately.
 When the time N during which one ID is transmitted is long, such opportunities are rare, and the IDs may be received as in (b). Even in this case, the synchronization time can be recognized by the following method.
 (b1) The midpoint of the reception interval in which the ID changed is assumed to be the ID change point. In addition, times that are an integer multiple of N after an ID change point estimated in the past are also estimated to be ID change points, and the midpoint of a plurality of such ID change points is estimated to be a more accurate ID change point. With such an estimation algorithm, an accurate ID change point can be estimated gradually.
 (b2) In addition to the above, by estimating that no ID change point is contained in a reception interval in which the ID did not change, nor at times an integer multiple of N after such an interval, the intervals that could contain an ID change point are gradually narrowed down, and an accurate ID change point can be estimated. (A sketch of this estimation is given after the notes on N below.)
 By setting N to 0.5 seconds or less, synchronization can be achieved accurately.
 By setting N to 2 seconds or less, synchronization can be achieved without the user perceiving a delay.
 By setting N to 10 seconds or less, synchronization can be achieved while limiting the consumption of IDs.
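 The change-point estimation of (b1) could be sketched, under assumptions, as follows: each reception is treated as a (time, ID) pair, the change point is first taken as the midpoint of the interval in which the ID changed, and candidates are folded by the known period N and averaged; the exclusion step of (b2) and the wrap-around handling near the period boundary are omitted for brevity, and all names are illustrative.

    # Estimate the ID change phase from sparse (reception_time, id) observations.
    # N is the known interval at which the transmitter switches IDs (seconds).
    def estimate_change_point(observations, N):
        """observations: list of (time_sec, id), sorted by time.
        Returns the estimated change phase within one period [0, N), or None."""
        candidates = []
        for (t0, id0), (t1, id1) in zip(observations, observations[1:]):
            if id0 != id1:
                # (b1): midpoint of the interval in which the ID changed,
                # folded into one period by taking it modulo N.
                candidates.append(((t0 + t1) / 2.0) % N)
        if not candidates:
            return None
        # Average the folded candidates; with more observations this moves
        # toward the true change phase (wrap-around near 0/N is not handled here).
        return sum(candidates) / len(candidates)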
 FIG. 126 is a diagram showing an example of a transmission signal in Embodiment 16.
 In FIG. 126, waste of IDs can be avoided by performing synchronization with time packets. A time packet is a packet that holds the time at which it was transmitted. When a long time span must be expressed, the time packet is divided into a time packet 1 representing the fine part of the time and a time packet 2 representing the coarse part of the time. For example, time packet 2 indicates the hour and minute of the time, and time packet 1 indicates only the second of the time. A packet indicating the time may also be divided into three or more time packets. Since the coarse time is needed less often, the receiver can recognize the synchronization time quickly and accurately if fine time packets are transmitted more often than coarse time packets.
 That is, in the present embodiment, the visible light signal indicates the time at which the visible light signal is transmitted from the transmitter 1800d by including second information (time packet 2) indicating the hour and minute of the time and first information (time packet 1) indicating the second of the time. The receiver 1800a receives the second information, and receives the first information a greater number of times than it receives the second information.
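 A minimal sketch, under assumptions, of how a receiver might combine the coarse packet (hour and minute) with the more frequently received fine packet (second) into a single timestamp is shown below; the packet layout and names are illustrative and not the format defined in the disclosure.

    import datetime

    def combine_time_packets(coarse_hour, coarse_minute, fine_second,
                             base_date=None):
        """Rebuild the transmitter time from time packet 2 (hour, minute)
        and the most recently received time packet 1 (second)."""
        base_date = base_date or datetime.date.today()
        return datetime.datetime(base_date.year, base_date.month, base_date.day,
                                 coarse_hour, coarse_minute, fine_second)

    # Example: the coarse packet said 14:05, the latest fine packet said 42 s.
    sync_time = combine_time_packets(14, 5, 42)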
 Here, synchronization time adjustment will be described below.
 FIG. 127 is a diagram showing an example of a processing flow of the receiver 1800a in Embodiment 16.
 Since a certain amount of time is required from when the signal is transmitted until it is processed by the receiver 1800a and the audio or video is played back, accurate synchronized playback can be achieved by playing the audio or video while allowing for this processing time.
 First, a processing delay time is specified for the receiver 1800a (step S1801). This may be held in the processing program, or may be specified by the user. If the user performs the correction, more accurate synchronization adapted to the individual receiver becomes possible. More accurate synchronization can also be achieved by varying this processing delay time for each receiver model and according to the temperature of the receiver and the CPU usage ratio.
 The receiver 1800a determines whether a time packet has been received, or whether an ID associated for audio synchronization has been received (step S1802). If the receiver 1800a determines that one has been received (Y in step S1802), it further determines whether there is an image waiting to be processed (step S1804). If it determines that there is an image waiting to be processed (Y in step S1804), the receiver 1800a discards the pending image, or postpones its processing, and performs reception processing from the most recently acquired image (step S1805). This makes it possible to avoid an unforeseen delay caused by the processing backlog.
 The receiver 1800a measures at which position in the image the visible light signal (specifically, the bright lines) appears (step S1806). That is, by measuring at which position, in the direction perpendicular to the exposure lines, the signal appears relative to the first exposure line of the image sensor, the time difference from the image acquisition start time to the signal reception time (the in-image delay time) can be calculated.
 The receiver 1800a can perform accurate synchronized playback by playing the audio or video at the time obtained by adding the processing delay time and the in-image delay time to the recognized synchronization time (step S1807).
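 Steps S1806 and S1807 might be written roughly as below, under the assumption of a rolling-shutter sensor whose consecutive exposure lines start a known, fixed interval apart; the parameter names and the example values are illustrative only.

    def in_image_delay(signal_row, line_time_sec):
        """Delay from the start of image acquisition to the moment the
        bright-line signal was captured, for a rolling-shutter sensor whose
        consecutive exposure lines start line_time_sec apart (step S1806)."""
        return signal_row * line_time_sec

    def playback_position(sync_time_sec, processing_delay_sec,
                          signal_row, line_time_sec):
        """Position at which the audio or video should start so that it is
        synchronized with the transmitter's display (step S1807)."""
        return (sync_time_sec
                + processing_delay_sec
                + in_image_delay(signal_row, line_time_sec))

    # Example: synchronization time 12.0 s into the content, 80 ms processing
    # delay, signal found on exposure line 600 with 30 microseconds per line.
    start = playback_position(12.0, 0.080, 600, 30e-6)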
 On the other hand, if the receiver 1800a determines in step S1802 that neither a time packet nor an audio synchronization ID has been received, it receives a signal from an image obtained by imaging (step S1803).
 FIG. 128 is a diagram showing an example of a user interface of the receiver 1800a in Embodiment 16.
 As shown in (a) of FIG. 128, the user can adjust the above-described processing delay time by pressing any of the buttons Bt1 to Bt4 displayed on the receiver 1800a. The processing delay time may also be set by a swipe operation as shown in (b) of FIG. 128. This enables more accurate synchronized playback based on the user's perception.
 Here, earphone-only playback will be described below.
 FIG. 129 is a diagram showing an example of a processing flow of the receiver 1800a in Embodiment 16.
 With the earphone-only playback shown by this processing flow, audio can be played back without disturbing the surroundings.
 The receiver 1800a checks whether an earphone-only setting has been made (step S1811). When an earphone-only setting has been made, for example, the earphone-only setting has been made on the receiver 1800a itself, or an earphone-only setting is included in the received signal (visible light signal), or the fact that playback is limited to earphones is recorded on the server or on the receiver 1800a in association with the received signal.
 When the receiver 1800a confirms that playback is limited to earphones (Y in step S1811), it determines whether earphones are connected to the receiver 1800a (step S1813).
 When the receiver 1800a confirms that playback is not limited to earphones (N in step S1811), or determines that earphones are connected (Y in step S1813), it plays back the audio (step S1812). When playing back the audio, the receiver 1800a adjusts the volume so that it falls within a set range. This set range is set in the same manner as the earphone-only setting.
 If the receiver 1800a determines that no earphones are connected (N in step S1813), it issues a notification prompting the user to connect earphones (step S1814). This notification is given, for example, by screen display, audio output, or vibration.
 In addition, when no setting prohibiting forced audio playback has been made, the receiver 1800a prepares an interface for forced playback and determines whether the user has performed a forced-playback operation (step S1815). If it determines that a forced-playback operation has been performed (Y in step S1815), the receiver 1800a plays back the audio even when no earphones are connected (step S1812).
 On the other hand, if it determines that no forced-playback operation has been performed (N in step S1815), the receiver 1800a retains the audio data received in advance and the analyzed synchronization time, so that synchronized playback of the audio can start promptly when earphones are connected.
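 The flow of FIG. 129 might be expressed roughly as below; this is a sketch only, and the predicates and hooks (earphone_connected, user_forced_playback, and so on) are assumptions introduced for illustration rather than interfaces defined by the disclosure.

    def handle_audio(earphone_only, earphone_connected, force_allowed,
                     user_forced_playback, play, notify_connect_earphone, hold):
        """Earphone-only playback decision flow (steps S1811 to S1815)."""
        if not earphone_only:                  # S1811: no earphone restriction
            play()                             # S1812
        elif earphone_connected:               # S1813
            play()                             # S1812 (volume kept in the set range)
        else:
            notify_connect_earphone()          # S1814: screen, sound, or vibration
            if force_allowed and user_forced_playback:   # S1815
                play()                         # S1812
            else:
                hold()   # keep the audio data and sync time until earphones connect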
 FIG. 130 is a diagram showing another example of the processing flow of the receiver 1800a in Embodiment 16.
 The receiver 1800a first receives an ID from the transmitter 1800d (step S1821). That is, the receiver 1800a receives a visible light signal indicating the ID of the transmitter 1800d or the ID of the content displayed on the transmitter 1800d.
 Next, the receiver 1800a downloads the information (content) associated with the received ID from the server (step S1822). Alternatively, the receiver 1800a reads that information from a data holding unit inside the receiver 1800a. Hereinafter, this information is referred to as related information.
 Next, the receiver 1800a determines whether the synchronized playback flag included in the related information indicates ON (step S1823). If it determines that the synchronized playback flag does not indicate ON (N in step S1823), the receiver 1800a outputs the content indicated by the related information (step S1824). That is, when the content is an image, the receiver 1800a displays the image, and when the content is audio, the receiver 1800a outputs the audio.
 On the other hand, when the receiver 1800a determines that the synchronized playback flag indicates ON (Y in step S1823), it further determines whether the clock adjustment mode included in the related information is set to the transmitter reference mode or to the absolute time mode (step S1825). If it determines that the absolute time mode is set, the receiver 1800a determines whether the last clock adjustment was performed within a fixed time from the current time (step S1826). The clock adjustment here is processing that obtains time information by a predetermined method and, using that time information, sets the time of the clock provided in the receiver 1800a to the absolute time of the reference clock. The predetermined method is, for example, a method using GPS (Global Positioning System) radio waves or NTP (Network Time Protocol) radio waves. Note that the current time mentioned above may be the time at which the receiver 1800a, which is the terminal device, received the visible light signal.
 If the receiver 1800a determines that the last clock adjustment was performed within the fixed time (Y in step S1826), it synchronizes the related information with the content displayed on the transmitter 1800d by outputting the related information based on the time of the clock of the receiver 1800a (step S1827). When the content indicated by the related information is, for example, a moving image, the receiver 1800a displays the moving image so as to be synchronized with the content displayed on the transmitter 1800d. When the content indicated by the related information is, for example, audio, the receiver 1800a outputs the audio so as to be synchronized with the content displayed on the transmitter 1800d. For example, when the related information indicates audio, the related information includes the frames constituting the audio, and those frames carry time stamps. The receiver 1800a outputs audio synchronized with the content of the transmitter 1800d by playing the frame whose time stamp corresponds to the time of its own clock.
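 A minimal sketch of this frame selection, assuming the related information can be represented as a list of (timestamp, frame) pairs and the receiver clock returns an absolute time in seconds; "the frame whose time stamp corresponds to the clock" is approximated here by the latest frame whose time stamp is not later than the clock time.

    import bisect

    def frame_for_clock(frames, clock_time):
        """frames: list of (timestamp_sec, frame_data), sorted by timestamp.
        Returns the frame to play for the current clock time (step S1827)."""
        timestamps = [t for t, _ in frames]
        i = bisect.bisect_right(timestamps, clock_time) - 1
        if i < 0:
            return None            # the clock is earlier than the first frame
        return frames[i][1]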
 If the receiver 1800a determines that the last clock adjustment was not performed within the fixed time (N in step S1826), it attempts to obtain time information by the predetermined method and determines whether the time information could be obtained (step S1828). If it determines that the time information could be obtained (Y in step S1828), the receiver 1800a updates the time of its clock using that time information (step S1829). The receiver 1800a then executes the processing of step S1827 described above.
 Also, when the receiver 1800a determines in step S1825 that the clock adjustment mode is the transmitter reference mode, or determines in step S1828 that the time information could not be obtained (N in step S1828), it acquires time information from the transmitter 1800d (step S1830). That is, the receiver 1800a acquires, by visible light communication, time information serving as a synchronization signal from the transmitter 1800d. For example, the synchronization signals are time packet 1 and time packet 2 shown in FIG. 126. Alternatively, the receiver 1800a acquires the time information from the transmitter 1800d by radio waves such as Bluetooth (registered trademark) or Wi-Fi. The receiver 1800a then executes the processing of steps S1829 and S1827 described above.
 In the present embodiment, as in steps S1829 and S1830, when the time at which the processing for synchronizing the clock of the terminal device (the receiver 1800a) with the reference clock by GPS radio waves or NTP radio waves (clock adjustment) was last performed is more than a predetermined time before the time at which the terminal device received the visible light signal, the clock of the terminal device is synchronized with the clock of the transmitter based on the time indicated by the visible light signal transmitted from the transmitter 1800d. This allows the terminal device to play back the content (video or audio) at a timing synchronized with the transmitter-side content played back by the transmitter 1800d.
 FIG. 131A is a diagram for describing specific methods of synchronized playback in Embodiment 16. The methods of synchronized playback include methods a to e shown in FIG. 131A.
 (Method a)
 In method a, the transmitter 1800d outputs a visible light signal indicating a content ID and a content playback time by changing the luminance of its display, as in the above embodiments. The content playback time is the playback time of the data, which is part of the content, being played back by the transmitter 1800d at the moment the content ID is transmitted from the transmitter 1800d. If the content is a moving image, the data is a picture, a sequence, or the like constituting the moving image; if the content is audio, the data is a frame or the like constituting the audio. The playback time indicates, for example, the playback time from the beginning of the content, expressed as a time. If the content is a moving image, the playback time is included in the content as a PTS (Presentation Time Stamp). That is, the content includes, for each piece of data constituting the content, the playback time (display time) of that data.
 The receiver 1800a receives the visible light signal by photographing the transmitter 1800d, as in the above embodiments. The receiver 1800a then transmits to the server 1800f a request signal including the content ID indicated by the visible light signal. The server 1800f receives the request signal and transmits to the receiver 1800a the content associated with the content ID included in the request signal.
 Upon receiving the content, the receiver 1800a plays the content from the position (content playback time + elapsed time since ID reception). The elapsed time since ID reception is the time elapsed since the content ID was received by the receiver 1800a.
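 The playback start position used in method a (and again in methods c and d) could be computed as in the sketch below, under the assumption of a monotonic clock on the receiver; the class and attribute names are illustrative.

    import time

    class SyncPlaybackOffset:
        """Computes where to start playing so that the receiver catches up with
        the transmitter: content playback time + elapsed time since ID reception."""

        def __init__(self, content_playback_time_sec):
            self.content_playback_time_sec = content_playback_time_sec
            self._id_received_at = time.monotonic()   # moment the ID was received

        def start_position(self):
            elapsed = time.monotonic() - self._id_received_at
            return self.content_playback_time_sec + elapsed

    # Example: the visible light signal said the transmitter was 95.0 s into the
    # content; once the download finishes, playback starts at 95.0 s plus the
    # time that has elapsed since the ID was received.
    offset = SyncPlaybackOffset(95.0)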
 (Method b)
 In method b, the transmitter 1800d outputs a visible light signal indicating a content ID and a content playback time by changing the luminance of its display, as in the above embodiments. The receiver 1800a receives the visible light signal by photographing the transmitter 1800d, as in the above embodiments. The receiver 1800a then transmits to the server 1800f a request signal including the content ID indicated by the visible light signal and the content playback time. The server 1800f receives the request signal and transmits to the receiver 1800a only the portion of the content associated with the content ID included in the request signal that follows the content playback time.
 Upon receiving that partial content, the receiver 1800a plays the partial content from the position (elapsed time since ID reception).
 (Method c)
 In method c, the transmitter 1800d outputs a visible light signal indicating a transmitter ID and a content playback time by changing the luminance of its display, as in the above embodiments. The transmitter ID is information for identifying the transmitter.
 The receiver 1800a receives the visible light signal by photographing the transmitter 1800d, as in the above embodiments. The receiver 1800a then transmits to the server 1800f a request signal including the transmitter ID indicated by the visible light signal.
 The server 1800f holds, for each transmitter ID, a playback schedule that is a timetable of the content played back by the transmitter having that transmitter ID. The server 1800f also has a clock. Upon receiving the request signal, such a server 1800f identifies, from the playback schedule, the content associated with the transmitter ID included in the request signal and the time of the clock of the server 1800f (the server time) as the content currently being played. The server 1800f then transmits that content to the receiver 1800a.
 Upon receiving the content, the receiver 1800a plays the content from the position (content playback time + elapsed time since ID reception).
 (Method d)
 In method d, the transmitter 1800d outputs a visible light signal indicating a transmitter ID and a transmitter time by changing the luminance of its display, as in the above embodiments. The transmitter time is the time indicated by a clock provided in the transmitter 1800d.
 The receiver 1800a receives the visible light signal by photographing the transmitter 1800d, as in the above embodiments. The receiver 1800a then transmits to the server 1800f a request signal including the transmitter ID and the transmitter time indicated by the visible light signal.
 The server 1800f holds the above-described playback schedule. Upon receiving the request signal, such a server 1800f identifies, from the playback schedule, the content associated with the transmitter ID and the transmitter time included in the request signal as the content currently being played. Further, the server 1800f determines the content playback time from the transmitter time. That is, the server 1800f finds the playback start time of the identified content from the playback schedule and determines the time between the playback start time and the transmitter time as the content playback time. The server 1800f then transmits the content and the content playback time to the receiver 1800a.
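 A rough server-side sketch for method d, under the assumption that the playback schedule is held as a list of (start time, content) entries per transmitter ID; the schedule format and all names are illustrative.

    def lookup_for_method_d(schedule, transmitter_id, transmitter_time):
        """schedule: dict mapping transmitter_id -> list of (start_time_sec, content),
        sorted by start_time_sec. Returns (content, content_playback_time)."""
        entries = schedule[transmitter_id]
        current = None
        for start_time, content in entries:
            if start_time <= transmitter_time:
                current = (start_time, content)   # latest item already started
            else:
                break
        if current is None:
            return None, None
        start_time, content = current
        # Content playback time = how far into the content the transmitter is.
        return content, transmitter_time - start_time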
 Upon receiving the content and the content playback time, the receiver 1800a plays the content from the position (content playback time + elapsed time since ID reception).
 Thus, in the present embodiment, the visible light signal indicates the time at which that visible light signal is transmitted from the transmitter 1800d. Therefore, the receiver 1800a, which is the terminal device, can receive the content associated with the time at which the visible light signal is transmitted from the transmitter 1800d (the transmitter time). For example, if the transmitter time is 5:43, the content played back at 5:43 can be received.
 In the present embodiment, the server 1800f has a plurality of contents, each associated with a time. However, there are cases where no content associated with the time indicated by the visible light signal exists on the server 1800f. In such a case, the receiver 1800a, which is the terminal device, may receive, from among the plurality of contents, the content associated with the time that is closest to, and later than, the time indicated by the visible light signal. In this way, even if no content associated with the time indicated by the visible light signal exists on the server 1800f, appropriate content can be received from among the plurality of contents on the server 1800f.
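 A minimal sketch of this fallback selection, assuming the server's contents can be represented as (time, content) pairs; the function name and data layout are illustrative.

    def select_content(contents, signal_time):
        """contents: list of (time_sec, content). Returns the content whose time
        equals signal_time, or else the one closest to and after signal_time."""
        by_time = dict(contents)
        if signal_time in by_time:
            return by_time[signal_time]
        later = [(t, c) for t, c in contents if t > signal_time]
        if not later:
            return None
        return min(later, key=lambda tc: tc[0])[1]   # closest later time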
 The playback method in the present embodiment includes: a signal reception step of receiving, with a sensor of the receiver 1800a (terminal device), a visible light signal from the transmitter 1800d that transmits the visible light signal by a change in luminance of a light source; a transmission step of transmitting, from the receiver 1800a to the server 1800f, a request signal for requesting content associated with the visible light signal; a content reception step in which the receiver 1800a receives the content from the server 1800f; and a playback step of playing back the content. The visible light signal indicates a transmitter ID and a transmitter time. The transmitter ID is ID information. The transmitter time is the time indicated by the clock of the transmitter 1800d, that is, the time at which the visible light signal is transmitted from the transmitter 1800d. In the content reception step, the receiver 1800a receives the content associated with the transmitter ID and the transmitter time indicated by the visible light signal. As a result, the receiver 1800a can play back content appropriate for the transmitter ID and the transmitter time.
 (Method e)
 In method e, the transmitter 1800d outputs a visible light signal indicating a transmitter ID by changing the luminance of its display, as in the above embodiments.
 The receiver 1800a receives the visible light signal by photographing the transmitter 1800d, as in the above embodiments. The receiver 1800a then transmits to the server 1800f a request signal including the transmitter ID indicated by the visible light signal.
 The server 1800f holds the above-described playback schedule and further has a clock. Upon receiving the request signal, such a server 1800f identifies, from the playback schedule, the content associated with the transmitter ID included in the request signal and the server time as the content currently being played. The server time is the time indicated by the clock of the server 1800f. Further, the server 1800f also finds the playback start time of the identified content from the playback schedule. The server 1800f then transmits the content and the content playback start time to the receiver 1800a.
 Upon receiving the content and the content playback start time, the receiver 1800a plays the content from the position (receiver time - content playback start time). The receiver time is the time indicated by a clock provided in the receiver 1800a.
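 A small sketch of the start position used in method e, under the assumption that the receiver time and the content playback start time are absolute times in seconds on clocks that have been synchronized; the names and example values are illustrative.

    def method_e_start_position(receiver_time_sec, content_start_time_sec):
        """How far into the content the receiver should start playing so that
        its playback lines up with content that began at content_start_time_sec."""
        return receiver_time_sec - content_start_time_sec

    # Example: the content started at 09:00:00 (32400 s) and the receiver's
    # clock now reads 09:03:20 (32600 s), so playback starts 200 s in.
    pos = method_e_start_position(32600.0, 32400.0)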
 Thus, the playback method in the present embodiment includes: a signal reception step of receiving, with a sensor of the receiver 1800a (terminal device), a visible light signal from the transmitter 1800d that transmits the visible light signal by a change in luminance of a light source; a transmission step of transmitting, from the receiver 1800a to the server 1800f, a request signal for requesting content associated with the visible light signal; a content reception step in which the receiver 1800a receives, from the server 1800f, content including times and the data to be played back at those times; and a playback step of playing back, from that content, the data corresponding to the time of the clock provided in the receiver 1800a. Therefore, the receiver 1800a can appropriately play back the data in the content at the correct times indicated in the content, without playing the data at the wrong times. Moreover, if content related to that content (transmitter-side content) is being played back on the transmitter 1800d, the receiver 1800a can play back the content appropriately synchronized with that transmitter-side content.
 Note that, in methods c to e as well, the server 1800f may, as in method b, transmit to the receiver 1800a only the portion of the content that follows the content playback time.
 Also, in methods a to e above, the receiver 1800a transmits a request signal to the server 1800f and receives the necessary data from the server 1800f; however, the data on the server 1800f may instead be held in advance, without performing such transmission and reception.
 FIG. 131B is a block diagram showing the configuration of a playback device that performs synchronized playback by method e described above.
 The playback device B10 is the receiver 1800a or a terminal device that performs synchronized playback by method e described above, and includes a sensor B11, a request signal transmission unit B12, a content reception unit B13, a clock B14, and a playback unit B15.
 The sensor B11 is, for example, an image sensor, and receives a visible light signal from the transmitter 1800d, which transmits the visible light signal by a change in luminance of a light source. The request signal transmission unit B12 transmits to the server 1800f a request signal for requesting content associated with the visible light signal. The content reception unit B13 receives from the server 1800f content including times and the data to be played back at those times. The playback unit B15 plays back, from that content, the data corresponding to the time of the clock B14.
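 The structure of the playback device B10 could be modelled roughly as below; the interfaces of the sensor, server, clock, and player objects are assumptions introduced for illustration, not definitions from the disclosure.

    class PlaybackDeviceB10:
        """Skeleton of the playback device B10: sensor B11, request signal
        transmission unit B12, content reception unit B13, clock B14,
        playback unit B15."""

        def __init__(self, sensor, server, clock, player):
            self.sensor = sensor      # B11: receives the visible light signal
            self.server = server      # used by B12/B13 to talk to server 1800f
            self.clock = clock        # B14: returns the current time in seconds
            self.player = player      # B15: plays one piece of data

        def run_once(self):
            signal = self.sensor.receive_visible_light()        # SB11
            content = self.server.request_content(signal)       # SB12 + SB13
            # SB15: play the data whose time corresponds to the clock time,
            # taken here as the latest entry not later than the clock.
            now = self.clock.now()
            candidates = [(t, d) for t, d in content if t <= now]
            if candidates:
                _, data = max(candidates, key=lambda td: td[0])
                self.player.play(data)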
 FIG. 131C is a flowchart showing the processing operations of the terminal device that performs synchronized playback by method e described above.
 The playback device B10 is the receiver 1800a or a terminal device that performs synchronized playback by method e described above, and executes the processing of steps SB11 to SB15.
 In step SB11, the visible light signal is received from the transmitter 1800d, which transmits the visible light signal by a change in luminance of a light source. In step SB12, a request signal for requesting content associated with the visible light signal is transmitted to the server 1800f. In step SB13, content including times and the data to be played back at those times is received from the server 1800f. In step SB15, the data corresponding to the time of the clock B14 is played back from that content.
 In this way, with the playback device B10 and the playback method in the present embodiment, the data in the content can be appropriately played back at the correct times indicated in the content, without being played back at the wrong times.
 In the present embodiment, each component may be configured with dedicated hardware, or may be realized by executing a software program suited to that component. Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. Here, the software that realizes the playback device B10 and the like of the present embodiment is a program that causes a computer to execute each step included in the flowchart shown in FIG. 131C.
 FIG. 132 is a diagram for describing the advance preparation for synchronized playback in Embodiment 16.
 The receiver 1800a performs clock adjustment, setting the time of the clock provided in the receiver 1800a to the time of the reference clock, in order to perform synchronized playback. For this clock adjustment, the receiver 1800a performs the following processes (1) to (5).
 (1) The receiver 1800a receives a signal. This signal may be a visible light signal transmitted by a change in luminance of the display of the transmitter 1800d, or may be a radio signal based on Wi-Fi or Bluetooth (registered trademark) from a wireless device. Alternatively, instead of receiving such a signal, the receiver 1800a acquires position information indicating the position of the receiver 1800a by, for example, GPS. The receiver 1800a then recognizes from that position information that it has entered a predetermined place or building.
 (2) When the receiver 1800a receives the above signal, or recognizes that it has entered the predetermined place, it transmits to the server (visible light ID resolution server) 1800f a request signal requesting the data (related information) associated with that signal, place, or the like.
 (3) The server 1800f transmits to the receiver 1800a the above-described data together with a clock adjustment request for causing the receiver 1800a to adjust its clock.
 (4) Upon receiving the data and the clock adjustment request, the receiver 1800a transmits the clock adjustment request to a GPS time server, an NTP server, or a base station of a telecommunications carrier.
 (5) Upon receiving the clock adjustment request, that server or base station transmits to the receiver 1800a time data (time information) indicating the current time (the time of the reference clock, i.e. the absolute time). The receiver 1800a adjusts its clock by setting the time of its own clock to the current time indicated by the time data.
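 Steps (1) to (5) might be strung together roughly as in the sketch below. The hooks receive_signal, resolve_on_server, and set_clock are assumptions introduced for illustration, and the NTP query uses the third-party ntplib package only as one common way to reach an NTP server, not as the mechanism specified in the disclosure.

    import ntplib   # third-party package, assumed available

    def prepare_synchronized_playback(receive_signal, resolve_on_server, set_clock,
                                      ntp_host="pool.ntp.org"):
        signal = receive_signal()                            # (1) visible light / radio / place
        data, needs_time_adjust = resolve_on_server(signal)  # (2)(3) data + adjustment request
        if needs_time_adjust:
            response = ntplib.NTPClient().request(ntp_host)  # (4)(5) query a time source
            set_clock(response.tx_time)                      # set local clock to reference time
        return data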
 Thus, in the present embodiment, the clock provided in the receiver 1800a (terminal device) is synchronized with the reference clock by GPS (Global Positioning System) radio waves or NTP (Network Time Protocol) radio waves. Therefore, the receiver 1800a can play back, at an appropriate time according to the reference clock, the data corresponding to that time.
 FIG. 133 is a diagram showing an application example of the receiver 1800a in Embodiment 16.
 The receiver 1800a is configured as a smartphone as described above, and is used while held in a holder 1810 made of a translucent member such as resin or glass. The holder 1810 has a back plate portion 1810a and a locking portion 1810b standing upright on the back plate portion 1810a. The receiver 1800a is inserted between the back plate portion 1810a and the locking portion 1810b so as to lie along the back plate portion 1810a.
 FIG. 134A is a front view of the receiver 1800a held in the holder 1810 in Embodiment 16.
 The receiver 1800a is held in the holder 1810 in the inserted state described above. At this time, the locking portion 1810b engages with the lower portion of the receiver 1800a and clamps that lower portion against the back plate portion 1810a. The back surface of the receiver 1800a faces the back plate portion 1810a, and the display 1801 of the receiver 1800a remains exposed.
 FIG. 134B is a rear view of the receiver 1800a held in the holder 1810 in Embodiment 16.
 A through hole 1811 is formed in the back plate portion 1810a, and a variable filter 1812 is attached near the through hole 1811. When the receiver 1800a is held in the holder 1810, the camera 1802 of the receiver 1800a is exposed from the back plate portion 1810a through the through hole 1811, and the flashlight 1803 of the receiver 1800a faces the variable filter 1812.
 The variable filter 1812 is formed, for example, in a disk shape and has three fan-shaped color filters of equal size (a red filter, a yellow filter, and a green filter). The variable filter 1812 is attached to the back plate portion 1810a so as to be rotatable about its center. The red filter is a filter that transmits red light, the yellow filter transmits yellow light, and the green filter transmits green light.
 When the variable filter 1812 is rotated so that, for example, the red filter is positioned facing the flashlight 1803a, the light emitted from the flashlight 1803a passes through the red filter and is diffused inside the holder 1810 as red light. As a result, substantially the entire holder 1810 glows red.
 Similarly, when the variable filter 1812 is rotated so that, for example, the yellow filter faces the flashlight 1803a, the emitted light passes through the yellow filter and is diffused inside the holder 1810 as yellow light, so that substantially the entire holder 1810 glows yellow.
 Likewise, when the variable filter 1812 is rotated so that, for example, the green filter faces the flashlight 1803a, the emitted light passes through the green filter and is diffused inside the holder 1810 as green light, so that substantially the entire holder 1810 glows green.
 In other words, the holder 1810 lights up in red, yellow, or green, like a penlight.
 FIG. 135 is a diagram for describing a use case of the receiver 1800a held in the holder 1810 in Embodiment 16.
 For example, a holder-equipped receiver, that is, a receiver 1800a held in a holder 1810, is used at an amusement park or the like. A number of holder-equipped receivers pointed at a float moving through the amusement park blink in synchronization with the music played from that float. Specifically, the float is configured as a transmitter of the above embodiments and transmits a visible light signal through luminance changes of a light source attached to the float; for example, the float transmits a visible light signal indicating the float's ID. As in the above embodiments, the holder-equipped receiver receives that visible light signal, that is, the ID, by imaging with the camera 1802 of the receiver 1800a. Having received the ID, the receiver 1800a acquires a program associated with the ID from, for example, a server. This program consists of instructions that turn on the flashlight 1803 of the receiver 1800a at predetermined times, each of which is set to match (synchronize with) the music played from the float. The receiver 1800a then blinks the flashlight 1803a according to the program.
 As a result, the holders 1810 of all the receivers 1800a that have received the ID repeatedly light up at the same timing, in step with the music played from the float with that ID.
 Here, each receiver 1800a blinks the flashlight 1803 according to the color filter that is currently set (hereinafter referred to as the setting filter). The setting filter is the color filter facing the flashlight 1803 of the receiver 1800a. Each receiver 1800a recognizes the current setting filter based on an operation by the user, or based on, for example, the color of an image obtained by imaging with the camera 1802.
 That is, among the receivers 1800a that have received the ID, at a given time only the holders 1810 of the receivers 1800a that recognize the setting filter as the red filter light up simultaneously. At the next time, only the holders 1810 of the receivers 1800a that recognize the setting filter as the green filter light up simultaneously. At the time after that, only the holders 1810 of the receivers 1800a that recognize the setting filter as the yellow filter light up simultaneously.
 In this way, a receiver 1800a held in a holder 1810 blinks the flashlight 1803, and therefore the holder 1810, in synchronization with the music of the float and with the receivers 1800a held in other holders 1810, in the same manner as the synchronous playback shown in FIGS. 123 to 129 described above.
 FIG. 136 is a flowchart showing the processing operation of the receiver 1800a held in the holder 1810 in Embodiment 16.
 The receiver 1800a receives the float ID indicated by the visible light signal from the float (step S1831). Next, the receiver 1800a acquires the program associated with that ID from the server (step S1832). The receiver 1800a then executes the program, turning on the flashlight 1803 at each of the predetermined times according to the setting filter (step S1833).
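 A minimal sketch of step S1833 follows, assuming a hypothetical program format of entries with a reference_time, a color, and an optional duration, and hypothetical flash_on/flash_off helpers; clock_offset is the offset between the reference clock and the local clock obtained by the time adjustment described earlier.

    import time

    def run_light_program(program, setting_filter, clock_offset, flash_on, flash_off):
        """Step S1833: fire the flashlight at each scheduled time whose color
        matches the color filter currently set on the holder."""
        for entry in program:                       # program: list of dicts from the server
            if entry["color"] != setting_filter:    # e.g. "red", "yellow", "green"
                continue
            local_target = entry["reference_time"] - clock_offset
            delay = local_target - time.time()
            if delay > 0:
                time.sleep(delay)
            flash_on()
            time.sleep(entry.get("duration", 0.2))  # keep the holder lit briefly
            flash_off()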
 Here, the receiver 1800a may display an image corresponding to the received ID or the acquired program on the display 1801.
 FIG. 137 is a diagram illustrating an example of an image displayed by the receiver 1800a in Embodiment 16.
 For example, when the receiver 1800a receives an ID from a Santa Claus float, it displays an image of Santa Claus as shown in (a) of FIG. 137. Furthermore, as shown in (b) of FIG. 137, the receiver 1800a may change the background color of the Santa Claus image to the color of the setting filter at the same time as the flashlight 1803 is turned on. For example, when the color of the setting filter is red, turning on the flashlight 1803 lights the holder 1810 red and, at the same time, a Santa Claus image with a red background is displayed on the display 1801. In other words, the blinking of the holder 1810 and the display on the display 1801 are synchronized.
 FIG. 138 is a diagram showing another example of the holder in Embodiment 16.
 A holder 1820 is configured in the same manner as the holder 1810 described above, but without the through hole 1811 and the variable filter 1812. The holder 1820 holds the receiver 1800a with the display 1801 of the receiver 1800a facing the back plate portion 1820a. In this case, the receiver 1800a causes the display 1801 to emit light instead of the flashlight 1803, and the light from the display 1801 is diffused over substantially the entire holder 1820. Accordingly, when the receiver 1800a causes the display 1801 to emit red light according to the above-described program, the holder 1820 lights up red; when it emits yellow light, the holder 1820 lights up yellow; and when it emits green light, the holder 1820 lights up green. With such a holder 1820, setting the variable filter 1812 can be omitted.
 (Embodiment 17)
 (Visible light signal)
 FIGS. 139A to 139D are diagrams illustrating examples of the visible light signal in Embodiment 17.
 As described above, the transmitter generates a 4PPM visible light signal and changes its luminance according to the visible light signal, as shown, for example, in FIG. 139A. Specifically, the transmitter allocates four slots to one signal unit and generates a visible light signal consisting of a plurality of signal units. A signal unit indicates High (H) or Low (L) for each slot. The transmitter emits light brightly in an H slot and emits dim light or turns off in an L slot. For example, one slot is a period corresponding to 1/9600 seconds.
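 For illustration, the following sketch encodes a bit string into 4-slot signal units under one common 4PPM convention, in which the two input bits select which single slot is L; the patent does not spell out the bit-to-slot mapping here, so this particular mapping is an assumption.

    def encode_4ppm(bits):
        """Encode a bit string two bits at a time into 4-slot signal units.
        Assumed convention: the two bits select which of the four slots is L (dark),
        so every unit has three bright slots and the average brightness stays constant."""
        assert len(bits) % 2 == 0
        slots = []
        for i in range(0, len(bits), 2):
            dark_position = int(bits[i:i + 2], 2)   # 0..3
            unit = ["H"] * 4
            unit[dark_position] = "L"
            slots.extend(unit)
        return slots                                # each slot lasts e.g. 1/9600 s

    # Example: encode_4ppm("0110") -> ['H','L','H','H', 'H','H','L','H']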
 Alternatively, as shown, for example, in FIG. 139B, the transmitter may generate a visible light signal in which the number of slots allocated to one signal unit is variable. In this case, a signal unit consists of a signal indicating H in one or more consecutive slots, followed by a signal indicating L in the single slot after the H signal. Since the number of H slots is variable, the total number of slots in the signal unit is variable. For example, as shown in FIG. 139B, the transmitter generates a visible light signal containing a 3-slot signal unit, a 4-slot signal unit, and a 6-slot signal unit in that order. In this case too, the transmitter emits light brightly in H slots and emits dim light or turns off in L slots.
 Alternatively, as shown, for example, in FIG. 139C, the transmitter may allocate an arbitrary period (signal unit period) to one signal unit without allocating a plurality of slots to it. The signal unit period consists of an H period followed by an L period. The H period is adjusted according to the signal before modulation. The L period is fixed and may be, for example, a period corresponding to the slot described above. The H period and the L period are each, for example, 100 μs or longer. For example, as shown in FIG. 139C, the transmitter transmits a visible light signal containing a signal unit with a signal unit period of 210 μs, a signal unit with a signal unit period of 220 μs, and a signal unit with a signal unit period of 230 μs, in that order. In this case too, the transmitter emits light brightly during H periods and emits dim light or turns off during L periods.
 Alternatively, as shown, for example, in FIG. 139D, the transmitter may generate, as the visible light signal, a signal that indicates L and H alternately. In this case, both the L periods and the H periods in the visible light signal are adjusted according to the signal before modulation. For example, as shown in FIG. 139D, the transmitter transmits a visible light signal that indicates H for a period of 100 μs, then L for a period of 120 μs, then H for a period of 110 μs, and then L for a period of 200 μs. In this case too, the transmitter emits light brightly during H periods and emits dim light or turns off during L periods.
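 A sketch of the variable-period style of FIG. 139C and FIG. 139D, assuming a hypothetical linear mapping from symbol values to period lengths (base_us + symbol * step_us); only the alternation of H and L periods is taken from the description above.

    def encode_variable_periods(symbols, base_us=100, step_us=10):
        """Map each symbol value to the duration of the next H (or L) period,
        alternating levels as in FIG. 139D. The linear mapping is an assumption."""
        waveform = []                                  # list of (level, duration in microseconds)
        level = "H"
        for symbol in symbols:
            duration = base_us + symbol * step_us      # e.g. 0 -> 100 us, 11 -> 210 us
            waveform.append((level, duration))
            level = "L" if level == "H" else "H"       # alternate H and L periods
        return waveform

    # Example matching FIG. 139D: encode_variable_periods([0, 2, 1, 10]) ->
    #   [('H', 100), ('L', 120), ('H', 110), ('L', 200)]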
 FIG. 140 is a diagram showing a configuration of the visible light signal in Embodiment 17.
 The visible light signal includes, for example, a signal 1, a brightness adjustment signal corresponding to the signal 1, a signal 2, and a brightness adjustment signal corresponding to the signal 2. After generating signal 1 and signal 2 by modulating the signals before modulation, the transmitter generates the brightness adjustment signals for those signals and thereby generates the visible light signal described above.
 The brightness adjustment signal corresponding to signal 1 compensates for the increase or decrease in brightness caused by the luminance changes according to signal 1, and the brightness adjustment signal corresponding to signal 2 likewise compensates for the increase or decrease in brightness caused by the luminance changes according to signal 2. Here, a brightness B1 is expressed by the luminance changes according to signal 1 and its brightness adjustment signal, and a brightness B2 is expressed by the luminance changes according to signal 2 and its brightness adjustment signal. The transmitter in this embodiment generates the brightness adjustment signals of signal 1 and signal 2 as part of the visible light signal so that the brightness B1 and the brightness B2 become equal. The brightness is thereby kept constant and flicker can be suppressed.
 When generating signal 1 described above, the transmitter generates a signal 1 that includes data 1, a preamble (header) following that data 1, and data 1 again following the preamble. The preamble is a signal corresponding to the data 1 placed before and after it; for example, the preamble serves as an identifier for reading out data 1. Since signal 1 is composed of two copies of data 1 with the preamble between them, the receiver can correctly demodulate data 1 (that is, signal 1) even when it starts reading the visible light signal partway through the first copy of data 1.
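 The following sketch illustrates the framing and brightness-adjustment idea under stated assumptions: the preamble pattern is a placeholder, and padding with extra H slots until a common bright-slot ratio is reached is just one possible way to make B1 equal B2, not the method fixed by the patent.

    import math

    def frame_with_preamble(data_slots):
        """FIG. 140 framing: data 1, preamble, data 1 again, so a receiver that starts
        reading partway through the first copy can still recover one full copy."""
        PREAMBLE = ["L", "L", "H", "L"]      # placeholder pattern, not specified by the patent
        return data_slots + PREAMBLE + data_slots

    def append_brightness_adjustment(signal_slots, target_ratio):
        """Append H slots so the fraction of bright slots reaches target_ratio
        (target_ratio < 1); using the same target for signal 1 and signal 2 makes B1 == B2."""
        slots = list(signal_slots)
        bright, total = slots.count("H"), len(slots)
        k = max(0, math.ceil((target_ratio * total - bright) / (1.0 - target_ratio)))
        return slots + ["H"] * k

    # Example: frame a 4-slot payload and pad it to a 75% bright-slot ratio.
    signal_1 = append_brightness_adjustment(frame_with_preamble(["H", "L", "H", "H"]), 0.75)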
 (Bright line image)
 FIG. 141 is a diagram illustrating an example of a bright line image obtained by imaging by the receiver in Embodiment 17.
 As described above, the receiver captures an image of a transmitter whose luminance changes, and thereby acquires a bright line image that contains, as a bright line pattern, the visible light signal transmitted from that transmitter. Through such imaging, the visible light signal is received by the receiver.
 For example, as shown in FIG. 141, the receiver captures an image at time t1 using the N exposure lines included in its image sensor, and thereby acquires a bright line image containing a region a and a region b in each of which a bright line pattern appears. Region a and region b are each regions in which a bright line pattern appears because the luminance of the transmitter, which is the subject, changes.
 The receiver demodulates the visible light signal from the bright line patterns of region a and region b. However, if the receiver determines that the demodulated visible light signal alone is insufficient, it captures an image at time t2 using only the M (M < N) consecutive exposure lines, out of the N exposure lines, that correspond to region a. The receiver thereby acquires a bright line image containing only region a out of regions a and b. The receiver repeats such imaging at times t3 to t5. As a result, a visible light signal with a sufficient amount of data from the subject corresponding to region a can be received at high speed. Furthermore, the receiver captures an image at time t6 using only the L (L < N) consecutive exposure lines, out of the N exposure lines, that correspond to region b. The receiver thereby acquires a bright line image containing only region b out of regions a and b. The receiver repeats such imaging at times t7 to t9. As a result, a visible light signal with a sufficient amount of data from the subject corresponding to region b can be received at high speed.
 The receiver may also acquire bright line images containing only region a by performing, at times t10 and t11, the same imaging as at times t2 to t5. Furthermore, the receiver may acquire bright line images containing only region b by performing, at times t12 and t13, the same imaging as at times t6 to t9.
 In the example above, the receiver performs continuous shooting of bright line images containing only region a at times t2 to t5 when it determines that the visible light signal is insufficient, but it may also perform this continuous shooting whenever bright lines appear in the image obtained by the imaging at time t1. Similarly, the receiver performs continuous shooting of bright line images containing only region b at times t6 to t9 when it determines that the visible light signal is insufficient, but it may also perform this continuous shooting whenever bright lines appear in the image obtained by the imaging at time t1. The receiver may also alternate between acquiring bright line images containing only region a and acquiring bright line images containing only region b.
 Note that the M consecutive exposure lines corresponding to region a are the exposure lines that contribute to generating region a, and the L consecutive exposure lines corresponding to region b are the exposure lines that contribute to generating region b.
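 A minimal sketch of how the receiver might restrict readout to the exposure lines that contribute to one region; bright_line_rows is assumed to be the set of image rows in which the region's bright lines were detected.

    def exposure_lines_for_region(bright_line_rows, total_lines):
        """Pick the consecutive exposure lines that contribute to one bright line
        region, so the following frames can be read out from those lines only."""
        first, last = min(bright_line_rows), max(bright_line_rows)
        first = max(0, first)
        last = min(total_lines - 1, last)
        return range(first, last + 1)            # M = last - first + 1 consecutive lines

    # Example: region a spans image rows 120..240 of an N = 1080-line sensor.
    lines_a = exposure_lines_for_region([120, 180, 240], total_lines=1080)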
 FIG. 142 is a diagram illustrating another example of a bright line image obtained by imaging by the receiver in Embodiment 17.
 For example, as shown in FIG. 142, the receiver captures an image at time t1 using the N exposure lines included in its image sensor, and thereby acquires a bright line image containing a region a and a region b in each of which a bright line pattern appears. As above, region a and region b are each regions in which a bright line pattern appears because the luminance of the transmitter, which is the subject, changes. In addition, region a and region b each have a portion in which they overlap each other along the direction of the bright lines, or of the exposure lines (hereinafter referred to as the overlapping region).
 Here, if the receiver determines that the visible light signals demodulated from the bright line patterns of region a and region b are insufficient, it captures an image at time t2 using only the P (P < N) consecutive exposure lines, out of the N exposure lines, that correspond to the overlapping region. The receiver thereby acquires a bright line image containing only the overlapping region of region a and region b. The receiver repeats such imaging at times t3 and t4. As a result, visible light signals with a sufficient amount of data from the subjects corresponding to region a and region b can be received substantially simultaneously and at high speed.
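 A sketch of selecting the P exposure lines of the overlapping region, assuming each region's exposure lines are already known as a Python range (for example from the helper above).

    def overlapping_exposure_lines(range_a, range_b):
        """Exposure lines covering the overlap of two bright line regions along
        the exposure-line direction (FIG. 142): reading out only these P lines
        captures both transmitters in every frame."""
        first = max(range_a.start, range_b.start)
        last = min(range_a.stop, range_b.stop)
        return range(first, last) if first < last else None

    # Example: region a on lines 100..400, region b on lines 300..700.
    overlap = overlapping_exposure_lines(range(100, 401), range(300, 701))   # lines 300..400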
 FIG. 143 is a diagram illustrating another example of a bright line image obtained by imaging by the receiver in Embodiment 17.
 For example, as shown in FIG. 143, the receiver captures an image at time t1 using the N exposure lines included in its image sensor, and thereby acquires a bright line image containing a region that consists of a portion a in which the bright line pattern appears unclearly and a portion b in which it appears clearly. As above, this region is a region in which a bright line pattern appears because the luminance of the transmitter, which is the subject, changes.
 In such a case, if the receiver determines that the visible light signal demodulated from the bright line pattern of this region is insufficient, it captures an image at time t2 using only the Q (Q < N) consecutive exposure lines, out of the N exposure lines, that correspond to portion b. The receiver thereby acquires a bright line image containing only portion b of the region. The receiver repeats such imaging at times t3 and t4. As a result, a visible light signal with a sufficient amount of data from the subject corresponding to this region can be received at high speed.
 The receiver may also perform continuous shooting of bright line images containing only portion a after the continuous shooting of bright line images containing only portion b.
 As described above, when a bright line image contains a plurality of regions (or portions) in which bright line patterns appear, the receiver assigns an order to the regions and, following that order, performs continuous shooting of bright line images containing only one region at a time. The order may be based on the size of the signal (the area of the region or portion), or on the clarity of the bright lines. The order may also be based on the color of the light from the subjects corresponding to the regions; for example, the first continuous shooting is performed on a region corresponding to red light and the next on a region corresponding to white light. Alternatively, continuous shooting may be performed only on regions corresponding to red light. A sketch of one possible ordering rule appears after this paragraph.
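 The sketch below combines light color, region area, and bright line clarity into one ordering; the region dictionary keys and the color priority list are assumptions for illustration.

    def order_regions(regions, priority_colors=("red", "white")):
        """Order bright line regions for per-region continuous shooting, using
        color priority first, then larger area, then clearer bright lines."""
        def rank(region):
            color_rank = (priority_colors.index(region["color"])
                          if region["color"] in priority_colors else len(priority_colors))
            return (color_rank, -region["area"], -region["clarity"])
        return sorted(regions, key=rank)

    # Example: the red region is shot first, then the larger/clearer of the rest.
    queue = order_regions([
        {"color": "white", "area": 5000, "clarity": 0.6},
        {"color": "red",   "area": 1200, "clarity": 0.9},
    ])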
 (HDR synthesis)
 FIG. 144 is a diagram for describing the adaptation of the receiver in Embodiment 17 to a camera system that performs HDR synthesis.
 A vehicle is equipped with a camera system for purposes such as collision avoidance. This camera system performs HDR (High Dynamic Range) synthesis using images obtained by imaging with its camera. The HDR synthesis yields an image with a wide luminance dynamic range. Based on this wide dynamic range image, the camera system recognizes surrounding vehicles, obstacles, people, and the like.
 For example, the camera system has a normal setting mode and a communication setting mode as its setting modes. When the setting mode is the normal setting mode, as shown, for example, in FIG. 144, the camera system performs four image captures at times t1 to t4, each with the same 1/100 second shutter speed and each with a different sensitivity. The camera system performs HDR synthesis using the four images obtained by these four captures.
 On the other hand, when the setting mode is the communication setting mode, as shown, for example, in FIG. 144, the camera system performs three image captures at times t5 to t7, each with the same 1/100 second shutter speed and each with a different sensitivity. Furthermore, at time t8, the camera system captures an image with a shutter speed of 1/10000 seconds and the maximum sensitivity (for example, ISO = 1600). The camera system performs HDR synthesis using the three images obtained by the first three of these four captures. In addition, the camera system receives a visible light signal through the last of the four captures and demodulates the bright line pattern appearing in the image obtained by that capture.
 Alternatively, when the setting mode is the communication setting mode, the camera system may omit HDR synthesis. For example, as shown in FIG. 144, the camera system captures an image at time t9 with a shutter speed of 1/100 second and a low sensitivity (for example, ISO = 200). Furthermore, the camera system performs three image captures at times t10 to t12 with a shutter speed of 1/10000 seconds and mutually different sensitivities. The camera system recognizes surrounding vehicles, obstacles, people, and the like from the image obtained by the first of these four captures. In addition, the camera system receives visible light signals through the last three of the four captures and demodulates the bright line patterns appearing in the images obtained by those captures.
 In the example shown in FIG. 144, imaging is performed with mutually different sensitivities at times t10 to t12, but imaging may also be performed with the same sensitivity.
 Such a camera system can perform HDR synthesis and can also receive visible light signals.
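 A sketch of the communication setting mode schedule at times t5 to t8 follows, assuming a hypothetical capture(shutter_s, iso) camera call and hdr_merge/demodulate helpers; the three ISO values used for the HDR exposures are placeholders, since the description above only fixes the maximum sensitivity of the last capture.

    def communication_mode_schedule(capture, demodulate, hdr_merge):
        """FIG. 144 communication setting mode: three HDR exposures at 1/100 s
        followed by one short exposure reserved for visible light reception."""
        hdr_frames = [capture(1 / 100, iso) for iso in (200, 400, 800)]   # t5..t7 (assumed ISOs)
        signal_frame = capture(1 / 10000, 1600)                           # t8, maximum sensitivity
        wide_range_image = hdr_merge(hdr_frames)                          # used for recognition
        visible_light_data = demodulate(signal_frame)                     # bright line pattern
        return wide_range_image, visible_light_data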
 (Security)
 FIG. 145 is a diagram for describing the processing operation of the visible light communication system in Embodiment 17.
 This visible light communication system includes, for example, a transmitter placed at a cash register, a smartphone serving as the receiver, and a server. The communication between the smartphone and the server and the communication between the transmitter and the server are each performed via a secure communication line, while the communication between the transmitter and the smartphone is performed by visible light communication. The visible light communication system in this embodiment ensures security by determining whether the visible light signal from the transmitter has really been received by the smartphone.
 Specifically, the transmitter changes its luminance at time t1 to transmit a visible light signal indicating, for example, the value "100" to the smartphone. Upon receiving that visible light signal at time t2, the smartphone transmits a radio signal indicating the value "100" to the server, which receives it at time t3. At this point, the server performs a process to determine whether the value "100" indicated by the radio signal is really the value of a visible light signal received by the smartphone from the transmitter. That is, the server transmits to the transmitter a radio signal indicating, for example, the value "200". The transmitter that has received this radio signal changes its luminance at time t4 to transmit a visible light signal indicating the value "200" to the smartphone. Upon receiving that visible light signal at time t5, the smartphone transmits a radio signal indicating the value "200" to the server, which receives it at time t6. The server determines whether the value indicated by this received radio signal is the same as the value indicated by the radio signal it transmitted at time t3. If they are the same, the server determines that the value "100" reported at time t3 is indeed the value of a visible light signal transmitted from the transmitter to the smartphone and received there. If they are not the same, the server determines that the value "100" reported at time t3 is suspect as the value of a visible light signal transmitted from the transmitter to the smartphone.
 This allows the server to determine whether the smartphone has really received a visible light signal from the transmitter. In other words, it prevents the smartphone from transmitting a signal to the server while pretending to have received a visible light signal that it has not actually received from the transmitter.
 In the example above, communication using radio signals is performed among the smartphone, the server, and the transmitter, but communication using optical signals other than visible light signals, or communication using electrical signals, may be performed instead. The visible light signal transmitted from the transmitter to the smartphone indicates, for example, a billing value, a coupon value, a monster value, or a bingo value.
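 A server-side sketch of this check, assuming hypothetical send_to_transmitter and receive_from_smartphone helpers over the secure lines; the random challenge plays the role of the value "200" in the example.

    import secrets

    def verify_reception(reported_value, send_to_transmitter, receive_from_smartphone):
        """FIG. 145 check: issue a random challenge to the transmitter and accept the
        reported value only if the smartphone relays the same challenge back, i.e.
        it really sees the transmitter's light."""
        challenge = secrets.randbelow(1_000_000)       # plays the role of the value "200"
        send_to_transmitter(challenge)                 # transmitter re-emits it as visible light
        relayed = receive_from_smartphone()            # smartphone reports what it decoded
        if relayed == challenge:
            return reported_value                      # the value "100" is trusted
        return None                                    # the reported value is treated as suspect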
 (Vehicle-related)
 FIG. 146A is a diagram illustrating an example of vehicle-to-vehicle communication using visible light in Embodiment 17.
 For example, the leading vehicle recognizes, by means of a sensor (such as a camera) mounted on it, that there is an accident ahead in the direction of travel. Upon recognizing the accident, the leading vehicle transmits a visible light signal by changing the luminance of its tail lamps; for example, it transmits a visible light signal that prompts the following vehicle to decelerate. When the following vehicle receives that visible light signal through imaging with its own on-board camera, it decelerates according to the visible light signal and, in turn, transmits a visible light signal prompting the vehicle behind it to decelerate.
 In this way, the visible light signal prompting deceleration is transmitted in sequence, starting from the front, to a line of vehicles traveling one behind the other, and each vehicle that receives the signal decelerates. Since the visible light signal is passed to each vehicle quickly, the vehicles can decelerate in the same way at substantially the same time, which helps reduce traffic congestion caused by the accident.
 FIG. 146B is a diagram illustrating another example of vehicle-to-vehicle communication using visible light in Embodiment 17.
 For example, the vehicle in front may transmit a visible light signal indicating a message to the vehicle behind it (for example, "thank you") by changing the luminance of its tail lamps. This message is generated, for example, by a user operation on a smartphone, and the smartphone transmits a signal indicating the message to the vehicle in front. As a result, the vehicle in front can transmit a visible light signal indicating the message to the vehicle behind it.
 FIG. 147 is a diagram illustrating an example of a method for determining the positions of a plurality of LEDs in Embodiment 17.
 For example, a vehicle headlight has a plurality of LEDs (Light Emitting Diodes). The transmitter of this vehicle transmits a visible light signal from each LED by individually changing the luminance of each of the headlight's LEDs. A receiver in another vehicle receives the visible light signals from those LEDs by imaging the vehicle that has the headlight.
 At this time, in order to recognize which LED each received visible light signal was transmitted from, the receiver determines the position of each of the LEDs from the image obtained by the imaging. Specifically, the receiver uses an acceleration sensor mounted on the same vehicle as the receiver and determines the position of each LED with the direction of gravity indicated by that acceleration sensor (for example, the downward arrow in FIG. 147) as a reference.
 In the example above, an LED is given as an example of a light emitter whose luminance changes, but a light emitter other than an LED may be used.
 FIG. 148 is a diagram illustrating an example of a bright line image obtained by imaging a vehicle in Embodiment 17.
 For example, a receiver mounted on a traveling vehicle acquires the bright line image shown in FIG. 148 by imaging the vehicle behind it (the following vehicle). The transmitter mounted on the following vehicle transmits a visible light signal to the vehicle in front by changing the luminance of the vehicle's two headlights. A camera that images the area behind the vehicle is attached to the rear or to a side mirror of the vehicle in front. The receiver acquires a bright line image by imaging the following vehicle with that camera and demodulates the bright line pattern (visible light signal) contained in the bright line image. The visible light signal transmitted from the transmitter of the following vehicle is thereby received by the receiver of the vehicle in front.
 Here, from each of the visible light signals transmitted from the two headlights and demodulated, the receiver obtains the ID of the vehicle that has those headlights, the speed of that vehicle, and its vehicle model. If the IDs of the two visible light signals are the same, the receiver determines that the two visible light signals were transmitted from the same vehicle. The receiver then identifies, from the vehicle model, the distance between the vehicle's two headlights (the inter-light distance). Furthermore, the receiver measures the distance L1 between the two regions in which the bright line patterns appear in the bright line image. The receiver then calculates, by triangulation using the distance L1 and the inter-light distance, the distance from the vehicle carrying the receiver to the following vehicle (the inter-vehicle distance). Based on the inter-vehicle distance and the vehicle speed obtained from the visible light signals, the receiver judges the risk of a collision and notifies the driver of the vehicle with a warning corresponding to the result of that judgment. A vehicle collision can thereby be avoided.
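 A rough sketch of the distance estimate, stated here as the standard pinhole-camera relation rather than the patent's exact triangulation; the focal length in pixels is an assumed camera parameter.

    def inter_vehicle_distance(light_separation_m, pixel_distance_l1, focal_length_px):
        """Estimate the distance to the following vehicle from the real headlight
        separation (known from the vehicle model) and its apparent separation L1
        in pixels, using a pinhole-camera model."""
        return light_separation_m * focal_length_px / pixel_distance_l1

    # Example: headlights 1.5 m apart appear 150 px apart with a 1200 px focal length.
    distance_m = inter_vehicle_distance(1.5, 150, 1200)   # = 12.0 m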
 In the example above, the receiver identifies the inter-light distance from the vehicle model contained in the visible light signal, but it may identify the inter-light distance from information other than the vehicle model. Also, in the example above, the receiver issues a warning when it determines that there is a risk of collision, but it may instead output to the vehicle a control signal that causes the vehicle to perform an operation that avoids the risk. For example, the control signal is a signal for accelerating the vehicle or a signal for causing the vehicle to change lanes.
 In the example above, the camera images the following vehicle, but it may image an oncoming vehicle instead. Furthermore, when the receiver determines, from an image obtained by imaging with the camera, that fog has formed around the receiver (that is, around the vehicle equipped with the receiver), it may switch to a mode for receiving visible light signals as described above. In this way, even when fog has formed around it, the receiver of the vehicle can identify the position and speed of an oncoming vehicle by receiving the visible light signal transmitted from the headlights of that oncoming vehicle.
 FIG. 149 is a diagram illustrating an application example of the receiver and the transmitter in Embodiment 17. FIG. 149 is a view of the automobile as seen from behind.
 For example, a transmitter (car) 7006a having two tail lamps (light emitting units or lights) transmits the identification information (ID) of the transmitter 7006a to a receiver configured, for example, as a smartphone. Upon receiving the ID, the receiver acquires the information associated with that ID from a server. For example, that information is the ID of the car or of the transmitter, the distance between the light emitting units, the size of the light emitting units, the size of the car, the shape of the car, the weight of the car, the car's number plate, the view ahead of the car, or information indicating the presence or absence of danger. The receiver may also acquire this information directly from the transmitter 7006a.
 FIG. 150 is a flowchart illustrating an example of the processing operations of the receiver and the transmitter 7006a in Embodiment 17.
 The ID of the transmitter 7006a and the information to be passed to a receiver that receives the ID are associated with each other and stored in a server (step 7106a). The information to be passed to the receiver may include the size of the light emitting units serving as the transmitter 7006a, the distance between the light emitting units, the shape, weight, and identification number (such as the vehicle body number) of the object of which the transmitter 7006a is a part, the appearance of places that are difficult to observe from the receiver, and the presence or absence of danger.
 The transmitter 7006a transmits the ID (step 7106b). The transmitted content may also include the URL of the server and the information to be stored in the server.
 The receiver receives the transmitted information, such as the ID (step 7106c). The receiver acquires the information associated with the received ID from the server (step 7106d). The receiver displays the received information and the information acquired from the server (step 7106e).
 The receiver calculates the distance between the receiver and the light emitting units by triangulation, either from the size information of the light emitting units and their apparent size in the captured image, or from the distance information between the light emitting units and the distance between the imaged light emitting units (step 7106f). The receiver issues a warning of danger or the like on the basis of information such as the appearance of places that are difficult to observe from the receiver and the presence or absence of danger (step 7106g).
 FIG. 151 is a diagram illustrating an application example of the receiver and the transmitter in Embodiment 17.
 For example, a transmitter (car) 7007b having two tail lamps (light emitting units or lights) transmits information of the transmitter 7007b to a receiver 7007a configured, for example, as a transmission/reception device of a parking lot. The information of the transmitter 7007b indicates the identification information (ID) of the transmitter 7007b, the car's number plate, the size of the car, the shape of the car, or the weight of the car. Upon receiving the information, the receiver 7007a transmits whether parking is possible, billing information, or the parking position. The receiver 7007a may also receive only the ID and acquire the information other than the ID from a server.
 FIG. 152 is a flowchart illustrating an example of the processing operations of the receiver 7007a and the transmitter 7007b in Embodiment 17. Since the transmitter 7007b performs not only transmission but also reception, it includes an in-vehicle transmitter and an in-vehicle receiver.
 The ID of the transmitter 7007b and the information to be passed to the receiver 7007a that receives the ID are associated with each other and stored in a server (parking lot management server) (step 7107a). The information to be passed to the receiver 7007a may include the shape, weight, and identification number (such as the vehicle body number) of the object of which the transmitter 7007b is a part, the identification number of the user of the transmitter 7007b, and information for payment.
 The transmitter 7007b (in-vehicle transmitter) transmits the ID (step 7107b). The transmitted content may also include the URL of the server and the information to be stored in the server. The receiver 7007a of the parking lot (the parking lot transmission/reception device) transmits the received information to the server that manages the parking lot (the parking lot management server) (step 7107c). The parking lot management server uses the ID of the transmitter 7007b as a key to acquire the information associated with the ID (step 7107d). The parking lot management server checks the availability of the parking lot (step 7107e).
 The receiver 7007a of the parking lot (the parking lot transmission/reception device) transmits whether parking is possible, the parking position information, or the address of a server holding this information (step 7107f). Alternatively, the parking lot management server transmits this information to another server. The transmitter (in-vehicle receiver) 7007b receives the transmitted information (step 7107g). Alternatively, the in-vehicle system acquires this information from another server.
 The parking lot management server controls the parking lot so that parking can be performed easily (step 7107h), for example by controlling a multi-story parking facility. The parking lot transmission/reception device transmits an ID (step 7107i). The in-vehicle receiver (transmitter 7007b) makes an inquiry to the parking lot management server based on the user information of the in-vehicle receiver and the received ID (step 7107j).
 The parking lot management server charges a fee according to the parking time and the like (step 7107k). The parking lot management server controls the parking lot so that the parked vehicle can be accessed easily (step 7107m), for example by controlling a multi-story parking facility. The in-vehicle receiver (transmitter 7007b) displays a map to the parking position and performs navigation from the current location (step 7107n).
 (Inside a train)
 FIG. 153 is a diagram illustrating the configuration of a visible light communication system applied to the inside of a train in Embodiment 17.
 The visible light communication system includes, for example, a plurality of lighting devices 1905 arranged in a train car, a smartphone 1906 held by a user, a server 1904, and a camera 1903 arranged in the train car.
 Each of the lighting devices 1905 is configured as the transmitter described above; it provides illumination and also transmits a visible light signal by changing its luminance. The visible light signal indicates the ID of the lighting device 1905 that transmits it.
 The smartphone 1906 is configured as the receiver described above and receives the visible light signal transmitted from a lighting device 1905 by imaging that lighting device 1905. For example, when the user is involved in trouble on the train (such as groping or a fight), the user has the smartphone 1906 receive the visible light signal. Upon receiving the visible light signal, the smartphone 1906 notifies the server 1904 of the ID indicated by that visible light signal.
 Upon being notified of the ID, the server 1904 identifies the camera 1903 whose imaging range covers the area illuminated by the lighting device 1905 identified by that ID. The server 1904 then causes the identified camera 1903 to image the area illuminated by that lighting device 1905.
 The camera 1903 captures an image in accordance with the instruction from the server 1904 and transmits the image obtained by the capture to the server 1904.
 In this way, an image showing the trouble situation on the train can be acquired. This image can be used as evidence of the trouble.
 The user may also operate the smartphone 1906 to have the image obtained by the camera 1903 transmitted from the server 1904 to the smartphone 1906.
 The smartphone 1906 may also display an imaging button on its screen and, when the imaging button is touched by the user, transmit a signal prompting imaging to the server 1904. This allows the user to decide the timing of the imaging.
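 A server-side sketch of this lookup, assuming a hypothetical table from lighting-device IDs to camera IDs and a request_capture helper; the identifiers are placeholders.

    # Map a reported lighting ID to the camera covering that light's area (FIG. 153).
    CAMERA_FOR_LIGHT = {
        "light-0012": "camera-03",     # lighting device 1905 -> camera 1903 covering it
        "light-0013": "camera-03",
        "light-0020": "camera-07",
    }

    def handle_trouble_report(light_id, request_capture):
        camera_id = CAMERA_FOR_LIGHT.get(light_id)
        if camera_id is None:
            return None                      # unknown lighting device, nothing to capture
        return request_capture(camera_id)    # returns the captured image as evidence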
 FIG. 154 is a diagram illustrating the configuration of a visible light communication system applied to a facility such as an amusement park in Embodiment 17.
 The visible light communication system includes, for example, a plurality of cameras 1903 arranged in the facility and an accessory 1907 worn by a person.
 The accessory 1907 is, for example, a headband with a ribbon to which a plurality of LEDs are attached. The accessory 1907 is configured as the transmitter described above and transmits a visible light signal by changing the luminance of the LEDs.
 Each of the cameras 1903 is configured as the receiver described above and has a visible light communication mode and a normal imaging mode. The cameras 1903 are arranged at different points along the paths in the facility.
 Specifically, when a camera 1903 set to the visible light communication mode images the accessory 1907 as a subject, it receives the visible light signal from the accessory 1907. Upon receiving the visible light signal, the camera 1903 switches its mode from the visible light communication mode to the normal imaging mode. As a result, the camera 1903 images the person wearing the accessory 1907 as a subject.
 Therefore, as the person wearing the accessory 1907 walks along the paths in the facility, the cameras 1903 near that person image the person one after another. Images showing the person enjoying the facility can thereby be acquired and saved automatically.
 Note that the camera 1903 need not perform imaging in the normal imaging mode immediately after receiving the visible light signal; it may, for example, perform imaging in the normal imaging mode when it receives an instruction to start imaging from a smartphone. This allows the user to have the camera 1903 image him or her at the moment the user touches an imaging start button displayed on the screen of the smartphone.
 FIG. 155 is a diagram illustrating an example of a visible light communication system including playground equipment and a smartphone in Embodiment 17.
 The playground equipment 1901 is configured as the above-described transmitter including, for example, a plurality of LEDs. That is, the playground equipment 1901 transmits a visible light signal by changing the luminance of the LEDs.
 The smartphone 1902 receives the visible light signal transmitted from the playground equipment 1901 by capturing an image of the playground equipment 1901. As shown in (a) of FIG. 155, when the smartphone 1902 receives the visible light signal for the first time, it downloads, for example from a server, video 1 associated with the visible light signal and the first reception, and reproduces it. When the smartphone 1902 receives the same visible light signal for the second time, as shown in (b) of FIG. 155, it downloads, for example from a server, video 2 associated with the visible light signal and the second reception, and reproduces it.
 In other words, even when the smartphone 1902 receives the same visible light signal, it switches the video to be reproduced according to the number of times the visible light signal has been received. The number of receptions may be counted by the smartphone 1902 or by the server. Alternatively, even if the smartphone 1902 receives the same visible light signal a plurality of times, it does not reproduce the same video twice in a row. Alternatively, among the plurality of videos associated with the same visible light signal, the smartphone 1902 may lower the appearance probability of videos that have already been reproduced, and preferentially download and reproduce a video with a high appearance probability.
 The smartphone 1902 may also receive a visible light signal transmitted from a touch panel installed at the information desk of a facility having a plurality of stores, and display an image corresponding to that signal. For example, while the touch panel displays an initial screen showing an overview of the facility, it transmits, by changing in luminance, a visible light signal indicating the facility overview. Therefore, when the smartphone receives that signal by capturing an image of the touch panel displaying the initial screen, it can display an image showing the facility overview on its own display. When the user operates the touch panel, the touch panel displays, for example, a store image showing information on a specific store, and at that time transmits a visible light signal indicating the information on that store. The smartphone can therefore display the store image showing the information on the specific store when it receives that signal by capturing an image of the touch panel displaying the store image. In this way, the smartphone can display images synchronized with the touch panel.
 (Summary of the above embodiments)
 A reproduction method according to one aspect of the present invention includes: a signal receiving step of receiving, with a sensor of a terminal device, a visible light signal from a transmitter that transmits the visible light signal by a change in luminance of a light source; a transmission step of transmitting, from the terminal device to a server, a request signal for requesting content associated with the visible light signal; a content receiving step in which the terminal device receives, from the server, content including times and data to be reproduced at the respective times; and a reproduction step of reproducing, out of the content, the data corresponding to the time of a clock provided in the terminal device.
 Thereby, as shown in FIG. 131C, content including times and the data to be reproduced at those times is received by the terminal device, and the data corresponding to the time of the terminal device's clock is reproduced. The terminal device can therefore reproduce the data in the content at the correct times indicated by the content, rather than at wrong times. Specifically, as in method e of FIG. 131A, the receiver serving as the terminal device reproduces the content from the point (receiver time - content reproduction start time) into the content; the data corresponding to the time of the terminal device's clock is the data located at that point in the content. Furthermore, if content related to that content (transmitter-side content) is being reproduced on the transmitter, the terminal device can reproduce its content appropriately synchronized with the transmitter-side content. The content is audio or an image.
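 As a minimal illustration of this offset computation (method e of FIG. 131A), the following sketch indexes into the received timed content by (receiver time - content reproduction start time); the data structure and field names are assumptions, not part of the disclosure.

```python
from datetime import datetime

def current_playback_data(content_start: datetime, timed_data: dict, receiver_clock: datetime):
    """Return the piece of data scheduled at (receiver_clock - content_start).

    timed_data maps an offset in whole seconds from the start of the content
    to the data (audio/image chunk) to be reproduced at that offset.
    """
    offset = int((receiver_clock - content_start).total_seconds())
    # Reproduce the data whose scheduled time matches the terminal's clock.
    return timed_data.get(offset)

# Usage: the content started at 05:43:00; the receiver clock now reads 05:43:12,
# so the chunk scheduled 12 s into the content is reproduced.
content = {0: "chunk-0", 12: "chunk-12", 24: "chunk-24"}
print(current_playback_data(datetime(2017, 1, 1, 5, 43, 0), content,
                            datetime(2017, 1, 1, 5, 43, 12)))  # -> "chunk-12"
```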
 The clock provided in the terminal device and a reference clock may be synchronized by GPS (Global Positioning System) radio waves or NTP (Network Time Protocol) radio waves.
 Thereby, as shown in FIGS. 130 and 132, since the clock of the terminal device (receiver) is synchronized with the reference clock, the data corresponding to a given time can be reproduced at the appropriate time according to the reference clock.
 The visible light signal may also indicate the time at which the visible light signal is transmitted from the transmitter.
 Thereby, as shown in method d of FIG. 131A, the terminal device (receiver) can receive content associated with the time at which the visible light signal is transmitted from the transmitter (the transmitter time). For example, if the transmitter time is 5:43, the content to be reproduced at 5:43 can be received.
 In the reproduction method, furthermore, when the time at which the process for synchronizing the clock of the terminal device with the reference clock by the GPS radio waves or the NTP radio waves was performed is more than a predetermined time before the time at which the terminal device received the visible light signal, the clock of the terminal device may be synchronized with the clock of the transmitter using the time indicated by the visible light signal transmitted from the transmitter.
 For example, once a predetermined time has elapsed after the process for synchronizing the clock of the terminal device with the reference clock, the synchronization may no longer be properly maintained, and the terminal device may then be unable to reproduce the content at a time synchronized with the transmitter-side content reproduced by the transmitter. Therefore, in the reproduction method according to this aspect of the present invention, as in steps S1829 and S1830 of FIG. 130, when the predetermined time has elapsed, the clock of the terminal device (receiver) is synchronized with the clock of the transmitter. The terminal device can thus reproduce the content at a time synchronized with the transmitter-side content reproduced by the transmitter.
 The server may hold a plurality of contents each associated with a time, and in the content receiving step, when no content associated with the time indicated by the visible light signal exists on the server, the content that, among the plurality of contents, is associated with the time closest to and later than the time indicated by the visible light signal may be received.
 Thereby, as shown in method d of FIG. 131A, even if no content associated with the time indicated by the visible light signal exists on the server, appropriate content can be received from among the plurality of contents held by the server.
 A reproduction method may also include: a signal receiving step of receiving, with a sensor of a terminal device, a visible light signal from a transmitter that transmits the visible light signal by a change in luminance of a light source; a transmission step of transmitting, from the terminal device to a server, a request signal for requesting content associated with the visible light signal; a content receiving step in which the terminal device receives the content from the server; and a reproduction step of reproducing the content. The visible light signal indicates ID information and the time at which the visible light signal is transmitted from the transmitter, and in the content receiving step, the content associated with the ID information and the time indicated by the visible light signal is received.
 Thereby, as in method d of FIG. 131A, among the plurality of contents associated with the ID information (transmitter ID), the content associated with the time at which the visible light signal is transmitted from the transmitter (the transmitter time) is received and reproduced. Content appropriate for the transmitter ID and the transmitter time can therefore be reproduced.
 The visible light signal may indicate the time at which it is transmitted from the transmitter by including second information indicating the hour and minute of that time and first information indicating the second of that time, and in the signal receiving step, the second information may be received, and the first information may be received a larger number of times than the second information.
 Thereby, when, for example, the terminal device is to be notified, with one-second resolution, of the time at which each packet included in the visible light signal is transmitted, the effort of transmitting a packet expressing the current time with all of the hour, minute, and second every second can be reduced. That is, as shown in FIG. 126, as long as the hour and minute of the transmission time have not changed from those indicated by a previously transmitted packet, it suffices to transmit only the first information, namely the packet indicating only the second (time packet 1). By transmitting the second information, namely the packet indicating the hour and minute (time packet 2), less often than the first information, the transmission of packets containing redundant content can be suppressed.
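 The effect of splitting the time into the two packet types can be sketched as follows; the packet layout here is hypothetical and only illustrates that the hour-and-minute packet (time packet 2) needs to be sent far less often than the seconds-only packet (time packet 1).

```python
def time_packets(timestamps):
    """Yield (packet_type, payload) pairs for a stream of (hour, minute, second) tuples.

    Time packet 1 carries only the seconds and is emitted every second; time packet 2
    carries hour and minute and is emitted only when they change, which keeps
    redundant content to a minimum.
    """
    last_hm = None
    for hour, minute, second in timestamps:
        if (hour, minute) != last_hm:
            yield ("time packet 2", {"hour": hour, "minute": minute})
            last_hm = (hour, minute)
        yield ("time packet 1", {"second": second})

# Across one minute boundary, only two hour/minute packets are emitted.
stream = [(5, 43, s) for s in range(58, 60)] + [(5, 44, s) for s in range(0, 2)]
for pkt in time_packets(stream):
    print(pkt)
```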
 The sensor of the terminal device may be an image sensor. In the signal receiving step, continuous capturing is performed with the image sensor while its shutter speed is alternately switched between a first speed and a second speed higher than the first speed. (a) When the subject captured by the image sensor is a barcode, an image showing the barcode is obtained by capturing at the first shutter speed, and a barcode identifier is obtained by decoding the barcode shown in that image. (b) When the subject captured by the image sensor is the light source, a bright line image, which is an image including bright lines corresponding to the respective exposure lines included in the image sensor, is obtained by capturing at the second shutter speed, and the visible light signal is obtained as a visible light identifier by decoding the pattern of the bright lines included in the obtained bright line image. The reproduction method may further include displaying an image obtained by capturing at the first shutter speed.
 Thereby, as shown in FIG. 102, an identifier corresponding to either a barcode or a visible light signal can be appropriately obtained, and an image showing the barcode or the light source serving as the subject can be displayed.
 In the acquisition of the visible light identifier, a first packet including a data part and an address part may be obtained from the pattern of the bright lines, and it may be determined whether, among at least one packet already obtained before the first packet, a predetermined number or more of second packets exist whose address part is identical to the address part of the first packet. When it is determined that the predetermined number or more of second packets exist, a composite pixel value may be calculated by combining the pixel values of the partial regions of the bright line images corresponding to the data parts of those second packets with the pixel values of the partial region of the bright line image corresponding to the data part of the first packet, and at least part of the visible light identifier may be obtained by decoding a data part containing the composite pixel values.
 Thereby, as shown in FIG. 74, even if the data parts of a plurality of packets having the same address part differ slightly from one another, an appropriate data part can be decoded by combining the pixel values of the data parts of those packets, so that at least part of the visible light identifier can be obtained correctly.
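 A sketch of this pixel-combining step, under the assumption that the pixel values of the corresponding bright-line regions are simply averaged before thresholding and decoding; the disclosure leaves the exact combining function open.

```python
def composite_data_part(pixel_rows):
    """Combine the bright-line pixel values of data parts that share an address part.

    pixel_rows is a list of equally long lists of pixel values, one per received
    packet with the same address part.  Averaging suppresses noise in any single
    capture, after which the combined row can be thresholded and decoded.
    """
    n = len(pixel_rows)
    return [sum(col) / n for col in zip(*pixel_rows)]

def decode_bits(pixels, threshold=128):
    # Threshold the composite pixel values into bits (illustrative only).
    return [1 if p >= threshold else 0 for p in pixels]

rows = [
    [200, 40, 210, 60],   # data part region from packet 1
    [180, 70, 190, 50],   # data part region from packet 2 (same address part)
    [220, 55, 205, 45],   # data part region from the newly received packet
]
print(decode_bits(composite_data_part(rows)))  # -> [1, 0, 1, 0]
```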
 The first packet may further include a first error correction code for the data part and a second error correction code for the address part. In the signal receiving step, the address part and the second error correction code transmitted by a luminance change according to a second frequency may be received from the transmitter, and the data part and the first error correction code transmitted by a luminance change according to a first frequency higher than the second frequency may be received.
 This suppresses erroneous reception of the address part while allowing the data part, which carries a larger amount of data, to be obtained quickly.
 In the acquisition of the visible light identifier, a first packet including a data part and an address part may be obtained from the pattern of the bright lines, and it may be determined whether, among at least one packet already obtained before the first packet, at least one second packet exists whose address part is identical to the address part of the first packet. When it is determined that the at least one second packet exists, it may be determined whether the data parts of the at least one second packet and of the first packet are all equal. When it is determined that the data parts are not all equal, it may be determined, for each of the at least one second packet, whether the number of portions of the data part of that second packet that differ from the corresponding portions of the data part of the first packet is equal to or larger than a predetermined number. When, among the at least one second packet, there is a second packet determined to have the predetermined number or more of differing portions, the at least one second packet may be discarded. When there is no second packet determined to have the predetermined number or more of differing portions, the largest group of packets having an identical data part may be identified from among the first packet and the at least one second packet, and the data part of those packets may be decoded as the data part corresponding to the address part of the first packet, whereby at least part of the visible light identifier is obtained.
 Thereby, as shown in FIG. 73, when a plurality of packets having the same address part are received, an appropriate data part can be decoded even if the data parts of those packets differ, so that at least part of the visible light identifier can be obtained correctly. That is, a plurality of packets having the same address part transmitted from the same transmitter basically have the same data part. However, when the terminal device switches the transmitter from which it receives packets, it may receive a plurality of packets that have the same address part but different data parts. In such a case, in the reproduction method according to this aspect of the present invention, as in step S10106 of FIG. 73, the already received packets (the second packets) are discarded, and the data part of the newest packet (the first packet) can be decoded as the correct data part corresponding to that address part. Furthermore, even without such transmitter switching, the data parts of packets having the same address part may differ slightly depending on the transmission and reception conditions of the visible light signal. In that case, in the reproduction method according to this aspect of the present invention, an appropriate data part can be decoded by a so-called majority decision, as in step S10107 of FIG. 73.
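 The decision logic of FIG. 73 can be summarized roughly as below; the bit-level packet representation and the difference threshold are placeholders chosen for illustration.

```python
from collections import Counter

def resolve_data_part(new_data, stored_data_parts, max_diff=2):
    """Decide which data part to decode for one address part (sketch of FIG. 73).

    new_data and stored_data_parts are bit lists of packets sharing one address part.
    If any stored packet differs from the new one in max_diff or more positions,
    the transmitter is assumed to have changed and the stored packets are discarded;
    otherwise the most frequent data part wins (majority decision).
    """
    if not stored_data_parts:
        return new_data, [new_data]
    if all(d == new_data for d in stored_data_parts):
        return new_data, stored_data_parts + [new_data]
    diffs = [sum(a != b for a, b in zip(d, new_data)) for d in stored_data_parts]
    if any(diff >= max_diff for diff in diffs):
        return new_data, [new_data]            # discard the old packets, keep the newest
    candidates = stored_data_parts + [new_data]
    winner, _ = Counter(tuple(d) for d in candidates).most_common(1)[0]
    return list(winner), candidates             # majority decision

data, history = resolve_data_part([1, 0, 1, 1], [[1, 0, 1, 1], [1, 0, 0, 1]])
print(data)  # -> [1, 0, 1, 1]
```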
 In the acquisition of the visible light identifier, a plurality of packets each including a data part and an address part may be obtained from the pattern of the bright lines, and it may be determined whether the obtained packets include a 0-termination packet, which is a packet whose data part consists entirely of bits indicating 0. When it is determined that the 0-termination packet exists, it may be determined whether all of the N related packets (N being an integer of 1 or more), which are packets whose address parts are associated with the address part of the 0-termination packet, exist among the obtained packets. When it is determined that all of the N related packets exist, the visible light identifier may be obtained by arranging and decoding the data parts of the N related packets. For example, the address parts associated with the address part of the 0-termination packet are address parts indicating addresses of 0 or more that are smaller than the address indicated by the address part of the 0-termination packet.
 Specifically, as shown in FIG. 75, it is determined whether all packets having addresses equal to or smaller than the address of the 0-termination packet have been collected as related packets, and when it is determined that they have, the data parts of those related packets are decoded. Thereby, even if the terminal device does not know in advance how many related packets are needed to obtain the visible light identifier, or what their addresses are, it can easily know both as soon as it obtains the 0-termination packet. As a result, the terminal device can obtain an appropriate visible light identifier by arranging and decoding the data parts of the N related packets.
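 A sketch of the 0-termination handling of FIG. 75, assuming 0-based addresses and that the identifier is the concatenation of the data parts of the related packets; whether the terminator's own (all-zero) data part is included is not fixed here.

```python
def try_assemble(packets):
    """Assemble a visible light identifier once a 0-termination packet has arrived.

    packets maps an address (int) to its data part (list of bits).  A packet whose
    data part is all zeros marks the last address; the identifier is complete when
    every address from 0 up to that address has been received.
    """
    zero_addrs = [a for a, d in packets.items() if d and not any(d)]
    if not zero_addrs:
        return None                      # no 0-termination packet yet
    last = max(zero_addrs)
    if not all(a in packets for a in range(last)):
        return None                      # some related packet is still missing
    # Concatenate the data parts of addresses 0 .. last-1 (the terminator carries no data here).
    return [b for a in range(last) for b in packets[a]]

print(try_assemble({0: [1, 0], 1: [0, 1], 2: [0, 0]}))  # -> [1, 0, 0, 1]
```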
 (Embodiment 18)
 The following describes a protocol supporting a variable length and a variable number of divisions.
 FIG. 156 is a diagram illustrating an example of a transmission signal in the present embodiment.
 A transmission packet is composed of a preamble, a TYPE field, a payload, and a check part. Packets may be transmitted continuously or intermittently. Providing periods during which no packet is transmitted allows the state of the liquid crystal to be changed while the backlight is off, which improves the motion resolution of a liquid crystal display. Randomizing the packet transmission interval makes it possible to avoid interference.
 A pattern that does not appear in 4PPM is used for the preamble. Using a short basic pattern simplifies the reception processing.
 By expressing the number of data divisions through the type of preamble, the number of divisions can be made variable without using extra transmission slots.
 By changing the payload length according to the value of TYPE, the transmission data can be made variable in length. TYPE may express the payload length or the data length before division. By expressing the packet address through the value of TYPE, the receiver can arrange the received packets correctly. The payload length (data length) expressed by the value of TYPE may also be changed according to the type of preamble or the number of divisions.
 Changing the length of the check part according to the payload length enables efficient error correction (or detection). Setting the minimum length of the check part to 2 bits allows efficient conversion to 4PPM. Changing the type of error correction (detection) code according to the payload length also enables efficient error correction (detection). The length of the check part or the type of error correction (detection) code may also be changed according to the type of preamble or the value of TYPE.
 There are combinations of payload length and number of divisions that yield the same total data length. In such cases, more values can be expressed by giving the same data value a different meaning for each combination.
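 To make the structure concrete, the following sketch assembles such a packet; the preamble patterns, the TYPE encoding, and the toy parity check are all assumptions and stand in for whatever concrete choices an implementation would make.

```python
# Hypothetical numeric choices; the disclosure leaves the concrete preamble
# patterns, TYPE encoding, and check codes open.
PREAMBLES = {1: "P1", 2: "P2", 4: "P4"}        # preamble pattern per division count

def check_bits(payload_bits):
    """Toy check part: even parity padded to 2 bits, so that the result converts
    cleanly to 4PPM symbols (2 bits per symbol)."""
    parity = sum(payload_bits) % 2
    return [parity, 0]

def build_packet(division_count, address, payload_bits):
    """Assemble preamble / TYPE / payload / check for one divided packet."""
    preamble = PREAMBLES[division_count]              # division count via preamble type
    type_field = {"address": address,                 # lets the receiver reorder packets
                  "payload_len": len(payload_bits)}   # variable-length payload
    return {"preamble": preamble, "type": type_field,
            "payload": payload_bits, "check": check_bits(payload_bits)}

print(build_packet(2, 0, [1, 0, 1, 1]))
```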
 The following describes a high-speed transmission and luminance modulation protocol.
 FIG. 157 is a diagram illustrating an example of a transmission signal in the present embodiment.
 A transmission packet is composed of a preamble part, a body part, and a luminance adjustment part. The body includes an address part, a data part, and an error correction (detection) code part. Allowing intermittent transmission provides the same effects as described above.
 (Embodiment 19)
 (Frame configuration for single frame transmission)
 FIG. 158 is a diagram illustrating an example of a transmission signal in the present embodiment.
 A transmission frame is composed of a preamble (PRE), a frame length (FLEN), an ID type (IDTYPE), content (ID/DATA), and a check code (CRC), and may also include a content type (CONTENTTYPE). The number of bits in each field is only an example.
 Specifying the length of ID/DATA with FLEN makes it possible to transmit variable-length content.
 The CRC is a check code that corrects or detects errors in the portions other than PRE. By changing the CRC length according to the length of the checked region, the checking capability can be kept above a certain level. Using different check codes depending on the length of the checked region also improves the checking capability per unit of CRC length.
 (Frame configuration for multiple frame transmission)
 FIG. 159 is a diagram illustrating an example of a transmission signal in the present embodiment.
 A transmission frame is composed of a preamble (PRE), an address (ADDR), and a part of the divided data (DATAPART), and may also include a division number (PARTNUM) and an address flag (ADDRFRAG). The number of bits in each field is only an example.
 Dividing the content into a plurality of parts and transmitting them makes long-distance communication possible.
 Dividing the data into equal parts keeps the maximum frame length small, enabling stable communication.
 When equal division is not possible, data of exactly the required size can be transmitted by making some of the divided parts smaller than the others.
 By using different division sizes and giving meaning to the combination of sizes, more information can be transmitted. For example, even for data with the same 32-bit value, treating transmission as four 8-bit parts, as two 16-bit parts, and as one 15-bit part plus one 17-bit part as different pieces of information makes it possible to express a larger amount of information.
 Indicating the number of divisions with PARTNUM allows the receiver to know the number of divisions immediately and to display the reception progress accurately.
 By defining ADDRFRAG so that 0 means the frame does not carry the last address and 1 means that it does, a field indicating the number of divisions becomes unnecessary, and transmission can be completed in a shorter time.
 The CRC is, as above, a check code that corrects or detects errors in the portions other than PRE. This check makes it possible to detect interference when frames from a plurality of transmission sources are received. Setting the CRC length to an integer multiple of the DATAPART length detects interference most efficiently.
 A check code that checks the portions other than PRE of each frame may also be appended to the end of each divided frame (a frame shown in (a), (b), or (c) of FIG. 159).
 The IDTYPE shown in (d) of FIG. 159 may have a fixed length such as 4 or 5 bits, as in (a) to (d) of FIG. 158, or the IDTYPE length may be varied according to the ID/DATA length. This provides the same effects as described above.
 (Specification of the ID/DATA length)
 FIG. 160 is a diagram illustrating an example of a transmission signal in the present embodiment.
 In the cases of (a) to (d) of FIG. 158, configuring the lengths as in tables (a) and (b) of FIG. 160 makes it possible to represent a ucode when the length is 128 bits.
 (CRC length and generator polynomial)
 FIG. 161 is a diagram illustrating an example of a transmission signal in the present embodiment.
 Setting the CRC length in this way keeps the checking capability constant regardless of the length of the checked region.
 The generator polynomial is only an example, and another generator polynomial may be used. A check code other than a CRC may also be used. These measures can improve the checking capability.
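 As an illustration of keeping the checking capability roughly constant, the sketch below picks a CRC length from the length of the checked region and computes a plain bitwise CRC; the thresholds and the generator polynomial are examples only and do not reproduce FIG. 161.

```python
# Illustrative thresholds only; FIG. 161 gives the actual mapping of checked-region
# length to CRC length and generator polynomial, which is not reproduced here.
def crc_length_for(checked_bits):
    """Pick a CRC length that keeps the detection capability roughly constant."""
    if checked_bits <= 16:
        return 4
    if checked_bits <= 64:
        return 8
    return 16

def crc(bits, poly, crc_len):
    """Bitwise CRC over a list of bits with the given generator polynomial."""
    reg = 0
    for b in bits:
        reg = ((reg << 1) | b) & ((1 << (crc_len + 1)) - 1)
        if reg >> crc_len:
            reg ^= poly
    return reg & ((1 << crc_len) - 1)

payload = [1, 0, 1, 1, 0, 0, 1, 0]
n = crc_length_for(len(payload))
# poly 0x13 corresponds to x^4 + x + 1 and matches the 4-bit case chosen above.
print(n, crc(payload + [0] * n, poly=0x13, crc_len=n))
```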
 (Specification of the DATAPART length and of the last address by the type of preamble)
 FIG. 162 is a diagram illustrating an example of a transmission signal in the present embodiment.
 Indicating the DATAPART length by the type of preamble makes a field indicating the DATAPART length unnecessary, so that information can be transmitted in a shorter time. Likewise, indicating whether a frame carries the last address makes a field indicating the number of divisions unnecessary, so that information can be transmitted in a shorter time. In the case of (b) of FIG. 162, the DATAPART length of the frame carrying the last address is unknown; the frame can nevertheless be received normally by performing reception processing under the assumption that its DATAPART length is the same as that of the non-last-address frame received immediately before or after it.
 The address length may differ depending on the type of preamble. This increases the number of possible combinations of transmission information lengths, or allows transmission in a shorter time.
 In the case of (c) of FIG. 162, the number of divisions is specified by the preamble, and a field indicating the DATAPART length is added.
 (Specification of the address)
 FIG. 163 is a diagram illustrating an example of a transmission signal in the present embodiment.
 Indicating the address of each frame with the value of ADDR allows the receiver to correctly reconstruct the transmitted information.
 Indicating the number of divisions with the value of PARTNUM allows the receiver to know the number of divisions as soon as it receives the first frame, and to display the reception progress accurately.
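 A sketch of how a receiver might use ADDR and PARTNUM to reassemble the divided data and report progress; the buffer layout is an assumption.

```python
def add_frame(buffer, addr, part_num, data_part):
    """Store one divided frame and reassemble the message when all parts are present.

    buffer is a dict reused across calls; addr is the frame address (0-based here),
    part_num is the total number of divisions signalled in the frame, and data_part
    holds the bits carried by this frame.
    """
    buffer[addr] = data_part
    buffer["_parts"] = part_num                      # division count known from the first frame
    received = [a for a in buffer if isinstance(a, int)]
    progress = len(received) / part_num              # accurate progress display
    if len(received) == part_num:
        message = [b for a in sorted(received) for b in buffer[a]]
        return progress, message
    return progress, None

buf = {}
print(add_frame(buf, 0, 2, [1, 0]))   # -> (0.5, None)
print(add_frame(buf, 1, 2, [1, 1]))   # -> (1.0, [1, 0, 1, 1])
```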
 (Prevention of interference caused by different numbers of divisions)
 FIGS. 164 and 165 are a diagram and a flowchart illustrating an example of a transmission and reception system in the present embodiment.
 When the transmission information is divided into equal parts and transmitted separately, the signals from transmitter A and transmitter B in FIG. 164 have different preambles. Therefore, even when the receiver receives both signals at the same time, it can reconstruct the transmission information without confusing the transmission sources.
 Since transmitters A and B each include a division number setting unit, the user can configure transmitters installed near each other with different numbers of divisions, thereby preventing interference.
 The receiver registers the number of divisions of the received signal with the server, so that the server knows the number of divisions configured in the transmitter, and other receivers can obtain that information from the server and display the reception progress accurately.
 The receiver obtains, from the server or from its own storage unit, whether the signals from nearby or supported transmitters are divided into equal lengths. If the obtained information indicates equal-length division, the receiver restores the signal only from frames having the same DATAPART length. Otherwise, or if a state in which not all addresses can be collected from frames of the same DATAPART length continues for a predetermined time or longer, the receiver restores the signal by combining frames of different DATAPART lengths.
 (Prevention of interference caused by different numbers of divisions)
 FIG. 166 is a flowchart showing the operation of the server in the present embodiment.
 The server receives from the receiver the ID it received and the division configuration (the combination of DATAPART lengths in which the signal was received). If the ID is subject to extension by division configuration, the server treats a numeric encoding of the division configuration pattern as an auxiliary ID, and returns to the receiver the information associated, as a key, with the extended ID obtained by combining the ID and the auxiliary ID.
 If the ID is not subject to extension by division configuration, the server checks whether a division configuration associated with the ID exists in its storage unit and whether it matches the received division configuration. If they differ, the server sends a reconfirmation command to the receiver. This prevents erroneous information from being presented because of a reception error at the receiver.
 If the server receives the same ID with the same division configuration within a predetermined time after sending the reconfirmation command, it determines that the division configuration has been changed and updates the division configuration associated with the ID. This handles the case, described with reference to FIG. 164, where the division configuration has been changed.
 If no division configuration is stored, if the received division configuration matches the stored one, or if the division configuration is being updated, the server returns to the receiver the information associated with the ID as a key, and stores the division configuration in its storage unit in association with the ID.
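 The server-side branch that extends an ID by its division configuration could look roughly like this; the encoding of the division pattern into an auxiliary ID and the lookup table are illustrative assumptions.

```python
# Sketch of the server-side lookup of FIG. 166; the encoding of the division
# pattern into an auxiliary ID and the table contents are assumptions.
EXTENSION_TARGETS = {0x12}                      # IDs extended by their division pattern
INFO = {(0x12, "16+16"): "info for pattern A",
        (0x12, "8+8+8+8"): "info for pattern B",
        (0x34, None): "plain info"}

def auxiliary_id(division_config):
    """Turn the list of DATAPART lengths into a key, e.g. [16, 16] -> '16+16'."""
    return "+".join(str(n) for n in division_config)

def lookup(received_id, division_config):
    if received_id in EXTENSION_TARGETS:
        # Same ID, different division pattern -> different associated information.
        return INFO.get((received_id, auxiliary_id(division_config)))
    return INFO.get((received_id, None))

print(lookup(0x12, [16, 16]))      # -> info for pattern A
print(lookup(0x12, [8, 8, 8, 8]))  # -> info for pattern B
print(lookup(0x34, [16, 16]))      # -> plain info
```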
 (Display of the reception progress)
 FIGS. 167 to 172 are flowcharts and diagrams illustrating an example of the operation of the receiver in the present embodiment.
 The receiver obtains, from the server or from its own storage area, the types and proportions of the numbers of divisions used by the transmitters it supports or by the transmitters in its vicinity. If part of the divided data has already been received, the receiver obtains the types and proportions of the numbers of divisions used by the transmitters that are transmitting information matching that part.
 The receiver receives a divided frame.
 If the last address has already been received, if only one number of divisions was obtained, or if the receiving application currently running supports only one number of divisions, the number of divisions is known, and the progress is displayed based on that number of divisions.
 Otherwise, if few processing resources are available or the receiver is in a power-saving mode, the receiver calculates and displays the progress in the simple mode. If ample processing resources are available and the receiver is not in a power-saving mode, it calculates and displays the progress in the maximum likelihood estimation mode.
 FIG. 168 is a flowchart showing how the progress is calculated in the simple mode.
 First, the receiver obtains the standard number of divisions Ns from the server, or reads it from its own data holding unit. The standard number of divisions is (a) the mode or expected value of the number of transmitters transmitting with each number of divisions, (b) a number of divisions defined for each packet length, (c) a number of divisions defined for each application, or (d) a number of divisions defined for each identifiable range of the location where the receiver is.
 Next, the receiver determines whether it has received a packet indicating that it carries the last address. If it has, the receiver sets N to the address of that last packet. If it has not, the receiver sets Ne to the maximum received address Amax plus 1 or a number of 2 or more. The receiver then determines whether Ne > Ns; if so, it sets N = Ne, and otherwise it sets N = Ns.
 Assuming that the signal being received is divided into N parts, the receiver then calculates the proportion of already received packets among the packets required to receive the entire signal.
 In this simple mode, the progress can be calculated with simpler computation than in the maximum likelihood estimation mode, which is advantageous in terms of processing time and energy consumption.
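 A sketch of the simple-mode calculation, assuming 0-based packet addresses; the margin added to the maximum received address is one example of the "1 or a number of 2 or more" allowed above.

```python
def simple_mode_progress(received_addrs, last_addr_seen, standard_divisions):
    """Progress ratio in the simple mode of FIG. 168.

    received_addrs: set of packet addresses received so far (assumed 0-based here).
    last_addr_seen: address of the final packet if one has been received, else None.
    standard_divisions: Ns obtained from the server or from the data holding unit.
    """
    if last_addr_seen is not None:
        n = last_addr_seen + 1                    # division count is known
    else:
        ne = max(received_addrs) + 2              # Amax plus a margin (1 or more is allowed)
        n = ne if ne > standard_divisions else standard_divisions
    return len(received_addrs) / n

print(simple_mode_progress({0, 1, 2}, None, standard_divisions=6))  # -> 0.5
print(simple_mode_progress({0, 1, 2, 3}, 3, standard_divisions=6))  # -> 1.0
```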
 FIG. 169 is a flowchart showing how the progress is calculated in the maximum likelihood estimation mode.
 First, the receiver obtains the prior distribution of the number of divisions from the server, or reads it from its own data holding unit. The prior distribution is (a) defined as the distribution of the number of transmitters transmitting with each number of divisions, (b) defined for each packet length, (c) defined for each application, or (d) defined for each identifiable range of the location where the receiver is.
 Next, the receiver receives a packet x and calculates the probability P(x|y) of receiving packet x when the number of divisions is y. The receiver then obtains the probability P(y|x) that the transmission signal is divided into y parts given that packet x was received, as P(x|y) × P(y) ÷ A, where A is a normalization factor, and sets P(y) = P(y|x).
 The receiver then determines whether the division number estimation mode is the maximum likelihood mode or the likelihood average mode. In the maximum likelihood mode, the receiver calculates the proportion of received packets using, as the number of divisions, the y that maximizes P(y). In the likelihood average mode, the receiver calculates the proportion of received packets using the sum of y × P(y) as the number of divisions.
 In this maximum likelihood estimation mode, a more accurate degree of progress can be calculated than in the simple mode.
 When the division number estimation mode is the maximum likelihood mode, the receiver may also calculate, from the addresses received so far, the likelihood of each candidate for the last address, take the most likely candidate as the number of divisions, and display the reception progress accordingly. This display method displays a progress closest to the actual progress.
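 A sketch of the Bayesian update behind the maximum likelihood estimation mode; the particular likelihood P(x|y) used here (uniform over the addresses that fit within y divisions) is an assumption, since the disclosure only states that P(x|y) is computed.

```python
def update_division_posterior(prior, packet_addr):
    """One Bayesian update step of FIG. 169.

    prior maps a candidate division count y to P(y).  P(x|y) is taken as 1/y when
    the received packet address fits within y divisions and 0 otherwise (assumption).
    """
    likelihood = {y: (1.0 / y if packet_addr < y else 0.0) for y in prior}
    unnorm = {y: likelihood[y] * p for y, p in prior.items()}
    a = sum(unnorm.values()) or 1.0                       # normalization factor A
    return {y: v / a for y, v in unnorm.items()}

def estimated_divisions(posterior, mode="maximum_likelihood"):
    if mode == "maximum_likelihood":
        return max(posterior, key=posterior.get)          # y with the largest P(y)
    return sum(y * p for y, p in posterior.items())       # likelihood average mode

post = {2: 0.25, 4: 0.5, 8: 0.25}                          # prior distribution of y
post = update_division_posterior(post, packet_addr=3)      # a packet with address 3 arrives
print(estimated_divisions(post), estimated_divisions(post, "likelihood_average"))
```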
 FIG. 170 is a flowchart showing a display method in which the displayed progress never decreases.
 First, the receiver calculates the proportion of already received packets among the packets required to receive the entire signal. The receiver then determines whether the calculated proportion is smaller than the proportion currently displayed. If it is smaller, the receiver further determines whether the displayed proportion is a calculation result from a predetermined time or more ago. If it is, the receiver displays the newly calculated proportion; if it is not, the receiver keeps displaying the current proportion.
 If the receiver determines that the calculated proportion is equal to or larger than the displayed proportion, it sets Ne to the maximum received address Amax plus 1 or a number of 2 or more, and displays the calculated proportion.
 When, for example, the final packet is received, it is unnatural for the calculated progress to become smaller than before, that is, for the displayed degree of progress to drop. The display method described above suppresses such unnatural display.
 FIG. 171 is a flowchart showing how the progress is displayed when there are multiple packet lengths.
 First, the receiver calculates the proportion P of received packets for each packet length. The receiver then determines whether the display mode is the maximum mode, the full display mode, or the latest mode. In the maximum mode, the receiver displays the largest of the proportions P calculated for the respective packet lengths. In the full display mode, the receiver displays all of the proportions P. In the latest mode, the receiver displays the proportion P for the packet length of the most recently received packet.
 In FIG. 172, (a) is the progress calculated in the simple mode, (b) is the progress calculated in the maximum likelihood mode, and (c) is the progress calculated using the smallest of the obtained numbers of divisions as the number of divisions. Since the progress increases in the order (a), (b), (c), displaying (a), (b), and (c) superimposed in this way makes it possible to display all of the progress values at the same time.
 (共通スイッチと画素スイッチによる発光制御)
 本実施の形態における送信方法では、例えば、映像表示用のLEDディスプレイに含まれる各LEDを、共通スイッチおよび画素スイッチのスイッチングに応じて、輝度変化させることにより、可視光信号(可視光通信信号ともいう)を送信する。
(Light emission control by common switch and pixel switch)
In the transmission method according to the present embodiment, for example, each LED included in the LED display for video display is changed in luminance according to the switching of the common switch and the pixel switch, so that a visible light signal (also a visible light communication signal) is obtained. Send).
 LEDディスプレイは、例えば屋外に配設される大型ディスプレイとして構成されている。また、LEDディスプレイは、マトリクス状に配列された複数のLEDを備え、映像信号に応じて、これらのLEDを明滅させることにより映像を表示する。このようなLEDディスプレイは、複数の共通ライン(COMライン)からなるとともに、複数の画素ライン(SEGライン)からなる。各共通ラインは、水平方向に一列に配列された複数のLEDからなり、各画素ラインは、垂直方向に一列に配列された複数のLEDからなる。また、複数の共通ラインのそれぞれは、その共通ラインに対応する共通スイッチに接続される。共通スイッチは例えばトランジスタである。複数の画素ラインのそれぞれは、その画素ラインに対応する画素スイッチに接続される。複数の画素ラインに対応する複数の画素スイッチは、例えばLEDドライバ回路(定電流回路)に備えられている。なお、このLEDドライバ回路は、複数の画素スイッチをスイッチングする画素スイッチ制御部として構成されている。 The LED display is configured as a large display disposed outdoors, for example. The LED display includes a plurality of LEDs arranged in a matrix, and displays an image by blinking these LEDs in accordance with a video signal. Such an LED display includes a plurality of common lines (COM lines) and a plurality of pixel lines (SEG lines). Each common line is composed of a plurality of LEDs arranged in a line in the horizontal direction, and each pixel line is composed of a plurality of LEDs arranged in a line in the vertical direction. Each of the plurality of common lines is connected to a common switch corresponding to the common line. The common switch is, for example, a transistor. Each of the plurality of pixel lines is connected to a pixel switch corresponding to the pixel line. A plurality of pixel switches corresponding to a plurality of pixel lines are provided in, for example, an LED driver circuit (constant current circuit). The LED driver circuit is configured as a pixel switch control unit that switches a plurality of pixel switches.
 より具体的には、共通ラインに含まれる各LEDのアノードおよびカソードのうちの一方が、その共通ラインに対応するトランジスタのコレクタなどの端子に接続される。また、画素ラインに含まれる各LEDのアノードおよびカソードのうちの他方が、上記LEDドライバ回路における、その画素ラインに対応する端子(画素スイッチ)に接続される。 More specifically, one of the anode and the cathode of each LED included in the common line is connected to a terminal such as a collector of a transistor corresponding to the common line. The other of the anode and the cathode of each LED included in the pixel line is connected to a terminal (pixel switch) corresponding to the pixel line in the LED driver circuit.
 When such an LED display displays video, a common switch control unit that controls the common switches turns them on in a time-division manner. For example, the common switch control unit turns on only the first common switch during a first period and turns on only the second common switch during the following second period. The LED driver circuit then turns on each pixel switch in accordance with the video signal while one of the common switches is on. As a result, the LED corresponding to a given common switch and pixel switch lights only while both that common switch and that pixel switch are on. The luminance of each pixel in the video is expressed by the length of this lighting period; in other words, the pixel luminance is PWM-controlled.
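 As a minimal sketch of this time-division drive (the scan period, the data layout, and the function name are illustrative assumptions, not values from the description):

```python
def scan_schedule(frame, line_period_s):
    """Produce a drive schedule for one pass over the display.

    frame: 2D list of luminance values in [0.0, 1.0]; frame[c][s] is the pixel on
    common line c and pixel line s. While common switch c is on for line_period_s
    seconds, each pixel switch s stays on for a time proportional to the desired
    luminance, which corresponds to the PWM control described above.
    """
    schedule = []
    for c, row in enumerate(frame):
        pixel_on_times = {s: luminance * line_period_s for s, luminance in enumerate(row)}
        schedule.append((c, line_period_s, pixel_on_times))
    return schedule
```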
 In the transmission method according to the present embodiment, a visible light signal is transmitted using such an LED display, the common switches and pixel switches, and the common switch control unit and the pixel switch control unit. A transmission device (also referred to as a transmitter) according to the present embodiment, which transmits a visible light signal by such a transmission method, includes the common switch control unit and the pixel switch control unit.
 FIG. 173 is a diagram illustrating an example of a transmission signal in the present embodiment.
 The transmitter transmits each symbol included in the visible light signal according to a predetermined symbol period. For example, when transmitting the symbol "00" by 4PPM, the transmitter switches the common switch, in a symbol period consisting of four slots, according to that symbol (the luminance change pattern of "00"). The transmitter then switches the pixel switch according to the average luminance indicated by the video signal or the like.
 More specifically, when the average luminance in the symbol period is to be 75% ((a) of FIG. 173), the transmitter turns off the common switch during the first slot and turns it on from the second slot through the fourth slot. Furthermore, the transmitter turns off the pixel switch during the first slot and turns it on from the second slot through the fourth slot. As a result, the LED corresponding to the common switch and the pixel switch lights only while both switches are on. That is, the LED changes in luminance by lighting at LO (Low), HI (High), HI, HI in the four slots, respectively. As a result, the symbol "00" is transmitted.
 When the average luminance in the symbol period is 25% ((e) of FIG. 173), the transmitter turns off the common switch during the first slot and turns it on from the second slot through the fourth slot. Furthermore, the transmitter turns off the pixel switch during the first, third, and fourth slots and turns it on during the second slot. As a result, the LED corresponding to the common switch and the pixel switch lights only while both switches are on. That is, the LED changes in luminance by lighting at LO (Low), HI (High), LO, LO in the four slots, respectively, and the symbol "00" is transmitted. Since the transmitter in the present embodiment transmits a visible light signal close to the above-described V4PPM (variable 4PPM), the average luminance can be varied even when the same symbol is transmitted. That is, when the same symbol (for example, "00") is transmitted at different average luminances, the transmitter keeps the rising position (timing) of the luminance specific to that symbol constant regardless of the average luminance, as shown in (a) through (e) of FIG. 173. This allows the receiver to receive the visible light signal without regard to the luminance.
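 The following is a minimal sketch, under the assumptions stated in the comments, of how the slot patterns in (a) and (e) of FIG. 173 can be derived for the symbol "00": the common switch carries the 4PPM pattern of the symbol, and the pixel switch is kept on, starting at the symbol-specific rising edge, just long enough to realize the requested average luminance. The slot mapping for symbols other than "00" is not reproduced here.

```python
SLOTS_PER_SYMBOL = 4

def switch_patterns_for_symbol00(average_luminance):
    """Return per-slot (common_switch, pixel_switch, led) on/off lists for symbol "00".

    Assumptions (illustrative, based on FIG. 173 (a)/(e)): for "00" the common
    switch is off in slot 1 and on in slots 2-4, and the LED turns on at the
    start of slot 2 (the symbol-specific rising edge) for a whole number of
    slots matching the requested average luminance.
    """
    common = [False, True, True, True]
    lit_slots = round(average_luminance * SLOTS_PER_SYMBOL)   # e.g. 0.75 -> 3, 0.25 -> 1
    lit_slots = min(lit_slots, 3)                             # slot 1 must stay dark for "00"
    pixel = [False] + [i < lit_slots for i in range(3)]
    led = [c and p for c, p in zip(common, pixel)]            # lit only where both switches are on
    return common, pixel, led
```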
 Note that the common switch is switched by the common switch control unit described above, and the pixel switch is switched by the pixel switch control unit described above.
 As described above, the transmission method according to the present embodiment is a transmission method for transmitting a visible light signal by a change in luminance, and includes a determination step, a common switch control step, and a first pixel switch control step. In the determination step, a luminance change pattern is determined by modulating the visible light signal. In the common switch control step, a common switch for commonly lighting a plurality of light sources (LEDs), each representing a pixel in the video and included in a light source group (a common line) provided in the display, is switched according to the luminance change pattern. In the first pixel switch control step, a first pixel switch for lighting a first light source among the plurality of light sources included in the light source group is turned on, so that the first light source lights only during a period in which both the common switch and the first pixel switch are on, whereby the visible light signal is transmitted.
 This makes it possible to appropriately transmit a visible light signal from a display that includes a plurality of LEDs or the like as light sources, enabling communication between a variety of devices including devices other than lighting. Furthermore, when the display is one that displays video through control of the common switch and the first pixel switch, the visible light signal can be transmitted using that common switch and first pixel switch. The visible light signal can therefore be transmitted easily, without any major change to the configuration used to display video on the display.
 Furthermore, by matching the control timing of the pixel switch to the transmission symbol (one 4PPM symbol) and controlling it as shown in FIG. 173, a visible light signal can be transmitted from the LED display without flicker. An image signal (that is, a video signal) normally changes at a period of 1/30 or 1/60 second, but by changing the image signal in step with the symbol transmission period (symbol period), this can be achieved without modifying the circuit.
 Thus, in the determination step of the transmission method according to the present embodiment, the luminance change pattern is determined for each symbol period, and in the first pixel switch control step, the pixel switch is switched in synchronization with the symbol period. As a result, even when the symbol period is, for example, 1/2400 second, the visible light signal can be transmitted appropriately according to that symbol period.
 When the signal (symbol) is "10" and the average luminance is near 50%, the luminance change pattern approaches 0101 and there are two rising edges of luminance. In that case, however, the receiver can receive the signal correctly by giving priority to the later rising edge; that is, the later rising edge is the timing at which the luminance rise specific to the symbol "10" is obtained.
 The higher the average luminance, the closer the output signal is to a signal modulated with 4PPM. Therefore, when the luminance of the entire screen, or of a portion sharing a power supply line, is low, the HI interval can be lengthened and errors can be reduced by reducing the current to lower the instantaneous luminance. In this case the maximum luminance of the screen drops, but when high luminance is not needed in the first place, such as for indoor use, or when visible light communication is to be prioritized, enabling a switch that activates this behavior allows the balance between communication quality and image quality to be set optimally.
 In the first pixel switch control step of the transmission method according to the present embodiment, when video is displayed on the display (LED display), the first pixel switch is switched so as to compensate for the lighting period used to express the pixel value of the pixel in the video corresponding to the first light source, by exactly the period during which the first light source is turned off for transmission of the visible light signal. That is, in the transmission method according to the present embodiment, the visible light signal is transmitted while video is being displayed on the LED display. Consequently, during a period in which an LED should be lit in order to express the pixel value (specifically, the luminance value) indicated by the video signal, that LED may be turned off for transmission of the visible light signal. In such a case, the transmission method according to the present embodiment switches the first pixel switch so as to make up for the lighting period by exactly the period during which the LED is turned off.
 For example, when video indicated by a video signal is displayed without transmitting a visible light signal, the common switch is on throughout one symbol period, and the pixel switch is on only for a period corresponding to the average luminance, which is the pixel value indicated by the video signal. When the average luminance is 75%, the common switch is on from the first through the fourth slot of the symbol period, and the pixel switch is on from the first through the third slot. The LED therefore lights in the first through third slots of the symbol period, which expresses the pixel value described above. For transmission of the symbol "01", however, the second slot is turned off. Accordingly, in the transmission method of the present embodiment, the pixel switch is switched so as to make up for the LED's lighting period by exactly the second slot in which the LED is turned off, that is, so that the LED lights in the fourth slot.
 In the transmission method according to the present embodiment, the lighting period is compensated for by changing the pixel value of the pixel in the video. For example, in the case described above, a pixel value with an average luminance of 75% is changed to a pixel value with an average luminance of 100%. At an average luminance of 100%, the LED would light from the first through the fourth slot, but for transmission of the symbol "01" the first slot is turned off. Therefore, even when the visible light signal is transmitted, the LED can be lit at the original pixel value (average luminance of 75%).
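 A minimal sketch of this compensation, assuming a four-slot symbol period and a pixel value quantized in whole slots (the quantization and the helper name are illustrative):

```python
SLOTS = 4

def compensated_pixel_value(original_value, blanked_slots):
    """Raise the pixel value so that, after `blanked_slots` slots are forced dark
    for signal transmission, the LED still lights for the originally intended
    number of slots (or as close to it as the symbol pattern allows).

    original_value: intended average luminance in [0.0, 1.0].
    blanked_slots: number of slots turned off within the symbol period for the signal.
    """
    intended_lit = original_value * SLOTS                 # e.g. 0.75 -> 3 slots
    compensated_lit = intended_lit + blanked_slots        # make up for the dark slots
    return min(compensated_lit / SLOTS, 1.0)              # e.g. 3 + 1 slots -> 100%

# Example from the description: a 75% pixel with one slot blanked for the symbol
# is driven at 100%, so it still lights for three slots overall.
assert compensated_pixel_value(0.75, 1) == 1.0
```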
 This makes it possible to prevent the video from being degraded by the transmission of the visible light signal.
 (Light emission control shifted for each pixel)
 FIG. 174 is a diagram illustrating an example of a transmission signal in the present embodiment.
 As shown in FIG. 174, when the transmitter in the present embodiment transmits the same symbol (for example, "10") from a pixel A and from pixels in the vicinity of pixel A (for example, pixel B and pixel C), it shifts the light emission timing of those pixels. However, the transmitter causes those pixels to emit light without shifting, between the pixels, the timing of the luminance rise specific to that symbol. Each of pixels A through C corresponds to a light source (specifically, an LED). The timing of the luminance rise specific to a symbol is, when the symbol is "10", the timing of the boundary between the third slot and the fourth slot. Such a timing is hereinafter referred to as the symbol-specific timing. By identifying this symbol-specific timing, the receiver can receive the symbol corresponding to that timing.
 By shifting the light emission timing in this way, the waveform showing the transition of the average luminance across the pixels has, as shown in FIG. 174, gentle rises or falls except for the rise at the symbol-specific timing. In other words, the rise at the symbol-specific timing is steeper than the rises at other timings. Therefore, by preferentially receiving the steepest of the rises, the receiver can identify the correct symbol-specific timing, and as a result reception errors can be suppressed.
 That is, when the symbol "10" is transmitted from a given pixel and the luminance of that pixel is an intermediate value between 25% and 75%, the transmitter sets the open interval of the pixel switch corresponding to that pixel shorter, or longer, and adjusts the open intervals of the pixel switches corresponding to the pixels in the vicinity of that pixel in the opposite direction. Errors can thus also be suppressed by setting the open interval of each pixel switch so that the overall luminance, including the given pixel and the nearby pixels, does not change. The open interval is the interval during which the pixel switch is on.
 Thus, the transmission method according to the present embodiment further includes a second pixel switch control step. In this second pixel switch control step, a second pixel switch for lighting a second light source, located around the first light source and included in the light source group (common line) described above, is turned on, so that the second light source lights only during a period in which both the common switch and the second pixel switch are on, whereby the visible light signal is transmitted. The second light source is, for example, a light source adjacent to the first light source.
 In the first and second pixel switch control steps, when the same symbol included in the visible light signal is transmitted simultaneously from each of the first and second light sources, among the plural timings at which each of the first and second pixel switches turns on or off to transmit that same symbol, the timing at which the luminance rise specific to that symbol is obtained is made the same for the first and second pixel switches, while the other timings are made different between them, and the overall average luminance of the first and second light sources during the period in which that same symbol is transmitted is made to match a predetermined luminance.
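 The following is a minimal sketch of this idea for two neighboring pixels, under illustrative assumptions: each pixel has a fixed lit segment that starts at the symbol-specific rising edge, the remaining "filler" lighting is lengthened for one neighbor and shortened for the other, and the pair's average duty stays at the target. The quantities and names are assumptions, not values taken from FIG. 174.

```python
def paired_fill_durations(target_duty, fixed_duty_from_rise, shift):
    """Split the luminance not provided by the fixed segment starting at the
    symbol-specific rising edge into 'filler' lighting for two neighboring
    pixels. The filler durations are shifted in opposite directions so that the
    rise stays aligned and sharp while the pair's average duty stays on target.
    All quantities are duty ratios in [0, 1]; the names are illustrative.
    """
    filler = target_duty - fixed_duty_from_rise        # luminance still to be supplied
    filler_a = max(0.0, filler + shift)                # one neighbor lights a bit longer
    filler_b = max(0.0, filler - shift)                # the other a bit shorter
    average = fixed_duty_from_rise + (filler_a + filler_b) / 2
    return filler_a, filler_b, average                 # average == target_duty when no clamping occurs
```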
 As a result, as in the inter-pixel average luminance transition shown in FIG. 174, in the spatially averaged luminance the rise can be made steep only at the timing at which the luminance rise specific to the symbol is obtained, and the occurrence of reception errors can be suppressed. That is, errors in reception of the visible light signal by the receiver can be suppressed.
 Alternatively, when the symbol "10" is transmitted from a given pixel and the luminance of that pixel is an intermediate value between 25% and 75%, the transmitter sets the open interval of the pixel switch corresponding to that pixel shorter, or longer, in a first period, and adjusts the open interval of that pixel switch in the opposite direction in a second period (for example, a frame) before or after the first period. Errors can thus also be suppressed by setting the open interval of the pixel switch so that the overall time-averaged luminance of the given pixel over the first and second periods does not change.
 That is, in the first pixel switch control step of the transmission method according to the present embodiment, for example, the same symbol included in the visible light signal is transmitted in a first period and in a second period following the first period. In each of the first and second periods, among the plural timings at which the first pixel switch turns on or off to transmit that same symbol, the timing at which the luminance rise specific to that symbol is obtained is made the same, and the other timings are made different. The average luminance of the first light source over the whole of the first and second periods is then made to match a predetermined luminance. The first period and the second period may be the period for displaying one frame and the period for displaying the next frame, respectively. The first period and the second period may also each be a symbol period; that is, they may be the period for transmitting one symbol and the period for transmitting the next symbol, respectively.
 As a result, just as in the inter-pixel average luminance transition shown in FIG. 174, in the temporally averaged luminance the rise can be made steep only at the timing at which the luminance rise specific to the symbol is obtained, and the occurrence of reception errors can be suppressed. That is, errors in reception of the visible light signal by the receiver can be suppressed.
 (Light emission control when the pixel switch can be driven at double speed)
 FIG. 175 is a diagram illustrating an example of a transmission signal in the present embodiment.
 When the pixel switch can be opened and closed at half the transmission symbol period, that is, when the pixel switch can be driven at double speed, the light emission pattern can be made identical to that of V4PPM, as shown in FIG. 175.
 In other words, when the symbol period (the period in which a symbol is transmitted) consists of four slots, a pixel switch control unit such as the LED driver circuit that controls the pixel switch can control the pixel switch every two slots. That is, the pixel switch control unit can turn on the pixel switch for an arbitrary time within the two-slot period starting at the beginning of the symbol period, and can also turn on the pixel switch for an arbitrary time within the two-slot period starting at the beginning of the third slot of the symbol period.
 That is, in the transmission method according to the present embodiment, the pixel value may be changed at a period equal to one half of the symbol period described above.
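 A minimal sketch of this half-period control, assuming a 100 µs slot length (the description does not fix this value) and a four-slot symbol period:

```python
SLOT_US = 100            # assumed slot length in microseconds (illustrative)
SLOTS_PER_SYMBOL = 4

def double_speed_on_intervals(on_us_first_half, on_us_second_half):
    """With a double-speed pixel switch, the on-time can be chosen independently
    for each half of the symbol period (slots 1-2 and slots 3-4), which is what
    allows a V4PPM-like emission pattern. Returns (start_us, end_us) intervals
    relative to the start of the symbol period.
    """
    half_us = SLOT_US * SLOTS_PER_SYMBOL // 2
    first = (0, min(on_us_first_half, half_us))
    second = (half_us, half_us + min(on_us_second_half, half_us))
    return [first, second]
```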
 In this case, the granularity available for each opening and closing of the pixel switch may be reduced (the precision may decrease). Therefore, by doing this only when a transmission priority switch is enabled, the balance between image quality and transmission quality can be set optimally.
 (Block configuration for light emission control by pixel value adjustment)
 FIG. 176 is a block diagram illustrating an example of a transmitter in the present embodiment.
 (a) of FIG. 176 is a block diagram showing the configuration of a device that only displays video without transmitting a visible light signal, that is, a display device that displays video on the LED display described above. As shown in (a) of FIG. 176, this display device includes an image/video input unit 1911, an N-times speed-up unit 1912, a common switch control unit 1913, and a pixel switch control unit 1914.
 The image/video input unit 1911 outputs a video signal representing an image or video at a frame rate of, for example, 60 Hz to the N-times speed-up unit 1912.
 The N-times speed-up unit 1912 raises the frame rate of the video signal input from the image/video input unit 1911 by a factor of N (N > 1) and outputs the resulting video signal. For example, the N-times speed-up unit 1912 raises the frame rate tenfold (N = 10), that is, to a frame rate of 600 Hz.
 The common switch control unit 1913 switches the common switches based on the video at the 600 Hz frame rate. Likewise, the pixel switch control unit 1914 switches the pixel switches based on the video at the 600 Hz frame rate. Raising the frame rate with the N-times speed-up unit 1912 in this way avoids flicker caused by the opening and closing of switches such as the common switches and pixel switches. It also allows an imaging device capturing the LED display with a high-speed shutter to capture an image free of missing pixels and flicker.
 (b) of FIG. 176 is a block diagram showing the configuration of a display device that not only displays video but also transmits the visible light signal described above, that is, a transmitter (transmission device). This transmitter includes an image/video input unit 1911, a common switch control unit 1913, a pixel switch control unit 1914, a signal input unit 1915, and a pixel value adjustment unit 1916. The signal input unit 1915 outputs a visible light signal consisting of a plurality of symbols to the pixel value adjustment unit 1916 at a symbol rate (frequency) of 2400 symbols/second.
 The pixel value adjustment unit 1916 duplicates the image input from the image/video input unit 1911 to match the symbol rate of the visible light signal and adjusts the pixel values according to the method described above. This allows the common switch control unit 1913 and the pixel switch control unit 1914 downstream of the pixel value adjustment unit 1916 to output the visible light signal without changing the luminance of the image or video.
 For example, in the case shown in FIG. 176, if the symbol rate of the visible light signal is 2400 symbols/second, the pixel value adjustment unit 1916 duplicates the images contained in the video signal so that the 60 Hz frame rate of the video signal becomes 4800 Hz. Suppose, for example, that the value of a symbol included in the visible light signal is "00" and the pixel value (luminance value) of a pixel in the first image before duplication is 50%. In this case, the pixel value adjustment unit 1916 adjusts that pixel value to 100% in the first image after duplication and to 50% in the second image. As a result, as in the luminance change for the symbol "00" shown in (c) of FIG. 175, the AND of the common switch and the pixel switch yields a luminance of 50%, so the visible light signal can be transmitted while the luminance is kept equal to that of the original image. Here, the AND of the common switch and the pixel switch means that the light source (that is, the LED) corresponding to that common switch and pixel switch lights only during a period in which both the common switch and the pixel switch are on.
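 A minimal sketch of this duplication and adjustment step, assuming the 60 Hz input and 2400 symbol/s rate mentioned above; the per-pixel adjustment rule depends on the symbol being sent and is passed in by the caller rather than reproduced here:

```python
def duplicate_and_adjust(frame, frame_rate_hz=60, symbol_rate_hz=2400, adjust=None):
    """Duplicate one input frame so that the output image rate is twice the symbol
    rate (two copies per symbol), then let `adjust(copy_index, pixel_value)`
    rewrite each pixel value. With two copies per symbol, copy_index % 2 tells
    whether this is the first or second copy within a symbol (e.g. 50% -> 100%
    in the first copy and 50% in the second, as in the "00" example above).
    `frame` is a 2D list of values in [0.0, 1.0].
    """
    copies_per_frame = (2 * symbol_rate_hz) // frame_rate_hz   # 4800 / 60 = 80 copies
    output = []
    for copy_index in range(copies_per_frame):
        output.append([[adjust(copy_index, v) for v in row] for row in frame])
    return output
```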
 In the transmission method according to the present embodiment, instead of displaying video and transmitting the visible light signal at the same time, they may be performed separately in a signal transmission period and a video display period.
 That is, in the first pixel switch control step described above in the present embodiment, the first pixel switch is turned on during the signal transmission period in which the common switch is switching according to the luminance change pattern. The transmission method according to the present embodiment may further include a video display step in which, during a video display period different from the signal transmission period, the common switch is turned on and the first pixel switch is turned on according to the video to be displayed, so that the first light source lights only while both the common switch and the first pixel switch are on, thereby displaying the pixel in the video.
 In this way, the display of video and the transmission of the visible light signal are performed in mutually different periods, so the display and the transmission can be performed easily.
 (Timing of power supply line changes)
 When the power supply line is changed, an interval in which the signal is off occurs. However, since the last part of a 4PPM symbol does not affect reception even if it is not emitting light, changing the power supply line in step with the transmission period of the 4PPM symbols makes it possible to change the power supply line without affecting reception quality.
 Changing the power supply line during a LO interval of 4PPM also makes it possible to change the power supply line without affecting reception quality. In this case, transmission can additionally be performed while keeping the maximum luminance high.
 (Drive timing)
 In the present embodiment, the LED display may also be driven at the timings shown in FIGS. 177 to 179.
 FIGS. 177 to 179 are timing charts for the case where the LED display is driven by the optical ID modulation signal of the present invention.
 For example, as shown in FIG. 178, when the common switch (COM1) is turned off (period t1) in order to transmit the visible light signal (optical ID), the LED cannot be lit at the luminance indicated by the video signal, so the LED is lit after period t1. This makes it possible to display the video indicated by the video signal appropriately, without corrupting it, while appropriately transmitting the visible light signal.
 (Summary)
 FIG. 180A is a flowchart illustrating a transmission method according to one aspect of the present invention.
 A transmission method according to one aspect of the present invention is a transmission method for transmitting a visible light signal by a change in luminance, and includes steps SC11 to SC13.
 In step SC11, as in the embodiments described above, a luminance change pattern is determined by modulating the visible light signal.
 In step SC12, a common switch for commonly lighting a plurality of light sources, each representing a pixel in the video and included in a light source group provided in the display, is switched according to the luminance change pattern.
 In step SC13, a first pixel switch (that is, a pixel switch) for lighting a first light source among the plurality of light sources included in the light source group is turned on, so that the first light source lights only during a period in which both the common switch and the first pixel switch are on, whereby the visible light signal is transmitted.
 FIG. 180B is a block diagram illustrating a functional configuration of a transmission device according to one aspect of the present invention.
 A transmission device C10 according to one aspect of the present invention is a transmission device (or transmitter) that transmits a visible light signal by a change in luminance, and includes a determination unit C11, a common switch control unit C12, and a pixel switch control unit C13. The determination unit C11 determines a luminance change pattern by modulating the visible light signal, as in the embodiments described above. The determination unit C11 is provided, for example, in the signal input unit 1915 shown in FIG. 176.
 The common switch control unit C12 switches the common switch according to the luminance change pattern. This common switch is a switch for commonly lighting a plurality of light sources, each representing a pixel in the video and included in a light source group provided in the display.
 The pixel switch control unit C13 turns on a pixel switch for lighting the light source to be controlled among the plurality of light sources included in the light source group, so that the light source to be controlled lights only during a period in which both the common switch and the pixel switch are on, whereby the visible light signal is transmitted. The light source to be controlled is the first light source described above.
 This makes it possible to appropriately transmit a visible light signal from a display that includes a plurality of LEDs or the like as light sources, enabling communication between a variety of devices including devices other than lighting. Furthermore, when the display is one that displays video through control of the common switch and the pixel switch, the visible light signal can be transmitted using that common switch and pixel switch. The visible light signal can therefore be transmitted easily, without any major change to the configuration for displaying video on the display (that is, the display device).
 (Frame configuration for single frame transmission)
 FIG. 181 is a diagram illustrating an example of a transmission signal in the present embodiment.
 As shown in (a) of FIG. 181, a transmission frame consists of a preamble (PRE), an ID length (IDLEN), an ID type (IDTYPE), content (ID/DATA), and a check code (CRC). The number of bits in each field is one example.
 By using a preamble such as that shown in (b) of FIG. 181, the receiver can distinguish it from the other portions, which are encoded with 4PPM, I-4PPM, or V4PPM, and can find the boundaries of the signal.
 As shown in (c) of FIG. 181, variable-length content can be transmitted by specifying the length of ID/DATA with IDLEN.
 The CRC is a check code that corrects or detects errors in the portions other than PRE. By varying the CRC length according to the length of the checked region, the checking capability can be kept above a certain level. Furthermore, using different check codes depending on the length of the checked region can improve the checking capability per CRC length.
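 As an illustration of how such a frame could be assembled, the following sketch concatenates the fields and picks a CRC length from a lookup keyed by the length of the checked region. The length-to-CRC mapping and the generic CRC routine are assumptions for illustration only; the actual field widths and check codes are defined by the figures, not by this sketch.

```python
def build_single_frame(preamble_bits, idlen_bits, idtype_bits, id_data_bits, crc_func):
    """Assemble PRE | IDLEN | IDTYPE | ID/DATA | CRC as bit strings.

    `crc_func(checked_bits, crc_len)` is assumed to return `crc_len` check bits
    for the given input; the actual code is not specified in this excerpt.
    """
    checked = idlen_bits + idtype_bits + id_data_bits     # everything except PRE is checked
    crc_len = crc_length_for(len(checked))
    return preamble_bits + checked + crc_func(checked, crc_len)

def crc_length_for(checked_length_bits):
    # Hypothetical table: longer checked regions get longer CRCs so that the
    # checking capability stays above a fixed level.
    if checked_length_bits <= 32:
        return 4
    if checked_length_bits <= 64:
        return 8
    return 16
```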
 (Frame configuration for multiple frame transmission)
 FIGS. 182 and 183 are diagrams illustrating an example of a transmission signal in the present embodiment.
 A partition type (PTYPE) and a check code (CRC) are added to the transmission data (BODY) to form the joined data. The joined data is divided into several DATAPARTs, each of which is transmitted with a preamble (PRE) and an address (ADDR) added.
 PTYPE (or partition mode, PMODE) indicates the division method or the meaning of BODY. Using 2 bits, as shown in (a) of FIG. 182, allows it to be encoded exactly with 4PPM. Using 1 bit, as shown in (b) of FIG. 182, shortens the transmission time.
 The CRC is a check code that checks PTYPE and BODY. As specified in FIG. 161, changing the CRC code length according to the length of the checked portion keeps the checking capability above a certain level.
 Defining the preamble as in FIG. 162 makes it possible to shorten the transmission time while securing variation in the division patterns.
 Defining the address as in FIG. 163 allows the receiver to restore the data regardless of the order in which the frames are received.
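 A minimal sketch of this split-and-reassemble behavior, assuming for illustration that each frame simply carries the index of its DATAPART as ADDR (the actual address definition is given in FIG. 163 and is not reproduced here):

```python
def split_joined_data(joined_bits, datapart_len):
    """Split the joined data (PTYPE + BODY + CRC) into addressed DATAPARTs."""
    parts = [joined_bits[i:i + datapart_len]
             for i in range(0, len(joined_bits), datapart_len)]
    return [(addr, part) for addr, part in enumerate(parts)]

def reassemble(frames, total_parts):
    """Reassemble the joined data from (addr, datapart) frames received in any order.

    Returns None until every address has been seen at least once.
    """
    received = {}
    for addr, part in frames:
        received[addr] = part                      # duplicates simply overwrite
    if len(received) < total_parts:
        return None
    return "".join(received[addr] for addr in range(total_parts))
```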
 FIG. 183 shows the possible combinations of joined data length and number of frames. The underlined combinations are those used when the PTYPE described later indicates single frame compatible.
 (Configuration of the BODY field)
 FIG. 184 is a diagram illustrating an example of a transmission signal in the present embodiment.
 By giving BODY the field configuration shown in the figure, the same ID as in single frame transmission can be transmitted.
 When the IDTYPE and ID are the same, they are taken to have the same meaning regardless of whether single frame or multiple frame transmission is used and regardless of the combination used for packet transmission. This allows signals to be transmitted flexibly, for example when the continuous transmission or reception time is short.
 The length of the ID is specified with IDLEN, and PADDING is transmitted in the remaining portion. This portion may be all 0s or all 1s, may carry data that extends the ID, or may be a check code. PADDING may also be left-aligned.
 In (b), (c), or (d) of FIG. 184, the transmission time can be made shorter than in (a) of FIG. 184. In these cases, the length of the ID is taken to be the maximum of the lengths that the ID can take.
 In the cases of (b) and (c) of FIG. 184, the number of IDTYPE bits is odd, but combining it with the 1-bit PTYPE shown in (b) of FIG. 182 makes the total even, so it can be encoded efficiently with 4PPM.
 In (c) of FIG. 184, a longer ID can be transmitted.
 In (d) of FIG. 184, more IDTYPEs can be expressed.
 (PTYPE)
 FIG. 185 is a diagram illustrating an example of a transmission signal in the present embodiment.
 When PTYPE has a predetermined bit pattern, it indicates that BODY is in single frame compatible mode. This makes it possible to transmit the same ID as in single frame transmission.
 For example, when PTYPE = 00, the ID or ID type corresponding to that PTYPE can be handled in the same way as an ID or ID type transmitted by single frame transmission, which simplifies the management of IDs and ID types.
 When PTYPE has a different predetermined bit pattern, it indicates that BODY is in data stream mode. In this case, all combinations of the number of transmission frames and the DATAPART length can be used, and data with different combinations can be given different meanings. Depending on the PTYPE bits, those different combinations may either share the same meaning or retain different meanings. This allows the transmission method to be chosen flexibly.
 For example, when PTYPE = 01, an ID of a size not defined for single frame transmission can be transmitted. In addition, even if the ID corresponding to that PTYPE is identical to an ID used in single frame transmission, it can be treated as an ID distinct from the single frame transmission ID. As a result, the number of IDs that can be expressed can be increased.
 (Field configuration in single frame compatible mode)
 FIG. 186 is a diagram illustrating an example of a transmission signal in the present embodiment.
 When (a) of FIG. 184 is used, transmission with the combinations in the table shown in FIG. 186 is the most efficient in single frame compatible mode.
 When (b), (c), or (d) of FIG. 184 is used, a combination of 13 frames and a DATAPART length of 4 bits is efficient for a 32-bit ID, and a combination of 11 frames and a DATAPART length of 8 bits is efficient for a 64-bit ID.
 By stipulating that transmission uses only the combinations in the table, any other combination can be judged to be a reception error, so the reception error rate can be lowered.
 (Summary of Embodiment 19)
 A transmission method according to one aspect of the present invention is a transmission method for transmitting a visible light signal by a change in luminance, and includes: a determination step of determining a luminance change pattern by modulating the visible light signal; a common switch control step of switching, according to the luminance change pattern, a common switch for commonly lighting a plurality of light sources, each representing a pixel in the video and included in a light source group provided in a display; and a first pixel switch control step of turning on a first pixel switch for lighting a first light source among the plurality of light sources included in the light source group, so that the first light source lights only during a period in which both the common switch and the first pixel switch are on, whereby the visible light signal is transmitted.
 With this, as shown for example in FIGS. 173 to 180B, a visible light signal can be appropriately transmitted from a display that includes a plurality of LEDs or the like as light sources, enabling communication between a variety of devices including devices other than lighting. Furthermore, when the display is one that displays video through control of the common switch and the first pixel switch, the visible light signal can be transmitted using that common switch and first pixel switch. The visible light signal can therefore be transmitted easily, without any major change to the configuration for displaying video on the display.
 In the determination step, the luminance change pattern may be determined for each symbol period, and in the first pixel switch control step, the first pixel switch may be switched in synchronization with the symbol period.
 With this, as shown for example in FIG. 173, even when the symbol period is, for example, 1/2400 second, the visible light signal can be transmitted appropriately according to that symbol period.
 In the first pixel switch control step, when video is displayed on the display, the first pixel switch may be switched so as to compensate for the lighting period used to express the pixel value of the pixel in the video corresponding to the first light source, by exactly the period during which the first light source is turned off for transmission of the visible light signal. For example, the lighting period may be compensated for by changing the pixel value of the pixel in the video.
 With this, as shown for example in FIGS. 173 and 175, even when the first light source is turned off for transmission of the visible light signal, the lighting period is compensated for, so the original video can be displayed appropriately without being degraded.
 The pixel value may also be changed at a period equal to one half of the symbol period.
 With this, as shown for example in FIG. 175, the display of video and the transmission of the visible light signal can both be performed appropriately.
 The transmission method may further include a second pixel switch control step of turning on a second pixel switch for lighting a second light source, located around the first light source and included in the light source group, so that the second light source lights only during a period in which both the common switch and the second pixel switch are on, whereby the visible light signal is transmitted. In the first and second pixel switch control steps, when the same symbol included in the visible light signal is transmitted simultaneously from each of the first and second light sources, among the plural timings at which each of the first and second pixel switches turns on or off to transmit that same symbol, the timing at which the luminance rise specific to that symbol is obtained may be made the same for the first and second pixel switches, the other timings may be made different between them, and the overall average luminance of the first and second light sources during the period in which that same symbol is transmitted may be made to match a predetermined luminance.
 With this, as shown for example in FIG. 174, in the spatially averaged luminance the rise can be made steep only at the timing at which the luminance rise specific to the symbol is obtained, and the occurrence of reception errors can be suppressed.
 In the first pixel switch control step, when the same symbol included in the visible light signal is transmitted in a first period and in a second period following the first period, then in each of the first and second periods, among the plural timings at which the first pixel switch turns on or off to transmit that same symbol, the timing at which the luminance rise specific to that symbol is obtained may be made the same and the other timings may be made different, and the average luminance of the first light source over the whole of the first and second periods may be made to match a predetermined luminance.
 With this, as shown for example in FIG. 174, in the temporally averaged luminance the rise can be made steep only at the timing at which the luminance rise specific to the symbol is obtained, and the occurrence of reception errors can be suppressed.
 In the first pixel switch control step, the first pixel switch may be turned on during a signal transmission period in which the common switch is switching according to the luminance change pattern, and the transmission method may further include a video display step of turning on the common switch during a video display period different from the signal transmission period and turning on the first pixel switch according to the video to be displayed during the video display period, so that the first light source lights only during a period in which both the common switch and the first pixel switch are on, thereby displaying the pixel in the video.
 In this way, the display of video and the transmission of the visible light signal are performed in mutually different periods, so the display and the transmission can be performed easily.
 (Embodiment 20)
 In the present embodiment, details and modifications of the visible light signal in each of the embodiments described above are described specifically. The trend in cameras is toward higher resolution (4K) and higher frame rates (60 fps). A higher frame rate reduces the frame scan time; as a result, the reception distance decreases and the reception time increases, so a transmitter that sends a visible light signal needs to shorten the packet transmission time. In addition, the shorter line scan time raises the time resolution of reception. The exposure time is 1/8000 second. In 4PPM, signal representation and dimming are performed simultaneously, so the signal density is low and the efficiency is poor. Therefore, in the visible light signal of the present embodiment, the signal portion and the dimming portion are separated, and the portion required for reception is shortened.
 FIG. 187 is a diagram illustrating an example of the configuration of a visible light signal in the present embodiment.
 As shown in FIG. 187, the visible light signal includes a plurality of combinations of a signal portion and a dimming portion. The time length of one such combination is, for example, 2 ms or less (that is, a frequency of 500 Hz or more).
 FIG. 188 is a diagram illustrating an example of the detailed configuration of the visible light signal in the present embodiment.
 The visible light signal includes data L (DataL), a preamble (Preamble), data R (DataR), and a dimming portion (Dimming). The data L, the preamble, and the data R make up the signal portion described above.
The preamble alternately indicates High and Low luminance values along the time axis. That is, the preamble indicates the High luminance value for a time length P1, the Low luminance value for the next time length P2, the High luminance value for the next time length P3, and the Low luminance value for the next time length P4. The time lengths P1 to P4 are each, for example, 100 μs.
The data R alternately indicates High and Low luminance values along the time axis and is arranged immediately after the preamble. That is, the data R indicates the High luminance value for a time length DR1, the Low luminance value for the next time length DR2, the High luminance value for the next time length DR3, and the Low luminance value for the next time length DR4. The time lengths DR1 to DR4 are determined according to a formula corresponding to the signal to be transmitted, namely DRi = 120 + 20 × xi (i ∈ 1 to 4, xi ∈ 0 to 15). Numerical values such as 120 and 20 indicate times in μs, and these numerical values are merely examples.
The data L alternately indicates High and Low luminance values along the time axis and is arranged immediately before the preamble. That is, the data L indicates the High luminance value for a time length DL1, the Low luminance value for the next time length DL2, the High luminance value for the next time length DL3, and the Low luminance value for the next time length DL4. The time lengths DL1 to DL4 are determined according to a formula corresponding to the signal to be transmitted, namely DLi = 120 + 20 × (15 − xi). As above, numerical values such as 120 and 20 indicate times in μs, and these numerical values are merely examples.
The signal to be transmitted consists of 4 × 4 = 16 bits, and xi is a 4-bit portion of that signal. In the visible light signal, the value of xi (a 4-bit signal) is indicated by each of the time lengths DR1 to DR4 in the data R or each of the time lengths DL1 to DL4 in the data L. Of the 16 bits of the signal to be transmitted, 4 bits indicate an address, 8 bits indicate data, and 4 bits are used for error detection.
Here, the data R and the data L have a complementary relationship with respect to brightness. That is, if the data R is bright, the data L is dark; conversely, if the data R is dark, the data L is bright. In other words, the sum of the total time length of the data R and the total time length of the data L is constant regardless of the signal to be transmitted.
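As an illustration of the duration formulas above, the following is a minimal Python sketch (not part of the specification; the function name and the choice of Python are ours, and the 120 μs / 20 μs constants are the example values given in the text). It maps a 4-bit value xi to the data R and data L durations and checks that their sum is constant, which is the complementary relationship just described.

    def encode_symbol(x_i: int) -> tuple[int, int]:
        """Return (D_Ri, D_Li) in microseconds for a 4-bit value x_i (0..15),
        following the example formulas D_Ri = 120 + 20*x_i and
        D_Li = 120 + 20*(15 - x_i)."""
        if not 0 <= x_i <= 15:
            raise ValueError("x_i must be a 4-bit value (0..15)")
        d_r = 120 + 20 * x_i
        d_l = 120 + 20 * (15 - x_i)
        return d_r, d_l

    # The sum D_Ri + D_Li is always 540 us, independent of x_i, which is the
    # complementary relationship between data R and data L described above.
    assert all(sum(encode_symbol(x)) == 540 for x in range(16))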
The dimming part is a signal for adjusting the brightness (luminance) of the visible light signal, and indicates the High luminance value for a time length C1 and the Low luminance value for the next time length C2. The time lengths C1 and C2 can be adjusted arbitrarily. The dimming part may or may not be included in the visible light signal.
In the example illustrated in FIG. 188, both the data R and the data L are included in the visible light signal, but only one of the data R and the data L may be included. When it is desired to make the visible light signal brighter, only the brighter of the data R and the data L may be transmitted. The order of the data R and the data L may also be reversed. When the data R is included, the time length C1 of the dimming part is, for example, longer than 100 μs; when the data L is included, the time length C2 of the dimming part is, for example, longer than 100 μs.
FIG. 189A is a diagram illustrating another example of the visible light signal in the present embodiment.
In the visible light signal shown in FIG. 188, the signal to be transmitted is represented both by the time lengths of the High luminance value and by the time lengths of the Low luminance value. However, as shown in (a) of FIG. 189A, the signal to be transmitted may be represented only by the time lengths of the Low luminance value. Note that (b) of FIG. 189A shows the visible light signal of FIG. 188.
For example, as shown in (a) of FIG. 189A, in the preamble the time lengths of the High luminance value are all equal and comparatively short, and the time lengths P1 to P4 of the Low luminance value are each, for example, 100 μs. In the data R, the time lengths of the High luminance value are all equal and comparatively short, and the time lengths DR1 to DR4 of the Low luminance value are each adjusted according to the signal xi. In the preamble and the data R, the time length of the High luminance value is, for example, 10 μs or less.
FIG. 189B is a diagram illustrating another example of the visible light signal in the present embodiment.
For example, as shown in FIG. 189B, in the preamble the time lengths of the High luminance value are all equal and comparatively short, and the time lengths P1 to P3 of the Low luminance value are, for example, 160 μs, 180 μs, and 160 μs, respectively. In the data R, the time lengths of the High luminance value are all equal and comparatively short, and the time lengths DR1 to DR4 of the Low luminance value are each adjusted according to the signal xi. In the preamble and the data R, the time length of the High luminance value is, for example, 10 μs or less.
FIG. 189C is a diagram illustrating the signal length of the visible light signal in the present embodiment.
FIG. 190 is a diagram showing a comparison of luminance values between the visible light signal in the present embodiment and the visible light signal of the IEC (International Electrotechnical Commission) standard. Specifically, the IEC standard referred to here is "VISIBLE LIGHT BEACON SYSTEM FOR MULTIMEDIA APPLICATIONS".
With the visible light signal in the present embodiment (the scheme of the embodiment, Data on one side), a maximum luminance of 82%, higher than the maximum luminance of the IEC-standard visible light signal, and a minimum luminance of 18%, lower than the minimum luminance of the IEC-standard visible light signal, can be obtained. The maximum luminance of 82% and the minimum luminance of 18% are values obtained with a visible light signal of the present embodiment that includes only one of the data R and the data L.
FIG. 191 is a diagram showing a comparison of the number of received packets and the reliability with respect to the angle of view between the visible light signal in the present embodiment and the IEC-standard visible light signal.
With the visible light signal in the present embodiment (the scheme of the embodiment, both), even when the angle of view becomes small, that is, even when the distance from the transmitter that transmits the visible light signal to the receiver becomes long, the number of received packets is larger than with the IEC-standard visible light signal, and high reliability can be obtained. The values for the scheme of the embodiment (both) shown in FIG. 191 are values obtained with a visible light signal that includes both the data R and the data L.
FIG. 192 is a diagram showing a comparison of the number of received packets and the reliability with respect to noise between the visible light signal in the present embodiment and the IEC-standard visible light signal.
With the visible light signal in the present embodiment (IEEE), regardless of the noise (the variance of the noise), the number of received packets is larger than with the IEC-standard visible light signal, and high reliability can be obtained.
FIG. 193 is a diagram showing a comparison of the number of received packets and the reliability with respect to the receiver-side clock error between the visible light signal in the present embodiment and the IEC-standard visible light signal.
With the visible light signal in the present embodiment (IEEE), over a wide range of receiver-side clock errors, the number of received packets is larger than with the IEC-standard visible light signal, and high reliability can be obtained. The receiver-side clock error is an error in the timing at which an exposure line of the image sensor of the receiver starts exposure.
FIG. 194 is a diagram illustrating the configuration of the signal to be transmitted in the present embodiment.
As described above, the signal to be transmitted includes four 4-bit signals (xi), that is, 4 × 4 = 16 bits. For example, the signal to be transmitted includes signals x1 to x4. The signal x1 consists of bits x11 to x14, the signal x2 consists of bits x21 to x24, the signal x3 consists of bits x31 to x34, and the signal x4 consists of bits x41 to x44. Here, the bits x11, x21, x31, and x41 are easily mistaken, while the other bits are unlikely to be mistaken. Therefore, the bits x42 to x44 included in the signal x4 are used as parities for the bit x11 of the signal x1, the bit x21 of the signal x2, and the bit x31 of the signal x3, respectively, and the bit x41 of the signal x4 is not used and always indicates 0. The formulas shown in FIG. 194 are used to calculate the bits x42, x43, and x44, so that bit x42 = bit x11, bit x43 = bit x21, and bit x44 = bit x31.
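Based on this description of FIG. 194 (whose exact formulas are not reproduced here), the parity assignment can be sketched as follows; the function names and the list-of-bits representation are assumptions made only for illustration.

    def build_x4(x1: list[int], x2: list[int], x3: list[int]) -> list[int]:
        """Build the 4-bit signal x4 = [x41, x42, x43, x44] from the signals
        x1..x3, each given as a list of bits [xi1, xi2, xi3, xi4].
        Per the description above, x41 is always 0 and x42..x44 protect the
        error-prone first bits x11, x21 and x31."""
        return [0, x1[0], x2[0], x3[0]]

    def check_x4(x1, x2, x3, x4) -> bool:
        """Receiver-side check: the protected bits must match the parities."""
        return x4 == [0, x1[0], x2[0], x3[0]]

    # Example with x1 = [1,0,1,0], x2 = [0,1,1,0], x3 = [1,1,0,0]:
    assert build_x4([1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 0]) == [0, 1, 0, 1]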
FIG. 195A is a diagram illustrating a method of receiving the visible light signal in the present embodiment.
The receiver sequentially acquires the signal parts of the visible light signal described above. Each signal part includes a 4-bit address (Addr) and 8-bit data (Data). The receiver combines the data of these signal parts and generates an ID composed of a plurality of pieces of data and a parity (Parity) composed of one or more pieces of data.
FIG. 195B is a diagram showing rearrangement of the visible light signal in the present embodiment.
FIG. 196 is a diagram illustrating another example of the visible light signal in the present embodiment.
The visible light signal shown in FIG. 196 is configured by superimposing a high-frequency signal on the visible light signal shown in FIG. 188. The frequency of the high-frequency signal is, for example, 1 to several Gbps. This makes it possible to transmit data at a higher speed than with the visible light signal shown in FIG. 188.
FIG. 197 is a diagram illustrating another example of the detailed configuration of the visible light signal in the present embodiment. The configuration of the visible light signal shown in FIG. 197 is the same as the configuration shown in FIG. 188, except that the time lengths C1 and C2 of the dimming part in the visible light signal shown in FIG. 197 differ from the time lengths C1 and C2 shown in FIG. 188.
FIG. 198 is a diagram illustrating another example of the detailed configuration of the visible light signal in the present embodiment. In the visible light signal shown in FIG. 198, the data R and the data L each include eight V4PPM symbols. The rising position or falling position of a symbol DLi included in the data L is the same as the rising position or falling position of the corresponding symbol DRi included in the data R. However, the average luminance of the symbol DLi and the average luminance of the symbol DRi may be either the same or different.
FIG. 199 is a diagram illustrating another example of the detailed configuration of the visible light signal in the present embodiment. The visible light signal shown in FIG. 199 is a signal for ID communication or for low-average-luminance applications, and is the same as the visible light signal shown in FIG. 189B.
FIG. 200 is a diagram illustrating another example of the detailed configuration of the visible light signal in the present embodiment. In the visible light signal shown in FIG. 200, the even-numbered time length D2i and the odd-numbered time length D2i+1 in the data (Data) are equal.
FIG. 201 is a diagram illustrating another example of the detailed configuration of the visible light signal in the present embodiment. The data (Data) in the visible light signal shown in FIG. 201 includes a plurality of symbols that are pulse-position-modulated signals.
FIG. 202 is a diagram illustrating another example of the detailed configuration of the visible light signal in the present embodiment. The visible light signal shown in FIG. 202 is a signal for continuous communication and is the same as the visible light signal shown in FIG. 198.
FIGS. 203 to 211 are diagrams for explaining a method of determining the values x1 to x4 in FIG. 197. The values x1 to x4 shown in FIGS. 203 to 211 are determined by a method similar to the method of determining the values (W1 to W4) of the codes w1 to w4 described in the following modification. However, each of x1 to x4 is a code consisting of 4 bits and differs from the codes w1 to w4 described in the following modification in that its first bit includes a parity.
(Modification 1)
FIG. 212 is a diagram showing an example of the detailed configuration of a visible light signal according to Modification 1 of the present embodiment. The visible light signal according to Modification 1 is the same as the visible light signal shown in FIG. 188 of the above embodiment, except that the time lengths of the High and Low luminance values differ from those of the visible light signal shown in FIG. 188. For example, in the visible light signal according to this modification, the preamble time lengths P2 and P3 are 90 μs. Also, in the visible light signal according to this modification, as in the above embodiment, the time lengths DR1 to DR4 in the data R are determined according to a formula corresponding to the signal to be transmitted; however, the formula in this modification is DRi = 120 + 30 × wi (i ∈ 1 to 4, wi ∈ 0 to 7). Here, wi is a 3-bit code, that is, a signal to be transmitted that indicates an integer value from 0 to 7. Similarly, the time lengths DL1 to DL4 in the data L are determined according to a formula corresponding to the signal to be transmitted; the formula in this modification is DLi = 120 + 30 × (7 − wi).
In the example shown in FIG. 212, both the data R and the data L are included in the visible light signal, but only one of the data R and the data L may be included in the visible light signal. When it is desired to make the visible light signal brighter, only the brighter of the data R and the data L may be transmitted. The order of the data R and the data L may also be reversed.
FIG. 213 is a diagram illustrating another example of the visible light signal according to this modification.
As in the examples shown in (a) of FIG. 189A and in FIG. 189B, the visible light signal according to Modification 1 may represent the signal to be transmitted only by the time lengths of the Low luminance value.
For example, as shown in FIG. 213, in the preamble the time length of the High luminance value is, for example, less than 10 μs, and the time lengths P1 to P3 of the Low luminance value are, for example, 160 μs, 180 μs, and 160 μs, respectively. In the data (Data), the time length of the High luminance value is less than 10 μs, and the time lengths Di of the Low luminance value are each adjusted according to the signal wi. Specifically, the time length Di of the Low luminance value is Di = 180 + 30 × wi (i ∈ 1 to 4, wi ∈ 0 to 7).
FIG. 214 is a diagram showing still another example of the visible light signal according to this modification.
The visible light signal according to this modification may include a preamble and data as shown in FIG. 214. As with the preamble shown in FIG. 212, the preamble alternately indicates High and Low luminance values along the time axis, and the time lengths P1 to P4 in the preamble are 50 μs, 40 μs, 40 μs, and 50 μs, respectively. The data (Data) alternately indicates High and Low luminance values along the time axis. For example, the data indicates the High luminance value for a time length D1, the Low luminance value for the next time length D2, the High luminance value for the next time length D3, and the Low luminance value for the next time length D4.
Here, the time length D2i−1 + D2i is determined according to a formula corresponding to the signal to be transmitted. That is, the sum of a time length of the High luminance value and the time length of the Low luminance value that follows it is determined according to the formula. This formula is, for example, D2i−1 + D2i = 100 + 20 × xi (i ∈ 1 to N, xi ∈ 0 to 7, D2i > 50 μs, D2i+1 > 50 μs).
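As a receiver-side illustration of this pair-sum encoding, the sketch below recovers xi from a measured High/Low duration pair under the example formula above; the function name, the rounding, and the clamping to 0..7 are illustrative choices, not part of the specification.

    def decode_pair(d_high_us: float, d_low_us: float) -> int:
        """Recover x_i from one measured (High, Low) duration pair, assuming the
        example formula D_{2i-1} + D_{2i} = 100 + 20 * x_i (durations in us).
        Using the sum lets small timing errors in the two edges partly cancel."""
        total = d_high_us + d_low_us
        x = round((total - 100.0) / 20.0)
        return min(max(x, 0), 7)  # clamp to the stated range x_i in 0..7

    # Example: a pair measured as 72 us + 89 us sums to 161 us, so x_i = 3.
    assert decode_pair(72.0, 89.0) == 3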
FIG. 215 is a diagram illustrating an example of packet modulation.
A signal generation device generates a visible light signal by the visible light signal generation method according to this modification. In the visible light signal generation method according to this modification, a packet is modulated (that is, converted) into the above-described signal wi to be transmitted. The signal generation device described above may or may not be provided in the transmitter in each of the above embodiments.
For example, as shown in FIG. 215, the signal generation device converts a packet into a signal to be transmitted that includes the numerical values indicated by the codes w1, w2, w3, and w4. Each of the codes w1, w2, w3, and w4 is a code consisting of 3 bits, from the first bit to the third bit, and indicates an integer value from 0 to 7, as shown in FIG. 212.
Here, in each of the codes w1 to w4, let the value of the first bit be b1, the value of the second bit be b2, and the value of the third bit be b3, where b1, b2, and b3 are each 0 or 1. In this case, each of the numerical values W1 to W4 indicated by the codes w1 to w4 is, for example, b1 × 2^0 + b2 × 2^1 + b3 × 2^2.
The packet includes, as data, address data of 0 to 4 bits (A1 to A4), main data Da of 4 to 7 bits (Da1 to Da7), sub data Db of 3 to 4 bits (Db1 to Db4), and a stop bit value (S). Each of Da1 to Da7, A1 to A4, Db1 to Db4, and S indicates a bit value, that is, 0 or 1.
That is, when the signal generation device modulates a packet into a signal to be transmitted, it assigns the data included in the packet to the bits of the codes w1, w2, w3, and w4. In this way, the packet is converted into a signal to be transmitted that includes the numerical values indicated by the codes w1, w2, w3, and w4.
Specifically, when assigning the data included in the packet, the signal generation device assigns at least a part (Da1 to Da4) of the main data Da included in the packet to a first bit string consisting of the first bits (bit 1) of the codes w1 to w4. The signal generation device further assigns the stop bit value (S) included in the packet to the second bit (bit 2) of the code w1. The signal generation device further assigns a part (Da5 to Da7) of the main data Da included in the packet, or at least a part (A1 to A3) of the address data included in the packet, to a second bit string consisting of the second bits (bit 2) of the codes w2 to w4. The signal generation device further assigns at least a part (Db1 to Db3) of the sub data Db included in the packet, together with another part (Db4) of the sub data Db or a part (A4) of the address data, to a third bit string consisting of the third bits (bit 3) of the codes w1 to w4.
When the third bits (bit 3) of the codes w1 to w4 are all 0, the numerical values indicated by those codes are limited to 3 or less by the above expression b1 × 2^0 + b2 × 2^1 + b3 × 2^2. Therefore, with the formula DRi = 120 + 30 × wi (i ∈ 1 to 4, wi ∈ 0 to 7) shown in FIG. 212, the time lengths DRi can be shortened. As a result, the time required to transmit one packet can be shortened, and the packet can be received from a greater distance.
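To make the bit assignment concrete, the following sketch modulates one terminal packet (the single-packet layout described below for FIG. 216) into the codes w1 to w4, their values W1 to W4, and the corresponding durations DRi. Other packet types place Da5 to Da7 or Db4 in the second and third bit strings instead, so this covers only one case; the function name and the tuple representation are illustrative assumptions.

    def modulate_terminal_packet(S, Da, A, Db):
        """Map a terminal packet (stop bit S, main data Da1..Da4, address A1..A4,
        sub data Db1..Db3) onto the codes w1..w4 and their values W1..W4.
        Layout for this case: w1 = (Da1, S, Db1), w2 = (Da2, A1, Db2),
        w3 = (Da3, A2, Db3), w4 = (Da4, A3, A4)."""
        w = [
            (Da[0], S,    Db[0]),
            (Da[1], A[0], Db[1]),
            (Da[2], A[1], Db[2]),
            (Da[3], A[2], A[3]),
        ]
        # W = b1*2^0 + b2*2^1 + b3*2^2, then D_Ri = 120 + 30*W (microseconds).
        W = [b1 + 2 * b2 + 4 * b3 for (b1, b2, b3) in w]
        durations_us = [120 + 30 * Wi for Wi in W]
        return w, W, durations_us

    # Example: all-zero data with S = 1 gives w1 = (0, 1, 0), so W1 = 2 and
    # D_R1 = 180 us, while the remaining symbols stay at W = 0 and 120 us.
    w, W, d = modulate_terminal_packet(1, [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0])
    assert W == [2, 0, 0, 0] and d == [180, 120, 120, 120]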
FIGS. 216 to 226 are diagrams showing processing for generating packets from original data.
The signal generation device according to this modification determines whether to divide the original data according to the bit length of the original data. The signal generation device then generates at least one packet from the original data by performing processing according to the result of that determination. That is, the longer the bit length of the original data, the larger the number of packets into which the signal generation device divides the original data. Conversely, if the bit length of the original data is shorter than a predetermined bit length, the signal generation device generates a packet without dividing the original data.
When the signal generation device has generated at least one packet from the original data in this way, it converts each of the at least one packet into the above-described signal to be transmitted, that is, into the codes w1 to w4.
In FIGS. 216 to 226, Data indicates the original data, Dataa indicates the main original data included in the original data, and Datab indicates the sub original data included in the original data. Da(k) indicates the main original data itself, or the k-th of a plurality of parts constituting the data that includes the main original data and the parity. Similarly, Db(k) indicates the sub original data itself, or the k-th of a plurality of parts constituting the data that includes the sub original data and the parity. For example, Da(2) indicates the second of a plurality of parts constituting the data that includes the main original data and the parity. S indicates a start bit, and A indicates address data.
The uppermost notation shown in each block is a label for identifying the original data, the main original data, the sub original data, the start bit, the address data, and so on. The numerical value in the middle of each block is the bit size (number of bits), and the numerical value at the bottom is the value of each bit.
FIG. 216 is a diagram showing the process of generating a single packet from the original data, that is, division into one.
For example, if the bit length of the original data (Data) is 7 bits, the signal generation device generates one packet without dividing the original data. Specifically, the original data includes 4-bit main original data Dataa (Da1 to Da4) and 3-bit sub original data Datab (Db1 to Db3) as the main data Da(1) and the sub data Db(1), respectively. In this case, the signal generation device generates the packet by adding, to the original data, a start bit S (S = 1) and 4-bit address data (A1 to A4) indicating "0000". The start bit S = 1 indicates that the packet including this start bit is the last packet.
By converting this packet, the signal generation device generates the code w1 = (Da1, S = 1, Db1), the code w2 = (Da2, A1 = 0, Db2), the code w3 = (Da3, A2 = 0, Db3), and the code w4 = (Da4, A3 = 0, A4 = 0). The signal generation device further generates a signal to be transmitted that includes the numerical values W1, W2, W3, and W4 indicated by the codes w1, w2, w3, and w4, respectively.
In this modification, wi is expressed both as a 3-bit code and as a decimal numerical value. Therefore, in this modification, to make the explanation easier to understand, wi (w1 to w4) used as a decimal numerical value is written as the numerical value Wi (W1 to W4).
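A minimal sketch of this undivided case follows, assuming a simple dictionary representation for the packet fields (the representation and the function name are ours; the field layout follows the description above).

    def make_single_packet(dataa_bits, datab_bits):
        """Assemble the single (undivided) packet of FIG. 216 from 4-bit main
        original data and 3-bit sub original data."""
        assert len(dataa_bits) == 4 and len(datab_bits) == 3
        return {
            "S": 1,                  # S = 1 marks the last (here: only) packet
            "A": [0, 0, 0, 0],       # address data "0000"
            "Da": list(dataa_bits),  # main data Da(1)
            "Db": list(datab_bits),  # sub data Db(1)
        }

    packet = make_single_packet([1, 0, 1, 1], [0, 1, 0])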
FIG. 217 is a diagram showing the process of dividing the original data into two parts.
For example, if the bit length of the original data (Data) is 16 bits, the signal generation device generates two pieces of intermediate data by dividing the original data. Specifically, the original data includes 10-bit main original data Dataa and 6-bit sub original data Datab. In this case, the signal generation device generates first intermediate data including the main original data Dataa and a 1-bit parity corresponding to the main original data Dataa, and generates second intermediate data including the sub original data Datab and a 1-bit parity corresponding to the sub original data Datab.
Next, the signal generation device divides the first intermediate data into main data Da(1) consisting of 7 bits and main data Da(2) consisting of 4 bits. The signal generation device further divides the second intermediate data into sub data Db(1) consisting of 4 bits and sub data Db(2) consisting of 3 bits. The main data is one of the plurality of parts constituting the data that includes the main original data and the parity. Similarly, the sub data is one of the plurality of parts constituting the data that includes the sub original data and the parity.
Next, the signal generation device generates a 12-bit first packet including a start bit S (S = 0), the main data Da(1), and the sub data Db(1). As a result, a first packet that does not include address data is generated.
The signal generation device further generates a 12-bit second packet including a start bit S (S = 1), 4-bit address data indicating "1000", the main data Da(2), and the sub data Db(2). The start bit S = 0 indicates that, among the plurality of generated packets, the packet including this start bit is not the last packet, whereas the start bit S = 1 indicates that the packet including this start bit is the last packet.
In this way, the original data is divided into the first packet and the second packet.
By converting the first packet, the signal generation device generates the code w1 = (Da1, S = 0, Db1), the code w2 = (Da2, Da7, Db2), the code w3 = (Da3, Da6, Db3), and the code w4 = (Da4, Da5, Db4). The signal generation device further generates a signal to be transmitted that includes the numerical values W1, W2, W3, and W4 indicated by the codes w1, w2, w3, and w4, respectively.
Furthermore, by converting the second packet, the signal generation device generates the code w1 = (Da1, S = 1, Db1), the code w2 = (Da2, A1 = 1, Db2), the code w3 = (Da3, A2 = 0, Db3), and the code w4 = (Da4, A3 = 0, A4 = 0). The signal generation device further generates a signal to be transmitted that includes the numerical values W1, W2, W3, and W4 indicated by the codes w1, w2, w3, and w4, respectively.
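The two-packet division can be sketched as follows. The text only states that a 1-bit parity is appended to each part; an XOR (even) parity over the bits is assumed here purely for illustration and may differ from the parity actually used.

    def split_into_two_packets(dataa_bits, datab_bits):
        """Sketch of the two-packet division of FIG. 217 for 10-bit main
        original data and 6-bit sub original data."""
        assert len(dataa_bits) == 10 and len(datab_bits) == 6
        parity_a, parity_b = 0, 0
        for b in dataa_bits:
            parity_a ^= b            # assumed 1-bit (even) parity
        for b in datab_bits:
            parity_b ^= b
        first_intermediate = list(dataa_bits) + [parity_a]   # 11 bits: Da(1)=7, Da(2)=4
        second_intermediate = list(datab_bits) + [parity_b]  # 7 bits:  Db(1)=4, Db(2)=3
        packet1 = {"S": 0, "Da": first_intermediate[:7], "Db": second_intermediate[:4]}
        packet2 = {"S": 1, "A": [1, 0, 0, 0],                # address "1000"
                   "Da": first_intermediate[7:], "Db": second_intermediate[4:]}
        return packet1, packet2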
FIG. 218 is a diagram showing the process of dividing the original data into three parts.
For example, if the bit length of the original data (Data) is 17 bits, the signal generation device generates two pieces of intermediate data by dividing the original data. Specifically, the original data includes 10-bit main original data Dataa and 7-bit sub original data Datab. In this case, the signal generation device generates first intermediate data including the main original data Dataa and a 6-bit parity corresponding to the main original data Dataa, and further generates second intermediate data including the sub original data Datab and a 4-bit parity corresponding to the sub original data Datab. The signal generation device generates the parities by, for example, a CRC (Cyclic Redundancy Check).
Next, the signal generation device divides the first intermediate data into main data Da(1) consisting of the 6-bit parity, main data Da(2) consisting of 6 bits, and main data Da(3) consisting of 4 bits. The signal generation device further divides the second intermediate data into sub data Db(1) consisting of the 4-bit parity, sub data Db(2) consisting of 4 bits, and sub data Db(3) consisting of 3 bits.
Next, the signal generation device generates a 12-bit first packet including a start bit S (S = 0), 1-bit address data indicating "0", the main data Da(1), and the sub data Db(1). The signal generation device further generates a 12-bit second packet including a start bit S (S = 0), 1-bit address data indicating "1", the main data Da(2), and the sub data Db(2). The signal generation device further generates a 12-bit third packet including a start bit S (S = 1), 4-bit address data indicating "0100", the main data Da(3), and the sub data Db(3).
In this way, the original data is divided into the first packet, the second packet, and the third packet.
By converting the first packet, the signal generation device generates the code w1 = (Da1, S = 0, Db1), the code w2 = (Da2, A1 = 0, Db2), the code w3 = (Da3, Da6, Db3), and the code w4 = (Da4, Da5, Db4). The signal generation device further generates a signal to be transmitted that includes the numerical values W1, W2, W3, and W4 indicated by the codes w1, w2, w3, and w4, respectively.
Similarly, by converting the second packet, the signal generation device generates the code w1 = (Da1, S = 0, Db1), the code w2 = (Da2, A1 = 1, Db2), the code w3 = (Da3, Da6, Db3), and the code w4 = (Da4, Da5, Db4). The signal generation device further generates a signal to be transmitted that includes the numerical values W1, W2, W3, and W4 indicated by the codes w1, w2, w3, and w4, respectively.
Similarly, by converting the third packet, the signal generation device generates the code w1 = (Da1, S = 1, Db1), the code w2 = (Da2, A1 = 0, Db2), the code w3 = (Da3, A2 = 1, Db3), and the code w4 = (Da4, A3 = 0, A4 = 0). The signal generation device further generates a signal to be transmitted that includes the numerical values W1, W2, W3, and W4 indicated by the codes w1, w2, w3, and w4, respectively.
FIG. 219 is a diagram showing another example of the process of dividing the original data into three parts.
In the example shown in FIG. 218, a 6-bit or 4-bit parity is generated by CRC, but a 1-bit parity may be generated instead.
In this case, if the bit length of the original data (Data) is 25 bits, the signal generation device generates two pieces of intermediate data by dividing the original data. Specifically, the original data includes 15-bit main original data Dataa and 10-bit sub original data Datab. The signal generation device generates first intermediate data including the main original data Dataa and a 1-bit parity corresponding to the main original data Dataa, and generates second intermediate data including the sub original data Datab and a 1-bit parity corresponding to the sub original data Datab.
Next, the signal generation device divides the first intermediate data into main data Da(1) consisting of 6 bits including the parity, main data Da(2) consisting of 6 bits, and main data Da(3) consisting of 4 bits. The signal generation device further divides the second intermediate data into sub data Db(1) consisting of 4 bits including the parity, sub data Db(2) consisting of 4 bits, and sub data Db(3) consisting of 3 bits.
Next, as in the example shown in FIG. 218, the signal generation device generates a first packet, a second packet, and a third packet from the first intermediate data and the second intermediate data.
FIG. 220 is a diagram showing another example of the process of dividing the original data into three parts.
In the example shown in FIG. 218, a 6-bit parity is generated by a CRC over the main original data Dataa, and a 4-bit parity is generated by a CRC over the sub original data Datab. However, a parity may instead be generated by a CRC over the whole of the main original data Dataa and the sub original data Datab.
In this case, if the bit length of the original data (Data) is 22 bits, the signal generation device generates two pieces of intermediate data by dividing the original data.
Specifically, the original data includes 15-bit main original data Dataa and 7-bit sub original data Datab. The signal generation device generates first intermediate data including the main original data Dataa and a 1-bit parity corresponding to the main original data Dataa. Furthermore, the signal generation device generates a 4-bit parity by a CRC over the whole of the main original data Dataa and the sub original data Datab, and generates second intermediate data including the sub original data Datab and this 4-bit parity.
Next, the signal generation device divides the first intermediate data into main data Da(1) consisting of 6 bits including the parity, main data Da(2) consisting of 6 bits, and main data Da(3) consisting of 4 bits. The signal generation device further divides the second intermediate data into sub data Db(1) consisting of 4 bits, sub data Db(2) consisting of 4 bits including a part of the CRC parity, and sub data Db(3) consisting of 3 bits including the remainder of the CRC parity.
Next, as in the example shown in FIG. 218, the signal generation device generates a first packet, a second packet, and a third packet from the first intermediate data and the second intermediate data.
Among the specific examples of the process of dividing the original data into three parts, the processing shown in FIG. 218 is referred to as version 1, the processing shown in FIG. 219 as version 2, and the processing shown in FIG. 220 as version 3.
FIG. 221 is a diagram showing the process of dividing the original data into four parts, and FIG. 222 is a diagram showing the process of dividing the original data into five parts.
The signal generation device divides the original data into four or five parts in the same manner as the process of dividing the original data into three parts, that is, in the same manner as the processes shown in FIGS. 218 to 220.
FIG. 223 is a diagram showing the process of dividing the original data into 6, 7, or 8 parts.
For example, if the bit length of the original data (Data) is 31 bits, the signal generation device generates two pieces of intermediate data by dividing the original data. Specifically, the original data includes 16-bit main original data Dataa and 15-bit sub original data Datab. In this case, the signal generation device generates first intermediate data including the main original data Dataa and an 8-bit parity corresponding to the main original data Dataa, and further generates second intermediate data including the sub original data Datab and an 8-bit parity corresponding to the sub original data Datab. The signal generation device generates the parities by, for example, a Reed-Solomon code.
Here, when 4 bits are treated as one symbol in the Reed-Solomon code, the bit length of each of the main original data Dataa and the sub original data Datab must be an integer multiple of 4 bits. However, the sub original data Datab is 15 bits as described above, which is 1 bit short of 16 bits, an integer multiple of 4 bits.
Therefore, when generating the second intermediate data, the signal generation device pads the sub original data Datab, and generates the 8-bit parity corresponding to the padded 16-bit sub original data Datab by the Reed-Solomon code.
Next, the signal generation device divides each of the first intermediate data and the second intermediate data into six parts (of 4 bits or 3 bits) in the same manner as described above. The signal generation device then generates a first packet including a start bit, address data of 3 or 4 bits, the first main data, and the first sub data. Similarly, the signal generation device generates the second to sixth packets.
FIG. 224 is a diagram showing another example of the process of dividing the original data into 6, 7, or 8 parts.
In the example shown in FIG. 223, the parities are generated by a Reed-Solomon code, but the parities may instead be generated by a CRC.
For example, if the bit length of the original data (Data) is 39 bits, the signal generation device generates two pieces of intermediate data by dividing the original data. Specifically, the original data includes 20-bit main original data Dataa and 19-bit sub original data Datab. In this case, the signal generation device generates first intermediate data including the main original data Dataa and a 4-bit parity corresponding to the main original data Dataa, and generates second intermediate data including the sub original data Datab and a 4-bit parity corresponding to the sub original data Datab. The signal generation device generates the parities by, for example, a CRC.
Next, the signal generation device divides each of the first intermediate data and the second intermediate data into six parts (of 4 bits or 3 bits) in the same manner as described above. The signal generation device then generates a first packet including a start bit, address data of 3 or 4 bits, the first main data, and the first sub data. Similarly, the signal generation device generates the second to sixth packets.
Among the specific examples of the process of dividing the original data into 6, 7, or 8 parts, the processing shown in FIG. 223 is referred to as version 1, and the processing shown in FIG. 224 as version 2.
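For the CRC-based versions, the parity can be computed with a generic bit-serial CRC as sketched below. The specification does not state the polynomial in this description, so the polynomial value used here is purely a hypothetical example, and the function name is ours.

    def crc_parity(bits: list[int], poly: int, width: int) -> list[int]:
        """Generic bit-serial CRC remainder of `width` bits over a bit list.
        Shown only to illustrate how a short parity (e.g. the 4-bit parity of
        FIG. 224) can be derived and appended to the data."""
        reg = 0
        for b in bits + [0] * width:            # append `width` zero bits
            reg = ((reg << 1) | b) & ((1 << (width + 1)) - 1)
            if reg >> width:                    # top bit set: subtract generator
                reg ^= (1 << width) | poly
        return [(reg >> i) & 1 for i in reversed(range(width))]

    main_original = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
    parity4 = crc_parity(main_original, poly=0x3, width=4)  # hypothetical polynomial
    first_intermediate = main_original + parity4            # 20 + 4 = 24 bits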
FIG. 225 is a diagram showing the process of dividing the original data into nine parts.
For example, if the bit length of the original data (Data) is 55 bits, the signal generation device generates nine packets, the first to ninth packets, by dividing the original data. In FIG. 225, the first intermediate data and the second intermediate data are omitted.
Specifically, the bit length of the original data (Data) is 55 bits, which is 1 bit short of 56 bits, an integer multiple of 4 bits. Therefore, the signal generation device pads the original data and generates a parity (16 bits) for the padded 56-bit original data by a Reed-Solomon code.
Next, the signal generation device divides the whole of the data, consisting of the 16-bit parity and the 55-bit original data, into nine pieces of data DaDb(1) to DaDb(9).
Each piece of data DaDb(k) includes the k-th 4-bit portion included in the main original data Dataa and the k-th 4-bit portion included in the sub original data Datab, where k is an integer from 1 to 8. The data DaDb(9) includes the ninth 4-bit portion included in the main original data Dataa and the ninth 3-bit portion included in the sub original data Datab.
Next, the signal generation device generates the first to ninth packets by adding a start bit S and address data to each of the nine pieces of data DaDb(1) to DaDb(9).
FIG. 226 is a diagram showing the process of dividing the original data into any number of parts from 10 to 16.
For example, if the bit length of the original data (Data) is 7 × (N − 2) bits, the signal generation device generates N packets, the first to N-th packets, by dividing the original data, where N is an integer from 10 to 16. In FIG. 226, the first intermediate data and the second intermediate data are omitted.
Specifically, the signal generation device generates a parity (14 bits) for the original data consisting of 7 × (N − 2) bits by a Reed-Solomon code. In this Reed-Solomon code, 7 bits are treated as one symbol.
Next, the signal generation device divides the whole of the data, consisting of the 14-bit parity and the 7 × (N − 2)-bit original data, into N pieces of data DaDb(1) to DaDb(N).
Each piece of data DaDb(k) includes the k-th 4-bit portion included in the main original data Dataa and the k-th 3-bit portion included in the sub original data Datab, where k is an integer from 1 to (N − 1).
Next, the signal generation device generates the first to N-th packets by adding a start bit S and address data to each of the pieces of data DaDb(1) to DaDb(N).
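The final packetization step, attaching a start bit and address data to each divided part, can be sketched as below. The exact address values per packet are defined by FIGS. 225 and 226 and are not reproduced here; the 4-bit address derived from the part index and the dictionary representation are assumptions made only for illustration.

    def attach_headers(parts: list[list[int]]) -> list[dict]:
        """Turn the divided parts DaDb(1)..DaDb(N) into packets by attaching a
        start bit S and address data."""
        packets = []
        n = len(parts)
        for k, part in enumerate(parts, start=1):
            s = 1 if k == n else 0                          # S = 1 only for the last packet
            address = [(k - 1) >> i & 1 for i in range(4)]  # illustrative 4-bit address
            packets.append({"S": s, "A": address, "DaDb": part})
        return packets

    # Example with three 7-bit parts: only the last packet gets S = 1.
    pkts = attach_headers([[0] * 7, [1] * 7, [0, 1, 0, 1, 0, 1, 1]])
    assert [p["S"] for p in pkts] == [0, 0, 1]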
FIGS. 227 to 229 are diagrams showing an example of the relationship between the number of divisions of the original data, the data size, and the error correction code.
Specifically, FIGS. 227 to 229 summarize the above relationships for the processes shown in FIGS. 216 to 226. As described above, the process of dividing the original data into three parts has versions 1 to 3, and the process of dividing the original data into 6, 7, or 8 parts has versions 1 and 2. When a number of divisions has a plurality of versions, FIG. 227 shows the above relationship for version 1, FIG. 228 shows it for version 2, and FIG. 229 shows it for version 3.
In this modification, there are a short mode and a full mode. In the short mode, the sub data in the packet is 0, so all the bits of the third bit string shown in FIG. 215 are 0. In this case, the numerical values W1 to W4 indicated by the codes w1 to w4 are limited to 3 or less by the above expression b1 × 2^0 + b2 × 2^1 + b3 × 2^2. As a result, as shown in FIG. 212, the time lengths DR1 to DR4 in the data R, which are determined by DRi = 120 + 30 × wi (i ∈ 1 to 4, wi ∈ 0 to 7), become shorter. That is, in the short mode, the visible light signal per packet can be shortened. By shortening the visible light signal per packet, the receiver can receive the packet even from far away, and the communication distance can be increased.
In the full mode, on the other hand, at least one bit of the third bit string shown in FIG. 215 is 1. In this case, the visible light signal does not become as short as in the short mode.
In this modification, as shown in FIGS. 227 to 229, a short-mode visible light signal can be generated when the number of divisions is small. In FIGS. 227 to 229, the short-mode data size indicates the number of bits of the main original data (Dataa), and the full-mode data size indicates the number of bits of the original data (Data).
(Summary of Embodiment 20)
FIG. 230A is a flowchart illustrating a visible light signal generation method according to this embodiment.
The visible light signal generation method in this embodiment is a method for generating a visible light signal transmitted by a change in luminance of a light source included in a transmitter, and includes steps SD1 to SD3.
In step SD1, a preamble is generated, which is data in which a first luminance value and a second luminance value, different from each other, each appear alternately along the time axis for a predetermined length of time.
In step SD2, first data is generated as data in which the first and second luminance values appear alternately along the time axis, by determining the length of time that each of the first and second luminance values continues according to a first method that depends on the signal to be transmitted.
Finally, in step SD3, a visible light signal is generated by combining the preamble and the first data.
For example, as shown in FIG. 188, the first and second luminance values are High and Low, and the first data is data R or data L. By transmitting the visible light signal generated in this way, the number of received packets can be increased and the reliability can be improved, as shown in FIGS. 191 to 193. As a result, communication between a wide variety of devices becomes possible.
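A minimal sketch of steps SD1 to SD3 in Python, representing the signal as (luminance, duration in microseconds) pairs, might look like the following. The preamble pattern and the per-symbol constants are assumptions for illustration, not values fixed by this summary.

```python
HIGH, LOW = 1, 0  # first and second luminance values

def generate_preamble(pulse_us=90, pulses=4):
    # SD1: the two luminance values alternate, each lasting a predetermined
    # time (the 90 us pulse width and 4 pulses are assumed values).
    return [(HIGH if i % 2 == 0 else LOW, pulse_us) for i in range(pulses)]

def generate_first_data(values, a=120, b=20):
    # SD2: each run of HIGH or LOW lasts a + b * n microseconds, where n is a
    # value contained in the signal to be transmitted (first method).
    return [(HIGH if i % 2 == 0 else LOW, a + b * n) for i, n in enumerate(values)]

def generate_visible_light_signal(values):
    # SD3: combine the preamble and the first data.
    return generate_preamble() + generate_first_data(values)

signal = generate_visible_light_signal([5, 0, 15, 3])  # e.g. data R for four values
```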
The visible light signal generation method may further generate second data, which has a complementary relationship with the brightness expressed by the first data, as data in which the first and second luminance values appear alternately along the time axis, by determining the length of time that each of the first and second luminance values continues according to a second method that depends on the signal to be transmitted. In the generation of the visible light signal, the visible light signal may then be generated by combining the preamble with the first and second data in the order of the first data, the preamble, and the second data.
For example, as shown in FIG. 188, the first and second luminance values are High and Low, and the first and second data are data R and data L.
When a and b are constants, n is a numerical value included in the signal to be transmitted, and m is a constant equal to the maximum value that n can take, the first method may determine the length of time that the first or second luminance value continues in the first data as a + b × n, and the second method may determine the length of time that the first or second luminance value continues in the second data as a + b × (m − n).
For example, as shown in FIG. 188, a is 120 µs, b is 20 µs, n is an integer value from 0 to 15 (the value indicated by the signal x_i), and m is 15.
In the complementary relationship, the sum of the total time length of the first data and the total time length of the second data may be constant.
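With the example constants above (a = 120 µs, b = 20 µs, m = 15), the complementary relationship can be checked directly: each pair of durations sums to 2a + b × m regardless of n. The following sketch assumes only those example constants.

```python
A_US, B_US, M = 120, 20, 15  # example constants from FIG. 188

def duration_first(n):
    # First method (e.g. data R): a + b * n
    return A_US + B_US * n

def duration_second(n):
    # Second method (e.g. data L): a + b * (m - n)
    return A_US + B_US * (M - n)

for n in range(M + 1):
    # The pair always sums to 2a + b*m = 540 us, independent of n, which is
    # what keeps the brightness of the two data blocks complementary.
    assert duration_first(n) + duration_second(n) == 2 * A_US + B_US * M
```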
The visible light signal generation method may further generate a dimming part, which is data for adjusting the brightness expressed by the visible light signal, and the visible light signal may be generated by further combining the dimming part.
The dimming part is, for example, the signal (Dimming) in FIG. 188 that indicates the High luminance value for a time length C1 and the Low luminance value for a time length C2. This makes it possible to adjust the brightness of the visible light signal arbitrarily.
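One plausible reading of the dimming part is that the ratio of the High time C1 to the total C1 + C2 sets the average brightness. The sketch below follows that assumption for illustration only; it is not a rule stated in this summary.

```python
def dimming_part(total_us, brightness_ratio):
    # Split a fixed dimming interval into a High run (C1) and a Low run (C2)
    # so that C1 / (C1 + C2) approximates the requested average brightness.
    # This interpretation is an assumption for illustration.
    c1 = round(total_us * brightness_ratio)
    c2 = total_us - c1
    return [(1, c1), (0, c2)]  # (luminance, duration in microseconds)

print(dimming_part(1000, 0.3))  # [(1, 300), (0, 700)]
```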
FIG. 230B is a block diagram illustrating a configuration of the signal generation device according to this embodiment.
The signal generation device D10 in this embodiment is a signal generation device that generates a visible light signal transmitted by a change in luminance of a light source included in a transmitter, and includes a preamble generation unit D11, a data generation unit D12, and a combining unit D13.
The preamble generation unit D11 generates a preamble, which is data in which the first and second luminance values, different from each other, each appear alternately along the time axis for a predetermined length of time.
The data generation unit D12 generates first data as data in which the first and second luminance values appear alternately along the time axis, by determining the length of time that each of the first and second luminance values continues according to a first method that depends on the signal to be transmitted.
The combining unit D13 generates a visible light signal by combining the preamble and the first data.
By transmitting the visible light signal generated in this way, the number of received packets can be increased and the reliability can be improved, as shown in FIGS. 191 to 193. As a result, communication between a wide variety of devices becomes possible.
(Summary of Modification 1 of Embodiment 20)
As in Modification 1 of Embodiment 20, the visible light signal generation method may further determine whether or not to divide the original data according to the bit length of the original data, and generate at least one packet from the original data by performing processing that depends on the result of the determination. Each of the at least one packet may then be converted into a signal to be transmitted.
In the conversion into the signal to be transmitted, as shown in FIG. 215, for each target packet included in the at least one packet, the data included in the target packet is assigned to the bits of the codes w1, w2, w3, and w4, each of which consists of three bits from a first bit to a third bit, so that the target packet is converted into a signal to be transmitted that includes the numerical values indicated by the codes w1, w2, w3, and w4.
In this data assignment, at least part of the main data included in the target packet is assigned to the first bit string consisting of the first bits of the codes w1 to w4. The value of the stop bit included in the target packet is assigned to the second bit of the code w1. Part of the main data included in the target packet, or at least part of the address data included in the target packet, is assigned to the second bit string consisting of the second bits of the codes w2 to w4. The sub data included in the target packet is assigned to the third bit string consisting of the third bits of the codes w1 to w4.
Here, the stop bit indicates whether or not the target packet is the last of the generated at least one packet. The address data indicates, as an address, the position of the target packet in the order of the generated at least one packet. The main data and the sub data are each data for restoring the original data.
When a and b are constants and the numerical values indicated by the codes w1, w2, w3, and w4 are W1, W2, W3, and W4, the above first method determines the lengths of time that the first or second luminance value continues in the first data as a + b × W1, a + b × W2, a + b × W3, and a + b × W4, for example as shown in FIG. 212.
For example, in each of the codes w1 to w4, let b1 be the value of the first bit, b2 the value of the second bit, and b3 the value of the third bit. In this case, each of the values W1 to W4 indicated by the codes w1 to w4 is, for example, b1 × 2^0 + b2 × 2^1 + b3 × 2^2. Therefore, for the codes w1 to w4, setting the second bit to 1 makes the values W1 to W4 larger than setting the first bit to 1, and setting the third bit to 1 makes them larger still. When the values W1 to W4 indicated by the codes w1 to w4 are large, the length of time that each of the first and second luminance values continues (for example, D_Ri) becomes long, so that erroneous detection of the luminance of the visible light signal can be suppressed and reception errors can be reduced. Conversely, when the values W1 to W4 are small, the length of time that each of the first and second luminance values continues becomes short, so that erroneous detection of the luminance of the visible light signal occurs relatively easily.
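Putting the assignment rules and the value formula together, the following Python sketch converts the three bit strings of one packet into the four codes w1 to w4 and their run lengths. The field grouping follows the description of FIG. 215 as read here, the constants follow FIG. 212, and the example bit values are arbitrary.

```python
def build_codes(first_bits, second_bits, third_bits):
    # first_bits:  first bits of w1..w4  (main data)
    # second_bits: second bits of w1..w4 (stop bit for w1; main or address data for w2..w4)
    # third_bits:  third bits of w1..w4  (sub data, possibly an address bit)
    codes = []
    for b1, b2, b3 in zip(first_bits, second_bits, third_bits):
        w = b1 * 1 + b2 * 2 + b3 * 4   # W = b1*2^0 + b2*2^1 + b3*2^2
        codes.append(w)
    return codes

def durations_us(codes, a=120, b=30):
    # A larger W gives a longer luminance run, which is harder to misdetect.
    return [a + b * w for w in codes]

codes = build_codes([1, 0, 1, 1], [0, 1, 0, 0], [0, 0, 0, 0])  # short mode: third bits all 0
print(codes, durations_us(codes))  # [1, 2, 1, 1] [150, 180, 150, 150]
```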
Therefore, in Modification 1 of Embodiment 20, the stop bit and the address, which are important for receiving the original data, are preferentially assigned to the second bits of the codes w1 to w4, so that reception errors can be reduced. In addition, the code w1 defines the length of time that the High or Low luminance value closest to the preamble continues. That is, since the code w1 is closer to the preamble than the other codes w2 to w4, it is more likely to be received correctly than those other codes. Therefore, in Modification 1 of Embodiment 20, reception errors can be further suppressed by assigning the stop bit to the second bit of the code w1.
In Modification 1 of Embodiment 20, the main data is preferentially assigned to the first bit string, in which erroneous detection occurs relatively easily. However, if an error correction code (parity) is included in the main data, reception errors of the main data can be suppressed.
Furthermore, in Modification 1 of Embodiment 20, the sub data is assigned to the third bit string consisting of the third bits of the codes w1 to w4. Therefore, if the sub data is set to 0, the lengths of time that the High and Low luminance values defined by the codes w1 to w4 continue can be greatly shortened. As a result, a so-called short mode, in which the transmission time of the visible light signal per packet is greatly shortened, can be realized. In this short mode, since the transmission time is short as described above, a packet can easily be received even from far away. Therefore, the communication distance of visible light communication can be increased.
In Modification 1 of Embodiment 20, as shown in FIG. 217, in the generation of at least one packet, two packets are generated by dividing the original data into two. In the data assignment, when the packet that is not the last of the two packets is converted into a signal to be transmitted as the target packet, part of the main data included in that packet is assigned to the second bit string, without assigning any of the address data to it.
For example, the packet (Packet1) that is not the last packet shown in FIG. 217 contains no address data, and its main data Da(1) has 7 bits. Therefore, as shown in FIG. 215, data Da1 to Da4 included in the 7-bit main data Da(1) are assigned to the first bit string, and data Da5 to Da7 are assigned to the second bit string.
In this way, when the original data is divided into two packets, the packet that is not the last one, that is, the first packet, needs no address data as long as it has the start bit (S = 0). Therefore, all the bits of the second bit string can be used for the main data, and the amount of data included in the packet can be increased.
In the data assignment in Modification 1 of Embodiment 20, among the three bits included in the second bit string, the bits at the head in the arrangement order are preferentially used for assigning the address data, and when all of the address data is assigned to one or two bits at the head of the second bit string, part of the main data is assigned to the one or two bits of the second bit string to which no address data is assigned. For example, in Packet1 of FIG. 218, the 1-bit address data A1 is assigned to the single bit at the head of the second bit string (the second bit of the code w2). In this case, the main data Da6 and Da5 are assigned to the two bits of the second bit string to which no address data is assigned (the second bits of the codes w3 and w4).
As a result, the second bit string can be shared between the address data and part of the main data, which increases the flexibility of the packet configuration.
In the data assignment in Modification 1 of Embodiment 20, when all of the address data cannot be assigned to the second bit string, the remaining portion of the address data, excluding the portion assigned to the second bit string, is assigned to one of the bits of the third bit string. For example, in Packet3 of FIG. 218, all of the 4-bit address data A1 to A4 cannot be assigned to the second bit string. In this case, the remaining portion A4 of the address data A1 to A4, excluding the portions A1 to A3 assigned to the second bit string, is assigned to the last bit of the third bit string (the third bit of the code w4), as sketched below.
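The allocation order described above can be illustrated with the following Python sketch. It is purely illustrative: the choice of the last slot of the third bit string for the overflowing address bit follows the FIG. 218 example as read here, and the exact ordering of the main-data fill bits is not meant to be definitive.

```python
def allocate_second_and_third(address_bits, main_fill_bits, sub_bits):
    # Second bit string (w2..w4, 3 slots): address bits first, from the head;
    # any remaining slots are filled with main data.
    second = (address_bits[:3] + main_fill_bits)[:3]
    # An address bit that does not fit goes into the third bit string
    # (here, into its last slot, as in the Packet3 example of FIG. 218).
    third = list(sub_bits[:4])
    overflow = address_bits[3:]
    if overflow:
        third[-1] = overflow[0]
    return second, third

print(allocate_second_and_third([1, 0], [0, 1, 1], [0, 0, 0, 0]))        # 2 address bits fit
print(allocate_second_and_third([1, 0, 1, 1], [], [0, 0, 0, 0]))         # 4th bit spills over
```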
This makes it possible to assign the address data appropriately to the codes w1 to w4.
In the data assignment in Modification 1 of Embodiment 20, when the last packet of the at least one packet is converted into a signal to be transmitted as the target packet, the address data is assigned to the second bit string and to one of the bits included in the third bit string. For example, the address data of the last packet in FIGS. 217 to 226 has 4 bits. In this case, the 4-bit address data A1 to A4 is assigned to the second bit string and to the last bit of the third bit string (the third bit of the code w4).
This makes it possible to assign the address data appropriately to the codes w1 to w4.
In the generation of at least one packet in Modification 1 of Embodiment 20, the original data is divided into two to generate two pieces of divided original data, and an error correction code is generated for each of the two pieces of divided original data. Two or more packets are then generated using the two pieces of divided original data and the error correction codes generated for them. When generating the error correction code for each of the two pieces of divided original data, if the number of bits of one of them is less than the number of bits required for generating the error correction code, that piece of divided original data is padded, and the error correction code is generated for the padded divided original data. For example, as shown in FIG. 223, when a parity is generated by a Reed-Solomon code for the divided original data Datab, if Datab has only 15 bits and thus falls short of 16 bits, Datab is padded, and the parity is generated by the Reed-Solomon code for the padded divided original data (16 bits).
As a result, an appropriate error correction code can be generated even when the number of bits of the divided original data is less than the number of bits required for generating the error correction code.
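The padding rule can be illustrated with a small Python sketch. The 16-bit boundary comes from the FIG. 223 example, and reed_solomon_parity is a hypothetical placeholder rather than a real library call.

```python
def reed_solomon_parity(bits):
    # Hypothetical placeholder for the Reed-Solomon parity computation
    # described in the text (a real implementation would compute it).
    return [0] * 14

def parity_with_padding(divided_bits, required_len=16, pad_bit=0):
    # If the divided original data is shorter than the length required by the
    # error correction code (e.g. 15 bits instead of 16), pad it first.
    padded = list(divided_bits)
    while len(padded) < required_len:
        padded.append(pad_bit)
    return reed_solomon_parity(padded)

parity = parity_with_padding([1] * 15)  # 15-bit Datab padded to 16 bits
```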
In the data assignment in Modification 1 of Embodiment 20, when the sub data indicates 0, 0 is assigned to all the bits included in the third bit string. This makes it possible to realize the short mode described above and to increase the communication distance of visible light communication.
(Embodiment 21)
FIG. 231 is a diagram illustrating a method of receiving a high-frequency visible light signal in this embodiment.
When receiving a high-frequency visible light signal, the receiver provides a guard time (guard interval) at the rise and fall of the visible light signal, for example as shown in (a) of FIG. 231. The receiver then does not use the high-frequency signal within the guard time; instead, it compensates for it by copying the high-frequency signal received immediately before the guard time. The high-frequency signal superimposed on the visible light signal may be modulated by OFDM (Orthogonal Frequency Division Multiplexing).
When the receiver separates the high-frequency signal indicating the High luminance value and the high-frequency signal indicating the Low luminance value from the high-frequency visible light signal, it automatically adjusts the gains of these high-frequency signals (Automatic Gain Control). As a result, the gains (luminance values) of the high-frequency signals are unified.
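A hedged sketch of the two receiver-side operations described above, in Python with NumPy: filling each guard interval with a copy of the samples received just before it, and normalizing the gain of the separated High-level and Low-level high-frequency signals. The sample-based representation and span sizes are assumptions.

```python
import numpy as np

def fill_guard(samples, guard_spans):
    # Replace each guard-time span [start, stop) with a copy of the samples
    # received immediately before it (each span is assumed to be preceded by
    # at least that many samples).
    out = samples.copy()
    for start, stop in guard_spans:
        length = stop - start
        out[start:stop] = out[start - length:start]
    return out

def auto_gain(signal, target_rms=1.0):
    # Simple automatic gain control: scale the separated High-level or
    # Low-level high-frequency signal to a common RMS level.
    rms = float(np.sqrt(np.mean(signal.astype(float) ** 2)))
    return signal if rms == 0.0 else signal * (target_rms / rms)
```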
FIG. 232A is a diagram showing another method of receiving a high-frequency visible light signal in this embodiment.
A receiver that receives a high-frequency visible light signal includes an image sensor as in the above embodiments, and further includes a DMD (Digital Mirror Device) element and photosensors. Each photosensor is a photodiode or an avalanche photodiode.
The receiver uses the image sensor to capture an image of the transmitter (light source) that transmits the high-frequency visible light signal, and thereby obtains a bright line image including stripe patterns of bright lines. These bright line stripe patterns appear due to the luminance change of the signal other than the high-frequency signal in the high-frequency visible light signal, that is, the visible light signal shown in FIG. 188. The receiver identifies the positions (x1, y1) and (x2, y2) of the bright line stripe patterns in the bright line image, and then identifies the micromirrors of the DMD element corresponding to the positions (x1, y1) and (x2, y2). These micromirrors receive the light of the high-frequency visible light signal that produces the bright line stripe patterns. The receiver therefore adjusts the angle of each micromirror so that, among the plurality of micromirrors included in the DMD element, only the light reflected by the identified micromirrors is received by the photosensors. That is, the receiver turns ON the micromirror corresponding to the position (x1, y1) so that only the light reflected by that micromirror is received by photosensor 1, and turns ON the micromirror corresponding to the position (x2, y2) so that only the light reflected by that micromirror is received by photosensor 2. The receiver turns OFF every micromirror other than the identified ones. The light reflected by the micromirrors that are turned OFF is absorbed by a light absorber (black body), while the high-frequency visible light signal is appropriately received by the photosensors via the micromirrors that are turned ON. Each micromirror of the DMD element has its tilt angle switched between two positions by switching between ON and OFF: when a micromirror is ON, it directs the reflected light toward a photosensor, and when it is OFF, it directs the reflected light toward the light absorber.
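The mirror-selection step can be sketched as follows in Python. The proportional mapping from bright-line-image coordinates to micromirror indices is an assumption, since the actual mapping depends on the optics.

```python
def select_mirrors(stripe_positions, image_size, dmd_size):
    # Map each detected bright-line stripe position (x, y) in the bright line
    # image to the corresponding DMD micromirror, turn it ON, and leave every
    # other mirror OFF (its reflection then goes to the light absorber).
    img_w, img_h = image_size
    dmd_w, dmd_h = dmd_size
    on = [[False] * dmd_w for _ in range(dmd_h)]
    for x, y in stripe_positions:
        mx = int(x * dmd_w / img_w)   # assumed simple proportional mapping
        my = int(y * dmd_h / img_h)
        on[my][mx] = True
    return on

mirror_states = select_mirrors([(640, 360), (1200, 500)], (1920, 1080), (1024, 768))
```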
As shown in FIG. 232A, the receiver may also include a half mirror and light emitting elements. Light emitting element 1 transmits a visible light signal (or a high-frequency visible light signal) by emitting light and changing its luminance. The light output from light emitting element 1 is reflected by the half mirror and is further reflected, in the DMD element, by the ON micromirror corresponding to the position (x1, y1). As a result, the visible light signal from light emitting element 1 is transmitted to the transmitter corresponding to the bright line stripe pattern at the position (x1, y1), so the receiver and that transmitter can perform bidirectional communication. Similarly, the light output from light emitting element 2 is reflected by the half mirror and is further reflected, in the DMD element, by the ON micromirror corresponding to the position (x2, y2). As a result, the visible light signal from light emitting element 2 is transmitted to the transmitter corresponding to the bright line stripe pattern at the position (x2, y2), so the receiver and that transmitter can also perform bidirectional communication.
Thus, even when there are multiple transmitters (light sources) captured by the image sensor, the receiver can perform bidirectional communication with these transmitters simultaneously and at high speed. For example, if the receiver includes 100 photosensors each capable of receiving at 10 Gbps and communicates with 100 transmitters, a communication speed of 1 Tbps can be achieved.
FIG. 232B is a diagram showing yet another method of receiving a high-frequency visible light signal in this embodiment.
The receiver includes, for example, lenses L1 and L2, a plurality of half mirrors, a DMD element, an image sensor, a light absorber (black body), a processing unit, a DMD control unit, photosensors 1 and 2, and light emitting elements 1 and 2.
Such a receiver performs bidirectional communication with two cars on the same principle as the example shown in FIG. 232A. The two cars transmit high-frequency visible light signals by outputting light from their headlights and changing the luminance of the headlights. Another car outputs normal light (light whose luminance does not change) from its headlight.
The image sensor receives these high-frequency visible light signals and the normal light via the lens L1. As in the example shown in FIG. 232A, a bright line image including the bright line stripe patterns produced by the high-frequency visible light signals is thereby obtained. The processing unit identifies the positions of these stripe patterns in the bright line image. The DMD control unit identifies, among the plurality of micromirrors included in the DMD element, the micromirrors corresponding to the identified stripe pattern positions and turns those micromirrors ON.
As a result, the high-frequency visible light signals that pass from each of the two cars through the lens L1 and the half mirror are reflected by the micromirrors of the DMD element toward the lens L2. The normal light from the headlight of the other car does not produce a bright line stripe pattern, so even after passing through the lens L1 and the half mirror it is reflected by an OFF micromirror of the DMD element. The light reflected by the OFF micromirror is absorbed by the light absorber (black body).
The high-frequency visible light signals that pass through the lens L2 pass through the half mirrors and are received by photosensor 1 or 2, so the high-frequency visible light signal from each car can be received. Furthermore, if light emitting elements 1 and 2 output a visible light signal (or a high-frequency visible light signal) toward the half mirror, that visible light signal is reflected by the half mirror, passes through the lens L2, and is further reflected by an ON micromirror of the DMD element. As a result, the visible light signals from light emitting elements 1 and 2 are transmitted, via the half mirror and the lens L1, to the car that transmitted the high-frequency visible light signal. In other words, the receiver can perform bidirectional communication with multiple cars that transmit high-frequency visible light signals.
In this way, the receiver in this embodiment acquires a bright line image with the image sensor and identifies the position of the bright line stripe pattern in the bright line image. The receiver then identifies, among the plurality of micromirrors included in the DMD element, the micromirror corresponding to the position of the stripe pattern. By turning that micromirror ON, the receiver receives the high-frequency visible light signal with a photosensor. Furthermore, the receiver can transmit a visible light signal to the transmitter by outputting it from a light emitting element and having it reflected by the micromirror that has been turned ON.
In the examples shown in FIGS. 232A and 232B, half mirrors and lenses are used as the optical devices, but any optical devices having equivalent functions may be used. The arrangement of the DMD element, half mirrors, lenses, and so on is only an example, and they may be arranged in any manner. In the examples shown in FIGS. 232A and 232B, the receiver includes two pairs of a photosensor and a light emitting element, but it may include only one pair or three or more pairs. One light emitting element may also transmit a visible light signal to a plurality of ON micromirrors, which allows the receiver to transmit the same visible light signal to a plurality of transmitters simultaneously. The receiver may also include only some of the components shown in FIGS. 232A and 232B rather than all of them.
FIG. 233 is a diagram showing a method of outputting a high-frequency signal in this embodiment.
A signal output device that outputs the high-frequency signal superimposed on the visible light signal shown in FIG. 188 includes, for example, a blue laser and a phosphor. That is, as in the example shown in FIG. 114A, the signal output device irradiates the phosphor with high-frequency blue laser light from the blue laser, and thereby outputs high-frequency natural light as the high-frequency signal.
(Embodiment 22)
In this embodiment, an autonomous flight device (also referred to as a drone) that uses the visible light communication of each of the above embodiments will be described.
FIG. 234 is a diagram for explaining the autonomous flight device in this embodiment.
The autonomous flight device 1921 in this embodiment is housed inside a surveillance camera 1922. For example, when the surveillance camera 1922 captures an image of a suspicious person, the door of the surveillance camera 1922 opens, and the autonomous flight device 1921 housed inside takes off from the surveillance camera 1922 and starts tracking the suspicious person. The autonomous flight device 1921 includes a small camera and performs the tracking so that the image of the suspicious person captured by the surveillance camera 1922 is also captured by the small camera. When the autonomous flight device 1921 detects that the power for flight and other operations is running low, it returns to the surveillance camera 1922 and is stored inside it. At this time, if another autonomous flight device 1921 is stored in the surveillance camera 1922, that other autonomous flight device 1921 starts tracking the suspicious person in place of the autonomous flight device 1921 with insufficient power. The autonomous flight device 1921 with insufficient power is supplied with power by a wireless power supply device 1921a provided in the surveillance camera 1922. The power supply by the wireless power supply device 1921a is performed, for example, according to the Qi standard.
The small camera of the autonomous flight device 1921 and the surveillance camera 1922 can receive the visible light signal of each of the above embodiments and can operate in accordance with the received visible light signal. Furthermore, if at least one of the autonomous flight device 1921 and the surveillance camera 1922 is equipped with a visible light signal transmitter, visible light communication can be performed between the autonomous flight device 1921 and the surveillance camera 1922. As a result, the suspicious person can be tracked more efficiently.
(Embodiment 23)
In this embodiment, a display method and related techniques for realizing AR (Augmented Reality) using a light ID will be described.
FIG. 235 is a diagram illustrating an example in which the receiver in this embodiment displays an AR image.
The receiver 200 in this embodiment is a receiver including the image sensor and the display 201 of any of Embodiments 1 to 22, and is configured, for example, as a smartphone. By capturing a subject with its image sensor, such a receiver 200 acquires the captured display image Pa, which is the normal captured image described above, and the decoding image, which is the visible light communication image or bright line image described above.
Specifically, the image sensor of the receiver 200 captures an image of the transmitter 100 configured as a station name sign. The transmitter 100 is the transmitter of any of Embodiments 1 to 22 and includes one or more light emitting elements (for example, LEDs). The transmitter 100 changes its luminance by blinking the one or more light emitting elements, and transmits a light ID (light identification information) by means of this luminance change. This light ID is the visible light signal described above.
The receiver 200 acquires the captured display image Pa in which the transmitter 100 appears by capturing the transmitter 100 with a normal exposure time, and acquires the decoding image by capturing the transmitter 100 with a communication exposure time shorter than the normal exposure time. The normal exposure time is the exposure time in the normal imaging mode described above, and the communication exposure time is the exposure time in the visible light communication mode described above.
The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to a server and acquires from the server an AR image P1 and recognition information corresponding to the light ID. The receiver 200 recognizes, as a target region, the region of the captured display image Pa that corresponds to the recognition information. For example, the receiver 200 recognizes the region in which the station name sign, that is, the transmitter 100, appears as the target region. The receiver 200 then superimposes the AR image P1 on the target region and displays on the display 201 the captured display image Pa with the AR image P1 superimposed. For example, when the station name sign that is the transmitter 100 shows the station name in Japanese as "京都駅", the receiver 200 acquires an AR image P1 in which the station name is written in English, that is, an AR image P1 reading "Kyoto Station". In this case, since the AR image P1 is superimposed on the target region of the captured display image Pa, the captured display image Pa can be displayed as if a station name sign with the station name written in English actually existed. As a result, a user who understands English can easily understand the station name written on the station name sign, which is the transmitter 100, by looking at the captured display image Pa, even if the user cannot read Japanese.
For example, the recognition information may be an image of the recognition target (for example, the image of the station name sign described above), or may be feature points and feature amounts of that image. The feature points and feature amounts are obtained by image processing such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), or AKAZE (Accelerated KAZE). Alternatively, the recognition information may be a white rectangular image similar to the image of the recognition target, and may further indicate the aspect ratio of that rectangle. Alternatively, the recognition information may be random dots appearing in the image of the recognition target. Furthermore, the recognition information may indicate the orientation of the white rectangle, the random dots, or the like with respect to a predetermined direction, for example the direction of gravity.
The receiver 200 recognizes, as the target region, the region of the captured display image Pa that corresponds to such recognition information. Specifically, if the recognition information is an image, the receiver 200 recognizes a region similar to that image as the target region. If the recognition information consists of feature points and feature amounts obtained by image processing, the receiver 200 performs that image processing on the captured display image Pa to detect feature points and extract feature amounts, and recognizes as the target region a region of the captured display image Pa whose feature points and feature amounts are similar to those given as the recognition information. If the recognition information indicates a white rectangle and its orientation, the receiver 200 first detects the direction of gravity with its built-in acceleration sensor, and then recognizes as the target region a region of the captured display image Pa, oriented with respect to the direction of gravity, that is similar to a white rectangle facing the direction indicated by the recognition information.
Here, the recognition information may include reference information for specifying a reference region in the captured display image Pa and target information indicating the relative position of the target region with respect to the reference region. The reference information is, as described above, the image of the recognition target, its feature points and feature amounts, a white rectangular image, random dots, or the like. In this case, when recognizing the target region, the receiver 200 first specifies the reference region in the captured display image Pa based on the reference information. The receiver 200 then recognizes, as the target region, the region of the captured display image Pa located at the relative position indicated by the target information with respect to the position of the reference region. The target information may indicate that the target region is at the same position as the reference region. Because the recognition information includes the reference information and the target information in this way, the target region can be recognized over a wide range, and the server can freely set the location where the AR image is superimposed and inform the receiver 200 of it.
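A hedged sketch of this recognition step in Python follows. It treats the reference region as a rectangle already found by matching the reference information; the clamping behaviour and the coordinate convention are assumptions for illustration, not details fixed by this description.

```python
def recognize_target_region(captured_image_size, reference_region, relative_position=(0, 0)):
    # reference_region: (x, y, w, h) of the region found in the captured
    # display image by matching the reference information (an image, or its
    # feature points and feature amounts).
    # relative_position: (dx, dy) of the target region relative to the
    # reference region; (0, 0) means the two regions coincide.
    x, y, w, h = reference_region
    dx, dy = relative_position
    img_w, img_h = captured_image_size
    tx = max(0, min(x + dx, img_w - w))  # clamping to the image is an assumption
    ty = max(0, min(y + dy, img_h - h))
    return (tx, ty, w, h)

# Example: the reference region is the white frame of the sign and the AR image
# is drawn directly over it.
print(recognize_target_region((1920, 1080), (400, 300, 320, 180)))
```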
The reference information may also indicate that the reference region in the captured display image Pa is the region of the captured display image in which a display appears. In this case, if the transmitter 100 is configured as a display such as a television, the target region can be recognized with reference to the region in which that display appears.
In other words, the receiver 200 in this embodiment specifies a reference image and an image recognition method based on the light ID. The image recognition method is a method for recognizing the captured display image Pa, for example geometric feature extraction, spectral feature extraction, or texture feature extraction. The reference image is data indicating reference feature amounts. The feature amounts are, for example, the feature amounts of the white outer frame of an image, and may specifically be data expressing the features of the image as a vector. The receiver 200 extracts feature amounts from the captured display image Pa according to the image recognition method and compares them with the feature amounts of the reference image, thereby finding the above-described reference region or target region in the captured display image Pa.
The image recognition methods may include, for example, a location-based method, a marker-based method, and a markerless method. The location-based method uses GPS position information (that is, the position of the receiver 200), and the target region is recognized in the captured display image Pa based on that position information. The marker-based method uses a marker composed of black and white graphics, such as a two-dimensional barcode, as a mark for identifying the target; that is, the target region is recognized based on the marker appearing in the captured display image Pa. The markerless method extracts feature points or feature amounts from the captured display image Pa by image analysis and identifies the position and region of the target based on the extracted feature points or feature amounts. That is, when the image recognition method is a markerless method, it is, for example, the geometric feature extraction, spectral feature extraction, or texture feature extraction described above.
Such a receiver 200 may specify the reference image and the image recognition method by receiving the light ID from the transmitter 100 and acquiring from the server the reference image and image recognition method associated with that light ID (hereinafter referred to as the received light ID). That is, the server stores a plurality of sets each including a reference image and an image recognition method, and each of the sets is associated with a different light ID. This makes it possible to identify, among the sets stored in the server, the one set associated with the received light ID, and thus to increase the speed of the image processing for superimposing the AR image. The receiver 200 may acquire the reference image and related data associated with the received light ID by querying the server, or may acquire the reference image associated with the received light ID from among a plurality of reference images it holds in advance.
The server may also hold, for each light ID, relative position information associated with that light ID, together with the reference image, the image recognition method, and the AR image. The relative position information is, for example, information indicating the relative positional relationship between the reference region and the target region described above. Thus, when the receiver 200 transmits the received light ID to the server to make a query, it acquires the reference image, image recognition method, AR image, and relative position information associated with that received light ID. In this case, the receiver 200 specifies the above-described reference region in the captured display image Pa based on the reference image and the image recognition method. The receiver 200 then recognizes, as the above-described target region, the region located in the direction and at the distance indicated by the relative position information from the position of the reference region, and superimposes the AR image on that target region. If there is no relative position information, the receiver 200 may recognize the above-described reference region as the target region and superimpose the AR image on it. That is, instead of acquiring relative position information, the receiver 200 may hold in advance a program for displaying the AR image based on the reference image and, for example, display the AR image inside the white frame that serves as the reference region. In this case, relative position information is unnecessary.
 基準画像、相対位置情報、AR画像、および画像認識方法の保持または取得には、以下の4つのバリエーション(1)~(4)がある。 There are the following four variations (1) to (4) for holding or acquiring the reference image, relative position information, AR image, and image recognition method.
 (1)サーバは、基準画像、相対位置情報、AR画像、および画像認識方法からなるセットを複数保持している。受信機200は、それらのセットの中から、受信光IDに対応付けられた1つのセットを取得する。 (1) The server holds a plurality of sets including a reference image, relative position information, an AR image, and an image recognition method. The receiver 200 acquires one set associated with the received light ID from these sets.
 (2)サーバは、基準画像およびAR画像からなるセットを複数保持している。受信機200は、予め定められた相対位置情報および画像認識方法を用い、かつ、それらのセットの中から、受信光IDに対応付けられた1つのセットを取得する。または、受信機200は、相対位置情報および画像認識方法からなる複数のセットを予め保持し、その複数のセットの中から、受信光IDに対応付けられた1つのセットを選択してもよい。この場合、受信機200は、受信光IDをサーバに送信して問い合わせ、その受信光IDに対応する相対位置情報および画像認識方法を特定するための情報をサーバから取得してもよい。そして、受信機200は、予め保持している、それぞれ相対位置情報および画像認識方法からなる複数のセットの中から、そのサーバから取得された情報に基づいて1つのセットを選択する。あるいは、受信機200は、サーバに問い合わせることなく、予め保持している、それぞれ相対位置情報および画像認識方法からなる複数のセットの中から、受信光IDに対応付けられた1つのセットを選択してもよい。 (2) The server holds a plurality of sets including a reference image and an AR image. The receiver 200 uses a predetermined relative position information and an image recognition method, and acquires one set associated with the received light ID from the set. Alternatively, the receiver 200 may hold a plurality of sets including the relative position information and the image recognition method in advance, and select one set associated with the received light ID from the plurality of sets. In this case, the receiver 200 may inquire by transmitting the received light ID to the server, and acquire relative position information corresponding to the received light ID and information for specifying the image recognition method from the server. Then, the receiver 200 selects one set based on the information acquired from the server from among a plurality of sets each having the relative position information and the image recognition method. Alternatively, the receiver 200 selects one set associated with the received light ID from a plurality of sets each including the relative position information and the image recognition method stored in advance without inquiring of the server. May be.
 (3) The receiver 200 holds a plurality of sets, each consisting of a reference image, relative position information, an AR image, and an image recognition method, and selects one set from among them. As in (2) above, the receiver 200 may select the set by querying the server, or may simply select the set associated with the received light ID.
 (4) The receiver 200 holds a plurality of sets, each consisting of a reference image and an AR image, and selects the one set associated with the received light ID. The receiver 200 uses a predetermined image recognition method and predetermined relative position information.
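 As an illustration only, the following Python sketch models variations (1) to (4) as different ways of splitting one lookup between a server-side table and data held in advance by the receiver. Every name in it (RecognitionSet, SERVER_TABLE, RECEIVER_TABLE, the sample IDs and file names) is an assumption made for this sketch and does not appear in the original description.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RecognitionSet:                       # hypothetical container for one set
    reference_image: str                    # e.g. a file name or image handle
    relative_position: Optional[tuple]      # (dx, dy) offset, or None if predetermined
    ar_image: str
    recognition_method: Optional[str]

# Hypothetical server-side table keyed by light ID (used in variations (1) and (2)).
SERVER_TABLE = {0x01: RecognitionSet("station_sign.png", (0, -40), "timetable.png", "white_frame")}

# Hypothetical data held by the receiver in advance (used in variations (2) to (4)).
RECEIVER_TABLE = {0x01: RecognitionSet("station_sign.png", None, "timetable.png", None)}
DEFAULT_POSITION = (0, 0)
DEFAULT_METHOD = "white_frame"

def lookup(light_id: int, variation: int) -> RecognitionSet:
    if variation == 1:                      # (1) everything comes from the server
        return SERVER_TABLE[light_id]
    if variation == 2:                      # (2) server supplies the images, receiver the rest
        s = SERVER_TABLE[light_id]
        return RecognitionSet(s.reference_image, DEFAULT_POSITION, s.ar_image, DEFAULT_METHOD)
    if variation == 3:                      # (3) everything held by the receiver
        return RECEIVER_TABLE[light_id]
    r = RECEIVER_TABLE[light_id]            # (4) receiver holds the images, uses defaults
    return RecognitionSet(r.reference_image, DEFAULT_POSITION, r.ar_image, DEFAULT_METHOD)

print(lookup(0x01, 2))
```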
 FIG. 236 is a diagram illustrating an example of a display system in the present embodiment.
 The display system in the present embodiment includes, for example, the transmitter 100, which is the station name sign described above, the receiver 200, and the server 300.
 To display a captured display image on which an AR image is superimposed as described above, the receiver 200 first receives a light ID from the transmitter 100. The receiver 200 then transmits that light ID to the server 300.
 The server 300 holds, for each light ID, an AR image and recognition information associated with that light ID. When the server 300 receives a light ID from the receiver 200, it selects the AR image and recognition information associated with the received light ID and transmits them to the receiver 200. The receiver 200 receives the AR image and recognition information transmitted from the server 300 and displays the captured display image on which the AR image is superimposed.
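 A minimal sketch of this exchange follows, assuming a dictionary-based stand-in for the server's table and for the network request; the table contents and image names are illustrative assumptions.

```python
# Hypothetical table on server 300: light ID -> (AR image, recognition information).
SERVER_300 = {
    0x2A: ("timetable_overlay.png", {"target": "white_frame"}),
}

def server_select(light_id):
    """Server 300: pick the AR image and recognition information associated with the light ID."""
    return SERVER_300[light_id]

def receiver_flow(received_light_id):
    """Receiver 200: send the light ID, receive the AR image and recognition information."""
    ar_image, recognition_info = server_select(received_light_id)
    # The receiver would now recognize the target region from recognition_info and
    # superimpose ar_image on it before displaying the captured display image.
    return ar_image, recognition_info

print(receiver_flow(0x2A))
```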
 FIG. 237 is a diagram illustrating another example of the display system in the present embodiment.
 The display system in the present embodiment includes, for example, the transmitter 100, which is the station name sign described above, the receiver 200, the first server 301, and the second server 302.
 To display a captured display image on which an AR image is superimposed as described above, the receiver 200 first receives a light ID from the transmitter 100. The receiver 200 then transmits that light ID to the first server 301.
 When the first server 301 receives the light ID from the receiver 200, it notifies the receiver 200 of a URL (Uniform Resource Locator) and a Key associated with the received light ID. Having received this notification, the receiver 200 accesses the second server 302 based on the URL and passes the Key to the second server 302.
 The second server 302 holds, for each Key, an AR image and recognition information associated with that Key. When the second server 302 receives a Key from the receiver 200, it selects the AR image and recognition information associated with that Key and transmits them to the receiver 200. The receiver 200 receives the AR image and recognition information transmitted from the second server 302 and displays the captured display image on which the AR image is superimposed.
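 The two-server indirection of FIG. 237 can be sketched in the same style; the URLs, keys, and table contents below are illustrative assumptions, and dictionary lookups stand in for the actual network requests.

```python
# Sketch of the FIG. 237 flow: light ID -> (URL, Key) at the first server,
# then Key -> (AR image, recognition information) at the second server.

FIRST_SERVER_301 = {          # light ID -> (URL of second server, Key)
    0x2A: ("https://second.example/ar", "key-123"),
}
SECOND_SERVER_302 = {         # Key -> (AR image, recognition information)
    "key-123": ("timetable_overlay.png", {"target": "white_frame"}),
}

def receiver_flow(received_light_id):
    # 1. The receiver sends the light ID to the first server and is told a URL and a Key.
    url, key = FIRST_SERVER_301[received_light_id]
    # 2. The receiver accesses the second server at that URL and hands over the Key.
    ar_image, recognition_info = SECOND_SERVER_302[key]
    # 3. The receiver superimposes the AR image on the target region and displays it.
    return url, ar_image, recognition_info

print(receiver_flow(0x2A))
```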
 FIG. 238 is a diagram illustrating another example of the display system in the present embodiment.
 The display system in the present embodiment includes, for example, the transmitter 100, which is the station name sign described above, the receiver 200, the first server 301, and the second server 302.
 To display a captured display image on which an AR image is superimposed as described above, the receiver 200 first receives a light ID from the transmitter 100. The receiver 200 then transmits that light ID to the first server 301.
 When the first server 301 receives the light ID from the receiver 200, it notifies the second server 302 of the Key associated with the received light ID.
 The second server 302 holds, for each Key, an AR image and recognition information associated with that Key. When the second server 302 receives a Key from the first server 301, it selects the AR image and recognition information associated with that Key and transmits them to the first server 301. When the first server 301 receives the AR image and recognition information from the second server 302, it transmits them to the receiver 200. The receiver 200 receives the AR image and recognition information transmitted from the first server 301 and displays the captured display image on which the AR image is superimposed.
 In the example described above, the second server 302 transmits the AR image and recognition information to the first server 301; instead, the second server 302 may transmit them directly to the receiver 200 without going through the first server 301.
 FIG. 239 is a flowchart illustrating an example of the processing operation of the receiver 200 in the present embodiment.
 First, the receiver 200 starts imaging with the normal exposure time and the communication exposure time described above (step S101). The receiver 200 then acquires a light ID by decoding the decoding image obtained by imaging with the communication exposure time (step S102). Next, the receiver 200 transmits the light ID to the server (step S103).
 The receiver 200 acquires, from the server, the AR image and recognition information corresponding to the transmitted light ID (step S104). Next, the receiver 200 recognizes, as the target region, the region of the captured display image obtained by imaging with the normal exposure time that corresponds to the recognition information (step S105). The receiver 200 then superimposes the AR image on that target region and displays the captured display image on which the AR image is superimposed (step S106).
 Next, the receiver 200 determines whether the imaging and the display of the captured display image should be terminated (step S107). If the receiver 200 determines that they should not be terminated (N in step S107), it further determines whether the acceleration of the receiver 200 is equal to or greater than a threshold (step S108). This acceleration is measured by an acceleration sensor provided in the receiver 200. If the receiver 200 determines that the acceleration is less than the threshold (N in step S108), it executes the processing from step S105. As a result, even when the captured display image displayed on the display 201 of the receiver 200 shifts, the AR image can follow the target region of the captured display image. If the receiver 200 determines that the acceleration is equal to or greater than the threshold (Y in step S108), it executes the processing from step S102. This prevents a region showing a subject other than the transmitter 100 from being erroneously recognized as the target region when the transmitter 100 is no longer visible in the captured display image.
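 The structure of this loop (re-recognize the target region while the acceleration stays small, re-decode once it exceeds the threshold) can be sketched as follows. The camera, sensor, server, and display are abstracted behind hypothetical helper functions, and the threshold value is an illustrative assumption; only the control flow follows the flowchart of FIG. 239.

```python
import random

ACCEL_THRESHOLD = 2.0  # m/s^2, illustrative value

# Hypothetical stand-ins for the camera, sensor, server and display.
def capture_decoding_image():      return "Pdec"
def capture_display_image():       return "Ppre"
def decode_light_id(pdec):         return 0x2A
def query_server(light_id):        return ("ar.png", {"target": "white_frame"})
def recognize_target(ppre, info):  return (10, 20, 100, 50)       # x, y, w, h
def show(ppre, ar, region):        print("display", ppre, ar, region)
def read_acceleration():           return random.uniform(0.0, 4.0)
def should_stop():                 return random.random() < 0.05

def receiver_main_loop():
    while True:
        light_id = decode_light_id(capture_decoding_image())       # S101-S102
        ar_image, recognition_info = query_server(light_id)        # S103-S104
        while True:
            ppre = capture_display_image()
            region = recognize_target(ppre, recognition_info)      # S105
            show(ppre, ar_image, region)                           # S106
            if should_stop():                                      # S107: terminate
                return
            if read_acceleration() >= ACCEL_THRESHOLD:             # S108: large motion
                break                                              # go back and re-decode

receiver_main_loop()
```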
 Thus, in the present embodiment, the AR image is displayed superimposed on the captured display image, so an image useful to the user can be displayed. Furthermore, the AR image can be superimposed on an appropriate target region while keeping the processing load low.
 That is, in general augmented reality (AR), a captured display image is compared against an enormous number of recognition target images stored in advance to determine whether the captured display image contains any of them. If a recognition target image is determined to be contained, the AR image corresponding to that recognition target image is superimposed on the captured display image, and the AR image is positioned with reference to the recognition target image. General augmented reality therefore involves a large amount of computation and a high processing load, because an enormous number of recognition target images must be compared against the captured display image and, in addition, the position of the recognition target image within the captured display image must be detected for alignment.
 In the display method according to the present embodiment, however, the light ID is acquired by decoding the decoding image obtained by imaging the subject. That is, the light ID transmitted from the transmitter, which is the subject, is received. Furthermore, the AR image and recognition information corresponding to this light ID are acquired from the server. The server therefore does not need to compare an enormous number of recognition target images against the captured display image, and can simply select the AR image associated in advance with the light ID and transmit it to the display device. This reduces the amount of computation and greatly suppresses the processing load. Furthermore, the AR image display processing can be sped up.
 In the present embodiment, the recognition information corresponding to this light ID is also acquired from the server. The recognition information is information for recognizing the target region, that is, the region of the captured display image on which the AR image is superimposed. The recognition information may be, for example, information indicating that a white rectangle is the target region. In that case the target region can be recognized easily, and the processing load can be suppressed further. That is, the processing load can be reduced further depending on the content of the recognition information. Moreover, since the server can set the content of the recognition information arbitrarily for each light ID, the balance between processing load and recognition accuracy can be kept appropriate.
 In the present embodiment, the receiver 200 transmits the light ID to the server and then acquires the AR image and recognition information corresponding to that light ID from the server, but at least one of the AR image and the recognition information may be acquired in advance. That is, the receiver 200 collectively acquires from the server, and stores, a plurality of AR images and a plurality of pieces of recognition information corresponding to a plurality of light IDs that may be received. Afterwards, when the receiver 200 receives a light ID, it selects the AR image and recognition information corresponding to that light ID from the AR images and recognition information stored in the receiver 200. This further speeds up the AR image display processing.
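 A sketch of this pre-fetching, assuming a hypothetical bulk request and an in-memory cache class (ArCache, fetch_bundle_from_server, and the sample IDs are all names invented for the sketch):

```python
def fetch_bundle_from_server(light_ids):
    # One hypothetical bulk request instead of one request per received light ID.
    return {lid: (f"ar_{lid}.png", {"target": "white_frame"}) for lid in light_ids}

class ArCache:
    def __init__(self, expected_light_ids):
        # Download and store the AR images and recognition information in advance.
        self._cache = fetch_bundle_from_server(expected_light_ids)

    def resolve(self, received_light_id):
        """Select the AR image and recognition information without a round trip to the server."""
        return self._cache.get(received_light_id)

cache = ArCache([0x2A, 0x2B, 0x2C])
print(cache.resolve(0x2B))
```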
 FIG. 240 is a diagram illustrating another example in which the receiver 200 in the present embodiment displays an AR image.
 As shown in FIG. 240, for example, the transmitter 100 is configured as a lighting device and transmits a light ID by changing its luminance while illuminating a facility guide plate 101. Because the guide plate 101 is illuminated by light from the transmitter 100, its luminance changes in the same way as the transmitter 100, and it transmits the light ID.
 By imaging the guide plate 101 illuminated by the transmitter 100, the receiver 200 acquires a captured display image Pb and a decoding image, as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the guide plate 101. The receiver 200 transmits the light ID to the server and acquires the AR image P2 and recognition information corresponding to that light ID from the server. The receiver 200 recognizes, as the target region, the region of the captured display image Pb that corresponds to the recognition information. For example, the receiver 200 recognizes the region showing the frame 102 on the guide plate 101 as the target region. The frame 102 is a frame for indicating the waiting time of the facility. The receiver 200 then superimposes the AR image P2 on the target region and displays the captured display image Pb with the AR image P2 superimposed on the display 201. The AR image P2 is, for example, an image containing the character string "30 minutes". In this case, because the AR image P2 is superimposed on the target region of the captured display image Pb, the receiver 200 can display the captured display image Pb as if a guide plate 101 on which the waiting time "30 minutes" is written actually existed. This makes it possible to inform the user of the receiver 200 of the waiting time simply and clearly without providing the guide plate 101 with a special display device.
 FIG. 241 is a diagram illustrating another example in which the receiver 200 in the present embodiment displays an AR image.
 As shown in FIG. 241, for example, the transmitter 100 consists of two lighting devices. The transmitter 100 transmits a light ID by changing its luminance while illuminating a facility guide plate 104. Because the guide plate 104 is illuminated by light from the transmitter 100, its luminance changes in the same way as the transmitter 100, and it transmits the light ID. The guide plate 104 shows the names of a plurality of facilities such as "ABC Land" and "Adventure Land".
 By imaging the guide plate 104 illuminated by the transmitter 100, the receiver 200 acquires a captured display image Pc and a decoding image, as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the guide plate 104. The receiver 200 transmits the light ID to the server and acquires the AR image P3 and recognition information corresponding to that light ID from the server. The receiver 200 recognizes, as the target region, the region of the captured display image Pc that corresponds to the recognition information. For example, the receiver 200 recognizes the region showing the guide plate 104 as the target region. The receiver 200 then superimposes the AR image P3 on the target region and displays the captured display image Pc with the AR image P3 superimposed on the display 201. The AR image P3 is, for example, an image showing the names of the facilities. In the AR image P3, the longer the waiting time of a facility, the smaller its name is displayed; conversely, the shorter the waiting time, the larger its name is displayed. In this case, because the AR image P3 is superimposed on the target region of the captured display image Pc, the receiver 200 can display the captured display image Pc as if a guide plate 104 on which each facility name is written at a size corresponding to its waiting time actually existed. This makes it possible to inform the user of the receiver 200 of the waiting time of each facility simply and clearly without providing the guide plate 104 with a special display device.
 FIG. 242 is a diagram illustrating another example in which the receiver 200 in the present embodiment displays an AR image.
 As shown in FIG. 242, for example, the transmitter 100 consists of two lighting devices. The transmitter 100 transmits a light ID by changing its luminance while illuminating a castle wall 105. Because the castle wall 105 is illuminated by light from the transmitter 100, its luminance changes in the same way as the transmitter 100, and it transmits the light ID. On the castle wall 105, for example, a small mark in the shape of a character's face is engraved as a hidden character 106.
 By imaging the castle wall 105 illuminated by the transmitter 100, the receiver 200 acquires a captured display image Pd and a decoding image, as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the castle wall 105. The receiver 200 transmits the light ID to the server and acquires the AR image P4 and recognition information corresponding to that light ID from the server. The receiver 200 recognizes, as the target region, the region of the captured display image Pd that corresponds to the recognition information. For example, the receiver 200 recognizes, as the target region, the region showing the part of the castle wall 105 that includes the hidden character 106. The receiver 200 then superimposes the AR image P4 on the target region and displays the captured display image Pd with the AR image P4 superimposed on the display 201. The AR image P4 is, for example, an image in the shape of the character's face and is sufficiently larger than the hidden character 106 shown in the captured display image Pd. In this case, because the AR image P4 is superimposed on the target region of the captured display image Pd, the receiver 200 can display the captured display image Pd as if a castle wall 105 engraved with a large mark in the shape of the character's face actually existed. This makes it possible to show the user of the receiver 200 the position of the hidden character 106 in an easy-to-understand way.
 FIG. 243 is a diagram illustrating another example in which the receiver 200 in the present embodiment displays an AR image.
 As shown in FIG. 243, for example, the transmitter 100 consists of two lighting devices. The transmitter 100 transmits a light ID by changing its luminance while illuminating a facility guide plate 107. Because the guide plate 107 is illuminated by light from the transmitter 100, its luminance changes in the same way as the transmitter 100, and it transmits the light ID. In addition, an infrared-blocking paint 108 is applied at a plurality of positions in the corners of the guide plate 107.
 By imaging the guide plate 107 illuminated by the transmitter 100, the receiver 200 acquires a captured display image Pe and a decoding image, as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the guide plate 107. The receiver 200 transmits the light ID to the server and acquires the AR image P5 and recognition information corresponding to that light ID from the server. The receiver 200 recognizes, as the target region, the region of the captured display image Pe that corresponds to the recognition information. For example, the receiver 200 recognizes the region showing the guide plate 107 as the target region.
 Specifically, the recognition information indicates that the rectangle circumscribing the plurality of infrared-blocking paint marks 108 is the target region. The infrared-blocking paint 108 blocks the infrared light contained in the light emitted from the transmitter 100, so the image sensor of the receiver 200 sees the infrared-blocking paint 108 as an image darker than its surroundings. The receiver 200 recognizes, as the target region, the rectangle circumscribing the infrared-blocking paint marks 108 that each appear as a dark image.
 The receiver 200 then superimposes the AR image P5 on the target region and displays the captured display image Pe with the AR image P5 superimposed on the display 201. The AR image P5 shows, for example, the schedule of events held at the facility of the guide plate 107. In this case, because the AR image P5 is superimposed on the target region of the captured display image Pe, the receiver 200 can display the captured display image Pe as if a guide plate 107 on which the event schedule is written actually existed. This makes it possible to inform the user of the receiver 200 of the facility's event schedule clearly without providing the guide plate 107 with a special display device.
 Note that an infrared-reflecting paint may be applied to the guide plate 107 instead of the infrared-blocking paint 108. The infrared-reflecting paint reflects the infrared light contained in the light emitted from the transmitter 100, so the image sensor of the receiver 200 sees the infrared-reflecting paint as an image brighter than its surroundings. In this case, the receiver 200 recognizes, as the target region, the rectangle circumscribing the infrared-reflecting paint marks that each appear as a bright image.
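 A minimal sketch of this recognition, assuming a grayscale image given as a list of rows and a simple brightness margin: pixels far darker (blocking paint) or far brighter (reflecting paint) than the image average are collected, and the rectangle circumscribing them is taken as the target region. The threshold, image format, and toy data are assumptions for illustration.

```python
def circumscribing_rectangle(image, darker=True, margin=60):
    """Return (left, top, right, bottom) of the rectangle circumscribing the paint marks."""
    mean = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    marks = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            is_mark = value < mean - margin if darker else value > mean + margin
            if is_mark:
                marks.append((x, y))
    if not marks:
        return None
    xs, ys = [x for x, _ in marks], [y for _, y in marks]
    return min(xs), min(ys), max(xs), max(ys)

# A toy 6x6 image with two dark paint marks near opposite corners.
img = [[200] * 6 for _ in range(6)]
img[1][1] = img[4][4] = 20
print(circumscribing_rectangle(img))   # -> (1, 1, 4, 4)
```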
 FIG. 244 is a diagram illustrating another example in which the receiver 200 in the present embodiment displays an AR image.
 The transmitter 100 is configured as a station name sign and is placed near a station exit guide plate 110. The station exit guide plate 110 has a light source and emits light but, unlike the transmitter 100, does not transmit a light ID.
 When the receiver 200 images the transmitter 100 and the station exit guide plate 110, it acquires a captured display image Ppre and a decoding image Pdec. Because the transmitter 100 changes in luminance and the station exit guide plate 110 emits light, a bright line pattern region Pdec1 corresponding to the transmitter 100 and a bright region Pdec2 corresponding to the station exit guide plate 110 appear in the decoding image Pdec. The bright line pattern region Pdec1 is a region consisting of a pattern of bright lines that appear as a result of exposing the exposure lines of the image sensor of the receiver 200 with the communication exposure time.
 Here, as described above, the recognition information includes reference information for identifying the reference region Pbas in the captured display image Ppre, and target information indicating the relative position of the target region Ptar with respect to the reference region Pbas. For example, the reference information indicates that the position of the reference region Pbas in the captured display image Ppre is the same as the position of the bright line pattern region Pdec1 in the decoding image Pdec. Furthermore, the target information indicates that the position of the target region is the position of the reference region.
 The receiver 200 therefore identifies the reference region Pbas in the captured display image Ppre based on the reference information. That is, the receiver 200 identifies, as the reference region Pbas, the region of the captured display image Ppre located at the same position as the bright line pattern region Pdec1 in the decoding image Pdec. Furthermore, the receiver 200 recognizes, as the target region Ptar, the region of the captured display image Ppre located at the relative position indicated by the target information with respect to the position of the reference region Pbas. In the example above, the target information indicates that the position of the target region Ptar is the position of the reference region Pbas, so the receiver 200 recognizes the reference region Pbas of the captured display image Ppre as the target region Ptar.
 The receiver 200 then superimposes the AR image P1 on the target region Ptar in the captured display image Ppre.
 In this way, the example above uses the bright line pattern region Pdec1 to recognize the target region Ptar. If, on the other hand, the region showing the transmitter 100 were to be recognized as the target region Ptar from the captured display image Ppre alone, without using the bright line pattern region Pdec1, erroneous recognition could occur. That is, the region of the captured display image Ppre showing the station exit guide plate 110, rather than the region showing the transmitter 100, could be erroneously recognized as the target region Ptar, because the images of the transmitter 100 and of the station exit guide plate 110 in the captured display image Ppre are similar. When the bright line pattern region Pdec1 is used, as in the example above, erroneous recognition is suppressed and the target region Ptar can be recognized accurately.
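 The following sketch restates this recognition rule for FIG. 244: the reference region Pbas is taken at the same position as the bright line pattern region Pdec1, and the target region Ptar is placed at the relative position given by the target information. Regions are represented as (x, y, w, h) tuples and the offset convention is an assumption made for the sketch.

```python
def reference_region_from_bright_lines(bright_line_pattern_region):
    # Same position and size as Pdec1, because Ppre and Pdec come from the same image sensor.
    return bright_line_pattern_region

def target_region(reference_region, target_info):
    x, y, w, h = reference_region
    dx, dy = target_info.get("offset", (0, 0))     # (0, 0) means "the reference region itself"
    return (x + dx, y + dy, w, h)

pdec1 = (120, 300, 200, 80)                        # bright line pattern region in Pdec
pbas = reference_region_from_bright_lines(pdec1)
print(target_region(pbas, {"offset": (0, 0)}))     # Ptar coincides with Pbas
```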
 FIG. 245 is a diagram illustrating another example in which the receiver 200 in the present embodiment displays an AR image.
 In the example shown in FIG. 244, the transmitter 100 transmits the light ID by changing the luminance of the entire station name sign, and the target information indicates that the position of the target region is the position of the reference region. In the present embodiment, however, the transmitter 100 may transmit the light ID by changing the luminance of light emitting elements arranged on part of the outer frame of the station name sign, without changing the luminance of the entire station name sign. Moreover, the target information only has to indicate the relative position of the target region Ptar with respect to the reference region Pbas; for example, it may indicate that the target region Ptar is located above the reference region Pbas (specifically, vertically upward).
 In the example shown in FIG. 245, the transmitter 100 transmits the light ID by changing the luminance of a plurality of light emitting elements arranged horizontally along the lower part of the outer frame of the station name sign, and the target information indicates that the target region Ptar is located above the reference region Pbas.
 In such a case, the receiver 200 identifies the reference region Pbas in the captured display image Ppre based on the reference information. That is, the receiver 200 identifies, as the reference region Pbas, the region of the captured display image Ppre located at the same position as the bright line pattern region Pdec1 in the decoding image Pdec; specifically, it identifies a rectangular reference region Pbas that is long horizontally and short vertically. Furthermore, the receiver 200 recognizes, as the target region Ptar, the region of the captured display image Ppre located at the relative position indicated by the target information with respect to the reference region Pbas; that is, it recognizes the region above the reference region Pbas in the captured display image Ppre as the target region Ptar. At this time, the receiver 200 determines which direction is above the reference region Pbas based on the direction of gravity measured by the acceleration sensor provided in the receiver 200.
 The target information may indicate not only the relative position of the target region Ptar but also its size, shape, and aspect ratio. In that case, the receiver 200 recognizes a target region Ptar having the size, shape, and aspect ratio indicated by the target information. The receiver 200 may also determine the size of the target region Ptar based on the size of the reference region Pbas.
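 As a minimal sketch of the FIG. 245 case, the target region is placed "above" the thin strip of the reference region by stepping opposite to the gravity direction reported by the acceleration sensor, and its height is derived from the reference height. The region format, the gravity convention in image coordinates, and the height factor are all assumptions made for the sketch.

```python
def unit(v):
    n = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return (v[0] / n, v[1] / n)

def target_region_above(reference_region, gravity_xy, height_factor=4.0):
    x, y, w, h = reference_region              # thin horizontal strip (w >> h)
    gx, gy = unit(gravity_xy)                  # gravity direction in image coordinates
    target_h = h * height_factor               # target size derived from the reference size
    step = h / 2 + target_h / 2                # distance from strip centre to target centre
    cx = x + w / 2 - gx * step                 # move opposite to gravity, i.e. upward
    cy = y + h / 2 - gy * step
    return (cx - w / 2, cy - target_h / 2, w, target_h)

# Gravity pointing straight down the image (+y): the target sits directly above the strip.
print(target_region_above((100, 400, 300, 20), (0.0, 9.8)))   # -> (100.0, 320.0, 300, 80.0)
```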
 FIG. 246 is a flowchart illustrating another example of the processing operation of the receiver 200 in the present embodiment.
 The receiver 200 executes the processing of steps S101 to S104, as in the example shown in FIG. 239.
 Next, the receiver 200 identifies the bright line pattern region Pdec1 in the decoding image Pdec (step S111). The receiver 200 then identifies, in the captured display image Ppre, the reference region Pbas corresponding to the bright line pattern region Pdec1 (step S112). Based on the recognition information (specifically, the target information) and the reference region Pbas, the receiver 200 recognizes the target region Ptar in the captured display image Ppre (step S113).
 Next, as in the example shown in FIG. 239, the receiver 200 superimposes the AR image on the target region Ptar of the captured display image Ppre and displays the captured display image Ppre with the AR image superimposed (step S106). The receiver 200 then determines whether the imaging and the display of the captured display image Ppre should be terminated (step S107). If the receiver 200 determines that they should not be terminated (N in step S107), it further determines whether the acceleration of the receiver 200 is equal to or greater than a threshold (step S114). This acceleration is measured by an acceleration sensor provided in the receiver 200. If the receiver 200 determines that the acceleration is less than the threshold (N in step S114), it executes the processing from step S113. As a result, even when the captured display image Ppre displayed on the display 201 of the receiver 200 shifts, the AR image can follow the target region Ptar of the captured display image Ppre. If the receiver 200 determines that the acceleration is equal to or greater than the threshold (Y in step S114), it executes the processing from step S111 or step S102. This prevents a region showing a subject other than the transmitter 100 (for example, the station exit guide plate 110) from being erroneously recognized as the target region Ptar.
 FIG. 247 is a diagram illustrating another example in which the receiver 200 in the present embodiment displays an AR image.
 When the AR image P1 in the displayed captured display image Ppre is tapped, the receiver 200 enlarges the AR image P1 and displays it. Alternatively, when it is tapped, the receiver 200 may display, in place of the AR image P1, a new AR image showing more detailed content than the AR image P1. If the AR image P1 shows one page of an information magazine consisting of a plurality of pages, the receiver 200 may display, in place of the AR image P1, a new AR image showing the page following the page of the AR image P1. Alternatively, when it is tapped, the receiver 200 may display, in place of the AR image P1, a moving image related to the AR image P1 as a new AR image. At this time, the receiver 200 may display, as the AR image, a moving image in which an object (autumn leaves in the example of FIG. 247) comes out of the target region Ptar.
 FIG. 248 is a diagram illustrating a captured display image Ppre and a decoding image Pdec acquired by imaging by the receiver 200 in the present embodiment.
 While imaging, the receiver 200 acquires captured images such as the captured display image Ppre and the decoding image Pdec at a frame rate of 30 fps, as shown in (a1) of FIG. 248, for example. Specifically, the receiver 200 acquires the captured display image Ppre "A" at time t1, the decoding image Pdec at time t2, and the captured display image Ppre "B" at time t3, acquiring captured display images Ppre and decoding images Pdec alternately.
 When displaying captured images, the receiver 200 displays only the captured display images Ppre and does not display the decoding images Pdec. That is, as shown in (a2) of FIG. 248, when the receiver 200 acquires a decoding image Pdec, it displays the most recently acquired captured display image Ppre instead of that decoding image Pdec. Specifically, the receiver 200 displays the acquired captured display image Ppre "A" at time t1, and at time t2 it displays the captured display image Ppre "A" acquired at time t1 again. As a result, the receiver 200 displays captured display images Ppre at a frame rate of 15 fps.
 In the example shown in (a1) of FIG. 248, the receiver 200 acquires captured display images Ppre and decoding images Pdec alternately, but the way these images are acquired in the present embodiment is not limited to this form. That is, the receiver 200 may repeatedly acquire N (N is an integer of 1 or more) decoding images Pdec in succession and then M (M is an integer of 1 or more) captured display images Ppre in succession.
 The receiver 200 also needs to switch the captured images it acquires between captured display images Ppre and decoding images Pdec, and this switching can take time. Therefore, as shown in (b1) of FIG. 248, the receiver 200 may provide a switching period when switching between acquiring a captured display image Ppre and acquiring a decoding image Pdec. Specifically, when the receiver 200 acquires a decoding image Pdec at time t3, it executes the processing for switching the captured image during the switching period from time t3 to t5 and acquires the captured display image Ppre "A" at time t5. The receiver 200 then executes the processing for switching the captured image during the switching period from time t5 to t7 and acquires a decoding image Pdec at time t7.
 When such switching periods are provided, the receiver 200 displays the most recently acquired captured display image Ppre during each switching period, as shown in (b2) of FIG. 248. In this case, the frame rate at which the receiver 200 displays captured display images Ppre is therefore low, for example 3 fps. When the frame rate is this low, the displayed captured display image Ppre may not move in accordance with the movement of the receiver 200 even if the user moves the receiver 200; that is, the captured display image Ppre is not displayed as a live view. The receiver 200 may therefore move the captured display image Ppre in accordance with the movement of the receiver 200.
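 The display rule of FIG. 248 can be sketched as follows: whenever the current frame is a decoding frame or falls in a switching period, the most recently acquired Ppre is shown again. The frame sequence below is illustrative.

```python
def frames_to_display(captured):
    shown, last_ppre = [], None
    for kind, name in captured:
        if kind == "Ppre":
            last_ppre = name
        shown.append(last_ppre)        # Pdec and switching frames repeat the previous Ppre
    return shown

sequence = [("Ppre", "A"), ("Pdec", None), ("Ppre", "B"), ("switch", None),
            ("switch", None), ("Pdec", None), ("Ppre", "C")]
print(frames_to_display(sequence))     # -> ['A', 'A', 'B', 'B', 'B', 'B', 'C']
```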
 FIG. 249 is a diagram illustrating an example of a captured display image Ppre displayed on the receiver 200 in the present embodiment.
 The receiver 200 displays a captured display image Ppre obtained by imaging on the display 201, as shown in (a) of FIG. 249, for example. Suppose the user moves the receiver 200 to the left. If no new captured display image Ppre is acquired by the imaging of the receiver 200 at this time, the receiver 200 moves the displayed captured display image Ppre to the right, as shown in (b) of FIG. 249. That is, the receiver 200 includes an acceleration sensor and, in accordance with the acceleration measured by the acceleration sensor, moves the displayed captured display image Ppre so as to match the movement of the receiver 200. In this way, the receiver 200 can display the captured display image Ppre as a pseudo live view.
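 One possible sketch of this pseudo live view, assuming the receiver's displacement is estimated by integrating the measured acceleration twice and the displayed image is translated in the opposite direction. The units, the pixels-per-metre scale, and the sensor interface are assumptions, and a real implementation would also have to suppress drift.

```python
PIXELS_PER_METRE = 4000.0      # illustrative scale from scene motion to on-screen pixels

class PseudoLiveView:
    def __init__(self):
        self.velocity = [0.0, 0.0]
        self.offset_px = [0.0, 0.0]     # translation applied to the displayed Ppre

    def update(self, accel_xy, dt):
        for i in (0, 1):
            self.velocity[i] += accel_xy[i] * dt
            # Move the image opposite to the receiver's own displacement.
            self.offset_px[i] -= self.velocity[i] * dt * PIXELS_PER_METRE
        return tuple(self.offset_px)

view = PseudoLiveView()
# Receiver accelerates to the left (negative x): the displayed image is shifted to the right.
print(view.update((-0.5, 0.0), 1 / 60))
```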
 FIG. 250 is a flowchart illustrating another example of the processing operation of the receiver 200 in the present embodiment.
 First, as described above, the receiver 200 superimposes the AR image on the target region Ptar of the captured display image Ppre and makes it follow the target region Ptar (step S121). That is, an AR image that moves together with the target region Ptar in the captured display image Ppre is displayed. The receiver 200 then determines whether to maintain the display of the AR image (step S122). If it determines that the display of the AR image is not to be maintained (N in step S122), the receiver 200, upon acquiring a new light ID by imaging, superimposes a new AR image corresponding to that light ID on the captured display image Ppre and displays it (step S123).
 On the other hand, if it determines that the display of the AR image is to be maintained (Y in step S122), the receiver 200 repeats the processing from step S121. At this time, the receiver 200 does not display another AR image even if it has acquired one. Alternatively, even if the receiver 200 has acquired a new decoding image Pdec, it does not acquire a light ID by decoding that decoding image Pdec; in this case, the power consumed by decoding can be reduced.
 By maintaining the display of the AR image in this way, it is possible to prevent the displayed AR image from being erased or from becoming hard to see because another AR image is displayed. That is, the displayed AR image can be kept easy for the user to see.
 For example, in step S122, the receiver 200 determines that the display of the AR image is to be maintained until a predetermined period (a fixed period) has elapsed since the AR image was displayed. That is, when displaying the captured display image Ppre, the receiver 200 displays the first AR image, which is the AR image superimposed in step S121, for a predetermined display period while suppressing the display of a second AR image different from the first AR image. The receiver 200 may prohibit decoding of newly acquired decoding images Pdec during this display period.
 This prevents the first AR image, once displayed, from being immediately replaced by a different second AR image while the user is looking at it. Furthermore, decoding a newly acquired decoding image Pdec is wasted processing while the display of the second AR image is suppressed, so prohibiting that decoding reduces power consumption.
 Alternatively, in step S122, the receiver 200 may include a face camera and, upon detecting from the imaging result of the face camera that the user's face is approaching, determine that the display of the AR image is to be maintained. That is, when displaying the captured display image Ppre, the receiver 200 further determines, by imaging with the face camera provided in the receiver 200, whether the user's face is approaching the receiver 200. If it determines that the face is approaching, the receiver 200 displays the first AR image, which is the AR image superimposed in step S121, while suppressing the display of a second AR image different from the first AR image.
 Alternatively, in step S122, the receiver 200 may include an acceleration sensor and, upon detecting from the measurement result of the acceleration sensor that the user's face is approaching, determine that the display of the AR image is to be maintained. That is, when displaying the captured display image Ppre, the receiver 200 further determines, from the acceleration of the receiver 200 measured by the acceleration sensor, whether the user's face is approaching the receiver 200. For example, when the acceleration of the receiver 200 measured by the acceleration sensor shows a positive value in the direction perpendicular to and outward from the display 201 of the receiver 200, the receiver 200 determines that the user's face is approaching. If it determines that the face is approaching, the receiver 200 displays the first AR image (the first augmented reality image), which is the AR image superimposed in step S121, while suppressing the display of a second AR image different from the first AR image.
 This prevents the first AR image from being replaced by a different second AR image while the user is bringing the receiver 200 closer to their face to look at the first AR image.
 Alternatively, in step S122, the receiver 200 may determine that the display of the AR image is to be maintained when a lock button provided on the receiver 200 is pressed.
 In step S122, the receiver 200 determines that the display of the AR image is not to be maintained once the fixed period described above (that is, the display period) has elapsed. The receiver 200 also determines that the display of the AR image is not to be maintained when the acceleration sensor measures an acceleration equal to or greater than a threshold, even if the fixed period has not yet elapsed. That is, when displaying the captured display image Ppre, the receiver 200 further measures its acceleration with the acceleration sensor during the display period and determines whether the measured acceleration is equal to or greater than the threshold. When the receiver 200 determines that the acceleration is equal to or greater than the threshold, it cancels the suppression of the display of the second AR image and, in step S123, displays the second AR image instead of the first AR image.
 In this way, the suppression of the display of the second AR image is cancelled when an acceleration of the display device equal to or greater than the threshold is measured. Therefore, when the user moves the receiver 200 substantially, for example to point the image sensor at another subject, the second AR image can be displayed immediately.
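 The display-maintenance conditions described above can be gathered into one sketch: the first AR image is kept, and decoding of new decoding images is skipped, while any hold condition applies, and a large acceleration releases the hold even inside the display period. The condition sources (timer, face camera, acceleration perpendicular to the display, lock button) follow the text; the constants, the exact ordering of the checks, and the function interface are assumptions.

```python
HOLD_PERIOD_S = 10.0              # illustrative "fixed period"
ACCEL_RELEASE_THRESHOLD = 2.0     # m/s^2, illustrative release threshold

def keep_first_ar_image(elapsed_s, face_detected_near, accel_z_outward, lock_pressed, accel_norm):
    if accel_norm >= ACCEL_RELEASE_THRESHOLD:
        return False              # large motion: allow the second AR image immediately
    if lock_pressed or face_detected_near or accel_z_outward > 0.0:
        return True               # the user is looking closely, or locked the display
    return elapsed_s < HOLD_PERIOD_S

def process_frame(state):
    if keep_first_ar_image(**state):
        return "display first AR image (skip decoding of new Pdec)"
    return "decode new Pdec and display second AR image"

print(process_frame(dict(elapsed_s=3.0, face_detected_near=False,
                         accel_z_outward=0.0, lock_pressed=False, accel_norm=0.4)))
```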
 FIG. 251 is a diagram illustrating another example in which the receiver 200 in the present embodiment displays an AR image.
 As shown in FIG. 251, for example, the transmitter 100 is configured as a lighting device and transmits a light ID by changing its luminance while illuminating a stage 111 for a small doll. Because the stage 111 is illuminated by light from the transmitter 100, its luminance changes in the same way as the transmitter 100, and it transmits the light ID.
 Two receivers 200 image the stage 111 illuminated by the transmitter 100 from the left and from the right.
 Of the two receivers 200, the left receiver 200 acquires a captured display image Pf and a decoding image, as described above, by imaging the stage 111 illuminated by the transmitter 100 from the left. The left receiver 200 acquires the light ID by decoding the decoding image; that is, it receives the light ID from the stage 111. The left receiver 200 transmits the light ID to the server and acquires the three-dimensional AR image and recognition information corresponding to that light ID from the server. The three-dimensional AR image is, for example, an image for displaying a doll three-dimensionally. The left receiver 200 recognizes, as the target region, the region of the captured display image Pf that corresponds to the recognition information; for example, it recognizes the region above the center of the stage 111 as the target region.
Next, the left receiver 200 generates, from the three-dimensional AR image, a two-dimensional AR image P6a corresponding to the orientation of the stage 111 shown in the captured display image Pf. The left receiver 200 then superimposes the two-dimensional AR image P6a on the target area and displays the captured display image Pf with the AR image P6a superimposed on the display 201. In this case, since the two-dimensional AR image P6a is superimposed on the target area of the captured display image Pf, the left receiver 200 can display the captured display image Pf as if a doll actually existed on the stage 111.
Similarly, the right receiver 200 of the two receivers 200 captures the stage 111 illuminated by the transmitter 100 from the right and thereby acquires, as described above, a captured display image Pg and a decoding image. The right receiver 200 acquires the light ID by decoding the decoding image. That is, the right receiver 200 receives the light ID from the stage 111. The right receiver 200 transmits the light ID to the server and then acquires from the server a three-dimensional AR image and recognition information corresponding to the light ID. The right receiver 200 recognizes, as a target area, an area of the captured display image Pg corresponding to the recognition information. For example, the right receiver 200 recognizes the area above the center of the stage 111 as the target area.
Next, the right receiver 200 generates, from the three-dimensional AR image, a two-dimensional AR image P6b corresponding to the orientation of the stage 111 shown in the captured display image Pg. The right receiver 200 then superimposes the two-dimensional AR image P6b on the target area and displays the captured display image Pg with the AR image P6b superimposed on the display 201. In this case, since the two-dimensional AR image P6b is superimposed on the target area of the captured display image Pg, the right receiver 200 can display the captured display image Pg as if a doll actually existed on the stage 111.
 このように、2つの受信機200は、ステージ111上の同じ位置に、AR画像P6aおよびP6bを表示する。また、これらのAR画像P6aおよびP6bは、仮想的な人形が実際に所定の方向を向いているように、受信機200の向きに応じて生成されている。したがって、ステージ111のどの方向から撮像しても、ステージ111上に人形が現実に存在するように、撮像表示画像を表示することができる。 Thus, the two receivers 200 display the AR images P6a and P6b at the same position on the stage 111. The AR images P6a and P6b are generated according to the orientation of the receiver 200 so that the virtual doll is actually facing a predetermined direction. Therefore, the captured display image can be displayed so that the doll actually exists on the stage 111 no matter what direction the stage 111 is captured.
In the above example, the receiver 200 generates the two-dimensional AR image corresponding to the positional relationship between the receiver 200 and the stage 111 from the three-dimensional AR image, but the two-dimensional AR image may instead be obtained from the server. That is, the receiver 200 transmits information indicating that positional relationship together with the light ID to the server, and acquires the two-dimensional AR image from the server instead of the three-dimensional AR image. This reduces the load on the receiver 200.
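As one way to picture this server-side variant, the sketch below sends the light ID together with an estumated viewing direction and receives a pre-rendered two-dimensional AR image. The endpoint path and JSON field names are assumptions made for illustration only, not an API defined in the patent.

```python
# Hedged sketch; the /ar_image endpoint and field names are assumed.
import requests

def fetch_2d_ar_image(server_url: str, light_id: str,
                      yaw_deg: float, pitch_deg: float) -> bytes:
    """Request a 2D rendering of the 3D AR content for the given viewing direction."""
    resp = requests.post(
        f"{server_url}/ar_image",
        json={"light_id": light_id, "yaw_deg": yaw_deg, "pitch_deg": pitch_deg},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.content  # encoded 2D AR image, e.g. PNG bytes
```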
 図252は、本実施の形態における受信機200がAR画像を表示する他の例を示す図である。 FIG. 252 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
 送信機100は、例えば図252に示すように、照明装置として構成され、円柱状の構造物112を照らしながら輝度変化することによって、光IDを送信している。構造物112は、その送信機100からの光によって照らされているため、送信機100と同様に輝度変化し、光IDを送信している。 For example, as illustrated in FIG. 252, the transmitter 100 is configured as a lighting device, and transmits a light ID by changing luminance while illuminating a cylindrical structure 112. Since the structure 112 is illuminated by the light from the transmitter 100, the luminance is changed similarly to the transmitter 100, and the light ID is transmitted.
The receiver 200 captures the structure 112 illuminated by the transmitter 100 and thereby acquires, as described above, a captured display image Ph and a decoding image. The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the structure 112. The receiver 200 transmits the light ID to the server and then acquires from the server an AR image P7 and recognition information corresponding to the light ID. The receiver 200 recognizes, as a target area, an area of the captured display image Ph corresponding to the recognition information. For example, the receiver 200 recognizes the area in which the central portion of the structure 112 appears as the target area. The receiver 200 then superimposes the AR image P7 on the target area and displays the captured display image Ph with the AR image P7 superimposed on the display 201. For example, the AR image P7 is an image including the character string "ABCD", and the character string is distorted to follow the curved surface of the central portion of the structure 112. In this case, since the AR image P7 including the distorted character string is superimposed on the target area of the captured display image Ph, the receiver 200 can display the captured display image Ph as if the character string were actually drawn on the structure 112.
 図253は、本実施の形態における受信機200がAR画像を表示する他の例を示す図である。 FIG. 253 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
 送信機100は、例えば図253に示すように、飲食店のメニュー113を照らしながら輝度変化することによって、光IDを送信している。メニュー113は、その送信機100からの光によって照らされているため、送信機100と同様に輝度変化し、光IDを送信している。また、メニュー113は、例えば「ABCスープ」、「XYZサラダ」および「KLMランチ」などの複数の料理の名称を示す。 For example, as shown in FIG. 253, the transmitter 100 transmits the light ID by changing the luminance while illuminating the restaurant menu 113. Since the menu 113 is illuminated by the light from the transmitter 100, the luminance changes in the same manner as the transmitter 100, and the light ID is transmitted. The menu 113 indicates names of a plurality of dishes such as “ABC soup”, “XYZ salad”, and “KLM lunch”.
The receiver 200 captures the menu 113 illuminated by the transmitter 100 and thereby acquires, as described above, a captured display image Pi and a decoding image. The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the menu 113. The receiver 200 transmits the light ID to the server and then acquires from the server an AR image P8 and recognition information corresponding to the light ID. The receiver 200 recognizes, as a target area, an area of the captured display image Pi corresponding to the recognition information. For example, the receiver 200 recognizes the area in which the menu 113 appears as the target area. The receiver 200 then superimposes the AR image P8 on the target area and displays the captured display image Pi with the AR image P8 superimposed on the display 201. For example, the AR image P8 is an image that indicates, with marks, the ingredients used in each dish: for the dish "XYZ salad", which contains egg, it shows an egg-shaped mark, and for the dish "KLM lunch", which contains pork, it shows a pig-shaped mark. In this case, since the AR image P8 is superimposed on the target area of the captured display image Pi, the receiver 200 can display the captured display image Pi as if the menu 113 actually carried the ingredient marks. This makes it possible to inform the user of the receiver 200 of the ingredients of each dish simply and clearly, without providing the menu 113 with a special display device.
The receiver 200 may also acquire a plurality of AR images and, based on user information set by the user, select from them an AR image suitable for the user and superimpose that AR image. For example, if the user information indicates that the user is allergic to eggs, the receiver 200 selects an AR image in which an egg mark is attached to dishes that contain egg. If the user information indicates that eating pork is prohibited, the receiver 200 selects an AR image in which a pig mark is attached to dishes that contain pork. Alternatively, the receiver 200 may transmit the user information together with the light ID to the server and acquire from the server an AR image corresponding to the light ID and the user information. In this way, for each user, a menu that calls that user's attention to the relevant dishes can be displayed.
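A minimal sketch of this selection step is shown below. The structure of the candidate list and the user-information keys ("egg_allergy", "no_pork", "tag") are assumptions, since the patent does not specify a data format.

```python
# Hedged sketch: pick the AR image whose ingredient tag matches the user's restrictions.
def select_ar_image(candidates: list, user_info: dict) -> dict:
    # candidates: e.g. [{"tag": "egg", "image": ...}, {"tag": "pork", "image": ...}, ...]
    if user_info.get("egg_allergy"):
        for c in candidates:
            if c.get("tag") == "egg":
                return c          # AR image marking dishes that contain egg
    if user_info.get("no_pork"):
        for c in candidates:
            if c.get("tag") == "pork":
                return c          # AR image marking dishes that contain pork
    return candidates[0]          # default AR image when no restriction applies
```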
 図254は、本実施の形態における受信機200がAR画像を表示する他の例を示す図である。 FIG. 254 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
 送信機100は、例えば図254に示すように、テレビとして構成され、ディスプレイに映像を表示しながら輝度変化することによって、光IDを送信している。また、送信機100の近傍には、通常のテレビ114が配置されている。テレビ114は、ディスプレイに映像を表示しているが、光IDを送信していない。 For example, as shown in FIG. 254, the transmitter 100 is configured as a television, and transmits an optical ID by changing the luminance while displaying an image on a display. In addition, a normal television 114 is disposed in the vicinity of the transmitter 100. The television 114 displays an image on the display, but does not transmit an optical ID.
 受信機200は、例えば送信機100とともにテレビ114を撮像することによって、上述と同様に、撮像表示画像Pjと復号用画像とを取得する。受信機200は、その復号用画像に対する復号によって光IDを取得する。つまり、受信機200は、送信機100から光IDを受信する。受信機200は、その光IDをサーバに送信する。そして、受信機200は、その光IDに対応するAR画像P9と認識情報とをサーバから取得する。受信機200は、撮像表示画像Pjのうち、その認識情報に応じた領域を対象領域として認識する。 The receiver 200 acquires the captured display image Pj and the decoding image in the same manner as described above, for example, by imaging the television 114 together with the transmitter 100. The receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the transmitter 100. The receiver 200 transmits the optical ID to the server. Then, the receiver 200 acquires the AR image P9 and the recognition information corresponding to the optical ID from the server. The receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pj as a target area.
For example, by using the bright line pattern area of the decoding image, the receiver 200 recognizes, as a first target area, the lower portion of the area of the captured display image Pj in which the transmitter 100 transmitting the light ID appears. At this time, the reference information included in the recognition information indicates that the position of the reference area in the captured display image Pj is the same as the position of the bright line pattern area in the decoding image. Furthermore, the target information included in the recognition information indicates that the target area is in the lower portion of the reference area. The receiver 200 recognizes the first target area described above by using such recognition information.
Furthermore, the receiver 200 recognizes, as a second target area, an area whose position is fixed in advance in the lower part of the captured display image Pj. The second target area is larger than the first target area. The target information included in the recognition information indicates not only the position of the first target area but also the position and size of the second target area described above. The receiver 200 recognizes the second target area by using such recognition information.
The receiver 200 then superimposes the AR image P9 on the first target area and the second target area, and displays the captured display image Pj with the AR image P9 superimposed on the display 201. In this superposition, the receiver 200 adjusts the AR image P9 to the size of the first target area and superimposes the size-adjusted AR image P9 on the first target area. Furthermore, the receiver 200 adjusts the AR image P9 to the size of the second target area and superimposes the size-adjusted AR image P9 on the second target area.
For example, the AR image P9 shows captions for the video of the transmitter 100. The language of the captions of the AR image P9 is a language corresponding to the user information set and registered in the receiver 200. That is, when transmitting the light ID to the server, the receiver 200 also transmits the user information (for example, information indicating the user's nationality or language) to the server, and acquires an AR image P9 showing captions in the language corresponding to that user information. Alternatively, the receiver 200 may acquire a plurality of AR images P9 showing captions in different languages and select, from among them, the AR image P9 to be used for superposition in accordance with the user information set and registered.
In other words, in the example shown in FIG. 254, the receiver 200 acquires the captured display image Pj and the decoding image by capturing, as subjects, a plurality of displays each showing an image. When recognizing the target area, the receiver 200 recognizes, as the target area, the area of the captured display image Pj in which the transmitting display, that is, the display among the plurality of displays that is transmitting the light ID (namely, the transmitter 100), appears. Next, the receiver 200 superimposes, as an AR image, a first caption corresponding to the image shown on the transmitting display onto that target area. Furthermore, the receiver 200 superimposes a second caption, which is an enlarged version of the first caption, onto an area of the captured display image Pj larger than the target area.
As a result, the receiver 200 can display the captured display image Pj as if captions actually accompanied the video of the transmitter 100. Furthermore, since the receiver 200 also superimposes a large caption on the lower part of the captured display image Pj, the caption is easy to read even when the caption attached to the video of the transmitter 100 is small. If the video of the transmitter 100 carried no caption and only a large caption were superimposed on the lower part of the captured display image Pj, it would be difficult to tell whether that caption belongs to the video of the transmitter 100 or to the video of the television 114. In the present embodiment, however, a caption is also attached to the video of the transmitter 100 that transmits the light ID, so the user can easily tell which video the superimposed caption belongs to.
 また、受信機200は、撮像表示画像Pjの表示では、さらに、サーバから取得される情報に、音声情報が含まれているか否かを判定してもよい。そして、受信機200は、音声情報が含まれていると判定したときには、第1および第2の字幕よりも、音声情報が示す音声を優先して出力する。これにより、音声が優先的に出力されるため、ユーザが字幕を読む負担を軽減することができる。 In the display of the captured display image Pj, the receiver 200 may further determine whether or not audio information is included in the information acquired from the server. When the receiver 200 determines that the audio information is included, the receiver 200 outputs the audio indicated by the audio information with priority over the first and second subtitles. Thereby, since sound is preferentially output, it is possible to reduce the burden of the user reading subtitles.
In the above example, the language of the captions is varied according to the user information (that is, the user's attributes), but the video (that is, the content) shown by the transmitter 100 itself may be varied instead. For example, when the video shown by the transmitter 100 is a news video and the user information indicates that the user is Japanese, the receiver 200 acquires, as an AR image, a news video broadcast in Japan and superimposes it on the area in which the display of the transmitter 100 appears (that is, the target area). On the other hand, if the user information indicates that the user is American, the receiver 200 acquires, as an AR image, a news video broadcast in the United States and superimposes it on the area in which the display of the transmitter 100 appears (that is, the target area). In this way, a video suited to the user can be displayed. The user information indicates, as user attributes, for example the user's nationality or language, and the receiver 200 acquires the above-described AR image based on those attributes.
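The caption-language selection and audio priority described above for FIG. 254 might be organized as in the sketch below, where the server response is assumed to carry per-language captions and optional audio. The field names and the renderer interface are illustrative assumptions, not structures defined in the patent.

```python
# Hedged sketch of caption/audio handling for the transmitting display.
def present_captions(server_response: dict, user_language: str, renderer) -> None:
    captions = server_response.get("captions", {})             # assumed: language -> text
    text = captions.get(user_language) or next(iter(captions.values()), "")
    if server_response.get("audio"):
        renderer.play_audio(server_response["audio"])           # audio is given priority
        return
    renderer.draw_caption(text, region="transmitter_display")   # first caption on the display area
    renderer.draw_caption(text, region="bottom_enlarged")       # second, enlarged caption below
```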
 図255は、本実施の形態における認識情報の一例を示す図である。 FIG. 255 is a diagram showing an example of recognition information in the present embodiment.
Even if the recognition information consists of feature points and feature amounts as described above, misrecognition can occur. For example, the transmitters 100a and 100b are each configured as station name signs, like the transmitter 100. Even though the transmitters 100a and 100b are different station name signs, if they are located close to each other they may be misrecognized because they look similar.
Therefore, the recognition information for each of the transmitters 100a and 100b may indicate the feature points and feature amounts of only a characteristic part of the image of the transmitter 100a or 100b, rather than the feature points and feature amounts of the entire image.
For example, the part a1 of the transmitter 100a and the part b1 of the transmitter 100b differ greatly from each other, and the part a2 of the transmitter 100a and the part b2 of the transmitter 100b also differ greatly from each other. Therefore, if the transmitters 100a and 100b are installed within a predetermined range of each other (that is, at a short distance), the server holds, as the recognition information corresponding to the transmitter 100a, the feature points and feature amounts of the images of the parts a1 and a2. Similarly, the server holds, as the recognition information corresponding to the transmitter 100b, the feature points and feature amounts of the images of the parts b1 and b2.
As a result, even when the mutually similar transmitters 100a and 100b are close to each other (that is, within the predetermined range described above), the receiver 200 can appropriately recognize the target area by using this recognition information.
 図256は、本実施の形態における受信機200の処理動作の他の例を示すフローチャートである。 FIG. 256 is a flowchart showing another example of the processing operation of the receiver 200 in the present embodiment.
 受信機200は、まず、受信機200に設定登録されているユーザ情報に基づいて、ユーザに視覚障害があるか否かを判定する(ステップS131)。ここで、受信機200は、視覚障害があると判定すると(ステップS131のY)、重畳して表示されるAR画像の文字を音声で出力する(ステップS132)。一方、受信機200は、視覚障害がないと判定すると(ステップS131のN)、さらに、ユーザ情報に基づいて、ユーザに聴覚障害があるか否かを判定する(ステップS133)。ここで、受信機200は、聴覚障害があると判定すると(ステップS133のY)、音声出力を停止する(ステップS134)。このとき、受信機200は、全ての機能による音声の出力を停止する。 The receiver 200 first determines whether or not the user has a visual impairment based on the user information set and registered in the receiver 200 (step S131). If the receiver 200 determines that there is a visual impairment (Y in step S131), the receiver 200 outputs the characters of the AR image displayed in a superimposed manner by voice (step S132). On the other hand, when the receiver 200 determines that there is no visual impairment (N in step S131), the receiver 200 further determines whether the user has a hearing impairment based on the user information (step S133). Here, if the receiver 200 determines that there is a hearing impairment (Y in step S133), the receiver 200 stops the sound output (step S134). At this time, the receiver 200 stops outputting sound by all functions.
 なお、受信機200は、ステップS131において視覚障害があると判定したときに(ステップS131のY)、ステップS133の処理を行ってもよい。つまり、受信機200は、視覚障害があり、かつ、聴覚障害がないと判定したときに、重畳して表示されるAR画像の文字を音声で出力してもよい。 The receiver 200 may perform the process of step S133 when it is determined in step S131 that there is a visual impairment (Y in step S131). That is, when it is determined that there is a visual impairment and there is no hearing impairment, the receiver 200 may output the AR image characters displayed in a superimposed manner by voice.
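The branch of FIG. 256, including the variant just mentioned, could look like the following sketch. The user_info keys and the device methods are assumptions standing in for the receiver's internal interfaces.

```python
# Hedged sketch of the accessibility decisions of FIG. 256 (steps S131-S134).
def present_ar_text(user_info: dict, ar_text: str, device) -> None:
    if user_info.get("visually_impaired"):
        if user_info.get("hearing_impaired"):
            device.mute_all_audio()   # variant: also check hearing before speaking
        else:
            device.speak(ar_text)     # read the superimposed AR text aloud (S132)
    elif user_info.get("hearing_impaired"):
        device.mute_all_audio()       # stop audio output of all functions (S134)
```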
 図257は、本実施の形態における受信機200が輝線パターン領域を識別する一例を示す図である。 FIG. 257 is a diagram illustrating an example in which the receiver 200 according to the present embodiment identifies bright line pattern regions.
The receiver 200 first captures two transmitters that are each transmitting a light ID, thereby acquiring decoding images, and obtains the light IDs by decoding those images, as shown in (e) of FIG. 257. Since a decoding image contains the two bright line pattern areas X and Y, the receiver 200 acquires the light ID of the transmitter corresponding to the bright line pattern area X and the light ID of the transmitter corresponding to the bright line pattern area Y. The light ID of the transmitter corresponding to the bright line pattern area X consists of numerical values (that is, data) corresponding to addresses 0 to 9, for example "5, 2, 8, 4, 3, 6, 1, 9, 4, 3". Similarly, the light ID of the transmitter corresponding to the bright line pattern area Y also consists of numerical values corresponding to addresses 0 to 9, for example "5, 2, 7, 7, 1, 5, 3, 2, 7, 4".
Even after the receiver 200 has acquired these light IDs once, that is, even when these light IDs are already known, a situation can arise during imaging in which it is unclear from which bright line pattern area each light ID was obtained. In such a case, by performing the processes shown in (a) to (d) of FIG. 257, the receiver 200 can easily and quickly determine from which bright line pattern area each known light ID was obtained.
Specifically, as shown in (a) of FIG. 257, the receiver 200 first acquires a decoding image Pdec11 and, by decoding it, obtains the numerical value at address 0 of the light ID for each of the bright line pattern areas X and Y. For example, the numerical value at address 0 of the light ID of the bright line pattern area X is "5", and the numerical value at address 0 of the light ID of the bright line pattern area Y is also "5". Since the numerical value at address 0 of each light ID is "5", it cannot be determined at this point from which bright line pattern area each known light ID was obtained.
Therefore, as shown in (b) of FIG. 257, the receiver 200 acquires a decoding image Pdec12 and, by decoding it, obtains the numerical value at address 1 of the light ID for each of the bright line pattern areas X and Y. For example, the numerical value at address 1 of the light ID of the bright line pattern area X is "2", and the numerical value at address 1 of the light ID of the bright line pattern area Y is also "2". Since the numerical value at address 1 of each light ID is "2", it still cannot be determined from which bright line pattern area each known light ID was obtained.
Therefore, as shown in (c) of FIG. 257, the receiver 200 further acquires a decoding image Pdec13 and, by decoding it, obtains the numerical value at address 2 of the light ID for each of the bright line pattern areas X and Y. For example, the numerical value at address 2 of the light ID of the bright line pattern area X is "8", and the numerical value at address 2 of the light ID of the bright line pattern area Y is "7". At this point, it can be determined that the known light ID "5, 2, 8, 4, 3, 6, 1, 9, 4, 3" was obtained from the bright line pattern area X, and that the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4" was obtained from the bright line pattern area Y.
 しかし、受信機200は、信頼度を高めるために、さらに、図257の(d)に示すように、それぞれの光IDのアドレス3の数値を取得してもよい。つまり、受信機200は、復号用画像Pdec14を取得して、その復号用画像Pdec14に対する復号によって、輝線パターン領域XおよびYのそれぞれの光IDのアドレス3の数値を取得する。例えば、輝線パターン領域Xの光IDのアドレス3の数値は「4」であり、輝線パターン領域Yの光IDのアドレス3の数値は「7」である。このときには、既知の光ID「5,2,8,4,3,6,1,9,4,3」が輝線パターン領域Xから得られたと判定することができ、既知の光ID「5,2,7,7,1,5,3,2,7,4」が輝線パターン領域Yから得られたと判定することができる。つまり、アドレス2だけでなくアドレス3によっても、輝線パターン領域XおよびYの光IDを識別することができるため、信頼度を高めることができる。 However, in order to increase the reliability, the receiver 200 may further acquire the numerical value of the address 3 of each optical ID as shown in FIG. 257 (d). That is, the receiver 200 acquires the decoding image Pdec14, and acquires the numerical value of the address 3 of the light ID of each of the bright line pattern areas X and Y by decoding the decoding image Pdec14. For example, the numerical value of the address 3 of the light ID in the bright line pattern region X is “4”, and the numerical value of the address 3 of the light ID in the bright line pattern region Y is “7”. At this time, it can be determined that the known light ID “5, 2, 8, 4, 3, 6, 1, 9, 4, 3” has been obtained from the bright line pattern region X, and the known light ID “5, 2, 7, 7, 1, 5, 3, 2, 7, 4 "can be determined to have been obtained from the bright line pattern region Y. That is, since the light IDs of the bright line pattern areas X and Y can be identified not only by the address 2 but also by the address 3, the reliability can be increased.
 このように、本実施の形態では、光IDの全てのアドレスの数値(すなわちデータ)を改めて取得することなく、少なくとも1つのアドレスの数値を取得し直す。これによって、既知の光IDがどちらの輝線パターン領域から得られたのかを容易に、かつ、迅速に判定することができる。 Thus, in this embodiment, the numerical value of at least one address is reacquired without acquiring the numerical values (that is, data) of all addresses of the optical ID again. This makes it possible to easily and quickly determine from which bright line pattern region the known light ID is obtained.
In the examples shown in (c) and (d) of FIG. 257 described above, the numerical value acquired for a given address matches the numerical value of the known light ID, but they do not have to match exactly. For example, in the example shown in (d) of FIG. 257, suppose the receiver 200 acquires "6" as the numerical value at address 3 of the light ID of the bright line pattern area Y. This value "6" differs from the value "7" at address 3 of the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4". However, since "6" is close to "7", the receiver 200 may still determine that the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4" was obtained from the bright line pattern area Y. The receiver may judge whether "6" is close to "7" by checking whether "6" lies within the range "7" ± n (where n is, for example, a number equal to or greater than 1).
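One way to picture the matching procedure of FIG. 257 is the sketch below: for each bright line pattern area, the value decoded at a single address is compared against the known light IDs, and the area is assigned to the known ID whose value at that address is closest (a tie means another address must be tried). The data layout is an assumption.

```python
# Hedged sketch: assign each bright line pattern area to the closest known light ID
# using the value decoded at a single address.
def match_areas_at_address(known_ids: list, observed: dict, address: int) -> dict:
    """
    known_ids: list of known light IDs, each a list of per-address values,
               e.g. [[5,2,8,4,3,6,1,9,4,3], [5,2,7,7,1,5,3,2,7,4]]
    observed:  {area_name: value decoded at `address`}, e.g. {"X": 8, "Y": 7}
    Returns {area_name: matched known ID, or None if still ambiguous}.
    """
    result = {}
    for area, value in observed.items():
        ranked = sorted(known_ids, key=lambda kid: abs(kid[address] - value))
        ambiguous = (len(ranked) > 1 and
                     abs(ranked[0][address] - value) == abs(ranked[1][address] - value))
        result[area] = None if ambiguous else ranked[0]
    return result

# Example following the text: address 0 and 1 remain ambiguous, address 2 resolves
# X to "...8..." and Y to "...7...", and a value of 6 at address 3 still maps Y to
# the ID containing 7, since 6 is closer to 7 than to 4.
```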
 図258は、本実施の形態における受信機200の他の例を示す図である。 FIG. 258 is a diagram illustrating another example of the receiver 200 in the present embodiment.
 受信機200は、上述の例ではスマートフォンとして構成されているが、図19~図21に示す例と同様に、イメージセンサを備えたヘッドマウントディスプレイ(グラスともいう)として構成されていてもよい。 The receiver 200 is configured as a smartphone in the above-described example, but may be configured as a head-mounted display (also referred to as glass) including an image sensor, similar to the examples illustrated in FIGS.
If the processing circuit for displaying AR images as described above (hereinafter referred to as the AR processing circuit) were kept running at all times, such a receiver 200 would consume a large amount of power. The receiver 200 may therefore activate the AR processing circuit only when it detects a predetermined signal.
 例えば、受信機200は、タッチセンサ202を備えている。タッチセンサ202は、ユーザの指などに触れると、タッチ信号を出力する。受信機200は、そのタッチ信号を検出したときに、AR処理回路を起動する。 For example, the receiver 200 includes a touch sensor 202. The touch sensor 202 outputs a touch signal when it touches a user's finger or the like. The receiver 200 activates the AR processing circuit when detecting the touch signal.
 または、受信機200は、Bluetooth(登録商標)またはWi-Fi(登録商標)などの電波信号を検出したときに、AR処理回路を起動してもよい。 Alternatively, the receiver 200 may activate the AR processing circuit when detecting a radio signal such as Bluetooth (registered trademark) or Wi-Fi (registered trademark).
 または、受信機200は、加速度センサを備え、その加速度センサによって重力の向きと反対の向きへの閾値以上の加速度が計測されたときに、AR処理回路を起動してもよい。つまり、受信機200は、上記加速度を示す信号を検出したときに、AR処理回路を起動する。例えば、ユーザが、グラスとして構成されている受信機200の鼻あて部分を下から指先で上向きに突きあげると、受信機200は上記加速度を示す信号を検出して、AR処理回路を起動する。 Alternatively, the receiver 200 may include an acceleration sensor, and may activate the AR processing circuit when the acceleration sensor measures an acceleration equal to or greater than a threshold value in a direction opposite to the direction of gravity. That is, the receiver 200 activates the AR processing circuit when detecting the signal indicating the acceleration. For example, when the user pushes up the nose pad portion of the receiver 200 configured as a glass upward with a fingertip from below, the receiver 200 detects a signal indicating the acceleration and activates the AR processing circuit.
 または、受信機200は、GPSおよび9軸センサなどによって、イメージセンサが送信機100に向けられたことを検知したときに、AR処理回路を起動してもよい。つまり、受信機200は、受信機200が所定の向きに向けられたことを示す信号を検出したときに、AR処理回路を起動する。この場合、送信機100が上述の日本語の駅名標などであれば、受信機200は、英語の駅名を示すAR画像をその駅名標に重畳して表示する。 Alternatively, the receiver 200 may activate the AR processing circuit when it is detected by the GPS and the 9-axis sensor that the image sensor is directed to the transmitter 100. That is, the receiver 200 activates the AR processing circuit when detecting a signal indicating that the receiver 200 is directed in a predetermined direction. In this case, if the transmitter 100 is the above-mentioned Japanese station name mark, the receiver 200 displays an AR image indicating the English station name superimposed on the station name mark.
 図259は、本実施の形態における受信機200の処理動作の他の例を示すフローチャートである。 FIG. 259 is a flowchart illustrating another example of the processing operation of the receiver 200 in the present embodiment.
When the receiver 200 acquires a light ID from the transmitter 100 (step S141), it receives mode designation information corresponding to that light ID and switches the noise cancellation mode accordingly (step S142). The receiver 200 then determines whether this mode switching process should be terminated (step S143), and if it determines that the process should not be terminated (N in step S143), it repeats the processing from step S141. The noise cancellation modes are, for example, a mode in which noise such as engine noise in an airplane is cancelled (ON) and a mode in which that noise is not cancelled (OFF). Specifically, a user carrying the receiver 200 listens to audio such as music output from the receiver 200 through earphones connected to it. When such a user boards an airplane, the receiver 200 acquires a light ID and, as a result, switches the noise cancellation mode from OFF to ON. This allows the user to hear audio free of noise such as engine noise even inside the aircraft. When the user leaves the airplane, the receiver 200 again acquires a light ID and switches the noise cancellation mode from ON to OFF. The noise to be cancelled is not limited to engine noise and may be any sound, such as human voices.
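The loop of FIG. 259 might be sketched as follows. The mapping from light IDs to mode-designation information and the audio API are assumptions for illustration only.

```python
# Hedged sketch of the mode-switching loop (steps S141-S143).
NOISE_CANCEL_BY_LIGHT_ID = {
    "LIGHT_ID_BOARDING": True,      # assumed ID received when boarding: noise cancelling on
    "LIGHT_ID_DEBOARDING": False,   # assumed ID received when leaving: noise cancelling off
}

def run_mode_switching(receive_light_id, set_noise_cancelling, should_stop):
    while not should_stop():                     # step S143
        light_id = receive_light_id()            # step S141
        mode = NOISE_CANCEL_BY_LIGHT_ID.get(light_id)
        if mode is not None:
            set_noise_cancelling(mode)           # step S142
```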
 図260は、本実施の形態における複数の送信機を含む送信システムの一例を示す図である。 FIG. 260 is a diagram illustrating an example of a transmission system including a plurality of transmitters in this embodiment.
 この送信システムは、予め定められた順に配列された複数の送信機120を備えている。これらの送信機120は、送信機100と同様、上記実施の形態1~22のうちの何れかの実施の形態における送信機であって、1つまたは複数の発光素子(例えばLED)を備える。先頭の送信機120は、予め定められた周波数(キャリア周波数)にしたがって1つまたは複数の発光素子の輝度を変化させることによって、光IDを送信する。さらに、先頭の送信機120は、その輝度の変化を示す信号を同期信号として後続の送信機120に出力する。後続の送信機120は、その同期信号を受けると、その同期信号にしたがって1つまたは複数の発光素子の輝度を変化させることによって、光IDを送信する。さらに、後続の送信機120は、その輝度の変化を示す信号を同期信号として次の後続の送信機120に出力する。これにより、送信システムに含まれる全ての送信機120は、同期して光IDを送信する。 This transmission system includes a plurality of transmitters 120 arranged in a predetermined order. Similar to the transmitter 100, these transmitters 120 are transmitters according to any one of the above-described embodiments 1 to 22, and include one or a plurality of light emitting elements (for example, LEDs). The leading transmitter 120 transmits the optical ID by changing the luminance of one or a plurality of light emitting elements according to a predetermined frequency (carrier frequency). Further, the first transmitter 120 outputs a signal indicating the change in luminance to the subsequent transmitter 120 as a synchronization signal. When the subsequent transmitter 120 receives the synchronization signal, it transmits the optical ID by changing the luminance of one or more light emitting elements in accordance with the synchronization signal. Further, the subsequent transmitter 120 outputs a signal indicating the change in luminance to the subsequent subsequent transmitter 120 as a synchronization signal. Thereby, all the transmitters 120 included in the transmission system transmit the optical ID in synchronization.
Here, the synchronization signal is passed from the leading transmitter 120 to the following transmitter 120, and from that transmitter to the next one, until it reaches the last transmitter 120. Passing the synchronization signal on takes, for example, about 1 μs per transmitter. Therefore, if the transmission system includes N transmitters 120 (where N is an integer of 2 or more), it takes 1 × N μs for the synchronization signal to travel from the leading transmitter 120 to the last transmitter 120. As a result, the transmission timing of the light ID is shifted by up to N μs. For example, even if the N transmitters 120 transmit the light ID according to a frequency of 9.6 kHz and the receiver 200 attempts to receive the light ID at a frequency of 9.6 kHz, the receiver 200 receives a light ID shifted by up to N μs and therefore may not be able to receive it correctly.
Therefore, in the present embodiment, the leading transmitter 120 transmits the light ID at a slightly higher rate according to the number of transmitters 120 included in the transmission system. For example, the leading transmitter 120 transmits the light ID according to a frequency of 9.605 kHz, while the receiver 200 receives the light ID at a frequency of 9.6 kHz. In this case, even if the receiver 200 receives a light ID shifted by N μs, the frequency of the leading transmitter 120 is 0.005 kHz higher than the frequency of the receiver 200, so the occurrence of reception errors caused by that shift can be suppressed.
The leading transmitter 120 may also control the amount of frequency adjustment by having the last transmitter 120 feed the synchronization signal back to it. For example, the leading transmitter 120 measures the time from when it outputs the synchronization signal until it receives the synchronization signal fed back from the last transmitter 120. The longer that time is, the higher the frequency, relative to the reference frequency (for example, 9.6 kHz), at which the leading transmitter 120 transmits the light ID.
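This compensation can be pictured as a simple linear adjustment. The proportionality constant below is an illustrative assumption consistent with the orders of magnitude in the text (about 1 μs of delay per transmitter and an offset on the order of 0.005 kHz); it is not a value taken from the patent.

```python
# Hedged sketch: raise the carrier frequency in proportion to the measured round-trip delay.
BASE_FREQUENCY_KHZ = 9.6

def adjusted_frequency_khz(round_trip_delay_us: float,
                           khz_per_us: float = 1.25e-5) -> float:
    """The longer the synchronization signal takes to come back, the higher the frequency."""
    return BASE_FREQUENCY_KHZ + khz_per_us * round_trip_delay_us
```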
 図261は、本実施の形態における複数の送信機および受信機を含む送信システムの一例を示す図である。 FIG. 261 is a diagram illustrating an example of a transmission system including a plurality of transmitters and receivers in the present embodiment.
 この送信システムは、例えば2つの送信機120と受信機200とを備えている。2つの送信機120のうちの一方の送信機120は、9.599kHzの周波数にしたがって光IDを送信する。他方の送信機120は、9.601kHzの周波数にしたがって光IDを送信する。このような場合、2つの送信機120はそれぞれ、自らの光IDの周波数を電波信号で受信機200に通知する。 This transmission system includes, for example, two transmitters 120 and a receiver 200. One of the two transmitters 120 transmits an optical ID according to a frequency of 9.599 kHz. The other transmitter 120 transmits an optical ID according to a frequency of 9.601 kHz. In such a case, each of the two transmitters 120 notifies the receiver 200 of the frequency of its own optical ID with a radio wave signal.
 受信機200は、それらの周波数の通知を受けると、通知された周波数のそれぞれにしたがった復号を試みる。つまり、受信機200は、9.599kHzの周波数にしたがって、復号用画像に対する復号を試み、これにより光IDが受信できなければ、9.601kHzの周波数にしたがって、復号用画像に対する復号を試みる。このように、受信機200は、通知された全ての周波数のそれぞれにしたがって、復号用画像に対する復号を試みる。言い換えれば、受信機200は、通知されたそれぞれの周波数に対して総当たりを行う。または、受信機200は、通知された全ての周波数の平均周波数にしたがった復号を試みてもよい。つまり、受信機200は、9.599kHzと9.601kHzとの平均周波数である9.6kHzにしたがった復号を試みる。 When the receiver 200 receives the notification of those frequencies, the receiver 200 tries to perform decoding according to each of the notified frequencies. That is, the receiver 200 attempts to decode the decoding image according to the frequency of 9.599 kHz. If the optical ID cannot be received by this, the receiver 200 attempts to decode the decoding image according to the frequency of 9.601 kHz. As described above, the receiver 200 attempts to decode the decoding image according to each of all the notified frequencies. In other words, the receiver 200 performs brute force for each notified frequency. Alternatively, the receiver 200 may attempt decoding according to the average frequency of all the notified frequencies. That is, the receiver 200 attempts decoding according to 9.6 kHz, which is an average frequency of 9.599 kHz and 9.601 kHz.
 これにより、受信機200と送信機120とのそれぞれの周波数の違いによる受信エラーの発生率を低下させることができる。 Thereby, it is possible to reduce the occurrence rate of reception errors due to the difference in frequency between the receiver 200 and the transmitter 120.
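The brute-force and averaging strategies of FIG. 261 could be sketched as below; try_decode stands in for the receiver's decoding routine and is assumed to return None on failure.

```python
# Hedged sketch of decoding with the frequencies notified by radio.
def decode_with_notified_frequencies(decode_image, notified_khz, try_decode,
                                     use_average=False):
    if use_average:                                   # e.g. (9.599 + 9.601) / 2 = 9.6 kHz
        return try_decode(decode_image, sum(notified_khz) / len(notified_khz))
    for freq in notified_khz:                         # brute force: try each frequency in turn
        light_id = try_decode(decode_image, freq)
        if light_id is not None:
            return light_id
    return None
```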
 図262Aは、本実施の形態における受信機200の処理動作の一例を示すフローチャートである。 FIG. 262A is a flowchart illustrating an example of processing operations of the receiver 200 in the present embodiment.
 受信機200は、まず、撮像を開始して(ステップS151)、パラメータNを1に初期化する(ステップS152)。次に、受信機200は、その撮像によって得られた復号用画像を、パラメータNに対応する周波数にしたがって復号し、その復号結果に対する評価値を算出する(ステップS153)。例えば、パラメータN=1、2、3、4、5のそれぞれには、9.6kHz、9.601kHz、9.599kHz、9.602kHzなどの周波数が予め対応付けられている。評価値は、復号結果が正しい光IDに類似しているほど高い数値を示す。 First, the receiver 200 starts imaging (step S151) and initializes the parameter N to 1 (step S152). Next, the receiver 200 decodes the decoding image obtained by the imaging according to the frequency corresponding to the parameter N, and calculates an evaluation value for the decoding result (step S153). For example, parameters N = 1, 2, 3, 4, 5 are associated with frequencies such as 9.6 kHz, 9.601 kHz, 9.599 kHz, and 9.602 kHz in advance. The evaluation value indicates a higher numerical value as the decoding result is more similar to the correct optical ID.
Next, the receiver 200 determines whether the value of the parameter N is equal to Nmax, a predetermined integer of 1 or more (step S154). If the receiver 200 determines that N is not equal to Nmax (N in step S154), it increments the parameter N (step S155) and repeats the processing from step S153. On the other hand, if the receiver 200 determines that N is equal to Nmax (Y in step S154), it registers the frequency for which the highest evaluation value was calculated on the server as the optimum frequency, in association with location information indicating the location of the receiver 200. The optimum frequency and location information registered in this way are later used for receiving light IDs by receivers 200 that have moved to the location indicated by that location information. The location information may be, for example, information indicating a position measured by GPS, or identification information of an access point in a wireless LAN (Local Area Network) (for example, an SSID: Service Set Identifier).
 また、サーバへの登録を行った受信機200は、その最適周波数による復号によって得られた光IDにしたがって、例えば上述のようなAR画像の表示を行う。 In addition, the receiver 200 that has registered with the server displays, for example, the AR image as described above according to the optical ID obtained by decoding at the optimum frequency.
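The search of FIG. 262A might be written as the sketch below. The candidate frequency list, the evaluate function, and the registration call are assumptions standing in for the receiver's internal processing and the server interface.

```python
# Hedged sketch of FIG. 262A: evaluate each candidate frequency and register the best one.
CANDIDATE_KHZ = [9.6, 9.601, 9.599, 9.602]    # assumed candidates for N = 1..Nmax

def find_and_register_optimum(decode_image, evaluate, register_on_server, location):
    best_freq, best_score = None, float("-inf")
    for freq in CANDIDATE_KHZ:
        score = evaluate(decode_image, freq)   # decode and score the result (step S153)
        if score > best_score:
            best_freq, best_score = freq, score
    register_on_server(location, best_freq)    # associate the optimum frequency with the location
    return best_freq
```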
 図262Bは、本実施の形態における受信機200の処理動作の一例を示すフローチャートである。 FIG. 262B is a flowchart illustrating an example of processing operations of the receiver 200 in the present embodiment.
 図262Aに示すサーバへの登録が行われた後、受信機200は、自らが存在する場所を示す場所情報をサーバに送信する(ステップS161)。次に、受信機200は、その場所情報に対応付けて登録されている最適周波数をそのサーバから取得する(ステップS162)。 After registration with the server shown in FIG. 262A is performed, the receiver 200 transmits location information indicating a location where the receiver 200 exists to the server (step S161). Next, the receiver 200 acquires the optimum frequency registered in association with the location information from the server (step S162).
 次に、受信機200は、撮像を開始し(ステップS163)、その撮像によって得られた復号用画像を、ステップS162で取得した最適周波数にしたがって復号する(ステップS164)。受信機200は、この復号によって得られた光IDにしたがって、例えば上述のようなAR画像の表示を行う。 Next, the receiver 200 starts imaging (step S163), and decodes the decoding image obtained by the imaging according to the optimum frequency acquired in step S162 (step S164). The receiver 200 displays an AR image as described above, for example, according to the optical ID obtained by this decoding.
 このように、サーバへの登録が行われた後では、受信機200は、図262Aに示す処理を実行することなく、最適周波数を取得して光IDを受信することができる。なお、受信機200は、ステップS162において最適周波数を取得することができなかったときに、図262Aに示す処理を実行することによって最適周波数を取得してもよい。 Thus, after registration to the server is performed, the receiver 200 can acquire the optimum frequency and receive the optical ID without executing the processing shown in FIG. 262A. Note that the receiver 200 may acquire the optimum frequency by executing the process illustrated in FIG. 262A when the optimum frequency cannot be obtained in step S162.
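The retrieval flow of FIG. 262B, including the fallback just mentioned, could be sketched as follows; all function parameters are assumptions.

```python
# Hedged sketch of FIG. 262B with a fallback to the FIG. 262A search.
def decode_at_location(location, query_optimum_frequency, capture_decode_image,
                       try_decode, search_fallback):
    freq = query_optimum_frequency(location)        # steps S161-S162
    decode_image = capture_decode_image()           # step S163
    if freq is None:
        freq = search_fallback(decode_image)        # rerun the FIG. 262A search
    return try_decode(decode_image, freq)           # step S164
```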
[Summary of Embodiment 23]
FIG. 263A is a flowchart illustrating a display method in this embodiment.
 本実施の形態における表示方法は、上述の受信機200である表示装置が画像を表示する表示方法であって、ステップSL11~SL16を含む。 The display method in the present embodiment is a display method in which the display device that is the above-described receiver 200 displays an image, and includes steps SL11 to SL16.
 ステップSL11では、イメージセンサが被写体を撮像することによって撮像表示画像および復号用画像を取得する。ステップSL12では、その復号用画像に対する復号によって光IDを取得する。ステップSL13では、その光IDをサーバに送信する。ステップSL14では、その光IDに対応するAR画像と認識情報とをサーバから取得する。ステップSL15では、撮像表示画像のうち、認識情報に応じた領域を対象領域として認識する。ステップSL16では、対象領域にAR画像が重畳された撮像表示画像を表示する。 In step SL11, the image sensor captures an image of the subject to acquire a captured display image and a decoding image. In step SL12, the optical ID is acquired by decoding the decoding image. In step SL13, the optical ID is transmitted to the server. In step SL14, the AR image corresponding to the optical ID and the recognition information are acquired from the server. In step SL15, an area corresponding to the recognition information in the captured display image is recognized as a target area. In step SL16, the captured display image in which the AR image is superimposed on the target area is displayed.
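Steps SL11 to SL16 can be read as a single pipeline, roughly as in the sketch below; all helper names are assumptions standing in for the display device's internal processing.

```python
# Hedged sketch of the display method of FIG. 263A (steps SL11-SL16).
def display_ar(image_sensor, server, display,
               decode_light_id, recognize_target_area, overlay):
    captured, decode_image = image_sensor.capture()                  # SL11
    light_id = decode_light_id(decode_image)                         # SL12
    ar_image, recognition_info = server.query(light_id)              # SL13 / SL14
    target_area = recognize_target_area(captured, recognition_info)  # SL15
    display.show(overlay(captured, ar_image, target_area))           # SL16
```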
 これにより、AR画像が撮像表示画像に重畳されて表示されるため、ユーザに有益な画像を表示することができる。さらに、処理負荷を抑えて適切な対象領域にAR画像を重畳することができる。 Thereby, since the AR image is displayed superimposed on the captured display image, an image useful for the user can be displayed. Furthermore, it is possible to superimpose an AR image on an appropriate target area while suppressing the processing load.
That is, in general augmented reality (AR), an enormous number of prestored recognition target images are compared with the captured display image to determine whether the captured display image contains any of those recognition target images. If it is determined that a recognition target image is contained, the AR image corresponding to that recognition target image is superimposed on the captured display image, and the AR image is aligned with reference to the recognition target image. Thus, general augmented reality has the problem that the amount of computation is large and the processing load is high, because an enormous number of recognition target images must be compared with the captured display image and, in addition, the position of the recognition target image within the captured display image must be detected for alignment.
 しかし、本実施の形態にける表示方法では、図235~図262Bにも示すように、被写体の撮像によって得られる復号用画像を復号することによって光IDが取得される。つまり、被写体である送信機から送信された光IDが受信される。さらに、この光IDに対応するAR画像と認識情報とがサーバから取得される。したがって、サーバでは、膨大な数の認識対象画像と撮像表示画像とを比較する必要がなく、光IDに予め対応付けられているAR画像を選択して表示装置に送信することができる。これにより、計算量を減らして処理負荷を大幅に抑えることができる。 However, in the display method according to the present embodiment, as shown in FIGS. 235 to 262B, the light ID is acquired by decoding the decoding image obtained by imaging the subject. That is, the optical ID transmitted from the transmitter that is the subject is received. Furthermore, the AR image corresponding to this optical ID and the recognition information are acquired from the server. Therefore, the server does not need to compare a huge number of recognition target images and captured display images, and can select and transmit an AR image previously associated with the optical ID to the display device. Thereby, the amount of calculation can be reduced and the processing load can be significantly suppressed.
 また、本実施の形態における表示方法では、この光IDに対応する認識情報がサーバから取得される。認識情報は、撮像表示画像においてAR画像が重畳される領域である対象領域を認識するための情報である。この認識情報は、例えば白い四角形が対象領域であることを示す情報であってもよい。この場合には、対象領域を簡単に認識することができ、処理負荷をさらに抑えることができる。つまり、認識情報の内容に応じて、処理負荷をさらに抑えることができる。また、サーバでは、光IDに応じてその認識情報の内容を任意に設定することができるため、処理負荷と認識精度とのバランスを適切に保つことができる。 Further, in the display method in the present embodiment, the recognition information corresponding to this light ID is acquired from the server. The recognition information is information for recognizing a target area that is an area in which an AR image is superimposed in a captured display image. This recognition information may be information indicating that a white square is the target area, for example. In this case, the target area can be easily recognized, and the processing load can be further suppressed. That is, the processing load can be further suppressed according to the content of the recognition information. Further, since the server can arbitrarily set the content of the recognition information according to the optical ID, the balance between the processing load and the recognition accuracy can be appropriately maintained.
Here, the recognition information may be reference information for specifying a reference area in the captured display image. In recognizing the target area, the reference area is specified from the captured display image based on the reference information, and the target area is then recognized in the captured display image from the position of that reference area.
Alternatively, the recognition information may include reference information for specifying a reference area in the captured display image and target information indicating the relative position of the target area with respect to that reference area. In this case, in recognizing the target area, the reference area is specified from the captured display image based on the reference information, and the area of the captured display image located at the relative position indicated by the target information, taking the position of the reference area as a reference, is recognized as the target area.
 これにより、図244および図245に示すように、撮像表示画像において認識される対象領域の位置の自由度を広げることができる。 Thereby, as shown in FIGS. 244 and 245, the degree of freedom of the position of the target area recognized in the captured display image can be expanded.
The reference information may indicate that the position of the reference area in the captured display image is the same as the position, in the decoding image, of the bright line pattern area consisting of the plurality of bright lines that appear through the exposure of the plurality of exposure lines of the image sensor.
 これにより、図244および図245に示すように、撮像表示画像における輝線パターン領域に対応する領域を基準にして対象領域を認識することができる。 Thereby, as shown in FIGS. 244 and 245, the target area can be recognized with reference to the area corresponding to the bright line pattern area in the captured display image.
 また、基準情報は、撮像表示画像における基準領域が、撮像表示画像のうちのディスプレイが映し出されている領域であることを示してもよい。 Further, the reference information may indicate that the reference area in the captured display image is an area in which the display of the captured display image is displayed.
 これにより、図235に示すように、例えば駅名標をディスプレイとすれば、そのディスプレイが映し出されている領域を基準にして対象領域を認識することができる。 Thus, as shown in FIG. 235, for example, if a station name sign is used as a display, the target area can be recognized with reference to the area where the display is displayed.
Further, in displaying the captured display image, the first AR image, which is the AR image described above, may be displayed for a predetermined display period while display of a second AR image different from the first AR image is suppressed.
Thus, as shown in FIG. 250, while the user is looking at the first AR image that has been displayed, the first AR image can be prevented from being immediately replaced by a different second AR image.
Further, in displaying the captured display image, decoding of newly acquired decoding images may be prohibited during the display period.
As shown in FIG. 250, decoding a newly acquired decoding image is wasted work while display of the second AR image is suppressed, so prohibiting that decoding reduces power consumption.
Further, in displaying the captured display image, the acceleration of the display device may be measured by an acceleration sensor during the display period, and it may be determined whether the measured acceleration is greater than or equal to a threshold. When the acceleration is determined to be greater than or equal to the threshold, the suppression of the display of the second AR image may be lifted, and the second AR image may be displayed instead of the first AR image.
Thus, as shown in FIG. 250, the suppression of the display of the second AR image is lifted when an acceleration of the display device at or above the threshold is measured. Accordingly, when the user moves the display device sharply to point the image sensor at another subject, for example, the second AR image can be displayed immediately.
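The display-period control described in the preceding paragraphs could be pictured, under loose assumptions, as the small helper below. The class name, the 5-second period, and the acceleration threshold value are illustrative only; the embodiment does not specify them.

```python
import time

class ArDisplayGuard:
    """Keeps the currently displayed (first) AR image on screen for a fixed
    period, suppressing replacement by a different AR image and skipping
    decoding, unless a large device acceleration is measured."""

    def __init__(self, display_period_s: float = 5.0, accel_threshold: float = 3.0):
        self.display_period_s = display_period_s  # assumed value
        self.accel_threshold = accel_threshold    # assumed value
        self.displayed_at = None  # time the first AR image was shown

    def on_ar_displayed(self) -> None:
        self.displayed_at = time.monotonic()

    def in_display_period(self) -> bool:
        return (self.displayed_at is not None and
                time.monotonic() - self.displayed_at < self.display_period_s)

    def may_decode(self) -> bool:
        # Decoding newly acquired decoding images is skipped during the period.
        return not self.in_display_period()

    def may_replace(self, acceleration: float) -> bool:
        # Replacement is allowed once the period ends, or immediately when the
        # measured acceleration reaches the threshold (the user moved the device).
        return (not self.in_display_period()) or acceleration >= self.accel_threshold
```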
Further, in displaying the captured display image, whether the user's face is approaching the display device may be determined from imaging by a face camera provided in the display device. When it is determined that the face is approaching, the first AR image may be displayed while display of a second AR image different from the first AR image is suppressed. Alternatively, whether the user's face is approaching the display device may be determined from the acceleration of the display device measured by the acceleration sensor, and when it is determined that the face is approaching, the first AR image may be displayed while display of the second AR image is suppressed.
Thus, as shown in FIG. 250, while the user is bringing the face close to the display device in order to look at the first AR image, the first AR image can be prevented from being replaced by a different second AR image.
Further, as shown in FIG. 254, in acquiring the captured display image and the decoding image, these images may be acquired by imaging, as the subject, a plurality of displays each showing an image. In this case, in recognizing the target region, the region of the captured display image in which the transmitting display, that is, the display among the plurality of displays that is transmitting the light ID, appears is recognized as the target region. Then, in displaying the captured display image, a first subtitle corresponding to the image shown on the transmitting display is superimposed on the target region as the AR image, and a second subtitle, which is an enlarged version of the first subtitle, is further superimposed on a region of the captured display image larger than the target region.
Since the first subtitle is superimposed on the image of the transmitting display, the user can easily grasp which of the plurality of displays the subtitle belongs to. In addition, since the second subtitle, an enlarged version of the first subtitle, is also displayed, the subtitle remains easy to read even when the first subtitle is too small to read comfortably.
Further, in displaying the captured display image, it may additionally be determined whether the information acquired from the server includes audio information, and when it is determined to be included, the audio indicated by that audio information may be output with priority over the first and second subtitles.
Since the audio is output preferentially, the burden on the user of reading subtitles can be reduced.
FIG. 263B is a block diagram illustrating the configuration of the display device in this embodiment.
The display device 10 in this embodiment is a display device that displays images, and includes an image sensor 11, a decoding unit 12, a transmission unit 13, an acquisition unit 14, a recognition unit 15, and a display unit 16. The display device 10 corresponds to the receiver 200 described above.
The image sensor 11 acquires a captured display image and a decoding image by imaging a subject. The decoding unit 12 acquires a light ID by decoding the decoding image. The transmission unit 13 transmits the light ID to a server. The acquisition unit 14 acquires, from the server, the AR image and the recognition information corresponding to the light ID. The recognition unit 15 recognizes, as the target region, the region of the captured display image that corresponds to the recognition information. The display unit 16 displays the captured display image with the AR image superimposed on the target region.
Since the AR image is displayed superimposed on the captured display image, an image useful to the user can be displayed, and the AR image can be superimposed on an appropriate target region while the processing load is suppressed.
In the present embodiment, each component may be implemented by dedicated hardware or realized by executing a software program suited to that component. Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. Here, the software that realizes the receiver 200 or the display device 10 of the present embodiment is a program that causes a computer to execute the steps included in the flowcharts shown in FIGS. 239, 246, 250, 256, 259, and 262A to 263A.
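Purely to illustrate how the components of FIG. 263B line up into a single software flow, the following sketch strings the steps together as placeholder callables. None of the function names or signatures come from the embodiment; the server protocol and the decoding itself are deliberately left abstract.

```python
from typing import Callable, Tuple

def display_once(
    capture: Callable[[], Tuple[object, object]],        # image sensor 11: () -> (captured display image, decoding image)
    decode: Callable[[object], str],                      # decoding unit 12: decoding image -> light ID
    query_server: Callable[[str], Tuple[object, dict]],   # transmission/acquisition units 13 and 14: light ID -> (AR image, recognition info)
    recognize: Callable[[object, dict], object],          # recognition unit 15: (captured image, recognition info) -> target region
    render: Callable[[object, object, object], None],     # display unit 16: superimpose the AR image on the target region and show it
) -> None:
    captured, for_decoding = capture()
    light_id = decode(for_decoding)
    ar_image, recognition_info = query_server(light_id)
    target_region = recognize(captured, recognition_info)
    render(captured, ar_image, target_region)
```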
[Modification 1 of Embodiment 23]
Hereinafter, Modification 1 of Embodiment 23, that is, a first modification of the display method that realizes AR using a light ID, will be described.
FIG. 264 is a diagram illustrating an example in which the receiver in Modification 1 of Embodiment 23 displays an AR image.
The receiver 200 acquires, by imaging a subject with its image sensor, the captured display image Pk, which is the normal captured image described above, and a decoding image, which is the visible light communication image or bright line image described above.
Specifically, the image sensor of the receiver 200 images a transmitter 100c configured as a robot and a person 21 standing next to the transmitter 100c. The transmitter 100c is the transmitter of any one of Embodiments 1 to 22 and includes one or more light emitting elements (for example, LEDs) 131. The transmitter 100c changes in luminance by blinking the one or more light emitting elements 131 and transmits a light ID (light identification information) through that luminance change. This light ID is the visible light signal described above.
The receiver 200 acquires the captured display image Pk showing the transmitter 100c and the person 21 by imaging them with the normal exposure time, and further acquires the decoding image by imaging them with a communication exposure time shorter than the normal exposure time.
The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the transmitter 100c. The receiver 200 transmits the light ID to a server and acquires, from the server, the AR image P10 and the recognition information corresponding to that light ID. The receiver 200 recognizes, as the target region, the region of the captured display image Pk that corresponds to the recognition information. For example, the receiver 200 recognizes the region to the right of the region in which the robot, the transmitter 100c, appears as the target region. Specifically, the receiver 200 determines the distance between the two markers 132a and 132b of the transmitter 100c shown in the captured display image Pk and recognizes a region whose width and height correspond to that distance as the target region. In other words, the recognition information indicates the shapes of the markers 132a and 132b and the position and size of the target region relative to those markers.
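A minimal sketch of this marker-based recognition, under assumed values, might look as follows. The placement of the target region to the right of the markers and the proportionality constants width_ratio and height_ratio are illustrative assumptions standing in for whatever the recognition information actually specifies.

```python
import math
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def target_region_from_markers(m_a: Point, m_b: Point,
                               width_ratio: float = 2.0,
                               height_ratio: float = 3.0):
    """Return (x, y, w, h) of a target region placed to the right of the
    two detected markers, sized in proportion to the marker spacing."""
    d = math.hypot(m_b.x - m_a.x, m_b.y - m_a.y)  # marker spacing in pixels
    w = width_ratio * d
    h = height_ratio * d
    x = max(m_a.x, m_b.x) + d        # start one spacing to the right of the markers
    y = min(m_a.y, m_b.y) - h / 2    # vertically centred on the markers
    return (x, y, w, h)

print(target_region_from_markers(Point(400, 300), Point(460, 300)))
```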
The receiver 200 then superimposes the AR image P10 on the target region and displays, on the display 201, the captured display image Pk with the AR image P10 superimposed. For example, the receiver 200 acquires an AR image P10 showing another robot different from the transmitter 100c. In this case, since the AR image P10 is superimposed on the target region of the captured display image Pk, the captured display image Pk can be displayed as if the other robot actually existed next to the transmitter 100c. As a result, the person 21 can appear in a photograph together with the transmitter 100c and the other robot even though the other robot does not actually exist.
FIG. 265 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 23 displays an AR image.
The transmitter 100 is configured as an image display device having a display panel, as shown in FIG. 265, for example, and transmits a light ID by changing in luminance while showing a still image PS on that display panel. The display panel is, for example, a liquid crystal display or an organic EL (electroluminescence) display.
By imaging the transmitter 100, the receiver 200 acquires a captured display image Pm and a decoding image in the same manner as above. The receiver 200 acquires the light ID by decoding the decoding image; that is, it receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to the server and acquires, from the server, the AR image P11 and the recognition information corresponding to that light ID. The receiver 200 recognizes, as the target region, the region of the captured display image Pm that corresponds to the recognition information, for example the region in which the display panel of the transmitter 100 appears. The receiver 200 then superimposes the AR image P11 on the target region and displays, on the display 201, the captured display image Pm with the AR image P11 superimposed. For example, the AR image P11 is a moving image whose first picture in display order is identical or substantially identical to the still image PS shown on the display panel of the transmitter 100. That is, the AR image P11 is a moving image that starts moving from the still image PS.
In this case, since the AR image P11 is superimposed on the target region of the captured display image Pm, the receiver 200 can display the captured display image Pm as if an image display device showing a moving image actually existed.
FIG. 266 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 23 displays an AR image.
The transmitter 100 is configured as a station name sign, as shown in FIG. 266, for example, and transmits a light ID by changing in luminance.
As shown in (a) of FIG. 266, the receiver 200 images the transmitter 100 from a position away from the transmitter 100, thereby acquiring a captured display image Pn and a decoding image in the same manner as above. The receiver 200 acquires the light ID by decoding the decoding image; that is, it receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to the server and acquires, from the server, the AR images P12 to P14 and the recognition information corresponding to that light ID. The receiver 200 recognizes, as first and second target regions, two regions of the captured display image Pn that correspond to the recognition information. For example, the receiver 200 recognizes the region around the transmitter 100 as the first target region, superimposes the AR image P12 on that first target region, and displays, on the display 201, the captured display image Pn with the AR image P12 superimposed. The AR image P12 is, for example, an arrow prompting the user of the receiver 200 to approach the transmitter 100.
In this case, since the AR image P12 is displayed superimposed on the first target region of the captured display image Pn, the user approaches the transmitter 100 while keeping the receiver 200 pointed at it. As the receiver 200 approaches the transmitter 100, the region of the captured display image Pn in which the transmitter 100 appears (corresponding to the reference region described above) grows. When the size of that region reaches or exceeds a first threshold, the receiver 200 further superimposes the AR image P13 on the second target region, that is, the region in which the transmitter 100 appears, as shown in (b) of FIG. 266, and displays, on the display 201, the captured display image Pn with the AR images P12 and P13 superimposed. The AR image P13 is, for example, a message informing the user of an overview of the area around the station indicated by the station name sign, and is equal in size to the region of the captured display image Pn in which the transmitter 100 appears.
In this case as well, since the arrow AR image P12 is displayed superimposed on the first target region of the captured display image Pn, the user approaches the transmitter 100 while keeping the receiver 200 pointed at it, and the region of the captured display image Pn in which the transmitter 100 appears (corresponding to the reference region described above) grows further. When the size of that region reaches or exceeds a second threshold, the receiver 200 changes the AR image P13 superimposed on the second target region to the AR image P14, as shown in (c) of FIG. 266, and deletes the AR image P12 superimposed on the first target region.
That is, the receiver 200 displays, on the display 201, the captured display image Pn with the AR image P14 superimposed. The AR image P14 is, for example, a message informing the user of the details of the area around the station indicated by the station name sign, and is equal in size to the region of the captured display image Pn in which the transmitter 100 appears. That region is larger the closer the receiver 200 is to the transmitter 100; accordingly, the AR image P14 is larger than the AR image P13.
In this way, the closer the receiver 200 comes to the transmitter 100, the larger the AR image it shows and the more information it displays. Moreover, because an arrow such as the AR image P12 prompting the user to approach is displayed, the user can easily understand that more information will be shown on approaching the transmitter 100.
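One way to picture this distance-dependent behaviour is the selection function below. The use of an area ratio and the two threshold values are invented for illustration; the embodiment only states that the size of the reference region is compared against a first and a second threshold.

```python
def choose_ar_images(reference_area: float, frame_area: float,
                     first_threshold: float = 0.05,
                     second_threshold: float = 0.2):
    """Pick which AR images to show in the station-sign example.

    reference_area is the area of the region in which the transmitter (the
    station name sign) appears, frame_area the area of the captured display
    image. The threshold values are illustrative only.
    """
    ratio = reference_area / frame_area
    if ratio >= second_threshold:
        return ["P14"]          # detailed message only, arrow removed
    if ratio >= first_threshold:
        return ["P12", "P13"]   # arrow plus overview message
    return ["P12"]              # arrow prompting the user to approach
```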
FIG. 267 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 23 displays an AR image.
In the example shown in FIG. 266, the receiver 200 displays more information as it approaches the transmitter 100, but it may instead display that information, for example in the form of a speech balloon, regardless of its distance from the transmitter 100.
Specifically, as shown in FIG. 267, the receiver 200 acquires a captured display image Po and a decoding image by imaging the transmitter 100 in the same manner as above. The receiver 200 acquires the light ID by decoding the decoding image; that is, it receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to the server and acquires, from the server, the AR image P15 and the recognition information corresponding to that light ID. The receiver 200 recognizes, as the target region, the region of the captured display image Po that corresponds to the recognition information, for example the region around the transmitter 100. The receiver 200 then superimposes the AR image P15 on the target region and displays, on the display 201, the captured display image Po with the AR image P15 superimposed. The AR image P15 is, for example, a message in the form of a balloon informing the user of the details of the area around the station indicated by the station name sign.
In this case, since the AR image P15 is superimposed on the target region of the captured display image Po, the user of the receiver 200 can have the receiver 200 display a large amount of information without approaching the transmitter 100.
FIG. 268 is a diagram illustrating another example of the receiver 200 in Modification 1 of Embodiment 23.
Although the receiver 200 is configured as a smartphone in the examples above, it may be configured as a head mounted display (also called glass) equipped with an image sensor, as in the examples shown in FIGS. 19 to 21 and 258.
Such a receiver 200 acquires the light ID by decoding only a partial decoding target region of the decoding image. For example, the receiver 200 includes a line-of-sight detection camera 203, as shown in (a) of FIG. 268. The line-of-sight detection camera 203 images the eyes of the user wearing the head mounted display that is the receiver 200, and the receiver 200 detects the user's line of sight based on the eye image obtained by that imaging.
As shown in (b) of FIG. 268, the receiver 200 displays a line-of-sight frame 204 so that, for example, the frame appears in the part of the user's field of view toward which the detected line of sight is directed; the line-of-sight frame 204 therefore moves with the movement of the user's line of sight. The receiver 200 treats the region of the decoding image corresponding to the inside of the line-of-sight frame 204 as the decoding target region. That is, even when a bright line pattern region lies outside the decoding target region, the receiver 200 does not decode it and decodes only bright line pattern regions inside the decoding target region. Thus, even when the decoding image contains a plurality of bright line pattern regions, not all of them are decoded, so the processing load can be reduced and the display of unnecessary AR images can be suppressed.
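A rough sketch of restricting decoding to the line-of-sight frame is given below. It assumes, for illustration only, that the bright line pattern regions and the line-of-sight frame are available as axis-aligned rectangles in decoding-image coordinates and that a decode_region callable exists; neither assumption comes from the embodiment.

```python
def rect_intersects(a, b) -> bool:
    """a and b are (x, y, w, h) rectangles in decoding-image coordinates."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def decode_in_gaze(bright_line_regions, gaze_frame, decode_region):
    """Decode only the bright line pattern regions overlapping the gaze frame.

    bright_line_regions: list of (x, y, w, h) rectangles found in the decoding image
    gaze_frame: (x, y, w, h) rectangle corresponding to the user's line of sight
    decode_region: callable that decodes one region and returns a light ID
    """
    return [decode_region(r) for r in bright_line_regions
            if rect_intersects(r, gaze_frame)]
```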
When the decoding image contains a plurality of bright line pattern regions each intended to produce audio output, the receiver 200 may decode only the bright line pattern region inside the decoding target region and output only the audio corresponding to that region. Alternatively, the receiver 200 may decode each of the plurality of bright line pattern regions contained in the decoding image, output the audio corresponding to the bright line pattern region inside the decoding target region at a high volume, and output the audio corresponding to bright line pattern regions outside the decoding target region at a low volume. When there are a plurality of bright line pattern regions outside the decoding target region, the receiver 200 may output the audio corresponding to a bright line pattern region at a higher volume the closer that region is to the decoding target region.
FIG. 269 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 23 displays an AR image.
The transmitter 100 is configured as an image display device having a display panel, as shown in FIG. 269, for example, and transmits a light ID by changing in luminance while showing an image on that display panel.
By imaging the transmitter 100, the receiver 200 acquires a captured display image Pp and a decoding image in the same manner as above.
At this time, the receiver 200 identifies, in the captured display image Pp, the region that is at the same position as the bright line pattern region in the decoding image and has the same size as that bright line pattern region, and may display a scanning line P100 that moves repeatedly from one end of that region to the other.
While the scanning line P100 is displayed, the receiver 200 acquires the light ID by decoding the decoding image, transmits the light ID to the server, and acquires, from the server, the AR image and the recognition information corresponding to that light ID. The receiver 200 recognizes, as the target region, the region of the captured display image Pp that corresponds to the recognition information.
Once such a target region is recognized, the receiver 200 ends the display of the scanning line P100, superimposes the AR image on the target region, and displays, on the display 201, the captured display image Pp with the AR image superimposed.
Since the moving scanning line P100 is displayed from the moment the transmitter 100 is imaged until the AR image is displayed, the user can be informed that processing such as reading of the light ID and the AR image is in progress.
FIG. 270 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 23 displays an AR image.
Each of two transmitters 100 is configured as an image display device having a display panel, as shown in FIG. 270, for example, and transmits a light ID by changing in luminance while showing the same still image PS on its display panel. The two transmitters 100 change in luminance in mutually different manners and thereby transmit mutually different light IDs (for example, light IDs "01" and "02").
As in the example shown in FIG. 265, the receiver 200 acquires a captured display image Pq and a decoding image by imaging the two transmitters 100. The receiver 200 acquires the light IDs "01" and "02" by decoding the decoding image. That is, the receiver 200 receives the light ID "01" from one of the two transmitters 100 and the light ID "02" from the other. The receiver 200 transmits these light IDs to the server and acquires, from the server, the AR image P16 and the recognition information corresponding to the light ID "01" as well as the AR image P17 and the recognition information corresponding to the light ID "02".
The receiver 200 recognizes, as target regions, the regions of the captured display image Pq that correspond to those pieces of recognition information; for example, it recognizes the region in which each transmitter's display panel appears as a target region. The receiver 200 then superimposes the AR image P16 on the target region corresponding to the light ID "01" and the AR image P17 on the target region corresponding to the light ID "02", and displays, on the display 201, the captured display image Pq with the AR images P16 and P17 superimposed. For example, the AR image P16 is a moving image whose first picture in display order is identical or substantially identical to the still image PS shown on the display panel of the transmitter 100 corresponding to the light ID "01", and the AR image P17 is a moving image whose first picture in display order is identical or substantially identical to the still image PS shown on the display panel of the transmitter 100 corresponding to the light ID "02". That is, the first pictures of the moving images P16 and P17 are the same, but P16 and P17 are different moving images whose pictures other than the first differ from each other.
Since such AR images P16 and P17 are superimposed on the captured display image Pq, the receiver 200 can display the captured display image Pq as if image display devices actually existed that play back different moving images starting from the same picture.
FIG. 271 is a flowchart illustrating an example of the processing operation of the receiver 200 in Modification 1 of Embodiment 23. Specifically, the processing operation shown in the flowchart of FIG. 271 is an example of the processing operation of a receiver 200 that individually images two of the transmitters 100 shown in FIG. 265.
First, the receiver 200 acquires a first light ID by imaging the first transmitter 100 as a first subject (step S201). Next, the receiver 200 recognizes the first subject in the captured display image (step S202); that is, it acquires the first AR image and first recognition information corresponding to the first light ID from the server and recognizes the first subject based on the first recognition information. The receiver 200 then starts playback of the first moving image, which is the first AR image, from the beginning (step S203), that is, from the first picture of the first moving image.
The receiver 200 then determines whether the first subject has left the captured display image (step S204), that is, whether the first subject can no longer be recognized in the captured display image. If it determines that the first subject has left the captured display image (Y in step S204), the receiver 200 suspends playback of the first moving image, which is the first AR image (step S205).
Next, the receiver 200 determines whether it has acquired a second light ID, different from the first light ID acquired in step S201, by imaging a second transmitter 100 different from the first transmitter 100 as a second subject (step S206). If it determines that it has acquired the second light ID (Y in step S206), the receiver 200 performs the same processing as steps S202 and S203 performed after acquiring the first light ID. That is, the receiver 200 recognizes the second subject in the captured display image (step S207) and starts playback of the second moving image, which is the second AR image corresponding to the second light ID, from the beginning (step S208), that is, from the first picture of the second moving image.
If, on the other hand, the receiver 200 determines in step S206 that it has not acquired a second light ID (N in step S206), it determines whether the first subject has re-entered the captured display image (step S209), that is, whether the first subject has been recognized again in the captured display image. If it determines that the first subject has entered the captured display image (Y in step S209), the receiver 200 further determines whether a predetermined time has elapsed (step S210), that is, whether the predetermined time elapsed between the first subject leaving the captured display image and re-entering it. If it determines that the predetermined time has not elapsed (Y in step S210), the receiver 200 resumes playback of the suspended first moving image from partway through (step S211). The playback-resumption first picture, that is, the picture of the first moving image displayed first when playback resumes partway through, may be the picture next in display order after the picture displayed last when playback of the first moving image was suspended, or may be the picture n pictures (where n is an integer of 1 or more) earlier in display order than that last displayed picture.
If, on the other hand, it determines that the predetermined time has elapsed (N in step S210), the receiver 200 starts playback of the suspended first moving image from the beginning (step S212).
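The resume decision of steps S209 to S212 could be summarized, for illustration, by the helper below. The 10-second timeout and the rewind count are placeholder values standing in for the predetermined time and the integer n mentioned above.

```python
def resume_position(last_shown_index: int, elapsed_s: float,
                    timeout_s: float = 10.0, rewind: int = 0) -> int:
    """Index of the picture from which to restart the first moving image
    once its subject re-enters the captured display image.

    If more than timeout_s seconds passed while the subject was out of frame,
    playback restarts from the beginning; otherwise it resumes at the picture
    after the last one shown, optionally rewound by `rewind` pictures (the n
    of the description). The parameter values are illustrative.
    """
    if elapsed_s > timeout_s:
        return 0                                      # step S212: restart from the beginning
    return max(0, last_shown_index + 1 - rewind)      # step S211: resume partway through

print(resume_position(last_shown_index=120, elapsed_s=3.0, rewind=5))  # -> 116
print(resume_position(last_shown_index=120, elapsed_s=30.0))           # -> 0
```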
In the examples above the receiver 200 superimposes the AR image on the target region of the captured display image, and in doing so it may adjust the brightness of the AR image. That is, the receiver 200 determines whether the brightness of the AR image acquired from the server matches the brightness of the target region of the captured display image. If it determines that they do not match, the receiver 200 adjusts the brightness of the AR image to match the brightness of the target region and superimposes the brightness-adjusted AR image on the target region of the captured display image. This brings the superimposed AR image closer to the image of a real object and reduces any sense of incongruity the user feels toward the AR image. The brightness of the AR image here is its spatial average brightness, and the brightness of the target region is likewise the spatial average brightness of that region.
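A minimal sketch of such brightness matching, assuming 8-bit RGB images held as NumPy arrays, is shown below; the simple mean-based scaling is one possible adjustment and is not prescribed by the embodiment.

```python
import numpy as np

def match_brightness(ar_image: np.ndarray, target_region: np.ndarray) -> np.ndarray:
    """Scale the AR image so that its spatial mean brightness matches that of
    the target region. Both arguments are uint8 arrays of shape (H, W, 3)."""
    ar_mean = ar_image.mean()
    target_mean = target_region.mean()
    if ar_mean == 0:
        return ar_image  # avoid division by zero for an all-black AR image
    scaled = ar_image.astype(np.float32) * (target_mean / ar_mean)
    return np.clip(scaled, 0, 255).astype(np.uint8)
```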
As shown in FIG. 247, the receiver 200 may also enlarge an AR image to fill the display 201 when the AR image is tapped. In the example shown in FIG. 247 the receiver 200 switches a tapped AR image to another AR image, but it may switch AR images automatically without any tap. For example, when an AR image has been displayed for a predetermined length of time, the receiver 200 switches it to another AR image, and when the current time reaches a predetermined time, the receiver 200 switches the AR image displayed until then to another AR image. The user can thus easily see a new AR image without performing any operation.
[Modification 2 of Embodiment 23]
Hereinafter, Modification 2 of Embodiment 23, that is, a second modification of the display method that realizes AR using a light ID, will be described.
FIG. 272 is a diagram illustrating an example of a problem that can arise when the receiver 200 of Embodiment 23 or Modification 1 thereof displays an AR image.
For example, the receiver 200 of Embodiment 23 or Modification 1 thereof images a subject at time t1. The subject is a transmitter such as a television that transmits a light ID through luminance change, or a poster, guide board, signboard, or the like illuminated by light from such a transmitter. The receiver 200 then displays the entire image obtained from the effective pixel area of the image sensor (hereinafter referred to as the full captured image) on the display 201 as the captured display image. At this time, the receiver 200 recognizes, as the target region on which the AR image is to be superimposed, the region of the captured display image that corresponds to the recognition information acquired based on the light ID. The target region is, for example, a region showing the image of a transmitter such as a television or the image of a poster or the like. The receiver 200 then superimposes the AR image on the target region of the captured display image and displays the captured display image with the AR image superimposed on the display 201. The AR image may be a still image or a moving image, or a character string consisting of one or more characters or symbols.
Here, when the user of the receiver 200 approaches the subject in order to display the AR image at a larger size, the region of the image sensor corresponding to the target region (hereinafter referred to as the recognition region) extends beyond the effective pixel area at time t2. The recognition region is the region of the image sensor's effective pixel area onto which the image of the target region of the captured display image is projected. In other words, the effective pixel area and the recognition region of the image sensor correspond, respectively, to the captured display image and the target region on the display 201.
When the recognition region extends beyond the effective pixel area, the receiver 200 can no longer recognize the target region in the captured display image and can no longer display the AR image.
The receiver 200 in this modification therefore acquires, as the full captured image, an image with a wider angle of view than the captured display image shown on the whole of the display 201.
FIG. 273 is a diagram illustrating an example in which the receiver 200 in Modification 2 of Embodiment 23 displays an AR image.
In the receiver 200 according to this modification, the angle of view of the full captured image, that is, the angle of view of the effective pixel area of the image sensor, is wider than the angle of view of the captured display image shown on the whole of the display 201. The region of the image sensor corresponding to the image range shown on the display 201 is hereinafter referred to as the display region.
For example, the receiver 200 images a subject at time t1. Of the full captured image obtained from the effective pixel area of the image sensor, the receiver 200 displays on the display 201, as the captured display image, only the image obtained from the display region, which is narrower than the effective pixel area. At this time, as above, the receiver 200 recognizes, as the target region on which the AR image is to be superimposed, the region of the full captured image that corresponds to the recognition information acquired based on the light ID. The receiver 200 then superimposes the AR image on the target region of the captured display image and displays the captured display image with the AR image superimposed on the display 201.
Here, when the user of the receiver 200 approaches the subject in order to display the AR image at a larger size, the recognition region of the image sensor expands, and at time t2 it extends beyond the display region of the image sensor. That is, the image of the target region (for example, the image of a poster) extends beyond the captured display image shown on the display 201. However, the recognition region does not extend beyond the effective pixel area; in other words, even at time t2 the receiver 200 is still acquiring a full captured image that contains the target region. As a result, the receiver 200 can recognize the target region from the full captured image and displays, on the display 201, only the part of the AR image corresponding to the part of the target region that lies within the captured display image, superimposed on that part.
Thus, even when the user approaches the subject to display the AR image at a larger size and the target region extends beyond the captured display image, display of the AR image can be continued.
FIG. 274 is a flowchart illustrating an example of the processing operation of the receiver 200 in Modification 2 of Embodiment 23.
The receiver 200 acquires a full captured image and a decoding image by imaging a subject with its image sensor (step S301). Next, the receiver 200 acquires a light ID by decoding the decoding image (step S302) and transmits the light ID to the server (step S303). The receiver 200 then acquires, from the server, the AR image and the recognition information corresponding to that light ID (step S304), and recognizes, as the target region, the region of the full captured image that corresponds to the recognition information (step S305).
Here, the receiver 200 determines whether the recognition region, that is, the region of the image sensor's effective pixel area corresponding to the image of the target region, extends beyond the display region (step S306). If it determines that the recognition region extends beyond the display region (Yes in step S306), the receiver 200 displays only the part of the AR image corresponding to the part of the target region that lies within the captured display image, superimposed on that part (step S307). If it determines that the recognition region does not extend beyond the display region (No in step S306), the receiver 200 superimposes the AR image on the target region of the captured display image and displays the captured display image with the AR image superimposed (step S308).
The receiver 200 then determines whether the AR image display processing should be ended (step S309), and if it determines that the processing should not be ended (No in step S309), it repeats the processing from step S305.
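The flow of FIG. 274 might be rendered loosely as the loop below. All callables are placeholders; in addition, the sketch refreshes the full captured image on each pass, which the flowchart (which loops back only to step S305) does not state explicitly.

```python
def ar_display_loop(capture, decode, fetch, recognize, fits_in_display,
                    show_partial, show_full, should_stop):
    """Loose rendering of FIG. 274. All arguments are placeholder callables:
    capture() -> (full_image, decoding_image); decode(img) -> light_id;
    fetch(light_id) -> (ar_image, recognition_info);
    recognize(full_image, info) -> target_region;
    fits_in_display(target_region) -> True if the recognition region stays
    inside the display region."""
    full_image, decoding_image = capture()            # S301
    light_id = decode(decoding_image)                 # S302
    ar_image, info = fetch(light_id)                  # S303, S304
    while not should_stop():                          # S309
        target_region = recognize(full_image, info)   # S305
        if fits_in_display(target_region):            # S306: No branch
            show_full(ar_image, target_region)        # S308
        else:                                         # S306: Yes branch
            show_partial(ar_image, target_region)     # S307
        full_image, _ = capture()  # refresh the image on each pass (added assumption)
```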
FIG. 275 is a diagram illustrating another example in which the receiver 200 in Modification 2 of Embodiment 23 displays an AR image.
The receiver 200 may switch the screen display of the AR image according to the ratio of the size of the recognition region to the display region described above.
When the horizontal and vertical widths of the display region of the image sensor are w1 and h1, and the horizontal and vertical widths of the recognition region are w2 and h2, the receiver compares the larger of the ratios (h2/h1) and (w2/w1) with a threshold.
For example, while the receiver 200 is displaying the captured display image with the AR image superimposed on the target region, as in (screen display 1) of FIG. 275, it compares the larger ratio with a first threshold (for example, 0.9). When the larger ratio becomes 0.9 or more, the receiver 200 enlarges the AR image and displays it on the whole of the display 201, as in (screen display 2) of FIG. 275. The receiver 200 continues to display the enlarged AR image on the whole of the display 201 even when the recognition region becomes larger than the display region, and further when it becomes larger than the effective pixel area.
Also, while the receiver 200 is displaying the enlarged AR image on the whole of the display 201, as in (screen display 2) of FIG. 275, it compares the larger ratio with a second threshold (for example, 0.7), which is smaller than the first threshold. When the larger ratio becomes 0.7 or less, the receiver 200 displays the captured display image with the AR image superimposed on the target region, as in (screen display 1) of FIG. 275.
 図276は、実施の形態23の変形例2における受信機200の処理動作の他の例を示すフローチャートである。 FIG. 276 is a flowchart illustrating another example of the processing operation of the receiver 200 in the second modification of the twenty-third embodiment.
 受信機200は、まず、光ID処理を行う(ステップS301a)。この光ID処理は、図274に示すステップS301~S304を含む処理である。次に、受信機200は、撮像表示画像のうち、認識情報に応じた領域を対象領域として認識する(ステップS311)。そして、受信機200は、撮像表示画像の対象領域にAR画像を重畳し、そのAR画像が重畳された撮像表示画像を表示する(ステップS312)。 First, the receiver 200 performs optical ID processing (step S301a). This optical ID process is a process including steps S301 to S304 shown in FIG. Next, the receiver 200 recognizes an area corresponding to the recognition information in the captured display image as a target area (step S311). Then, the receiver 200 superimposes the AR image on the target area of the captured display image, and displays the captured display image on which the AR image is superimposed (step S312).
 次に、受信機200は、認識領域の比率、すなわち比率(h2/h1)および(w2/w1)のうちの大きい方の比率が第1の閾値K(例えばK=0.9)以上であるか否かを判定する(ステップS313)。ここで、第1の閾値K以上でないと判定すると(ステップS313のNo)、受信機200は、ステップS311からの処理を繰り返し実行する。一方、第1の閾値K以上であると判定すると(ステップS313のYes)、受信機200は、AR画像をディスプレイ201の全体に拡大して表示する(ステップS314)。このとき、受信機200は、イメージセンサの電源をオンとオフとに周期的に切り換える。イメージセンサの電源を周期的にオフにすることによって、受信機200の省電力化を図ることができる。 Next, the receiver 200 has a recognition area ratio, that is, a larger ratio of the ratios (h2 / h1) and (w2 / w1) is equal to or greater than a first threshold value K (for example, K = 0.9). It is determined whether or not (step S313). If it is determined that the value is not equal to or greater than the first threshold value K (No in step S313), the receiver 200 repeatedly executes the processing from step S311. On the other hand, if it determines with it being more than the 1st threshold value K (Yes of step S313), the receiver 200 will expand and display the AR image on the entire display 201 (step S314). At this time, the receiver 200 periodically switches the power supply of the image sensor between on and off. Power saving of the receiver 200 can be achieved by periodically turning off the power supply of the image sensor.
 次に、受信機200は、イメージセンサの電源が周期的にオンにされているときに、認識領域の比率が第2の閾値L(例えばL=0.7)以下であるか否かを判定する。ここで、第2の閾値L以下でないと判定すると(ステップS315のNo)、受信機200は、ステップS314からの処理を繰り返し実行する。一方、第2の閾値L以下であると判定すると(ステップS315のYes)、受信機200は、撮像表示画像の対象領域にAR画像を重畳し、そのAR画像が重畳された撮像表示画像を表示する(ステップS316)。 Next, the receiver 200 determines whether or not the recognition area ratio is equal to or smaller than a second threshold L (for example, L = 0.7) when the image sensor is periodically turned on. To do. Here, if it is determined that it is not equal to or less than the second threshold L (No in step S315), the receiver 200 repeatedly executes the processing from step S314. On the other hand, if it is determined that it is equal to or smaller than the second threshold value L (Yes in step S315), the receiver 200 superimposes the AR image on the target area of the captured display image and displays the captured display image on which the AR image is superimposed. (Step S316).
 そして、受信機200は、AR画像の表示処理を終了すべきか否かを判定し(ステップS317)、終了すべきでないと判定すると(ステップS317のNo)、ステップS313からの処理を繰り返し実行する。 Then, the receiver 200 determines whether or not to end the AR image display process (step S317). If the receiver 200 determines that the AR image display process should not be ended (No in step S317), the receiver 200 repeatedly executes the processes from step S313.
 このように、第2の閾値Lを第1の閾値Kよりも小さい値にしておくことによって、受信機200の画面表示が(画面表示1)と(画面表示2)とで頻繁に切り替えられることを防ぎ、画面表示の状態を安定化させることができる。 Thus, by setting the second threshold value L to be smaller than the first threshold value K, the screen display of the receiver 200 can be frequently switched between (screen display 1) and (screen display 2). Can be prevented and the state of the screen display can be stabilized.
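To make the two-threshold switching concrete, the following is a minimal Python sketch of the hysteresis loop of FIG. 276. The receiver methods (should_stop, recognition_region, display_region, show_overlay, show_fullscreen, set_sensor_power) are hypothetical placeholders for the receiver's camera, recognition, and display facilities; this is an illustration of the control flow, not the receiver's actual implementation.

```python
# Sketch of the two-threshold (hysteresis) display switching of FIG. 276.
# All receiver methods are hypothetical placeholders.

K = 0.9   # first threshold: switch to full-screen AR display
L = 0.7   # second threshold: switch back to superimposed display (L < K)

def recognition_ratio(recognition_region, display_region):
    """Larger of h2/h1 and w2/w1; regions are (height, width) tuples in pixels."""
    h1, w1 = display_region
    h2, w2 = recognition_region
    return max(h2 / h1, w2 / w1)

def run_display_loop(receiver):
    fullscreen = False
    while not receiver.should_stop():                     # corresponds to step S317
        ratio = recognition_ratio(receiver.recognition_region(),
                                  receiver.display_region())
        if not fullscreen:
            receiver.show_overlay()                       # steps S311-S312
            if ratio >= K:                                # step S313
                fullscreen = True
                receiver.show_fullscreen()                # step S314
                receiver.set_sensor_power(periodic=True)  # duty-cycle sensor to save power
        else:
            if ratio <= L:                                # step S315
                fullscreen = False
                receiver.set_sensor_power(periodic=False)
                receiver.show_overlay()                   # step S316
```

Because L is strictly smaller than K, small fluctuations of the ratio around either threshold do not toggle the display mode back and forth.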
In the examples shown in FIGS. 275 and 276, the display area and the effective pixel area may be the same or different. In these examples, the ratio of the size of the recognition region to the display area is used; however, when the display area and the effective pixel area differ, the ratio of the size of the recognition region to the effective pixel area may be used instead of the ratio to the display area.
FIG. 277 is a diagram illustrating another example in which the receiver 200 in Modification 2 of Embodiment 23 displays an AR image.
In the example shown in FIG. 277, as in the example shown in FIG. 273, the image sensor of the receiver 200 has an effective pixel area wider than the display area.
For example, the receiver 200 captures an image of the subject at time t1. As a result, of the entire captured image obtained from the effective pixel area of the image sensor, the receiver 200 displays on the display 201 only the image obtained from the display area, which is narrower than the effective pixel area, as the captured display image. At this time, as described above, the receiver 200 recognizes the region of the entire captured image that corresponds to the recognition information acquired based on the light ID as the target region on which the AR image is to be superimposed. The receiver 200 then superimposes the AR image on the target region of the captured display image and displays the captured display image with the AR image superimposed on the display 201.
Here, when the user changes the orientation of the receiver 200 (specifically, the image sensor), the recognition region on the image sensor moves, for example, toward the upper left in FIG. 277 and, at time t2, protrudes beyond the display area. That is, the image of the target region (for example, the image of a poster) extends beyond the captured display image shown on the display 201. However, the recognition region on the image sensor does not protrude beyond the effective pixel area. In other words, even at time t2 the receiver 200 still obtains an entire captured image that contains the target region. As a result, the receiver 200 can recognize the target region from the entire captured image, and it superimposes, only on the portion of the target region that lies within the captured display image, the corresponding portion of the AR image, and displays it on the display 201. Furthermore, the receiver 200 changes the size and position of the displayed portion of the AR image in accordance with the movement of the recognition region on the image sensor, that is, the movement of the target region in the entire captured image.
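One way to picture the partial superimposition at time t2 is to clip the AR image to the part of the target region that still falls inside the display area. The sketch below assumes that regions are axis-aligned rectangles given as (x, y, w, h) in full-captured-image coordinates; the function and parameter names are illustrative and not taken from the specification.

```python
def intersect(a, b):
    """Intersection of two rectangles (x, y, w, h); returns None if they do not overlap."""
    x = max(a[0], b[0])
    y = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    if x2 <= x or y2 <= y:
        return None
    return (x, y, x2 - x, y2 - y)

def visible_ar_part(target_region, display_region, ar_image_size):
    """Return which part of the AR image to draw, and where, when the target
    region only partially overlaps the display region."""
    vis = intersect(target_region, display_region)
    if vis is None:
        return None
    tx, ty, tw, th = target_region
    # Scale factors from target-region coordinates to AR-image pixels.
    sx = ar_image_size[0] / tw
    sy = ar_image_size[1] / th
    crop = (int((vis[0] - tx) * sx), int((vis[1] - ty) * sy),
            int(vis[2] * sx), int(vis[3] * sy))
    # Draw the `crop` portion of the AR image at position `vis` (full-image coordinates).
    return crop, vis
```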
When the recognition region protrudes beyond the display area as described above, the receiver 200 compares the number of pixels corresponding to the distance between the edge of the effective pixel area and the edge of the recognition region (hereinafter referred to as the inter-area distance) with a threshold.
For example, let dh be the number of pixels corresponding to the shorter of the distance between the upper side of the effective pixel area and the upper side of the recognition region and the distance between the lower side of the effective pixel area and the lower side of the recognition region (hereinafter referred to as the first distance). Likewise, let dw be the number of pixels corresponding to the shorter of the distance between the left side of the effective pixel area and the left side of the recognition region and the distance between the right side of the effective pixel area and the right side of the recognition region (hereinafter referred to as the second distance). The inter-area distance mentioned above is the shorter of the first and second distances.
That is, the receiver 200 compares the smaller of the pixel counts dw and dh with a threshold N. Then, for example, when the smaller pixel count becomes less than or equal to the threshold N at time t2, the receiver 200 stops changing the size and position of the portion of the AR image in accordance with the position of the recognition region on the image sensor and fixes them. In other words, the receiver 200 switches the screen display of the AR image. For example, the receiver 200 fixes the size and position of the displayed portion of the AR image to the size and position of the portion of the AR image that was displayed on the display 201 when the smaller pixel count reached the threshold N.
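A minimal sketch of this inter-area distance test, assuming both regions are axis-aligned rectangles (x, y, w, h) in sensor coordinates and that the threshold N is supplied by the caller; the names are illustrative only.

```python
def inter_area_distances(effective, recognition):
    """dw, dh: smallest horizontal / vertical pixel gaps between the edges of
    the recognition region and the edges of the effective pixel area."""
    ex, ey, ew, eh = effective
    rx, ry, rw, rh = recognition
    dw = min(rx - ex, (ex + ew) - (rx + rw))   # left gap, right gap
    dh = min(ry - ey, (ey + eh) - (ry + rh))   # top gap, bottom gap
    return dw, dh

def should_freeze_ar(effective, recognition, n_threshold):
    """Freeze the displayed AR portion once the recognition region gets within
    n_threshold pixels of the edge of the effective pixel area."""
    dw, dh = inter_area_distances(effective, recognition)
    return min(dw, dh) <= n_threshold
```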
Therefore, even if the recognition region moves further and protrudes beyond the effective pixel area at time t3, the receiver 200 continues to display the portion of the AR image in the same way as at time t2. That is, as long as the smaller of the pixel counts dw and dh is less than or equal to the threshold N, the receiver 200 continues to display the portion of the AR image, with its size and position fixed, superimposed on the captured display image, just as at time t2.
In the example shown in FIG. 277, the receiver 200 changes the size and position of the displayed portion of the AR image in accordance with the movement of the recognition region on the image sensor, but it may instead change the display magnification and position of the entire AR image.
FIG. 278 is a diagram illustrating another example in which the receiver 200 in Modification 2 of Embodiment 23 displays an AR image. Specifically, FIG. 278 shows an example in which the display magnification of the AR image is changed.
For example, as in the example shown in FIG. 277, when the user changes the orientation of the receiver 200 (specifically, the image sensor) from the state at time t1, the recognition region on the image sensor moves, for example, toward the upper left in FIG. 278 and, at time t2, protrudes beyond the display area. That is, the image of the target region (for example, the image of a poster) extends beyond the captured display image shown on the display 201. However, the recognition region on the image sensor does not protrude beyond the effective pixel area. In other words, even at time t2 the receiver 200 still obtains an entire captured image that contains the target region. As a result, the receiver 200 can recognize the target region from the entire captured image.
Therefore, in the example shown in FIG. 278, the receiver 200 changes the display magnification of the AR image so that the size of the entire AR image matches the size of the portion of the target region that lies within the captured display image. That is, the receiver 200 reduces the AR image. The receiver 200 then superimposes the AR image whose display magnification has been changed (that is, reduced) on that portion and displays it on the display 201. Furthermore, the receiver 200 changes the display magnification and position of the displayed AR image in accordance with the movement of the recognition region on the image sensor, that is, the movement of the target region in the entire captured image.
When the recognition region protrudes beyond the display area as described above, the receiver 200 compares the smaller of the pixel counts dw and dh with the threshold N. Then, for example, when the smaller pixel count becomes less than or equal to the threshold N at time t2, the receiver 200 stops changing the display magnification and position of the AR image in accordance with the position of the recognition region on the image sensor and fixes them. In other words, the receiver 200 switches the screen display of the AR image. For example, the receiver 200 fixes the display magnification and position of the displayed AR image to the display magnification and position of the AR image that was displayed on the display 201 when the smaller pixel count reached the threshold N.
Therefore, even if the recognition region moves further and protrudes beyond the effective pixel area at time t3, the receiver 200 continues to display the AR image in the same way as at time t2. That is, as long as the smaller of the pixel counts dw and dh is less than or equal to the threshold N, the receiver 200 continues to display the AR image, with its display magnification and position fixed, superimposed on the captured display image, just as at time t2.
In the above examples, the smaller of the pixel counts dw and dh is compared with a threshold, but the ratio corresponding to the smaller pixel count may instead be compared with a threshold. The ratio for the pixel count dw is, for example, the ratio of dw to the number of pixels w0 in the horizontal direction of the effective pixel area (dw/w0). Similarly, the ratio for the pixel count dh is, for example, the ratio of dh to the number of pixels h0 in the vertical direction of the effective pixel area (dh/h0). Alternatively, the number of pixels in the horizontal or vertical direction of the display area may be used instead of that of the effective pixel area to express the ratios for dw and dh. The threshold compared with the ratios for dw and dh is, for example, 0.05.
Alternatively, the angle of view corresponding to the smaller of the pixel counts dw and dh may be compared with a threshold. When the number of pixels along the diagonal of the effective pixel area is m and the angle of view corresponding to that diagonal is θ (for example, 55°), the angle of view corresponding to the pixel count dw is θ × dw / m, and the angle of view corresponding to the pixel count dh is θ × dh / m.
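The same decision can be expressed with the ratio or angle-of-view variants just described. The sketch below reuses the dw and dh values from the previous sketch; the 0.05 ratio threshold and the 55° diagonal angle of view are the example values given in the text, while the angle threshold is an assumed example value.

```python
import math

def freeze_by_ratio(dw, dh, w0, h0, ratio_threshold=0.05):
    """Variant using ratios relative to the effective pixel area size."""
    return min(dw / w0, dh / h0) <= ratio_threshold

def freeze_by_angle(dw, dh, w0, h0, diag_view_deg=55.0, angle_threshold_deg=3.0):
    """Variant using the angle of view corresponding to the smaller gap.
    angle_threshold_deg is an assumed example value, not taken from the text."""
    m = math.hypot(w0, h0)                     # pixels along the sensor diagonal
    angle = diag_view_deg * min(dw, dh) / m    # theta * d / m
    return angle <= angle_threshold_deg
```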
In the examples shown in FIGS. 277 and 278, the receiver 200 switches the screen display of the AR image based on the inter-area distance between the effective pixel area and the recognition region, but it may instead switch the screen display of the AR image based on the relationship between the display area and the recognition region.
FIG. 279 is a diagram illustrating another example in which the receiver 200 in Modification 2 of Embodiment 23 displays an AR image. Specifically, FIG. 279 shows an example in which the screen display of the AR image is switched based on the relationship between the display area and the recognition region. In the example shown in FIG. 279, as in the example shown in FIG. 273, the image sensor of the receiver 200 has an effective pixel area wider than the display area.
For example, the receiver 200 captures an image of the subject at time t1. As a result, of the entire captured image obtained from the effective pixel area of the image sensor, the receiver 200 displays on the display 201 only the image obtained from the display area, which is narrower than the effective pixel area, as the captured display image. At this time, as described above, the receiver 200 recognizes the region of the entire captured image that corresponds to the recognition information acquired based on the light ID as the target region on which the AR image is to be superimposed. The receiver 200 then superimposes the AR image on the target region of the captured display image and displays the captured display image with the AR image superimposed on the display 201.
Here, when the user changes the orientation of the receiver 200, the receiver 200 changes the position of the displayed AR image in accordance with the movement of the recognition region on the image sensor. Then, for example, the recognition region on the image sensor moves toward the upper left in FIG. 279, and at time t2 part of the edge of the recognition region coincides with part of the edge of the display area. That is, the image of the target region (for example, an image such as a poster) is positioned in a corner of the captured display image shown on the display 201. As a result, the receiver 200 superimposes the AR image on the target region in the corner of the captured display image and displays it on the display 201.
Then, when the recognition region moves further and protrudes beyond the display area, the receiver 200 fixes the AR image that was being displayed at time t2 without changing its size and position. In other words, the receiver 200 switches the screen display of the AR image.
Therefore, even if the recognition region moves further and protrudes beyond the effective pixel area at time t3, the receiver 200 continues to display the AR image in the same way as at time t2. That is, as long as the recognition region protrudes beyond the display area, the receiver 200 continues to display an AR image of the same size as at time t2, superimposed at the same position in the captured display image as at time t2.
Thus, in the example shown in FIG. 279, the receiver 200 switches the screen display of the AR image depending on whether the recognition region protrudes beyond the display area. The receiver 200 may also use, instead of the display area, a judgment area that contains the display area and is larger than the display area but smaller than the effective pixel area. In this case, the receiver 200 switches the screen display of the AR image depending on whether the recognition region protrudes beyond the judgment area.
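A short sketch of the protrusion test of FIG. 279, optionally using a judgment area obtained by growing the display area by a margin; the margin value is an assumed example and the rectangle format follows the same (x, y, w, h) convention as above.

```python
def contains(outer, inner):
    """True if rectangle `inner` (x, y, w, h) lies entirely inside `outer`."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return (ix >= ox and iy >= oy and
            ix + iw <= ox + ow and iy + ih <= oy + oh)

def expand(rect, margin):
    """Judgment area: the display area grown by `margin` pixels on every side."""
    x, y, w, h = rect
    return (x - margin, y - margin, w + 2 * margin, h + 2 * margin)

def should_fix_ar(display_area, recognition, margin=0):
    """Fix (stop updating) the AR image once the recognition region protrudes
    from the display area (margin = 0) or from the judgment area (margin > 0)."""
    return not contains(expand(display_area, margin), recognition)
```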
The screen display of the AR image has been described above with reference to FIGS. 273 to 279. In addition, when the receiver 200 can no longer recognize the target region from the entire captured image, it may display the AR image, at the size of the target region that was recognized immediately beforehand, superimposed on the captured display image.
FIG. 280 is a diagram illustrating another example in which the receiver 200 in Modification 2 of Embodiment 23 displays an AR image.
In the example shown in FIG. 243, the receiver 200 captures an image of the guide board 107 illuminated by the transmitter 100 and thereby acquires the captured display image Pe and the decoding image, as described above. The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the guide board 107. However, if the entire surface of the guide board 107 is a color that absorbs light (for example, a dark color), the surface remains dark even when illuminated by the transmitter 100, so the receiver 200 may be unable to receive the light ID correctly. Alternatively, even if the entire surface of the guide board 107 has a striped pattern like a decoding image (that is, a bright-line image), the receiver 200 may be unable to receive the light ID correctly.
Therefore, as shown in FIG. 280, a reflector 109 may be placed near the guide board 107. This allows the receiver 200 to receive the light from the transmitter 100 reflected by the reflector 109, that is, the visible light (specifically, the light ID) transmitted from the transmitter 100. As a result, the receiver 200 can properly receive the light ID and display the AR image P5.
[Summary of Modifications 1 and 2 of Embodiment 23]
FIG. 281A is a flowchart illustrating a display method according to one aspect of the present invention.
The display method according to one aspect of the present invention includes steps S41 to S43.
In step S41, a captured image is acquired by using an imaging sensor to capture, as a subject, an object that is lit up by a transmitter that transmits a signal by changing the luminance of light. In step S42, the signal is decoded from the captured image. In step S43, a moving image corresponding to the decoded signal is read from memory, and the moving image is superimposed on the target region corresponding to the subject in the captured image and displayed on a display. Here, in step S43, the moving image is displayed starting from one of the following images: the image, among the plurality of images included in the moving image, that contains the object, or one of a predetermined number of images that precede or follow, in display time, the image containing the object. For example, the predetermined number is 10 frames. Alternatively, the object is a still image, and in step S43 the moving image is displayed starting from the image identical to the still image. Note that the image from which display of the moving image starts is not limited to the image identical to the still image; it may be an image that is a predetermined number of frames before or after, in display order, the image identical to the still image, that is, the image containing the object. Furthermore, the object is not limited to a still image and may be, for example, a doll.
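As an illustration of the start-frame selection in step S43, the sketch below picks the video frame that best matches the captured object and allows playback to start from it or from a frame within the predetermined offset; frame_similarity is a hypothetical matcher and is not part of the specification.

```python
def start_frame_candidates(video_frames, object_image, frame_similarity, max_offset=10):
    """Frames from which playback may start: the frame most similar to the
    captured object, plus frames up to max_offset before or after it."""
    best = max(range(len(video_frames)),
               key=lambda i: frame_similarity(video_frames[i], object_image))
    lo = max(0, best - max_offset)
    hi = min(len(video_frames) - 1, best + max_offset)
    return best, range(lo, hi + 1)
```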
Note that the imaging sensor and the captured image are, for example, the image sensor and the entire captured image in Embodiment 23. The still image that is lit up may be a still image displayed on the display panel of an image display device, or may be a poster, guide board, signboard, or the like illuminated by light from the transmitter.
Such a display method may further include a transmission step of transmitting the signal to a server and a reception step of receiving the moving image corresponding to the signal from the server.
With this, as shown in FIG. 265 for example, the moving image can be displayed in a virtual-reality-like manner so that the still image appears to start moving, and an image useful to the user can be displayed.
The still image may have an outer frame of a predetermined color, and the display method according to one aspect of the present invention may further include a recognition step of recognizing the target region from the captured image based on that predetermined color. In this case, in step S43, the moving image may be resized to match the size of the recognized target region, and the resized moving image may be superimposed on the target region in the captured image and displayed on the display. For example, the outer frame of the predetermined color is a white or black rectangular frame surrounding the still image and is indicated by the recognition information in Embodiment 23. The AR image in Embodiment 23 is then resized and superimposed as the moving image.
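A minimal OpenCV-style sketch of recognizing the target region from a frame of the predetermined color and resizing the moving-image frame to fit it. The color range and the hard paste (no blending) are simplifying assumptions, not the recognition procedure the receiver actually uses.

```python
import cv2
import numpy as np

def find_target_region(captured_bgr, lower_bgr, upper_bgr):
    """Bounding box (x, y, w, h) of the largest area whose color lies in the
    given range, used here as a stand-in for the predetermined frame color."""
    mask = cv2.inRange(captured_bgr, np.array(lower_bgr), np.array(upper_bgr))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

def overlay_resized(captured_bgr, video_frame_bgr, region):
    """Resize the video frame to the recognized region and paste it in place."""
    x, y, w, h = region
    resized = cv2.resize(video_frame_bgr, (w, h))
    out = captured_bgr.copy()
    out[y:y + h, x:x + w] = resized
    return out
```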
This makes it possible to display the moving image more realistically, as if the moving image actually existed as the subject.
Of the imaging area of the imaging sensor, only the image projected onto the display region, which is an area smaller than the imaging area, is displayed on the display. In this case, in step S43, when the projection region onto which the subject is projected in the imaging area is larger than the display region, the image obtained from the portion of the projection region that extends beyond the display region need not be displayed on the display. Here, as shown in FIG. 273 for example, the imaging area and the projection region correspond to the effective pixel area and the recognition region of the image sensor.
With this, as shown in FIG. 273 for example, when the imaging sensor approaches the still image that is the subject, the entire still image may still be projected onto the imaging area even though part of the image obtained from the projection region (the recognition region in FIG. 273) is not displayed on the display. In this case, therefore, the still image that is the subject can be properly recognized, and the moving image can be properly superimposed on the target region corresponding to the subject in the captured image.
Furthermore, for example, the horizontal and vertical widths of the display region are w1 and h1, and the horizontal and vertical widths of the projection region are w2 and h2. In this case, in step S43, when the larger of h2/h1 and w2/w1 is greater than or equal to a predetermined value, the moving image may be displayed on the entire screen of the display, and when the larger of h2/h1 and w2/w1 is smaller than the predetermined value, the moving image may be superimposed on the target region in the captured image and displayed on the display.
With this, as shown in FIG. 275 for example, when the imaging sensor approaches the still image that is the subject, the moving image is displayed on the entire screen, so the user does not need to bring the imaging sensor even closer to the still image to display the moving image at a larger size. This prevents the situation in which the imaging sensor is brought so close to the still image that the projection region (the recognition region in FIG. 275) protrudes beyond the imaging area (the effective pixel area), making it impossible to decode the signal.
The display method according to one aspect of the present invention may further include a control step of turning off the operation of the imaging sensor when the moving image is displayed on the entire screen of the display.
With this, as shown in step S314 of FIG. 276 for example, the power consumption of the imaging sensor can be reduced by turning off its operation.
In step S43, when the target region can no longer be recognized from the captured image because the imaging sensor has moved, the moving image may be displayed at the same size as the target region that was recognized immediately before recognition became impossible. Note that the target region being unrecognizable from the captured image refers, for example, to a situation in which at least part of the target region corresponding to the still image that is the subject is not included in the captured image. In this way, when the target region cannot be recognized, a moving image of the same size as the target region recognized immediately beforehand is displayed, as at time t3 in FIG. 279, for example. This prevents at least part of the moving image from no longer being displayed because the imaging sensor has been moved.
Also, in step S43, when only part of the target region is included in the portion of the captured image shown on the display because the imaging sensor has moved, the part of the spatial region of the moving image corresponding to that part of the target region may be superimposed on that part of the target region and displayed on the display. Note that a part of the spatial region of the moving image is a part of each picture constituting the moving image.
With this, only part of the spatial region of the moving image (the AR image in FIG. 277) is displayed on the display, as at time t2 in FIG. 277, for example. As a result, the user can be informed that the imaging sensor is not properly directed at the still image that is the subject.
Also, in step S43, when the target region can no longer be recognized from the captured image because the imaging sensor has moved, the part of the spatial region of the moving image corresponding to the part of the target region that was displayed immediately before recognition became impossible may continue to be displayed.
With this, even when the user points the imaging sensor in a direction different from the still image that is the subject, as at time t3 in FIG. 277, for example, part of the spatial region of the moving image (the AR image in FIG. 277) continues to be displayed. As a result, it becomes easier for the user to grasp how to orient the imaging sensor so that the entire moving image is displayed.
Also, in step S43, when the horizontal and vertical widths of the imaging area of the imaging sensor are w0 and h0, and the horizontal and vertical distances between the projection region onto which the subject is projected in the imaging area and the imaging area are dw and dh, respectively, it may be determined that the target region cannot be recognized when the smaller of dw/w0 and dh/h0 is less than or equal to a predetermined value. Note that the projection region is, for example, the recognition region shown in FIG. 277. Alternatively, in step S43, it may be determined that the target region cannot be recognized when the angle of view corresponding to the shorter of the horizontal and vertical distances between the projection region onto which the subject is projected in the imaging area of the imaging sensor and the imaging area is less than or equal to a predetermined value.
This makes it possible to appropriately determine whether the target region can be recognized.
FIG. 281B is a block diagram illustrating the configuration of a display device according to one aspect of the present invention.
A display device A10 according to one aspect of the present invention includes an imaging sensor A11, a decoding unit A12, and a display control unit A13.
The imaging sensor A11 acquires a captured image by capturing, as a subject, a still image that is lit up by a transmitter that transmits a signal by changing the luminance of light.
The decoding unit A12 decodes the signal from the captured image.
The display control unit A13 reads the moving image corresponding to the decoded signal from memory, superimposes the moving image on the target region corresponding to the subject in the captured image, and displays it on a display. Here, the display control unit A13 displays the plurality of images included in the moving image in order, starting from the first image, which is the image identical to the still image.
This provides the same effects as the display method described above.
The imaging sensor A11 may include a plurality of micromirrors and a photosensor, and the display device A10 may further include an imaging control unit that controls the imaging sensor. In this case, the imaging control unit identifies the region of the captured image that contains the signal as a signal region, and controls the angles of the micromirrors, among the plurality of micromirrors, that correspond to the identified signal region. The imaging control unit then causes the photosensor to receive only the light reflected by the micromirrors whose angles have been controlled.
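As a rough sketch of the micromirror control described above: only the mirrors whose grid positions fall inside the identified signal region are tilted toward the photosensor. The mirror-grid representation, the dmd object, and the ±12° angles are illustrative assumptions.

```python
def select_mirrors(mirror_grid_shape, signal_region):
    """Return the set of (row, col) micromirrors lying inside the signal region,
    given as a rectangle (x, y, w, h) in mirror-grid coordinates."""
    rows, cols = mirror_grid_shape
    x, y, w, h = signal_region
    return {(r, c) for r in range(rows) for c in range(cols)
            if x <= c < x + w and y <= r < y + h}

def configure_mirrors(dmd, selected, on_angle=+12, off_angle=-12):
    """Tilt selected mirrors toward the photosensor and the rest away from it.
    The dmd object and the +/-12 degree angles are assumptions for illustration."""
    for r in range(dmd.rows):
        for c in range(dmd.cols):
            dmd.set_angle(r, c, on_angle if (r, c) in selected else off_angle)
```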
With this, as shown in FIG. 232A for example, even if the visible light signal, which is a signal represented by changes in the luminance of light, contains a high-frequency component, that high-frequency component can be decoded correctly.
In each of the above embodiments and modifications, each component may be configured with dedicated hardware or realized by executing a software program suitable for that component. Each component may be realized by a program execution unit such as a CPU or processor reading and executing a software program recorded on a recording medium such as a hard disk or semiconductor memory. For example, the program causes a computer to execute the display methods shown by the flowcharts of FIGS. 271, 274, 276, and 281A.
The display method according to one or more aspects has been described above based on the embodiments and modifications, but the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceived by those skilled in the art to the present embodiment, and forms constructed by combining components of different embodiments and modifications, may also be included within the scope of the present invention, as long as they do not depart from the gist of the present invention.
[Modification 3 of Embodiment 23]
Modification 3 of Embodiment 23, that is, Modification 3 of the display method for realizing AR using a light ID, is described below.
FIG. 282 is a diagram illustrating an example of enlarging and moving an AR image.
As shown in (a) of FIG. 282, the receiver 200 superimposes an AR image P21 on the target region of the captured display image Ppre, as in Embodiment 23 or Modification 1 or 2 thereof. The receiver 200 then displays the captured display image Ppre with the AR image P21 superimposed on the display 201. For example, the AR image P21 is a moving image.
Here, as shown in (b) of FIG. 282, when the receiver 200 accepts a resize instruction, it changes the size of the AR image P21 in accordance with that instruction. For example, when the receiver 200 accepts an enlargement instruction, it enlarges the AR image P21 in accordance with that instruction. The resize instruction is given by the user through, for example, a pinch gesture, a double tap, or a long press on the AR image P21. Specifically, when the receiver 200 accepts an enlargement instruction given by a pinch-out, it enlarges the AR image P21 in accordance with that instruction. Conversely, when the receiver 200 accepts a reduction instruction given by a pinch-in, it reduces the AR image P21 in accordance with that instruction.
Also, as shown in (c) of FIG. 282, when the receiver 200 accepts a position-change instruction, it changes the position of the AR image P21 in accordance with that instruction. The position-change instruction is given by the user through, for example, a swipe on the AR image. Specifically, when the receiver 200 accepts a position-change instruction given by a swipe, it changes the position of the AR image P21 in accordance with that instruction. That is, the AR image P21 moves.
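A simple sketch of mapping these gestures onto the overlay's transform; the gesture fields (pinch ratio, swipe deltas) and the scale limits are assumptions for illustration.

```python
class OverlayTransform:
    """Scale factor and screen offset applied to the superimposed AR image."""
    def __init__(self):
        self.scale = 1.0
        self.dx = 0.0
        self.dy = 0.0

    def on_pinch(self, pinch_ratio, min_scale=0.5, max_scale=8.0):
        # pinch_ratio > 1 for pinch-out (enlarge), < 1 for pinch-in (reduce)
        self.scale = min(max_scale, max(min_scale, self.scale * pinch_ratio))

    def on_swipe(self, delta_x, delta_y):
        # A swipe moves the AR image by the drag distance in screen pixels.
        self.dx += delta_x
        self.dy += delta_y
```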
In this way, enlarging the AR image, which is a moving image, makes the AR image easier to see, and reducing or moving the AR image allows the region of the captured display image Ppre hidden behind the AR image to be shown to the user.
FIG. 283 is a diagram illustrating an example of enlarging an AR image.
As shown in (a) of FIG. 283, the receiver 200 superimposes an AR image P22 on the target region of the captured display image Ppre, as in Embodiment 23 or Modification 1 or 2 thereof. The receiver 200 then displays the captured display image Ppre with the AR image P22 superimposed on the display 201. For example, the AR image P22 is a still image containing a character string.
Here, as shown in (b) of FIG. 283, when the receiver 200 accepts a resize instruction, it changes the size of the AR image P22 in accordance with that instruction. For example, when the receiver 200 accepts an enlargement instruction, it enlarges the AR image P22 in accordance with that instruction. As described above, the resize instruction is given by the user through, for example, a pinch gesture, a double tap, or a long press on the AR image P22. Specifically, when the receiver 200 accepts an enlargement instruction given by a pinch-out, it enlarges the AR image P22 in accordance with that instruction. Enlarging the AR image P22 makes the character string in the AR image P22 easier for the user to read.
Also, as shown in (c) of FIG. 283, when the receiver 200 accepts a further resize instruction, it changes the size of the AR image P22 in accordance with that instruction. For example, when the receiver 200 accepts a further enlargement instruction, it further enlarges the AR image P22 in accordance with that instruction. This further enlargement of the AR image P22 makes the character string in the AR image P22 even easier for the user to read.
Note that when the receiver 200 accepts an enlargement instruction and the enlargement factor of the AR image corresponding to that instruction is greater than or equal to a threshold, the receiver 200 may acquire a high-resolution AR image. In this case, instead of the original AR image that is already displayed, the receiver 200 may enlarge and display the high-resolution AR image up to that enlargement factor. For example, the receiver 200 displays an AR image of 1920 × 1080 pixels instead of an AR image of 640 × 480 pixels. This makes it possible to enlarge the AR image as if it were actually being captured as a subject, and to display a high-resolution image that could not be obtained with optical zoom.
FIG. 284 is a flowchart illustrating an example of the processing operation of the receiver 200 for enlarging and moving an AR image.
First, as in step S101 shown in the flowchart of FIG. 239, the receiver 200 starts imaging with the normal exposure time and the communication exposure time (step S401). Once this imaging starts, a captured display image Ppre obtained with the normal exposure time and a decoding image (that is, a bright-line image) Pdec obtained with the communication exposure time are each obtained periodically. The receiver 200 then acquires the light ID by decoding the decoding image Pdec.
Next, the receiver 200 performs AR image superimposition processing, which includes the processing of steps S102 to S106 shown in the flowchart of FIG. 239 (step S402). When this AR image superimposition processing is performed, the AR image is superimposed on the captured display image Ppre and displayed. At this time, the receiver 200 lowers the light ID acquisition rate (step S403). The light ID acquisition rate is the proportion of decoding images (that is, bright-line images) Pdec among the captured images obtained per unit time by the imaging started in step S401. For example, when the light ID acquisition rate is lowered, the number of decoding images Pdec obtained per unit time becomes smaller than the number of captured display images Ppre obtained per unit time.
Next, the receiver 200 determines whether a resize instruction has been accepted (step S404). If it determines that a resize instruction has been accepted (Yes in step S404), the receiver 200 further determines whether the resize instruction is an enlargement instruction (step S405). If it determines that the resize instruction is an enlargement instruction (Yes in step S405), the receiver 200 further determines whether the AR image needs to be reacquired (step S406). For example, when the receiver 200 judges that the enlargement factor of the AR image corresponding to the enlargement instruction will be greater than or equal to a threshold, it determines that the AR image needs to be reacquired. If the receiver 200 determines that reacquisition is necessary (Yes in step S406), it acquires a high-resolution AR image from, for example, a server and replaces the superimposed AR image with that high-resolution AR image (step S407).
The receiver 200 then changes the size of the AR image in accordance with the accepted resize instruction (step S408). That is, when a high-resolution AR image was acquired in step S407, the receiver 200 enlarges that high-resolution AR image. When it was determined in step S406 that reacquisition of the AR image is unnecessary (No in step S406), the receiver 200 enlarges the AR image that is already superimposed. Also, if it determines in step S405 that the resize instruction is a reduction instruction (No in step S405), the receiver 200 reduces the superimposed AR image in accordance with the accepted resize instruction, that is, the reduction instruction.
On the other hand, if the receiver 200 determines in step S404 that no resize instruction has been accepted (No in step S404), it determines whether a position-change instruction has been accepted (step S409). If it determines that a position-change instruction has been accepted (Yes in step S409), the receiver 200 changes the position of the superimposed AR image in accordance with that position-change instruction (step S410). That is, the receiver 200 moves the AR image. If it determines that no position-change instruction has been accepted (No in step S409), the receiver 200 repeats the processing from step S404.
When the size of the AR image has been changed in step S408, or the position of the AR image has been changed in step S410, the receiver 200 determines whether the light ID, which has been acquired periodically since step S401, is no longer being acquired (step S411). If it determines that the light ID is no longer being acquired (Yes in step S411), the receiver 200 ends the processing operation for enlarging and moving the AR image. On the other hand, if it determines that the light ID is still being acquired (No in step S411), the receiver 200 repeats the processing from step S404.
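Putting the flowchart of FIG. 284 together, the sketch below shows the overall loop, including the high-resolution reacquisition of step S407. It reuses the OverlayTransform sketch shown earlier; the receiver and server interfaces and the reacquisition threshold value are hypothetical placeholders.

```python
REACQUIRE_SCALE = 2.0   # assumed example threshold for fetching a high-resolution AR image

def ar_edit_loop(receiver, server, transform):
    receiver.start_dual_exposure_capture()        # step S401
    receiver.superimpose_ar()                     # step S402
    receiver.lower_light_id_rate()                # step S403
    high_res_loaded = False
    while receiver.light_id_available():          # step S411
        event = receiver.poll_gesture()
        if event is None:
            continue
        if event.kind == "pinch":                 # steps S404-S408
            new_scale = transform.scale * event.ratio
            if (event.ratio > 1.0 and not high_res_loaded
                    and new_scale >= REACQUIRE_SCALE):
                receiver.replace_ar(server.fetch_high_res_ar())   # step S407
                high_res_loaded = True
            transform.on_pinch(event.ratio)
        elif event.kind == "swipe":               # steps S409-S410
            transform.on_swipe(event.dx, event.dy)
```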
FIG. 285 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
As described above, the receiver 200 superimposes an AR image P23 on the target region in the captured display image Ppre. Here, as shown in FIG. 285, the AR image P23 is configured so that the closer a portion of the AR image P23 is to the edge of the AR image P23, the higher the transmittance at that portion. The transmittance is the degree to which the superimposed image is displayed as transparent. For example, a transmittance of 100% over the entire AR image means that even if the AR image is superimposed on the target region of the captured display image, the AR image is not shown on the display 201 and only the target region is shown. Conversely, a transmittance of 0% over the entire AR image means that the target region of the captured display image is not shown on the display 201 and only the AR image superimposed on that target region is shown.
For example, when the AR image P23 is rectangular, the transmittance of each portion of the AR image P23 is higher the closer that portion is to the upper, lower, left, or right edge of the rectangle. More specifically, the transmittance at those edges is 100%. In the central part of the AR image P23 there is a rectangular region with 0% transmittance that is smaller than the AR image P23, and in that rectangular region, for example, "Kyoto Station" is written in English. That is, in the peripheral part of the AR image P23, the transmittance changes gradually from 0% to 100%, like a gradation.
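One way to realize this gradation is an alpha mask whose value falls from 1 (0% transmittance) in the central rectangle to 0 (100% transmittance) at the outer edge. The following NumPy sketch, including the border width, is illustrative; it assumes the AR image has already been resized to the same shape as the target region.

```python
import numpy as np

def edge_faded_alpha(h, w, border):
    """Alpha mask: 1.0 inside the central rectangle, falling linearly to 0.0
    at the outer edge over `border` pixels (0% -> 100% transmittance)."""
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    dist_y = np.minimum(ys, h - 1 - ys)          # distance to top/bottom edge
    dist_x = np.minimum(xs, w - 1 - xs)          # distance to left/right edge
    dist = np.minimum(dist_y, dist_x)            # (h, w) by broadcasting
    return np.clip(dist / float(border), 0.0, 1.0)

def composite(target_region_bgr, ar_bgr, border=40):
    """Blend the AR image over the target region with edge-faded transparency.
    Both images must have the same height and width; border must be > 0."""
    h, w = ar_bgr.shape[:2]
    alpha = edge_faded_alpha(h, w, border)[:, :, None]
    return (alpha * ar_bgr + (1.0 - alpha) * target_region_bgr).astype(np.uint8)
```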
As shown in FIG. 285, the receiver 200 superimposes such an AR image P23 on the target region in the captured display image Ppre. At this time, the receiver 200 matches the size of the AR image P23 to the size of the target region and superimposes the resized AR image P23 on the target region. For example, in the target region there appears the image of a station name sign with the same background color as the rectangular region in the center of the AR image P23. The station name sign reads "京都" (Kyoto) in Japanese.
Here, as described above, the transmittance of each portion of the AR image P23 is higher the closer that portion is to the edge of the AR image P23. Therefore, when the AR image P23 is superimposed on the target region, the rectangular region in the center of the AR image P23 is displayed, but the edges of the AR image P23 are not; instead, the edges of the target region, that is, the edges of the station name sign image, are displayed.
This makes any misalignment between the AR image P23 and the target region less noticeable. That is, even when the AR image P23 is superimposed on the target region, a shift may occur between the AR image P23 and the target region due to movement of the receiver 200 or the like. In this case, if the transmittance of the entire AR image P23 were 0%, the edges of the AR image P23 and the edges of the target region would both be displayed and the shift would stand out. In the AR image P23 of this modification, however, the closer a portion is to the edge, the higher its transmittance, so the edges of the AR image P23 are less likely to be displayed, and as a result the shift between the AR image P23 and the target region can be made less noticeable. Furthermore, since the transmittance changes like a gradation in the peripheral part of the AR image P23, the fact that the AR image P23 is superimposed on the target region can also be made less noticeable.
FIG. 286 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
As described above, the receiver 200 superimposes an AR image P24 on the target region in the captured display image Ppre. Here, as shown in FIG. 286, the subject being imaged is, for example, a restaurant menu. This menu is surrounded by a white frame, and the white frame is in turn surrounded by a black frame. That is, the subject includes the menu, a white frame surrounding the menu, and a black frame surrounding the white frame.
The receiver 200 recognizes, as the target region, an area of the captured display image Ppre that is larger than the image of the white frame and smaller than the image of the black frame. The receiver 200 then matches the size of the AR image P24 to the size of that target region and superimposes the resized AR image P24 on the target region.
With this, even if the superimposed AR image P24 is shifted from the target region due to movement of the receiver 200 or the like, the AR image P24 can continue to be displayed surrounded by the black frame. Therefore, the shift between the AR image P24 and the target region can be made less noticeable.
In the example shown in FIG. 286, the frame colors are black and white, but the colors are not limited to these and may be any colors.
FIG. 287 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
For example, the receiver 200 images, as a subject, a poster depicting a castle lit up against the night sky. The poster is illuminated by the above-described transmitter 100 configured as a backlight, and a visible light signal (that is, a light ID) is transmitted by that backlight. By this imaging, the receiver 200 acquires a captured display image Ppre including the image of the subject, namely the poster, and an AR image P25 corresponding to the light ID. Here, AR image P25 has the same shape as the image of the poster with the region depicting the castle cut out. That is, the region of AR image P25 corresponding to the castle in the poster image is masked. Furthermore, like AR image P23 described above, AR image P25 is configured so that the transmittance of each part increases as the part approaches the edge of AR image P25. In the central portion of AR image P25, where the transmittance is 0%, fireworks launched into the night sky are displayed as a moving image.
The receiver 200 resizes AR image P25 to match the size of the target region, which is the image of the subject, and superimposes the resized AR image P25 on the target region. As a result, the castle depicted on the poster is displayed as the image of the subject rather than as an AR image, while the moving image of fireworks is displayed as an AR image.
This allows the captured display image Ppre to be displayed as if fireworks were actually being launched within the poster. In addition, the transmittance of each part of AR image P25 increases as the part approaches the edge of AR image P25. Therefore, when AR image P25 is superimposed on the target region, the central portion of AR image P25 is displayed, but its edges are not; instead, the edges of the target region are displayed. As a result, the shift between AR image P25 and the target region can be made less noticeable. Furthermore, since the transmittance of the peripheral portion of AR image P25 changes gradually, like a gradation, the fact that AR image P25 is superimposed on the target region is also less noticeable.
FIG. 288 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
For example, the receiver 200 images, as a subject, the transmitter 100 configured as a television. Specifically, the transmitter 100 displays a castle lit up against the night sky on its display while transmitting a visible light signal (that is, a light ID). By this imaging, the receiver 200 acquires a captured display image Ppre showing the transmitter 100 and an AR image P26 corresponding to the light ID. Here, the receiver 200 first displays the captured display image Ppre on the display 201. At this time, the receiver 200 also displays, on the display 201, a message m prompting the user to turn off the lighting. Specifically, the message m is, for example, "Please turn off the room lighting and darken the room."
When, in response to the display of message m, the user turns off the lighting and the room in which the transmitter 100 is installed becomes dark, the receiver 200 displays AR image P26 superimposed on the captured display image Ppre. Here, AR image P26 has the same size as the captured display image Ppre, and the region of AR image P26 corresponding to the castle in the captured display image Ppre is cut out. That is, the region of AR image P26 corresponding to the castle in the captured display image Ppre is masked. The castle in the captured display image Ppre can therefore be shown to the user through that region. In the peripheral portion of that region of AR image P26, the transmittance may change stepwise from 0% to 100% like a gradation, as described above. In this case, the shift between the captured display image Ppre and AR image P26 can be made less noticeable.
In the examples described above, an AR image whose peripheral portion has high transmittance is superimposed on the target region of the captured display image Ppre, which makes the shift between the AR image and the target region less noticeable. Instead of such an AR image, however, an AR image that is the same size as the captured display image Ppre and is semi-transparent throughout (that is, has a transmittance of 50%) may be superimposed on the captured display image Ppre. Even in this case, the shift between the AR image and the target region can be made less noticeable. Furthermore, when the captured display image Ppre is bright overall, an AR image with uniformly low transparency may be superimposed on the captured display image Ppre, and conversely, when the captured display image Ppre is dark overall, an AR image with uniformly high transparency may be superimposed on it.
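As a rough illustration of this brightness-adaptive uniform transparency, the following Python sketch picks a single blending weight for the whole frame from the mean luminance of the captured display image. The threshold of 128 and the two alpha values are assumptions; the text only says that a brighter Ppre gets a less transparent AR image and a darker Ppre a more transparent one.

```python
import numpy as np

def uniform_alpha_for_scene(ppre, alpha_bright=0.7, alpha_dark=0.3):
    """Choose one blending weight (opacity) for the whole AR image from scene brightness."""
    mean_luma = float(np.mean(ppre))          # 0..255 for an 8-bit image
    return alpha_bright if mean_luma > 128 else alpha_dark

def blend_uniform(ppre, ar_image, alpha):
    """Semi-transparent overlay of the full-frame AR image onto Ppre."""
    return (alpha * ar_image.astype(np.float32)
            + (1.0 - alpha) * ppre.astype(np.float32)).astype(ppre.dtype)
```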
Note that objects such as the fireworks in AR image P25 and AR image P26 may be rendered by CG (computer graphics). In that case, masking becomes unnecessary. In the example shown in FIG. 288, the receiver 200 displays the message m prompting the user to turn off the lighting, but the lighting may be turned off automatically without such a display. For example, the receiver 200 outputs a turn-off signal, via Bluetooth (registered trademark), ZigBee, a specified low-power radio station, or the like, to the lighting device associated with the transmitter 100, which is a television. The lighting device is thereby turned off automatically.
FIG. 289A is a diagram illustrating an example of a captured display image Ppre obtained by imaging by the receiver 200.
For example, the transmitter 100 is configured as a large display installed in a stadium. The transmitter 100 displays a message indicating that, for example, fast food and drinks can be ordered using the light ID, and transmits a visible light signal (that is, the light ID). When such a message is displayed, the user points the receiver 200 at the transmitter 100 and performs imaging. That is, the receiver 200 images, as a subject, the transmitter 100 configured as the large display installed in the stadium.
By this imaging, the receiver 200 acquires a captured display image Ppre and a decoding image Pdec. The receiver 200 then acquires the light ID by decoding the decoding image Pdec, and transmits the light ID and the captured display image Ppre to the server.
From the installation information associated with each light ID, the server identifies the installation information of the imaged large display, that is, the installation information associated with the light ID transmitted from the receiver 200. The installation information indicates, for example, the position and orientation at which the large display is installed, the size of the large display, and so on. Furthermore, based on the size and orientation of the large display shown in the captured display image Ppre and on the installation information, the server identifies the number of the seat in the stadium from which the captured display image Ppre was captured. The server then causes the receiver 200 to display a menu screen including that seat number.
FIG. 289B is a diagram illustrating an example of a menu screen displayed on the display 201 of the receiver 200.
The menu screen m1 includes, for example, for each product, an input field ma1 into which the number of units ordered is entered, a seat field mb1 showing the stadium seat number identified by the server, and an order button mc1. By operating the receiver 200, the user enters the order quantity for a desired product into the corresponding input field ma1 and selects the order button mc1. The order is thereby confirmed, and the receiver 200 transmits the order details corresponding to the input to the server.
Upon receiving the order details, the server instructs the stadium staff to deliver the ordered quantity of products to the seat with the number identified as described above.
FIG. 290 is a flowchart illustrating an example of the processing operations of the receiver 200 and the server.
The receiver 200 first images the transmitter 100 configured as the large stadium display (step S421). The receiver 200 acquires the light ID transmitted from the transmitter 100 by decoding the decoding image Pdec obtained by the imaging (step S422). The receiver 200 transmits the light ID acquired in step S422 and the captured display image Ppre obtained by the imaging in step S421 to the server (step S423).
Upon receiving the light ID and the captured display image Ppre (step S424), the server identifies, based on the light ID, the installation information of the large display installed in the stadium (step S425). For example, the server holds a table that indicates, for each light ID, the installation information of the large display associated with that light ID, and identifies the installation information by searching the table for the entry associated with the light ID transmitted from the receiver 200.
Next, based on the identified installation information and on the size and orientation of the large display shown in the captured display image Ppre, the server identifies the number of the seat in the stadium from which the captured display image Ppre was acquired (that is, captured) (step S426). The server then transmits, to the receiver 200, the URL (Uniform Resource Locator) of the menu screen m1 including the identified seat number (step S427).
Upon receiving the URL of the menu screen m1 transmitted from the server (step S428), the receiver 200 accesses the URL and displays the menu screen m1 (step S429). Here, the user operates the receiver 200 to enter the order details into the menu screen m1 and selects the order button mc1 to confirm the order. The receiver 200 thereby transmits the order details to the server (step S430).
Upon receiving the order details transmitted from the receiver 200, the server performs order processing according to those details (step S431). At this time, the server, for example, instructs the stadium staff to deliver the ordered quantity of products to the seat with the number identified in step S426.
In this way, since the seat number is identified based on the captured display image Ppre obtained by imaging by the receiver 200, the user of the receiver 200 does not need to enter the seat number when ordering a product. The user can therefore place an order easily without entering the seat number.
Note that, although the server identifies the seat number in the example described above, the receiver 200 may identify the seat number instead. In that case, the receiver 200 acquires the installation information from the server and identifies the seat number based on that installation information and on the size and orientation of the large display shown in the captured display image Ppre.
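The disclosure does not give the geometry used to go from the display's apparent size and orientation to a seat number, so the following Python sketch is only an illustrative assumption: a pinhole-camera estimate of distance and bearing to the big display, followed by a nearest-neighbor lookup in a hypothetical table derived from the stadium layout and the installation information. focal_px, seat_table, and all parameter names are hypothetical.

```python
import math

def estimate_viewpoint(display_px_width, display_center_x, image_width,
                       real_width_m, focal_px):
    """Rough pinhole-camera estimate of distance and bearing to the large display."""
    distance_m = focal_px * real_width_m / display_px_width
    bearing_rad = math.atan2(display_center_x - image_width / 2.0, focal_px)
    return distance_m, bearing_rad

def nearest_seat(distance_m, bearing_rad, seat_table):
    """seat_table: {seat_number: (distance_m, bearing_rad)} precomputed from the
    stadium layout and the display's installation information (assumed data)."""
    return min(seat_table,
               key=lambda s: (seat_table[s][0] - distance_m) ** 2
                             + (seat_table[s][1] - bearing_rad) ** 2)
```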
FIG. 291 is a diagram for describing the volume of the sound reproduced by the receiver 1800a.
As in the example shown in FIG. 123, the receiver 1800a receives the light ID (visible light signal) transmitted from the transmitter 1800b configured, for example, as street digital signage. The receiver 1800a then reproduces sound at the same timing as the image reproduction by the transmitter 1800b. That is, the receiver 1800a reproduces the sound in synchronization with the image reproduced by the transmitter 1800b. Note that the receiver 1800a may reproduce, together with the sound, the same image as the image reproduced by the transmitter 1800b (the reproduced image), or an AR image (an AR moving image) related to that reproduced image.
Here, when reproducing the sound as described above, the receiver 1800a adjusts the volume of the sound according to the distance to the transmitter 1800b. Specifically, the receiver 1800a lowers the volume as the distance to the transmitter 1800b increases and, conversely, raises the volume as the distance to the transmitter 1800b decreases.
The receiver 1800a may determine the distance to the transmitter 1800b using GPS (Global Positioning System) or the like. Specifically, the receiver 1800a acquires the position information of the transmitter 1800b associated with the light ID from a server or the like, and further determines the position of the receiver 1800a by GPS. The receiver 1800a then determines the distance between the position of the transmitter 1800b indicated by the position information acquired from the server and the determined position of the receiver 1800a as the distance to the transmitter 1800b. Note that the receiver 1800a may determine the distance to the transmitter 1800b using Bluetooth (registered trademark) or the like instead of GPS.
The receiver 1800a may also determine the distance to the transmitter 1800b based on the size of the bright line pattern region of the above-described decoding image Pdec obtained by imaging. As in the examples shown in FIGS. 245 and 246, the bright line pattern region is a region consisting of a pattern of bright lines that appear when the exposure lines of the image sensor of the receiver 1800a are exposed with the communication exposure time. This bright line pattern region corresponds to the display area of the transmitter 1800b shown in the captured display image Ppre. Specifically, the larger the bright line pattern region, the shorter the distance the receiver 1800a determines as the distance to the transmitter 1800b; conversely, the smaller the bright line pattern region, the longer the distance it determines. The receiver 1800a may also use distance data indicating the relationship between the size of the bright line pattern region and the distance, and determine, as the distance to the transmitter 1800b, the distance associated in that distance data with the size of the bright line pattern region in the captured display image Ppre. Note that the receiver 1800a may transmit the received light ID to the server as described above and acquire the distance data associated with that light ID from the server.
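A minimal sketch of such a lookup is shown below, assuming the distance data takes the form of (pattern area in pixels, distance in meters) pairs; the table values and the nearest-entry rule are assumptions for illustration only.

```python
def distance_from_pattern_size(pattern_area_px, distance_data):
    """Look up the transmitter distance from the bright line pattern region size.

    'distance_data' is assumed to be a list of (area_in_pixels, distance_m) pairs,
    such as the distance data the receiver may fetch from the server for the
    received light ID; the lookup picks the entry whose area is closest."""
    return min(distance_data, key=lambda e: abs(e[0] - pattern_area_px))[1]

# Hypothetical table: a larger pattern region maps to a shorter distance.
table = [(500, 10.0), (2000, 5.0), (8000, 2.5), (32000, 1.2)]
print(distance_from_pattern_size(6000, table))  # -> 2.5
```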
Since the volume is adjusted according to the distance to the transmitter 1800b in this way, the user of the receiver 1800a can hear the sound reproduced by the receiver 1800a as if it were actually being reproduced by the transmitter 1800b.
FIG. 292 is a diagram showing the relationship between the distance from the receiver 1800a to the transmitter 1800b and the volume.
For example, while the distance to the transmitter 1800b is between L1 and L2 [m], the volume increases or decreases in proportion to the distance within the range from Vmin to Vmax [dB]. Specifically, as the distance to the transmitter 1800b increases from L1 [m] to L2 [m], the receiver 1800a decreases the volume linearly from Vmax [dB] to Vmin [dB]. When the distance to the transmitter 1800b is shorter than L1 [m], the receiver 1800a keeps the volume at Vmax [dB], and when the distance is longer than L2 [m], it keeps the volume at Vmin [dB].
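This clamped linear mapping can be written directly as a small Python function; the numeric values in the usage line are hypothetical and are not taken from the disclosure.

```python
def volume_for_distance(d, l1, l2, v_max, v_min):
    """Volume in dB as a function of distance d [m] to the transmitter.

    Vmax is held below L1, Vmin beyond L2, and the volume falls linearly from
    Vmax to Vmin between L1 and L2, as in FIG. 292."""
    if d <= l1:
        return v_max
    if d >= l2:
        return v_min
    return v_max - (v_max - v_min) * (d - l1) / (l2 - l1)

# Hypothetical values: Vmax = 80 dB up to 2 m, Vmin = 40 dB from 10 m onward.
print(volume_for_distance(6.0, 2.0, 10.0, 80.0, 40.0))  # -> 60.0
```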
In this way, the receiver 1800a stores the maximum volume Vmax, the longest distance L1 at which sound at the maximum volume Vmax is output, the minimum volume Vmin, and the shortest distance L2 at which sound at the minimum volume Vmin is output. The receiver 1800a may also change the maximum volume Vmax, the minimum volume Vmin, the longest distance L1, and the shortest distance L2 according to an attribute set in the receiver 1800a. For example, when the attribute is the age of the user and indicates an advanced age, the receiver 1800a may set the maximum volume Vmax higher than a reference maximum volume and the minimum volume Vmin higher than a reference minimum volume. The attribute may also be information indicating whether the sound is output from a speaker or from earphones.
Since the minimum volume Vmin is set in the receiver 1800a in this way, it is possible to prevent the sound from being inaudible because the receiver 1800a is too far from the transmitter 1800b. Furthermore, since the maximum volume Vmax is set in the receiver 1800a, it is possible to prevent sound from being output at an unnecessarily high volume because the receiver 1800a is too close to the transmitter 1800b.
FIG. 293 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
The receiver 200 images an illuminated signboard. Here, the signboard is lit up by a lighting device, which is the above-described transmitter 100 that transmits a light ID. By this imaging, the receiver 200 therefore acquires a captured display image Ppre and a decoding image Pdec. The receiver 200 then acquires the light ID by decoding the decoding image Pdec, and acquires, from the server, a plurality of AR images P27a to P27c and recognition information associated with that light ID. Based on the recognition information, the receiver 200 recognizes, as a target region, the area around the region m2 of the captured display image Ppre in which the signboard is shown.
Specifically, as shown in (a) of FIG. 293, the receiver 200 recognizes the region adjoining the left side of the region m2 as a first target region and superimposes AR image P27a on that first target region.
Next, as shown in (b) of FIG. 293, the receiver 200 recognizes a region including the lower side of the region m2 as a second target region and superimposes AR image P27b on that second target region.
Next, as shown in (c) of FIG. 293, the receiver 200 recognizes the region adjoining the upper side of the region m2 as a third target region and superimposes AR image P27c on that third target region.
Here, each of the AR images P27a to P27c is, for example, an image of a yeti character and may be a moving image.
While the receiver 200 continues to acquire the light ID repeatedly, it may switch the recognized target region among the first to third target regions in a predetermined order and at predetermined timings. That is, the receiver 200 may switch the recognized target region in the order of the first target region, the second target region, and the third target region. Alternatively, the receiver 200 may switch the recognized target region to one of the first to third target regions in a predetermined order each time it acquires the light ID. That is, when the receiver 200 first acquires the light ID, it recognizes the first target region and superimposes AR image P27a on the first target region, as shown in (a) of FIG. 293, for as long as it continues to acquire that light ID repeatedly. When the receiver 200 can no longer acquire the light ID, it hides AR image P27a. Next, when the receiver 200 acquires the light ID again, it recognizes the second target region and superimposes AR image P27b on the second target region, as shown in (b) of FIG. 293, while it continues to acquire the light ID repeatedly. When the receiver 200 again can no longer acquire the light ID, it hides AR image P27b. Next, when the receiver 200 acquires the light ID once more, it recognizes the third target region and superimposes AR image P27c on the third target region, as shown in (c) of FIG. 293, while it continues to acquire the light ID repeatedly.
When the recognized target region is switched each time the light ID is acquired in this way, the receiver 200 may change the color of the displayed AR image once every N times (where N is an integer of 2 or more). N is the number of times the AR image is displayed and may be, for example, 200. That is, the AR images P27a to P27c are all images of the same white character, but once every 200 times, an AR image of, for example, a pink character is displayed. While the AR image of the pink character is displayed, the receiver 200 may award points to the user upon accepting an operation on that AR image by the user.
By switching the target region on which the AR image is superimposed or changing the color of the AR image at a predetermined frequency in this way, the user's interest can be directed toward imaging the signboard lit up by the transmitter 100, and the user can be prompted to acquire the light ID repeatedly.
FIG. 294 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
The receiver 200 functions as a so-called way finder that presents the route the user should take, for example, by imaging a mark M4 drawn on the floor at a position in a building where a plurality of passages intersect. The building is, for example, a hotel, and the presented route is the route by which a user who has checked in heads to his or her room.
The mark M4 is lit up by a lighting device, which is the above-described transmitter 100 that transmits a light ID by changing its luminance. By imaging the mark M4, the receiver 200 therefore acquires a captured display image Ppre and a decoding image Pdec. The receiver 200 then acquires the light ID by decoding the decoding image Pdec, and transmits the light ID and the terminal information of the receiver 200 to the server. The receiver 200 acquires, from the server, a plurality of AR images P28 and recognition information associated with that light ID and terminal information. Note that the light ID and the terminal information are stored in the server in association with the plurality of AR images P28 and the recognition information when the user checks in.
Based on the recognition information, the receiver 200 recognizes a plurality of target regions around the region m4 of the captured display image Ppre in which the mark M4 is shown. Then, as shown in FIG. 294, the receiver 200 superimposes an AR image P28, for example resembling an animal footprint, on each of the plurality of target regions.
Specifically, the recognition information indicates a route that turns right at the position of the mark M4. Based on such recognition information, the receiver 200 determines the route in the captured display image Ppre and recognizes a plurality of target regions arranged along that route. This route runs from the lower side of the display 201 toward the region m4 and turns right at the region m4. The receiver 200 places an AR image P28 in each of the recognized target regions as if an animal had walked along the route.
Here, when determining the route in the captured display image Ppre, the receiver 200 may use the geomagnetism detected by its built-in 9-axis sensor. In this case, the recognition information indicates the direction to proceed at the position of the mark M4 with reference to the direction of the geomagnetism. For example, the recognition information indicates west as the direction to proceed at the position of the mark M4. Based on such recognition information, the receiver 200 determines, in the captured display image Ppre, a route that runs from the lower side of the display 201 toward the region m4 and then heads west at the region m4. The receiver 200 then recognizes a plurality of target regions arranged along that route. Note that the receiver 200 identifies the lower side of the display 201 by detecting gravitational acceleration with the 9-axis sensor.
Since the receiver 200 presents the user's route in this way, the user can easily reach the destination by following the route. Moreover, since the route is displayed as an AR image in the captured display image Ppre, it can be presented to the user in an easily understandable manner.
Note that the lighting device, which is the transmitter 100, can transmit the light ID appropriately while keeping the brightness low by illuminating the mark M4 with short pulses of light. In addition, although the receiver 200 images the mark M4 here, it may instead image the lighting device using the camera arranged on the display 201 side (a so-called selfie camera). The receiver 200 may also image both the mark M4 and the lighting device.
FIG. 295 is a diagram for describing an example of how the receiver 200 determines the line scan time.
When decoding the decoding image Pdec, the receiver 200 performs the decoding using the line scan time. The line scan time is the time from the start of exposure of one exposure line included in the image sensor to the start of exposure of the next exposure line. If the line scan time is known, the receiver 200 decodes the decoding image Pdec using the known line scan time. If the line scan time is not known, however, the receiver 200 determines the line scan time from the decoding image Pdec.
For example, as shown in FIG. 295, the receiver 200 finds the line of minimum width among the bright lines and dark lines forming the bright line pattern in the decoding image Pdec. A bright line is a line on the decoding image Pdec produced when one or more consecutive exposure lines are exposed while the luminance of the transmitter 100 is high. A dark line is a line on the decoding image Pdec produced when one or more consecutive exposure lines are exposed while the luminance of the transmitter 100 is low.
When the receiver 200 finds the line of minimum width, it determines the number of exposure lines, that is, the number of pixels, corresponding to that line. When the carrier frequency of the luminance change by which the transmitter 100 transmits the light ID is 9.6 kHz, the shortest time during which the luminance of the transmitter 100 is high or low is 104 μs. The receiver 200 therefore calculates the line scan time by dividing 104 μs by the number of pixels of the identified minimum width.
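This division can be stated in a few lines of Python; the pixel count in the usage line is hypothetical.

```python
def line_scan_time_us(min_line_width_px, carrier_hz=9600.0):
    """Line scan time estimated from the narrowest bright/dark line.

    At a 9.6 kHz carrier the shortest high or low luminance interval is
    1 / 9600 s, about 104 us; dividing it by the pixel width of the narrowest
    line gives the time per exposure line."""
    shortest_interval_us = 1e6 / carrier_hz          # ~104 us
    return shortest_interval_us / min_line_width_px

print(line_scan_time_us(10))  # a 10-pixel-wide narrowest line -> ~10.4 us per line
```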
FIG. 296 is a diagram for describing another example of how the receiver 200 determines the line scan time.
The receiver 200 may perform a Fourier transform on the bright line pattern of the decoding image Pdec and determine the line scan time based on the spatial frequencies obtained by the Fourier transform.
For example, as shown in FIG. 296, the receiver 200 derives, by the above-described Fourier transform, a spectrum indicating the relationship between spatial frequency and the intensity of that spatial frequency component in the decoding image Pdec. Next, the receiver 200 selects, in turn, each of the peaks shown in the spectrum. Each time it selects a peak, the receiver 200 calculates, as a line scan time candidate, the line scan time at which the spatial frequency of the selected peak (for example, the spatial frequency f2 in FIG. 296) would result from a temporal frequency of 9.6 kHz. As described above, 9.6 kHz is the carrier frequency of the luminance change of the transmitter 100. A plurality of line scan time candidates are thereby calculated. The receiver 200 selects the most likely of these line scan time candidates as the line scan time.
To select the most likely candidate, the receiver 200 calculates an allowable range for the line scan time based on the frame rate of the imaging and the number of exposure lines included in the image sensor. That is, the receiver 200 calculates the maximum value of the line scan time as 1 × 10^6 [μs] / {(frame rate) × (number of exposure lines)}. The receiver 200 then determines the range from that maximum value × a constant K (K < 1) up to the maximum value as the allowable range of the line scan time. The constant K is, for example, 0.9 or 0.8.
The receiver 200 selects, from among the plurality of line scan time candidates, the candidate within this allowable range as the most likely candidate, that is, as the line scan time.
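The following Python sketch combines the two steps just described. It assumes the peak spatial frequencies are expressed in cycles per exposure line, so that a candidate line scan time t satisfies carrier_hz × t = f; that unit convention and the conversion are interpretive assumptions, while the allowable range formula follows the text.

```python
def select_line_scan_time(peak_spatial_freqs, frame_rate, num_exposure_lines,
                          carrier_hz=9600.0, k=0.8):
    """Pick the most likely line scan time from Fourier peak spatial frequencies.

    Allowable range: [K * t_max, t_max], where
    t_max = 1e6 / (frame_rate * num_exposure_lines) in microseconds."""
    t_max_us = 1e6 / (frame_rate * num_exposure_lines)
    lo, hi = k * t_max_us, t_max_us
    for f in peak_spatial_freqs:              # f assumed in cycles per exposure line
        candidate_us = f * 1e6 / carrier_hz   # t = f / carrier, converted to us
        if lo <= candidate_us <= hi:
            return candidate_us
    return None  # no candidate falls in the allowable range
```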
Note that the receiver 200 may evaluate the reliability of the line scan time calculated in the example shown in FIG. 295 by checking whether that calculated line scan time falls within the above-described allowable range.
FIG. 297 is a flowchart showing an example of how the receiver 200 determines the line scan time.
The receiver 200 may determine the line scan time by attempting to decode the decoding image Pdec. Specifically, the receiver 200 first starts imaging (step S441). Next, the receiver 200 determines whether the line scan time is known (step S442). For example, the receiver 200 may notify the server of its own type and model and inquire about the line scan time corresponding to that type and model, thereby determining whether the line scan time is known. If it determines that the line scan time is known (Yes in step S442), the receiver 200 sets the reference acquisition count of the light ID to n (where n is an integer of 2 or more, for example 4) (step S443). Next, the receiver 200 acquires the light ID by decoding the decoding image Pdec using the known line scan time (step S444). At this time, the receiver 200 acquires a plurality of light IDs by decoding each of the plurality of decoding images Pdec obtained sequentially by the imaging started in step S441. Here, the receiver 200 determines whether it has acquired the same light ID the reference acquisition count, that is, n times (step S445). If it determines that it has acquired the light ID n times (Yes in step S445), the receiver 200 trusts the light ID and starts processing using the light ID (for example, superimposition of an AR image) (step S446). If it determines that it has not acquired the light ID n times (No in step S445), the receiver 200 does not trust the light ID and ends the processing.
If it determines in step S442 that the line scan time is not known (No in step S442), the receiver 200 sets the reference acquisition count of the light ID to n + k (where k is an integer of 1 or more) (step S447). That is, when the line scan time is not known, the receiver 200 sets a larger reference acquisition count than when the line scan time is known. Next, the receiver 200 determines a provisional line scan time (step S448). The receiver 200 then acquires the light ID by decoding the decoding image Pdec using the provisional line scan time (step S449). At this time, as described above, the receiver 200 acquires a plurality of light IDs by decoding each of the plurality of decoding images Pdec obtained sequentially by the imaging started in step S441. Here, the receiver 200 determines whether it has acquired the same light ID the reference acquisition count, that is, (n + k) times (step S450).
If it determines that it has acquired the light ID (n + k) times (Yes in step S450), the receiver 200 judges that the provisional line scan time is the correct line scan time. The receiver 200 then notifies the server of the type and model of the receiver 200 and of that line scan time (step S451). The server thereby stores the type and model of the receiver in association with the line scan time suited to that receiver. Therefore, when another receiver of the same type and model starts imaging, that receiver can determine its own line scan time by inquiring of the server. That is, the other receiver can determine in step S442 that the line scan time is known.
The receiver 200 then trusts the light ID acquired (n + k) times and starts processing using that light ID (for example, superimposition of an AR image) (step S446).
If it determines in step S450 that it has not acquired the same light ID (n + k) times (No in step S450), the receiver 200 further determines whether a termination condition is satisfied (step S452). The termination condition is, for example, that a predetermined time has elapsed since the start of imaging, or that the light ID acquisition has been attempted at least a maximum number of times. If it determines that such a termination condition is satisfied (Yes in step S452), the receiver 200 ends the processing. If it determines that the termination condition is not satisfied (No in step S452), the receiver 200 changes the provisional line scan time (step S453). The receiver 200 then repeats the processing from step S449 using the changed provisional line scan time.
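A compact sketch of the trial-decoding loop of FIG. 297 is shown below, assuming a decode(frame, t) callable that returns a light ID or None and a list of provisional line scan times to try; both are assumptions of this illustration rather than elements of the disclosure.

```python
def decode_with_unknown_line_scan_time(frames, candidates, decode, n, k):
    """Trial decoding when the line scan time is not known (cf. FIG. 297).

    A provisional time is accepted only when the same light ID is obtained
    n + k times, a stricter threshold than the n times required when the
    line scan time is already known."""
    required = n + k
    for t in candidates:                       # steps S448 / S453: pick, then change
        counts = {}
        for frame in frames:                   # step S449: decode successive Pdec
            light_id = decode(frame, t)
            if light_id is None:
                continue
            counts[light_id] = counts.get(light_id, 0) + 1
            if counts[light_id] >= required:   # step S450: (n + k) identical IDs
                return light_id, t             # trust the ID, remember the time
    return None, None                          # termination condition reached
```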
In this way, even if the line scan time is not known, the receiver 200 can determine it as in the examples shown in FIGS. 295 to 297. As a result, whatever the type and model of the receiver 200, the receiver 200 can appropriately decode the decoding image Pdec and acquire the light ID.
FIG. 298 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
The receiver 200 images the transmitter 100 configured as a television. The transmitter 100 periodically transmits a light ID and a time code by changing its luminance while displaying, for example, a television program. The time code is information that indicates, each time it is transmitted, the time of that transmission, and may be, for example, the time packet shown in FIG. 126.
By the above-described imaging, the receiver 200 periodically acquires a captured display image Ppre and a decoding image Pdec. While displaying the periodically acquired captured display images Ppre on the display 201, the receiver 200 acquires the above-described light ID and time code by decoding the decoding image Pdec. Next, the receiver 200 transmits the light ID to the server 300. Upon receiving the light ID, the server 300 transmits, to the receiver 200, the audio data associated with the light ID, AR start time information, an AR image P29, and recognition information.
When the receiver 200 acquires the audio data, it reproduces the audio data in synchronization with the video of the television program displayed on the transmitter 100. That is, the audio data consists of a plurality of audio unit data, each of which includes a time code. The receiver 200 starts reproducing the plurality of audio unit data from the audio unit data that includes a time code indicating the same time as the time code acquired from the transmitter 100 together with the light ID. The reproduction of the audio data is thereby synchronized with the video of the television program. Note that such synchronization of audio and video may be performed by a method similar to the synchronized audio reproduction shown in FIG. 123 and the subsequent figures.
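The starting point for playback can be found with a simple search, as in the Python sketch below; the (time_code, samples) pair format and the example time code strings are assumptions used only for illustration.

```python
def start_index_for_sync(audio_units, received_time_code):
    """Find where to start playback so the audio lines up with the TV video.

    'audio_units' is assumed to be a list of (time_code, samples) pairs in
    playback order; playback starts from the first unit whose time code matches
    the time code received together with the light ID."""
    for i, (time_code, _samples) in enumerate(audio_units):
        if time_code == received_time_code:
            return i
    return None  # no matching unit; wait for the next received time code

# Hypothetical usage:
units = [("00:01:00.0", b"..."), ("00:01:00.5", b"..."), ("00:01:01.0", b"...")]
print(start_index_for_sync(units, "00:01:00.5"))  # -> 1
```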
When the receiver 200 acquires the AR image P29 and the recognition information, it recognizes, as a target region, the region of the captured display image Ppre corresponding to the recognition information, and superimposes AR image P29 on that target region. For example, AR image P29 is an image showing a crack in the display 201 of the receiver 200, and the target region is a region of the captured display image Ppre that crosses the image of the transmitter 100.
Here, the receiver 200 displays the captured display image Ppre on which AR image P29 is superimposed as described above at a timing corresponding to the AR start time information. The AR start time information is information indicating the time at which AR image P29 is to be displayed. That is, the receiver 200 displays the captured display image Ppre on which AR image P29 is superimposed at the timing when it receives, among the time codes transmitted from the transmitter 100 from moment to moment, the time code indicating the same time as the AR start time information. For example, the time indicated by the AR start time information is the time at which a scene in which a magician girl casts ice magic appears in the television program. At that time, the receiver 200 may also output, from its speaker, the cracking sound corresponding to AR image P29 by reproducing the audio data.
This allows the user to view the scene of the television program with a greater sense of realism.
The receiver 200 may also vibrate a vibrator provided in the receiver 200, cause a light source to emit light like a flash, or momentarily brighten or flash the display 201 at the time indicated by the AR start time information. Furthermore, AR image P29 may include not only an image showing a crack but also an image showing frozen condensation on the display 201.
FIG. 299 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
The receiver 200 images the transmitter 100 configured, for example, as a toy wand. The transmitter 100 includes a light source and transmits a light ID by changing the luminance of that light source.
By the above-described imaging, the receiver 200 periodically acquires a captured display image Ppre and a decoding image Pdec. While displaying the periodically acquired captured display images Ppre on the display 201, the receiver 200 acquires the above-described light ID by decoding the decoding image Pdec. Next, the receiver 200 transmits the light ID to the server 300. Upon receiving the light ID, the server 300 transmits the AR image P30 and the recognition information associated with that light ID to the receiver 200.
Here, the recognition information further includes gesture information indicating a gesture (that is, a motion) performed by the person holding the transmitter 100. The gesture information indicates, for example, a gesture in which the person moves the transmitter 100 from right to left. The receiver 200 compares the gesture by the person holding the transmitter 100 shown in the captured display images Ppre with the gesture indicated by the gesture information. When the gestures match, the receiver 200 superimposes, for example, many star-shaped AR images P30 on the captured display image Ppre so that they are arranged along the trajectory of the transmitter 100 moved by the gesture.
FIG. 300 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
As described above, the receiver 200 images the transmitter 100 configured, for example, as a toy wand.
By this imaging, the receiver 200 periodically acquires a captured display image Ppre and a decoding image Pdec. While displaying the periodically acquired captured display images Ppre on the display 201, the receiver 200 acquires the above-described light ID by decoding the decoding image Pdec. Next, the receiver 200 transmits the light ID to the server 300. Upon receiving the light ID, the server 300 transmits the AR image P31 and the recognition information associated with that light ID to the receiver 200.
Here, as described above, the recognition information includes gesture information indicating a gesture performed by the person holding the transmitter 100. The gesture information indicates, for example, a gesture in which the person moves the transmitter 100 from right to left. The receiver 200 compares the gesture by the person holding the transmitter 100 shown in the captured display images Ppre with the gesture indicated by the gesture information. When the gestures match, the receiver 200 superimposes AR image P31, which shows, for example, a dress costume, on the target region of the captured display image Ppre, that is, the region in which the person holding the transmitter 100 is shown.
 このように、本変形例における表示方法では、光IDに対応するジェスチャ情報をサーバから取得する。次に、周期的に取得される撮像表示画像によって示される被写体の動きが、サーバから取得されたジェスチャ情報によって示される動きと一致するか否かを判定する。そして、一致すると判定されたときに、AR画像が重畳された撮像表示画像Ppreを表示する。 As described above, in the display method in the present modification, gesture information corresponding to the light ID is acquired from the server. Next, it is determined whether or not the movement of the subject indicated by the periodically acquired captured display image matches the movement indicated by the gesture information acquired from the server. And when it determines with matching, the picked-up display image Ppre on which AR image was superimposed is displayed.
 これにより、例えば人物などの被写体の動きに応じてAR画像を表示することができる。つまり、適切なタイミングにAR画像を表示することができる。 Thereby, for example, an AR image can be displayed according to the movement of a subject such as a person. That is, the AR image can be displayed at an appropriate timing.
 図301は、受信機200の姿勢に応じて取得される復号用画像Pdecの一例を示す図である。 FIG. 301 is a diagram illustrating an example of the decoding image Pdec acquired according to the attitude of the receiver 200.
For example, as shown in (a) of FIG. 301, the receiver 200 in a landscape attitude captures an image of the transmitter 100, which transmits a light ID by changing in luminance. In the landscape attitude, the longitudinal direction of the display 201 of the receiver 200 lies along the horizontal direction. Each exposure line of the image sensor provided in the receiver 200 is orthogonal to the longitudinal direction of the display 201. With this kind of imaging, a decoding image Pdec containing a bright line pattern region X with only a small number of bright lines is acquired. Because the bright line pattern region X of this decoding image Pdec has few bright lines, there are few portions in which the luminance changes to High or Low, and the receiver 200 may therefore be unable to obtain the light ID properly by decoding this decoding image Pdec.
Therefore, as shown in (b) of FIG. 301, the user changes the attitude of the receiver 200 from landscape to portrait. In the portrait attitude, the longitudinal direction of the display 201 of the receiver 200 lies along the vertical direction. When the receiver 200 in this attitude captures an image of the transmitter 100 transmitting the light ID, it can acquire a decoding image Pdec containing a bright line pattern region Y with a large number of bright lines.
In this way, the light ID may not be obtainable depending on the attitude of the receiver 200. Accordingly, when having the receiver 200 acquire a light ID, the attitude of the receiver 200 performing the imaging should be changed as appropriate. While the attitude is being changed, the receiver 200 can properly acquire the light ID at the moment its attitude becomes one in which the light ID is easy to receive.
FIG. 302 is a diagram illustrating another example of the decoding image Pdec acquired depending on the attitude of the receiver 200.
For example, the transmitter 100 is configured as digital signage for a coffee shop. It displays video advertising the coffee shop during a video display period, and transmits a light ID by changing in luminance during a light ID transmission period. In other words, the transmitter 100 alternately and repeatedly performs video display in the video display period and light ID transmission in the light ID transmission period.
The receiver 200 periodically acquires the captured display image Ppre and the decoding image Pdec by imaging the transmitter 100. At this time, depending on how the repetition cycle of the transmitter 100's video display period and light ID transmission period aligns with the repetition cycle at which the receiver 200 acquires the captured display image Ppre and the decoding image Pdec, the receiver 200 may be unable to acquire a decoding image Pdec containing a bright line pattern region. Furthermore, depending on the attitude of the receiver 200, a decoding image Pdec containing a bright line pattern region may also be unobtainable.
For example, the receiver 200 images the transmitter 100 in the attitude shown in (a) of FIG. 302. That is, the receiver 200 is close to the transmitter 100 and images it such that the image of the transmitter 100 is projected onto the entire image sensor of the receiver 200.
Here, if the timing at which the receiver 200 acquires the captured display image Ppre falls within the video display period of the transmitter 100, the receiver 200 properly acquires a captured display image Ppre in which the transmitter 100 appears.
Moreover, even when the timing at which the receiver 200 acquires the decoding image Pdec straddles the video display period and the light ID transmission period of the transmitter 100, the receiver 200 can acquire a decoding image Pdec containing the bright line pattern region Z1.
That is, the exposure of the exposure lines included in the image sensor starts sequentially, from the exposure line at the top in the vertical direction downward. Therefore, even if the receiver 200 starts exposing the image sensor to acquire the decoding image Pdec during the video display period, no bright line pattern region is obtained for that portion. However, once the video display period switches to the light ID transmission period, a bright line pattern region is obtained for each exposure line exposed during that light ID transmission period.
Next, suppose the receiver 200 images the transmitter 100 in the attitude shown in (b) of FIG. 302. That is, the receiver 200 is far from the transmitter 100 and images it such that the image of the transmitter 100 is projected only onto the upper region of the image sensor of the receiver 200. In this case, as above, if the timing at which the receiver 200 acquires the captured display image Ppre falls within the video display period of the transmitter 100, the receiver 200 properly acquires a captured display image Ppre in which the transmitter 100 appears. However, when the timing at which the receiver 200 acquires the decoding image Pdec straddles the video display period and the light ID transmission period of the transmitter 100, the receiver 200 may be unable to acquire a decoding image Pdec containing a bright line pattern region. That is, even when the video display period of the transmitter 100 switches to the light ID transmission period, the image of the luminance-changing transmitter 100 may not be projected onto the exposure lines in the lower part of the image sensor, which are the ones exposed during that light ID transmission period. A decoding image Pdec having a bright line pattern region therefore cannot be acquired.
On the other hand, as shown in (c) of FIG. 302, the receiver 200, while still far from the transmitter 100, images the transmitter 100 such that its image is projected only onto the lower region of the image sensor of the receiver 200. In this case, as above, if the timing at which the receiver 200 acquires the captured display image Ppre falls within the video display period of the transmitter 100, the receiver 200 properly acquires a captured display image Ppre in which the transmitter 100 appears. Furthermore, even when the timing at which the receiver 200 acquires the decoding image Pdec straddles the video display period and the light ID transmission period of the transmitter 100, the receiver 200 may be able to acquire a decoding image Pdec containing a bright line pattern region. That is, when the video display period of the transmitter 100 switches to the light ID transmission period, the image of the luminance-changing transmitter 100 is projected onto the exposure lines in the lower part of the image sensor, which are exposed during that light ID transmission period. A decoding image Pdec having the bright line pattern region Z2 can therefore be acquired.
In this way, the light ID may not be obtainable depending on the attitude of the receiver 200, so when acquiring a light ID the receiver 200 may prompt the user to change the attitude of the receiver 200. That is, when imaging starts, the receiver 200 displays a message such as "Please move the device" or "Please shake the device", or outputs it as audio, so that the attitude of the receiver 200 changes. Because the receiver 200 then performs imaging while its attitude changes, it can acquire the light ID properly.
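The dependence on exposure-line position described above can be made concrete with a small timing calculation. The following Python sketch, under assumed parameter names (a frame start time, a per-line exposure offset, lines indexed from the top, and an ID transmission window given as a start and end time), computes which exposure lines are exposed during the light ID transmission period and can therefore contain bright lines; the numbers in the example are illustrative and not taken from this disclosure.

```python
def lines_with_bright_pattern(frame_start, line_interval, num_lines, id_start, id_end):
    """Return the indices of exposure lines whose exposure start falls inside
    the light ID transmission period [id_start, id_end).

    Exposure lines are exposed top to bottom; as a simplifying assumption,
    line i starts exposing at frame_start + i * line_interval."""
    lines = []
    for i in range(num_lines):
        t = frame_start + i * line_interval
        if id_start <= t < id_end:
            lines.append(i)
    return lines

# Example with illustrative numbers: a 1080-line sensor read out over 10 ms,
# where the ID transmission period covers the last 4 ms of the frame.
if __name__ == "__main__":
    visible = lines_with_bright_pattern(
        frame_start=0.0, line_interval=10e-3 / 1080, num_lines=1080,
        id_start=6e-3, id_end=10e-3)
    print(len(visible), "of 1080 lines can show bright lines,",
          "from line", visible[0], "to line", visible[-1])
```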
FIG. 303 is a flowchart illustrating an example of the processing operation of the receiver 200.
For example, while imaging, the receiver 200 determines whether the receiver 200 is being shaken (step S461). Specifically, the receiver 200 makes this determination based on the output of a 9-axis sensor provided in the receiver 200. If the receiver 200 determines that it is being shaken during imaging (Yes in step S461), it raises the above-described light ID acquisition rate (step S462). Specifically, the receiver 200 acquires all captured images obtained per unit time during imaging as decoding images (that is, bright line images) Pdec, and decodes each of the acquired decoding images. Alternatively, if all captured images are currently being acquired as captured display images Ppre, that is, if the acquisition and decoding of decoding images Pdec has been stopped, the receiver 200 starts that acquisition and decoding.
On the other hand, if the receiver 200 determines that it is not being shaken during imaging (No in step S461), it acquires the decoding image Pdec at a low light ID acquisition rate (step S463). Specifically, if the light ID acquisition rate was raised in step S462 and is still high, the receiver 200 lowers it. This reduces the frequency with which the receiver 200 performs decoding of decoding images Pdec, so power consumption can be suppressed.
The receiver 200 then determines whether an end condition for terminating this light ID acquisition rate adjustment has been satisfied (step S464). If it determines that the condition is not satisfied (No in step S464), it repeats the processing from step S461. If it determines that the end condition is satisfied (Yes in step S464), the receiver 200 ends the light ID acquisition rate adjustment.
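A minimal sketch of this rate-adjustment loop is shown below in Python. The shake test is reduced to a simple variance threshold on acceleration magnitudes, and the names and values (accel_samples, HIGH_RATE, LOW_RATE, and the shake threshold) are illustrative assumptions rather than values given in this disclosure.

```python
from statistics import pvariance

HIGH_RATE = 30   # decode every captured frame per second (illustrative)
LOW_RATE = 5     # decode only a few frames per second (illustrative)
SHAKE_VARIANCE_THRESHOLD = 0.5  # illustrative threshold on acceleration magnitude

def is_shaken(accel_samples):
    """Rudimentary stand-in for the 9-axis-sensor check: the device is
    considered shaken if the variance of recent acceleration magnitudes
    exceeds a threshold."""
    return len(accel_samples) > 1 and pvariance(accel_samples) > SHAKE_VARIANCE_THRESHOLD

def adjust_acquisition_rate(accel_samples):
    """One pass of steps S461-S463: raise the light ID acquisition rate
    while the receiver is being shaken, otherwise fall back to a low rate
    to save power."""
    return HIGH_RATE if is_shaken(accel_samples) else LOW_RATE

# Example: a burst of varying readings selects the high rate,
# while near-constant readings select the low rate.
print(adjust_acquisition_rate([9.8, 12.1, 7.4, 13.0]))   # -> 30
print(adjust_acquisition_rate([9.8, 9.81, 9.79, 9.8]))   # -> 5
```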
FIG. 304 is a diagram illustrating an example of camera lens switching processing by the receiver 200.
The receiver 200 may include both a wide-angle lens 211 and a telephoto lens 212 as camera lenses. A captured image obtained using the wide-angle lens 211 has a wide angle of view, and the subject appears small in it. Conversely, a captured image obtained using the telephoto lens 212 has a narrow angle of view, and the subject appears large in it.
When performing imaging, such a receiver 200 may switch the camera lens used for imaging by any one of methods A to E shown in FIG. 304.
In method A, the receiver 200 always uses the telephoto lens 212 when imaging, whether for normal imaging or for receiving a light ID. Here, normal imaging refers to the case where all captured images obtained by imaging are acquired as captured display images Ppre, and receiving a light ID refers to the case where the captured display image Ppre and the decoding image Pdec are acquired periodically by imaging.
In method B, the receiver 200 uses the wide-angle lens 211 for normal imaging. When receiving a light ID, the receiver 200 first uses the wide-angle lens 211, and if a decoding image Pdec acquired while using the wide-angle lens 211 contains a bright line pattern region, it switches the camera lens from the wide-angle lens 211 to the telephoto lens 212. After this switch, the receiver 200 can acquire a decoding image Pdec with a narrow angle of view, that is, one in which the bright line pattern region appears large.
In method C, the receiver 200 uses the wide-angle lens 211 for normal imaging. When receiving a light ID, the receiver 200 switches between the wide-angle lens 211 and the telephoto lens 212: it acquires the captured display image Ppre using the wide-angle lens 211 and acquires the decoding image Pdec using the telephoto lens 212.
In method D, the receiver 200 switches between the wide-angle lens 211 and the telephoto lens 212 according to a user operation, whether for normal imaging or for receiving a light ID.
In method E, when receiving a light ID, the receiver 200 decodes a decoding image Pdec acquired using the wide-angle lens 211, and if it cannot be decoded correctly, switches the camera lens from the wide-angle lens 211 to the telephoto lens 212. Alternatively, the receiver 200 decodes a decoding image Pdec acquired using the telephoto lens 212, and if it cannot be decoded correctly, switches the camera lens from the telephoto lens 212 to the wide-angle lens 211. When determining whether a decoding image Pdec has been decoded correctly, the receiver 200 first transmits the light ID obtained by decoding that decoding image Pdec to the server. If that light ID matches a light ID registered in the server, the server notifies the receiver 200 with match information indicating a match; otherwise, it notifies the receiver 200 with mismatch information indicating no match. The receiver 200 determines that the decoding image Pdec was decoded correctly if the information notified from the server is match information, and that it was not decoded correctly if the notified information is mismatch information. Alternatively, the receiver 200 determines that the decoding image Pdec was decoded correctly if the light ID obtained by decoding it satisfies a predetermined condition, and that it was not decoded correctly otherwise.
By switching the camera lens in this way, an appropriate decoding image Pdec can be acquired.
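As one way to picture method E, the following Python sketch alternates between the two lenses whenever decoding fails, with the validity check delegated to a caller-supplied function (for example, a server match/mismatch query or a local predetermined condition). The lens names, the decode_with and is_valid callbacks, and the retry limit are illustrative assumptions, not part of this disclosure.

```python
WIDE, TELE = "wide_angle_211", "telephoto_212"

def receive_light_id_method_e(decode_with, is_valid, start_lens=WIDE, max_attempts=6):
    """Try to decode a light ID, switching between the wide-angle and the
    telephoto lens each time decoding yields no valid ID (method E).

    decode_with(lens) returns a light ID or None; is_valid(light_id) returns
    a bool, standing in for the server's match/mismatch reply or a local
    predetermined condition."""
    lens = start_lens
    for _ in range(max_attempts):
        light_id = decode_with(lens)
        if light_id is not None and is_valid(light_id):
            return light_id, lens
        lens = TELE if lens == WIDE else WIDE   # switch lenses and retry
    return None, lens
```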
FIG. 305 is a diagram illustrating an example of camera switching processing by the receiver 200.
For example, the receiver 200 includes an in-camera 213 and an out-camera (not shown in FIG. 305). The in-camera 213, also called a face camera or selfie camera, is arranged on the same face of the receiver 200 as the display 201. The out-camera is arranged on the face of the receiver 200 opposite the display 201.
With the in-camera 213 facing upward, such a receiver 200 uses the in-camera 213 to image the transmitter 100, which is configured as a lighting device. Through this imaging, the receiver 200 acquires a decoding image Pdec and, by decoding it, obtains the light ID transmitted from the transmitter 100.
Next, the receiver 200 transmits the acquired light ID to the server and thereby acquires the AR image and recognition information associated with that light ID from the server. The receiver 200 then starts processing to recognize, in each captured display image Ppre obtained by the out-camera and the in-camera 213, a target region corresponding to the recognition information. If the receiver 200 cannot recognize the target region in the captured display images Ppre obtained by either the out-camera or the in-camera 213, it prompts the user to move the receiver 200. The prompted user moves the receiver 200, specifically so that the in-camera 213 and the out-camera face the user's front-rear direction. As a result, the receiver 200 recognizes the target region in the captured display image Ppre acquired by the out-camera. That is, the receiver 200 recognizes the region in which a person appears as the target region, superimposes the AR image on that target region of the captured display image Ppre, and displays the captured display image Ppre with the AR image superimposed.
FIG. 306 is a flowchart illustrating an example of the processing operations of the receiver 200 and the server.
The receiver 200 acquires the light ID transmitted from the transmitter 100, which is a lighting device, by imaging the transmitter 100 with the in-camera 213, and transmits that light ID to the server (step S471). The server receives the light ID from the receiver 200 (step S472) and estimates the position of the receiver 200 based on that light ID (step S473). For example, the server stores, for each light ID, a table indicating the room, building, space, or the like in which the transmitter 100 that transmits that light ID is installed. The server then estimates the room or other location associated in that table with the light ID transmitted from the receiver 200 as the position of the receiver 200. Furthermore, the server transmits the AR image and recognition information associated with the estimated position to the receiver 200 (step S474).
The receiver 200 acquires the AR image and recognition information transmitted from the server (step S475). Here, the receiver 200 starts processing to recognize, in each captured display image Ppre obtained by the out-camera and the in-camera 213, a target region corresponding to the recognition information. The receiver 200 then recognizes the target region in, for example, the captured display image Ppre acquired by the out-camera (step S476). The receiver 200 superimposes the AR image on the target region of the captured display image Ppre and displays the captured display image Ppre with the AR image superimposed (step S477).
In the example above, upon acquiring the AR image and recognition information transmitted from the server, the receiver 200 starts, in step S476, the processing to recognize the target region in the captured display images Ppre obtained by both the out-camera and the in-camera 213. However, in step S476, the receiver 200 may instead start the processing to recognize the target region only in the captured display image Ppre obtained by the out-camera. That is, the camera used to acquire the light ID (the in-camera 213 in the example above) and the camera used to acquire the captured display image Ppre on which the AR image is superimposed (the out-camera in the example above) may always be different.
Also, in the example above, the receiver 200 images the transmitter 100, a lighting device, with the in-camera 213, but it may instead capture, with the out-camera, the floor surface illuminated by the transmitter 100. Even with such out-camera imaging, the receiver 200 can acquire the light ID transmitted from the transmitter 100.
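The server-side lookup in steps S472 to S474 amounts to a table keyed by light ID. A minimal Python sketch under assumed names is shown below; the IDs, locations, and asset names in the tables are invented here purely for illustration.

```python
# Illustrative tables only; the IDs, locations, and asset names are invented.
LOCATION_TABLE = {
    0x1234: "meeting_room_A",
    0x5678: "lobby",
}
AR_TABLE = {
    "meeting_room_A": {"ar_image": "arrow_to_exit.png", "recognition": "door_frame"},
    "lobby":          {"ar_image": "welcome_banner.png", "recognition": "front_desk"},
}

def handle_light_id(light_id):
    """Steps S472-S474: estimate the receiver's position from the light ID
    and return the AR image and recognition information for that position."""
    location = LOCATION_TABLE.get(light_id)               # step S473
    if location is None:
        return None                                       # unknown ID: nothing to send
    return {"position": location, **AR_TABLE[location]}   # step S474 payload

print(handle_light_id(0x1234))
```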
FIG. 307 is a diagram illustrating an example of superimposition of AR images by the receiver 200.
The receiver 200 images the transmitter 100, which is configured as a microwave oven installed in a store such as a convenience store. This transmitter 100 includes a camera for imaging the interior of the microwave oven and a lighting device that illuminates that interior. The transmitter 100 recognizes the food or drink placed in the oven (that is, the item to be heated) through imaging by the camera. When heating that food or drink, the transmitter 100 causes the lighting device to emit light and, by changing the luminance of the lighting device, transmits a light ID indicating the recognized food or drink. Although this lighting device illuminates the interior of the microwave oven, its light also escapes to the outside through the oven's transparent window. The light ID is therefore transmitted from the lighting device to the outside of the microwave oven through the oven's window.
Here, the user purchases food or drink at the convenience store and places it in the transmitter 100, the microwave oven, to heat it. At this time, the transmitter 100 recognizes the food or drink with its camera and starts heating it while transmitting the light ID indicating the recognized item.
The receiver 200 acquires the light ID transmitted from the transmitter 100 by imaging the transmitter 100 that has started heating, and transmits that light ID to the server. The receiver 200 then acquires the AR image, audio data, and recognition information associated with that light ID from the server.
The AR images mentioned above include AR image P32a, a moving image showing a virtual view of the interior of the transmitter 100; AR image P32b, showing details of the food or drink placed in the oven; AR image P32c, a moving image showing steam rising from the transmitter 100; and AR image P32d, a moving image showing the time remaining until heating of the food or drink is complete.
For example, if the food placed in the microwave oven is a pizza, AR image P32a is a moving image in which the turntable carrying the pizza rotates while several dwarfs dance around it. AR image P32b is, for example, if the item placed in the oven is a pizza, an image showing the product name "pizza" and the pizza's ingredients.
Based on the recognition information, the receiver 200 recognizes the region of the captured display image Ppre in which the window of the transmitter 100 appears as the target region for AR image P32a, and superimposes AR image P32a on that region. Further, based on the recognition information, the receiver 200 recognizes the region of the captured display image Ppre above the region in which the transmitter 100 appears as the target region for AR image P32b, and superimposes AR image P32b there. It also recognizes the region of the captured display image Ppre lying between the target region of AR image P32a and the target region of AR image P32b as the target region for AR image P32c, and superimposes AR image P32c there. Finally, based on the recognition information, the receiver 200 recognizes the region of the captured display image Ppre below the region in which the transmitter 100 appears as the target region for AR image P32d, and superimposes AR image P32d there.
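The placement of the four AR images can be thought of as simple rectangle arithmetic around the detected oven and window regions. The following Python sketch computes illustrative target rectangles from assumed bounding boxes of the oven and of its window in image coordinates; the coordinate convention (x, y, w, h with y increasing downward), the region names, and the example numbers are assumptions made only for this sketch.

```python
def layout_ar_regions(oven_box, window_box, image_h):
    """Return target rectangles (x, y, w, h) for AR images P32a-P32d,
    given the oven and window bounding boxes in pixel coordinates
    (y grows downward). Purely illustrative geometry."""
    ox, oy, ow, oh = oven_box
    regions = {
        "P32a": window_box,                                  # on the oven window
        "P32b": (ox, 0, ow, oy),                             # above the oven
        "P32d": (ox, oy + oh, ow, image_h - (oy + oh)),      # below the oven
    }
    # P32c (steam) fills the gap between the window (P32a) and the region above (P32b).
    wx, wy, ww, wh = window_box
    regions["P32c"] = (wx, oy, ww, wy - oy)
    return regions

print(layout_ar_regions(oven_box=(100, 200, 400, 300),
                        window_box=(150, 280, 300, 150),
                        image_h=720))
```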
Furthermore, by playing the audio data, the receiver 200 outputs the sound produced when the food or drink is heated.
Because the receiver 200 displays AR images P32a to P32d as described above and also outputs sound, the user's interest can be held by the receiver 200 until heating of the food or drink is complete. As a result, the burden on the user waiting for heating to finish can be reduced. In addition, displaying AR image P32c showing steam and the like, and outputting the sound produced while the food is heated, can give the user a sizzling sensation. The display of AR image P32d also lets the user easily know the time remaining until heating is complete. The user can therefore spend the time until heating finishes away from the transmitter 100, the microwave oven, for example reading a book displayed in the store. The receiver 200 may also notify the user that heating is complete when the remaining time reaches zero.
In the example above, AR image P32a is a moving image in which the turntable carrying the pizza rotates while several dwarfs dance around it, but it may instead be, for example, an image virtually showing the temperature distribution inside the oven. Similarly, AR image P32b is an image showing the product name and ingredients of the food placed in the oven, but it may instead be an image showing nutritional components or calories, or an image showing a discount coupon.
As described above, in the display method according to this modification, the subject is a microwave oven equipped with a lighting device; the lighting device illuminates the interior of the microwave oven and, by changing in luminance, transmits a light ID to the outside of the microwave oven. In acquiring the captured display image Ppre and the decoding image Pdec, the captured display image Ppre and the decoding image Pdec are acquired by imaging the microwave oven transmitting the light ID. In recognizing the target region, the window portion of the microwave oven appearing in the captured display image Ppre is recognized as the target region. In displaying the captured display image Ppre, the captured display image Ppre is displayed with an AR image showing the changing state inside the oven superimposed on it.
Because changes in the state inside the microwave oven are thereby displayed as an AR image, the state of the oven interior can be conveyed to the user of the microwave oven in an easily understandable way.
FIG. 308 is a sequence diagram showing the processing operations of a system including the receiver 200, a microwave oven, a relay server, and an electronic payment server. As described above, the microwave oven includes a camera and a lighting device and transmits a light ID by changing the luminance of the lighting device; in other words, the microwave oven functions as the transmitter 100.
First, the microwave oven recognizes, with its camera, the food or drink placed in the oven (step S481). Next, the microwave oven transmits a light ID indicating the recognized food or drink to the receiver 200 by changing the luminance of its lighting device.
The receiver 200 receives the light ID transmitted from the microwave oven by imaging the microwave oven (step S483), and transmits the light ID together with card information to the relay server. The card information is information such as credit card details stored in advance in the receiver 200 and needed for electronic payment.
The relay server holds, for each light ID, a table giving the AR image, recognition information, and product information corresponding to that light ID. The product information indicates, among other things, the price of the food or drink indicated by the light ID. When the relay server receives the light ID and card information transmitted from the receiver 200 (step S485), it looks up the product information associated with that light ID in the table. The relay server then transmits the product information and the card information to the electronic payment server (step S486). When the electronic payment server receives the product information and card information transmitted from the relay server (step S487), it performs electronic payment processing based on them (step S488). When the electronic payment processing is complete, the electronic payment server notifies the relay server of the completion (step S489).
When the relay server confirms the payment completion notification from the electronic payment server (step S490), it instructs the microwave oven to start heating the food or drink (step S491). Furthermore, the relay server transmits to the receiver 200 the AR image and recognition information associated, in the table described above, with the light ID received in step S485 (step S493).
When the microwave oven receives the instruction to start heating from the relay server, it starts heating the food or drink placed in the oven (step S492). When the receiver 200 receives the AR image and recognition information transmitted from the relay server, it recognizes the target region corresponding to the recognition information in the captured display images Ppre acquired periodically through the imaging that began at step S483, and superimposes the AR image on that target region (step S494).
In this way, the user of the receiver 200 can complete payment simply by placing food or drink in the microwave oven and imaging it, and heating of the food or drink can then begin. If payment cannot be made, heating of the food or drink by the user can be prohibited. Furthermore, when heating starts, AR images such as AR image P32a shown in FIG. 307 can be displayed to inform the user of the state inside the oven.
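A minimal sketch of the relay server's role in this sequence is given below in Python. The table contents, the charge() callback standing in for the electronic payment server, and the start_heating() callback standing in for the instruction to the microwave oven are all illustrative assumptions; the sketch only shows that heating is instructed and AR assets are returned only after payment completes.

```python
# Illustrative product table; the ID, price, and asset names are invented.
PRODUCT_TABLE = {
    0x42: {"name": "pizza", "price": 450,
           "ar_image": "P32a_turntable.mp4", "recognition": "oven_window"},
}

def relay_handle_purchase(light_id, card_info, charge, start_heating):
    """Steps S485-S493 from the relay server's point of view.

    charge(product, card_info) -> bool stands in for the electronic payment
    server; start_heating() stands in for the instruction to the microwave."""
    product = PRODUCT_TABLE.get(light_id)
    if product is None:
        return None                        # unknown item: no payment, no heating
    if not charge(product, card_info):     # payment failed: heating stays prohibited
        return None
    start_heating()                        # step S491
    return {"ar_image": product["ar_image"],           # step S493 payload
            "recognition": product["recognition"]}

# Example with stub callbacks that always succeed.
result = relay_handle_purchase(0x42, {"card": "****"},
                               charge=lambda p, c: True,
                               start_heating=lambda: print("heating started"))
print(result)
```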
FIG. 309 is a sequence diagram showing the processing operations of a system including a POS terminal, a server, the receiver 200, and a microwave oven. As described above, the microwave oven includes a camera and a lighting device and transmits a light ID by changing the luminance of the lighting device; in other words, the microwave oven functions as the transmitter 100. The POS (point-of-sale) terminal is installed in the same store, such as a convenience store, as the microwave oven.
First, the user of the receiver 200 selects food or drink, which is a product, at the store and goes to the place where the POS terminal is installed to purchase it. A store clerk operates the POS terminal and receives payment for the food or drink from the user. Through the clerk's operation of the POS terminal, the POS terminal acquires operation input data and sales information (step S501). The sales information indicates, for example, the name, quantity, and price of the product, the place of sale, and the date and time of sale. The operation input data indicates, for example, the user's gender and age group as entered by the clerk. The POS terminal transmits the operation input data and sales information to the server (step S502). The server receives the operation input data and sales information transmitted from the POS terminal (step S503).
Meanwhile, after paying the clerk for the food or drink, the user of the receiver 200 places it in the microwave oven to heat it. The microwave oven recognizes, with its camera, the food or drink placed in the oven (step S504). Next, the microwave oven transmits a light ID indicating the recognized food or drink to the receiver 200 by changing the luminance of its lighting device (step S505). The microwave oven then starts heating the food or drink (step S507).
The receiver 200 receives the light ID transmitted from the microwave oven by imaging the microwave oven (step S508), and transmits the light ID together with terminal information to the server (step S509). The terminal information is information stored in advance in the receiver 200 and indicates, for example, the display language of the display 201 of the receiver 200 (such as English or Japanese).
When the server is accessed by the receiver 200 and receives the light ID and terminal information transmitted from the receiver 200, it determines whether this access from the receiver 200 is the first access (step S510). The first access is the first access made within a predetermined time from when the processing of step S503 was performed. If the server determines that the access from the receiver 200 is the first access (Yes in step S510), it stores the operation input data and the terminal information in association with each other (step S511).
Although the server here determines whether the access from the receiver 200 is the first access, it may instead determine whether the product indicated by the sales information matches the food or drink indicated by the light ID. Also, in step S511, the server may store not only the operation input data and the terminal information but also the sales information in association with them.
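The server-side decision in steps S510 and S511 can be sketched as a small amount of bookkeeping, shown below in Python. The length of the predetermined window, the record fields, and the idea of keeping a single timestamped pending record are assumptions made here for illustration; only the association of operation input data with terminal information on the first qualifying access comes from the description above.

```python
import time

FIRST_ACCESS_WINDOW_SEC = 300   # illustrative "predetermined time"

class AssociationServer:
    def __init__(self):
        self.pending_sale = None    # latest POS record awaiting a receiver access
        self.records = []           # stored associations

    def on_pos_data(self, operation_input, sales_info, now=None):
        """Step S503: remember the POS record together with its arrival time."""
        self.pending_sale = {"operation_input": operation_input,
                             "sales_info": sales_info,
                             "time": now if now is not None else time.time()}

    def on_receiver_access(self, light_id, terminal_info, now=None):
        """Steps S510-S511: if this is the first access within the window,
        associate the operation input data with the terminal information."""
        now = now if now is not None else time.time()
        sale = self.pending_sale
        if sale is None or now - sale["time"] > FIRST_ACCESS_WINDOW_SEC:
            return False                          # not a first access in the window
        self.records.append({"operation_input": sale["operation_input"],
                             "terminal_info": terminal_info,
                             "light_id": light_id})
        self.pending_sale = None                  # later accesses are not "first"
        return True
```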
(Indoor use)
FIG. 310 is a diagram showing indoor use in an underground shopping mall or the like.
The receiver 200 receives the light ID transmitted by the transmitter 100, which is configured as a lighting device, and estimates its own current position. The receiver 200 can then display the current position on a map for route guidance, or display information on nearby stores.
In an emergency, the transmitter 100 can transmit disaster information or evacuation information, so that this information can be obtained even when communication networks are congested, when a communication base station has failed, or when the user is in a place where radio waves from a base station do not reach. This is effective for anyone who has missed an emergency broadcast, and for hearing-impaired people who cannot hear one.
That is, the receiver 200 acquires the light ID transmitted from the transmitter 100 by imaging, and further acquires from the server the AR image P33 and recognition information associated with that light ID. The receiver 200 then recognizes the target region corresponding to the recognition information in the captured display image Ppre obtained by that imaging, and superimposes the arrow-shaped AR image P33 on that target region. In this way, the receiver 200 can be used as the wayfinder described above (see FIG. 294).
(Display of an augmented reality object)
FIG. 311 is a diagram illustrating display of an augmented reality object.
A stage 2718e on which augmented reality is displayed is configured as the transmitter 100 described above, and transmits information on the augmented reality object and the reference position for displaying it through the light emission patterns and position patterns of light emitting units 2718a, 2718b, 2718c, and 2718d.
Based on the received information, the receiver 200 displays an augmented reality object 2718f, which is an AR image, superimposed on the captured image.
Note that these general or specific aspects may be implemented as an apparatus, a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of an apparatus, a system, a method, an integrated circuit, a computer program, and a recording medium. They may also be implemented in a form in which a computer program that executes the method according to one embodiment is stored on a recording medium of a server and is distributed from the server to a terminal in response to a request from the terminal.
[Modification 4 of Embodiment 23]
FIG. 312 is a diagram showing the configuration of a display system in Modification 4 of Embodiment 23.
This display system 500 performs object recognition and augmented reality (Augmented Reality/Mixed Reality) display using visible light signals.
The receiver 200 performs imaging, receives visible light signals, and extracts feature quantities for object recognition or space recognition. The feature extraction is the extraction of image feature quantities from a captured image obtained by imaging. The visible light signal may also be a carrier signal adjacent to visible light, such as infrared or ultraviolet light. In this modification, the receiver 200 is configured as a recognition device that recognizes an object on which an augmented reality image (that is, an AR image) is to be displayed. In the example shown in FIG. 312, the object is, for example, the AR object 501.
The transmitter 100 transmits information such as an ID for identifying itself or the AR object 501 as a visible light signal or a radio signal. The ID is identification information, such as the light ID described above, and the AR object 501 corresponds to the target region described above. The visible light signal is a signal transmitted through changes in the luminance of a light source included in the transmitter 100.
The receiver 200 or the server 300 holds the identification information transmitted by the transmitter 100 in association with AR recognition information and AR display information. The association may be one-to-one or one-to-many. The AR recognition information is the recognition information described above, that is, information for recognizing the AR object 501 on which AR display is performed. Specifically, the AR recognition information is the image feature quantities of the AR object 501 (such as SIFT, SURF, or ORB features), its color, shape, size, reflectance, transmittance, three-dimensional model, or the like. The AR recognition information may also include identification information or a recognition algorithm indicating which recognition method is to be used. The AR display information is information for performing AR display, such as an image (that is, the AR image described above), video, audio, a three-dimensional model, motion data, display coordinates, a display size, or a transmittance. The AR display information may also be absolute values of, or change rates for, hue, saturation, and brightness.
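One way to hold this association is a registry keyed by identification information, mapping each ID to one or more pairs of AR recognition information and AR display information. The Python sketch below uses dataclasses with field names chosen to mirror the description above; the field set, the example ID, and the one-to-many list structure are illustrative assumptions and not definitions from this disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class ARRecognitionInfo:
    image_features: Optional[bytes] = None   # e.g. serialized SIFT/SURF/ORB features
    color: Optional[str] = None
    shape: Optional[str] = None
    size_mm: Optional[float] = None
    algorithm: Optional[str] = None          # which recognition method to use

@dataclass
class ARDisplayInfo:
    image: Optional[str] = None              # AR image or video asset
    audio: Optional[str] = None
    display_coords: Optional[tuple] = None
    display_size: Optional[tuple] = None
    transparency: float = 0.0

@dataclass
class AREntry:
    recognition: ARRecognitionInfo
    display: ARDisplayInfo

# One ID may be associated with several AR objects (one-to-many).
registry: Dict[int, List[AREntry]] = {
    0x0A01: [AREntry(ARRecognitionInfo(algorithm="ORB", shape="poster"),
                     ARDisplayInfo(image="poster_overlay.png"))],
}

def lookup(light_id: int) -> List[AREntry]:
    return registry.get(light_id, [])
```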
The transmitter 100 may also serve as the server 300. That is, the transmitter 100 may itself hold the AR recognition information and AR display information and transmit them over wired or wireless communication.
The receiver 200 captures images with a camera (specifically, an image sensor). The receiver 200 also receives visible light signals, or radio signals such as WiFi or Bluetooth (registered trademark). In addition, the receiver 200 may acquire position information obtained by GPS or the like, information obtained by a gyro sensor or an acceleration sensor, and information such as audio from a microphone, and may recognize AR objects present nearby by integrating all or some of this information. Alternatively, the receiver 200 may recognize the AR object using only one of these pieces of information, without integrating them.
FIG. 313 is a flowchart showing the processing operation of the display system according to Modification 4 of Embodiment 23.
The receiver 200 first determines whether it has already received a visible light signal (step S521). That is, the receiver 200 determines whether it has acquired a visible light signal indicating identification information, for example by photographing the transmitter 100, which transmits the visible light signal through changes in the luminance of its light source. Through this photographing, a captured image of the transmitter 100 is obtained.
If the receiver 200 determines that it has already received a visible light signal (Y in step S521), it identifies the AR object (an object, a reference point, spatial coordinates, or the position and orientation of the receiver 200 in space) from the received information. The receiver 200 further recognizes the relative position of the AR object, expressed as the distance and direction from the receiver 200 to the AR object. For example, the receiver 200 identifies the AR object (that is, the target region, which is the bright line pattern region) based on the size and position of the bright line pattern region shown in FIG. 244, and recognizes the relative position of that AR object.
The receiver 200 then transmits the information such as the ID contained in the visible light signal, together with the relative position, to the server 300, and acquires the AR recognition information and AR display information registered in the server 300 by using that information and the relative position as a key (step S522). At this time, the receiver 200 may simultaneously acquire not only the information on the recognized AR object but also the information (that is, the AR recognition information and AR display information) on other AR objects present near that AR object. This allows the receiver 200, when another nearby AR object is later imaged, to recognize that object quickly and without error. Such nearby AR objects are, for example, objects different from the AR object recognized first.
Instead of accessing the server 300, the receiver 200 may acquire this information from a database within the receiver 200. The receiver 200 may discard this information after a certain time has passed since acquisition, or after specific processing (for example, turning off the screen, pressing a button, terminating or suspending the application, displaying the AR image, or recognizing another AR object). Alternatively, for each piece of acquired information, the receiver 200 may lower the reliability of that information each time a certain period passes after its acquisition, and use the information with high reliability among the pieces of information.
Here, based on the relative position to each AR object, the receiver 200 may preferentially acquire the AR recognition information of AR objects that are relevant given that positional relationship. For example, in step S521 the receiver 200 acquires a plurality of visible light signals (that is, pieces of identification information) by photographing a plurality of transmitters 100, and in step S522 it acquires a plurality of pieces of AR recognition information (that is, image feature quantities) corresponding to those visible light signals. In this case, in step S522, the receiver 200 selects, from among the plurality of AR objects, the image feature quantities of the AR object closest to the receiver 200 that is photographing the transmitters 100. The selected image feature quantities are then used to identify the single AR object (that is, the first object) specified using the visible light signal. In this way, even when multiple image feature quantities are acquired, appropriate image feature quantities can be used to identify the first object.
On the other hand, if the receiver 200 determines that it has not received a visible light signal (N in step S521), it further determines whether it has already acquired AR recognition information (step S523). If it determines that it has not (N in step S523), the receiver 200 recognizes candidate AR objects by image processing, or using other information such as position information or radio wave information, without using identification information such as an ID indicated by a visible light signal (step S524). This processing may be performed by the receiver 200 alone, or the receiver 200 may transmit information such as the captured image or its image feature quantities to the server 300 and have the server 300 recognize the candidate AR objects. The receiver 200 then acquires the AR recognition information and AR display information corresponding to the recognized candidates from the server 300 or from its own database.
 ステップS522の後、受信機200は、例えば画像認識など、可視光信号によって示されるID等の識別情報を用いない別の方法で、AR対象物を検出しているか否かを判定する(ステップS525)。つまり、受信機200は、複数の方法でAR対象物を認識したか否かを判定する。具体的には、受信機200は、可視光信号によって示される識別情報に基づいて取得された画像特徴量を用いて、撮像画像からAR対象物(すなわち第1の対象物)を特定する。そして、受信機200は、そのような識別情報を用いずに、画像処理により、撮像画像からAR対象物(すなわち第2の対象物)を特定しているか否かを判定する。 After step S522, the receiver 200 determines whether or not the AR object is detected by another method that does not use identification information such as an ID indicated by the visible light signal, such as image recognition (step S525). ). That is, the receiver 200 determines whether or not the AR object is recognized by a plurality of methods. Specifically, the receiver 200 specifies the AR object (that is, the first object) from the captured image using the image feature amount acquired based on the identification information indicated by the visible light signal. Then, the receiver 200 determines whether or not the AR object (that is, the second object) is specified from the captured image by image processing without using such identification information.
 ここで、受信機200は、複数の方法でAR対象物を認識したと判定すると(ステップS525のY)、可視光信号による認識結果を優先する。つまり、受信機200は、各方法によって認識されたAR対象物が一致しているか否かを確認する。そして、一致していなければ、受信機200は、それらのAR対象物の中から、撮像画像中においてAR画像が重畳される1つのAR対象物を、可視光信号によって認識されたAR対象物に決定する(ステップS526)。つまり、第1の対象物が第2の対象物と異なる場合には、受信機200は、第1の対象物を優先して、AR画像が表示される対象物として認識する。なお、AR画像が表示される対象物は、AR画像が重畳される対象物である。 Here, when the receiver 200 determines that the AR object has been recognized by a plurality of methods (Y in step S525), the recognition result by the visible light signal is prioritized. That is, the receiver 200 confirms whether or not the AR objects recognized by the respective methods match. If they do not match, the receiver 200 selects one AR object on which the AR image is superimposed in the captured image as the AR object recognized by the visible light signal. Determination is made (step S526). That is, when the first object is different from the second object, the receiver 200 gives priority to the first object and recognizes it as the object on which the AR image is displayed. The object on which the AR image is displayed is an object on which the AR image is superimposed.
 または、受信機200は、複数の方法のそれぞれに付与された優先順に基づいて、高い優先順位が付与された方法を優先してもよい。つまり、受信機200は、各方法によって認識されたAR対象物の中から、撮像画像中においてAR画像が重畳される1つのAR対象物を、例えば最も高い優先順位が付与された方法によって認識されたAR対象物に決定する。または、受信機200は、多数決もしくは優先度付き多数決によって、撮像画像中においてAR画像が重畳される1つのAR対象物を決定してもよい。この処理によって、それまでの認識結果が覆された場合は、受信機200はエラー対応処理を行う。 Alternatively, the receiver 200 may prioritize a method having a higher priority order based on a priority order assigned to each of the plurality of methods. That is, the receiver 200 recognizes one AR object on which the AR image is superimposed in the captured image from among the AR objects recognized by each method, for example, by a method having the highest priority. AR target is determined. Alternatively, the receiver 200 may determine one AR object on which the AR image is superimposed in the captured image by a majority decision or a majority decision with priority. If the recognition result up to that point is overturned by this process, the receiver 200 performs an error handling process.
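 As a purely illustrative sketch of the priority-based selection described above (in Python; names such as RecognitionResult, method_priority, and resolve_ar_object are assumptions and do not appear in the specification):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RecognitionResult:
    object_id: str          # AR object identified by this method
    method: str             # e.g. "visible_light", "image_processing", "radio"
    method_priority: int    # lower value = higher priority

def resolve_ar_object(results: List[RecognitionResult]) -> Optional[str]:
    """Pick the AR object to overlay when several recognition methods are available."""
    if not results:
        return None
    ids = {r.object_id for r in results}
    if len(ids) == 1:               # all methods agree
        return ids.pop()
    # disagreement: trust the highest-priority method (e.g. the visible light result)
    best = min(results, key=lambda r: r.method_priority)
    return best.object_id
```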
 Next, based on the acquired AR recognition information, the receiver 200 recognizes the state of the AR object in the captured image (specifically, its absolute position, its position relative to the receiver 200, its size, its angle, the lighting conditions, occlusion, and so on) (step S527). The receiver 200 then displays the AR display information (that is, the AR image) superimposed on the captured image in accordance with the recognition result (step S528). That is, the receiver 200 superimposes the AR display information on the recognized AR object in the captured image. Alternatively, the receiver 200 may display only the AR display information.
 These features enable recognition or detection that is difficult with image processing alone. Such difficult recognition or detection includes, for example, distinguishing AR objects that look similar in an image (such as objects differing only in their text content), detecting AR objects with few patterns, detecting AR objects with high reflectance or transmittance, detecting AR objects whose shape or pattern changes (for example, animals), and detecting AR objects from a wide range of angles (various directions). In other words, this modification makes it possible to recognize these AR objects and perform AR display. Furthermore, in image processing that does not use visible light signals, as the number of AR objects to be recognized increases, the nearest-neighbor search over image feature values takes longer, the recognition processing slows down, and the recognition rate deteriorates. In this modification, however, the increase in recognition time and the deterioration of the recognition rate caused by an increase in the number of recognition targets are absent or extremely small, enabling effective recognition of AR objects. In addition, using the relative position of the AR object enables efficient recognition. For example, by using the approximate distance to the AR object, the processing that makes the computation of image feature values independent of the size of the AR object can be omitted, or size-dependent features can be used. Also, by using the angle of the AR object, whereas image feature values would normally have to be evaluated for many angles, only the image feature values corresponding to that angle of the AR object need to be held and computed, which improves computation speed or memory efficiency.
 [Summary of Modification 4 of Embodiment 23]
 FIG. 314 is a flowchart illustrating a recognition method according to one aspect of the present invention.
 The recognition method according to one aspect of the present invention is a method for recognizing an object on which an augmented reality image (AR image) is displayed, and includes steps S531 to S535.
 In step S531, the receiver 200 acquires identification information by capturing an image of the transmitter 100, which transmits a visible light signal through changes in the luminance of its light source. The identification information is, for example, a light ID. In step S532, the receiver 200 transmits the identification information to the server 300 and acquires image feature values corresponding to the identification information from the server 300. The image feature values are indicated as AR recognition information or recognition information.
 In step S533, the receiver 200 identifies the first object from the captured image of the transmitter 100 using the image feature values. In step S534, the receiver 200 identifies a second object from the captured image of the transmitter 100 by image processing, without using the identification information (that is, the light ID).
 In step S535, when the first object identified in step S533 differs from the second object identified in step S534, the receiver 200 gives priority to the first object and recognizes it as the object on which the augmented reality image is to be displayed.
 For example, the augmented reality image, the captured image, and the object correspond, respectively, to the AR image, the captured display image, and the target region in Embodiment 23 and its modifications.
 Thus, as shown in FIG. 313, even when the first object identified using the identification information indicated by the visible light signal differs from the second object identified by image processing without using that identification information, the first object is preferentially recognized as the object on which the augmented reality image is to be displayed. Therefore, the object on which the augmented reality image is to be displayed can be appropriately recognized from the captured image.
 Furthermore, in addition to the image feature values of the first object, the image feature values may also include the image feature values of a third object that is located in the vicinity of the first object and is different from the first object.
 Thus, as shown in step S522 of FIG. 313, not only the image feature values of the first object but also the image feature values of the third object are acquired, so that when the third object later appears in a captured image, the third object can be quickly identified or recognized.
 In addition, the receiver 200 may acquire a plurality of pieces of identification information by capturing images of a plurality of transmitters in step S531, and may acquire a plurality of image feature values corresponding to those pieces of identification information in step S532. In such a case, in step S533, the receiver 200 may use, for identifying the first object, the image feature values of the object that is closest, among the plurality of objects, to the receiver 200 that captures the images of the plurality of transmitters.
 Thus, as shown in step S522 of FIG. 313, even when a plurality of image feature values are acquired, appropriate image feature values can be used to identify the first object.
 Note that the recognition device in this modification is, for example, a device provided in the receiver 200 described above, and includes a processor and a recording medium. A program for causing the processor to execute the recognition method shown in FIG. 314 is recorded on this recording medium. The program in this modification is a program that causes a computer to execute the recognition method shown in FIG. 314.
 (Embodiment 24)
 FIG. 315 is a diagram illustrating an example of the operation modes of a visible light signal according to this embodiment. Note that this embodiment corresponds to a modification of Embodiment 20.
 As shown in FIG. 315, the physical (PHY) layer of the visible light signal has two operation modes. The first operation mode is a mode in which packet PWM (pulse width modulation) is performed, and the second operation mode is a mode in which packet PPM (pulse position modulation) is performed. The transmitter according to each of the above embodiments or their modifications generates and transmits a visible light signal by modulating a signal to be transmitted according to one of these operation modes.
 In the packet PWM operation mode, RLL (run-length limited) coding is not performed, the optical clock rate is 100 kHz, forward error correction (FEC) uses repetition coding, and a typical data rate is 5.5 kbps.
 In packet PWM, the pulse width is modulated, and a pulse is represented by two brightness states. The two brightness states are a bright state (Bright or High) and a dark state (Dark or Low); typically, they correspond to the light being on and off. A chunk of the physical layer signal called a packet (also referred to as a PHY packet) corresponds to a MAC (medium access control) frame. The transmitter can transmit PHY packets repeatedly and can transmit a set of PHY packets in no particular order.
 Note that this packet PWM is, for example, the modulation shown in FIG. 188, (b) of FIG. 189A, FIG. 197, and so on, described above. Packet PWM is used to generate visible light signals transmitted from ordinary transmitters.
 In the packet PPM operation mode, RLL coding is not performed, the optical clock rate is 100 kHz, forward error correction (FEC) uses repetition coding, and a typical data rate is 8 kbps.
 In packet PPM, the position of a pulse of short duration is modulated. That is, of a bright pulse (High) and a dark pulse (Low), it is the bright pulse whose position is modulated. The position of the pulse is indicated by the interval between the pulse and the next pulse.
 Packet PPM achieves deep dimming. The format, waveforms, and characteristics of packet PPM that are not described in the embodiments and their modifications are the same as those of packet PWM. Note that this packet PPM is, for example, the modulation shown in FIG. 189B, FIG. 199, FIG. 213, and so on, described above. Packet PPM is used to generate visible light signals transmitted from transmitters having light sources that emit very bright light.
 In both packet PWM and packet PPM, dimming in the physical layer of the visible light signal is controlled by the average luminance of the optional field.
 <PPDU format for packet PWM>
 Here, the format of a PPDU (physical-layer data unit) is described.
 FIG. 316 is a diagram illustrating an example of the PPDU format in mode 1 of packet PWM. FIG. 317 is a diagram illustrating an example of the PPDU format in mode 2 of packet PWM. FIG. 318 is a diagram illustrating an example of the PPDU format in mode 3 of packet PWM.
 In mode 1 and mode 2, a packet modulated by packet PWM includes a PHY payload A, an SHR (synchronization header), a PHY payload B, and an optional field, as shown in FIGS. 316 and 317. The SHR is a header for PHY payload A and PHY payload B. PHY payload A and PHY payload B are collectively referred to as the PHY payload.
 In mode 3, a packet modulated by packet PWM includes an SHR, a PHY payload, an SFT (synchronization footer), and an optional field, as shown in FIG. 318. The SHR is a header for the PHY payload, and the SFT is a footer for the PHY payload.
 In each of modes 1 to 3, in PHY payload A, the SHR, PHY payload B, and the SFT, a first luminance value and a second luminance value, which are luminance values different from each other, appear alternately along the time axis. The first luminance value is Bright or High, and the second luminance value is Dark or Low.
 Here, the SHR of packet PWM includes two or four pulses. These pulses are pulses of Bright or Dark brightness.
 FIG. 319 is a diagram illustrating an example of the pulse width patterns in the SHR for each of modes 1 to 3 of packet PWM.
 As shown in FIG. 319, in mode 1 of packet PWM, the SHR includes two pulses. Of these two pulses, in transmission order, the pulse width H1 of the first pulse is 100 μs and the pulse width H2 of the second pulse is 90 μs. In mode 2 of packet PWM, the SHR includes four pulses. Of these four pulses, in transmission order, the pulse width H1 of the first pulse is 100 μs, the pulse width H2 of the second pulse is 90 μs, the pulse width H3 of the third pulse is 90 μs, and the pulse width H4 of the fourth pulse is 100 μs. In mode 3 of packet PWM, the SHR includes four pulses. Of these four pulses, in transmission order, the pulse width H1 of the first pulse is 50 μs, the pulse width H2 of the second pulse is 40 μs, the pulse width H3 of the third pulse is 40 μs, and the pulse width H4 of the fourth pulse is 50 μs.
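 Restating the SHR pulse-width patterns above as data, a minimal sketch (Python; the dictionary name SHR_PWM_US is only an illustrative label) might be:

```python
# SHR pulse widths for packet PWM, in microseconds, in transmission order (FIG. 319)
SHR_PWM_US = {
    1: [100, 90],            # mode 1: two pulses (H1, H2)
    2: [100, 90, 90, 100],   # mode 2: four pulses (H1..H4)
    3: [50, 40, 40, 50],     # mode 3: four pulses (H1..H4)
}
```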
 In mode 1, the PHY payload includes 6 bits of data (that is, x0 to x5) as the signal to be transmitted; in mode 2, it includes 12 bits of data (that is, x0 to x11) as the signal to be transmitted. In mode 3, the PHY payload includes a variable number of bits of data (that is, x0 to xn) as the signal to be transmitted. Here, n is an integer of 1 or more; more specifically, n is an integer obtained by subtracting 1 from a multiple of 3.
 Here, the parameter y_k is defined as y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4. In mode 1, k is 0 or 1; in mode 2, k is 0, 1, 2, or 3. In mode 3, k is an integer from 0 to (n + 1)/3 − 1.
 In each of mode 1 and mode 2, the signal to be transmitted included in PHY payload A is modulated into two pulse widths P_A1 and P_A2, or four pulse widths P_A1 to P_A4, according to the pulse width P_Ak = 120 + 30 × (7 − y_k) [μs]. The signal to be transmitted included in PHY payload B is modulated into two pulse widths P_B1 and P_B2, or four pulse widths P_B1 to P_B4, according to the pulse width P_Bk = 120 + 30 × y_k [μs].
 In mode 3, the signal to be transmitted included in the PHY payload is modulated into (n + 1)/3 pulse widths P1, P2, ..., according to the pulse width P_k = 100 + 20 × y_k [μs].
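 As a non-normative sketch of the mapping from payload bits to pulse widths described above (Python; the function names are illustrative only):

```python
def y_values(bits):
    """Group bits x0, x1, x2, ... into 3-bit values y_k = x_(3k) + 2*x_(3k+1) + 4*x_(3k+2)."""
    assert len(bits) % 3 == 0
    return [bits[3*k] + 2*bits[3*k + 1] + 4*bits[3*k + 2] for k in range(len(bits) // 3)]

def pwm_pulse_widths(bits, mode):
    """Pulse widths in microseconds for packet PWM.

    Modes 1 and 2 return the widths for PHY payload A and PHY payload B;
    mode 3 returns a single list of widths."""
    ys = y_values(bits)
    if mode in (1, 2):
        payload_a = [120 + 30 * (7 - y) for y in ys]   # P_Ak = 120 + 30*(7 - y_k)
        payload_b = [120 + 30 * y for y in ys]         # P_Bk = 120 + 30*y_k
        return payload_a, payload_b
    return [100 + 20 * y for y in ys]                  # mode 3: P_k = 100 + 20*y_k

# Example: mode 1 with the 6-bit signal x0..x5 = 1, 0, 1, 0, 1, 1
widths_a, widths_b = pwm_pulse_widths([1, 0, 1, 0, 1, 1], mode=1)
```

 Note that, from these formulas, P_Ak + P_Bk = 450 μs for every k, which is consistent with the complementary-brightness relationship between PHY payload A and PHY payload B described later.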
 In mode 1 and mode 2, half of the entire payload consisting of PHY payload A and PHY payload B is optional. That is, the transmitter may transmit both PHY payload A and PHY payload B, or only one of them. Furthermore, the transmitter may transmit only part of PHY payload A and only part of PHY payload B. Specifically, in mode 2, the transmitter may transmit the pulses of pulse widths P_A3 and P_A4 in PHY payload A and the pulses of pulse widths P_B1 and P_B2 in PHY payload B.
 The SFT in mode 3 includes four pulses whose pulse widths F1 to F4 are 40 μs, 50 μs, 60 μs, and 40 μs, respectively. The SFT is optional; the transmitter may therefore transmit the next SHR instead of the SFT.
 The transmitter may transmit any type of signal as the signal included in the optional field. However, that signal must not contain the SHR pattern. Such an optional field is used for direct current compensation, dimming control, and the like.
 <PPDU format for packet PPM>
 FIG. 320 is a diagram illustrating an example of the PPDU format in mode 1 of packet PPM. FIG. 321 is a diagram illustrating an example of the PPDU format in mode 2 of packet PPM. FIG. 322 is a diagram illustrating an example of the PPDU format in mode 3 of packet PPM.
 In mode 1 and mode 2, a packet modulated by packet PPM includes an SHR, a PHY payload, and an optional field, as shown in FIGS. 320 and 321. The SHR is a header for the PHY payload.
 In mode 3, a packet modulated by packet PPM includes an SHR, a PHY payload, an SFT, and an optional field, as shown in FIG. 322. The SFT is a footer for the PHY payload.
 In each of modes 1 to 3, in the SHR, the PHY payload, and the SFT, a first luminance value and a second luminance value, which are luminance values different from each other, appear alternately along the time axis. The first luminance value is Bright or High, and the second luminance value is Dark or Low.
 The duration of the short bright pulse in packet PPM (L in FIGS. 320 to 322) is shorter than 10 μs. This keeps the average luminance of the visible light signal low, that is, dark.
 The duration of the SHR of packet PPM includes three intervals H1 to H3. The three intervals H1 to H3 are the intervals between four consecutive pulses (specifically, the bright pulses described above).
 FIG. 323 is a diagram illustrating an example of the interval patterns in the SHR for each of modes 1 to 3 of packet PPM.
 As shown in FIG. 323, in mode 1 of packet PPM, the three intervals H1 to H3 are each 160 μs. In mode 2 of packet PPM, the first interval H1 of the three intervals H1 to H3 is 160 μs, the second interval H2 is 180 μs, and the third interval H3 is 160 μs. In mode 3 of packet PPM, the first interval H1 of the three intervals H1 to H3 is 80 μs, the second interval H2 is 90 μs, and the third interval H3 is 80 μs.
 In mode 1, the PHY payload includes 6 bits of data (that is, x0 to x5) as the signal to be transmitted; in mode 2, it includes 12 bits of data (that is, x0 to x11) as the signal to be transmitted. In mode 3, the PHY payload includes a variable number of bits of data (that is, x0 to xn) as the signal to be transmitted. Here, n is an integer of 5 or more; more specifically, n is an integer obtained by subtracting 1 from a multiple of 3.
 Here, the parameter y_k is defined as y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4. In mode 1, k is 0 or 1; in mode 2, k is 0, 1, 2, or 3. In mode 3, k is an integer from 0 to (n + 1)/3 − 1.
 In each of mode 1 and mode 2, the signal to be transmitted included in the PHY payload is modulated into two intervals P1 and P2, or four intervals P1 to P4, according to the interval P_k = 180 + 30 × y_k [μs].
 In mode 3, the signal to be transmitted included in the PHY payload is modulated into (n + 1)/3 intervals P1, P2, ..., according to the interval P_k = 100 + 20 × y_k [μs]. In mode 3, the PHY payload continues until the SFT or the next SHR is transmitted.
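 For illustration only (Python; the function name ppm_intervals is an assumption), the interval encoding for packet PPM could be sketched as follows, using the same 3-bit grouping as in the packet PWM sketch above:

```python
def ppm_intervals(bits, mode):
    """Pulse-to-pulse intervals in microseconds for packet PPM."""
    assert len(bits) % 3 == 0
    # y_k = x_(3k) + 2*x_(3k+1) + 4*x_(3k+2)
    ys = [bits[3*k] + 2*bits[3*k + 1] + 4*bits[3*k + 2] for k in range(len(bits) // 3)]
    if mode in (1, 2):
        return [180 + 30 * y for y in ys]     # P_k = 180 + 30*y_k
    return [100 + 20 * y for y in ys]         # mode 3: P_k = 100 + 20*y_k

# Example: mode 2 with a 12-bit signal x0..x11
intervals = ppm_intervals([0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0], mode=2)
```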
 The SFT in mode 3 includes three intervals F1 to F3, which are 90 μs, 80 μs, and 90 μs, respectively. The SFT is optional; the transmitter may therefore transmit the next SHR instead of the SFT.
 The transmitter may transmit any type of signal as the signal included in the optional field. However, that signal must not contain the SHR pattern. Such an optional field is used for direct current compensation, dimming control, and the like.
 <PHY frame format>
 The PHY frame in mode 1 of each of packet PWM and packet PPM is described below.
 As described above, the PHY payload includes 6 bits of data (that is, x0 to x5). The packet address A(a0, a1) of the packet containing that data is given by (x1, x4), and the packet data D(d0, d1, d2, d3) is given by (x0, x2, x3, x5). The PHY frame, which is the MAC frame described above, consists of 16 bits including the packet data D00, D01, D10, and D11 of four packets. Here, the packet data Dk is the packet data D of the packet whose address A indicates k.
 Here, as described above, 2 bits (x1, x4) of the 6 bits (x0 to x5) are used for the packet address A(a0, a1). This makes it possible to shorten the duration of the 6-bit PHY payload, and as a result, the visible light signal can be transmitted over a long distance. That is, because neither of the 2 bits (x2, x5) of the 6 bits (x0 to x5) is used for the packet address A, each of them can be set to 0. Moreover, these 2 bits (x2, x5) are multiplied by the large coefficient 4 in y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4 described above, and the pulse width or interval is determined based on the result of that multiplication. Therefore, when each of these 2 bits (x2, x5) is 0, the duration of the PHY payload can be shortened, and as a result, the transmission distance of the visible light signal can be extended.
 Furthermore, because neither of the 2 bits (x0, x3) of the 6 bits (x0 to x5) is used for the packet address A, reception errors can be suppressed. That is, the influence of the 2 bits (x0, x3) of the 6 bits (x0 to x5) on the parameter y_k (= x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4) described above is small. If these 2 bits (x0, x3) were used for the packet address A, similar values of the parameter y_k, that is, similar pulse widths or intervals, could therefore result even for different packet addresses A. As a result, the receiver could mistake the packet address A. Mistaking the packet address A causes a higher PHY frame reception error rate than mistaking part of the packet data. Therefore, reception errors can be suppressed by using (x1, x4), rather than the 2 bits (x0, x3) of the 6 bits (x0 to x5), for the packet address A.
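 To make the mode 1 bit assignment concrete, here is a minimal, purely illustrative sketch (Python; the function name encode_mode1_payload is not from the specification) that packs a 2-bit packet address and 4-bit packet data into the 6 payload bits x0 to x5 as described above:

```python
def encode_mode1_payload(address, data):
    """address = (a0, a1), data = (d0, d1, d2, d3) -> payload bits (x0, ..., x5).

    A(a0, a1) is carried by (x1, x4) and D(d0, d1, d2, d3) by (x0, x2, x3, x5).
    """
    a0, a1 = address
    d0, d1, d2, d3 = data
    return (d0, a0, d1, d2, a1, d3)

# Example: address bits (a0, a1) = (0, 1) with data (1, 0, 1, 1)
bits = encode_mode1_payload((0, 1), (1, 0, 1, 1))   # -> (1, 0, 0, 1, 1, 1)
```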
 Incidentally, the MPDU (medium-access-control protocol data unit) has a very large overhead relative to the PHY frame, and most of its fields are unnecessary for a short, repeatedly transmitted MSDU (medium-access-control service data unit). Therefore, the PHY frame has no MHR (medium-access-control header), and the MFR (medium-access-control footer) is optional.
 Next, the PHY frame in mode 2 of each of packet PWM and packet PPM is described.
 FIG. 324 is a diagram illustrating an example of the 12-bit data included in the PHY payload.
 As described above, the PHY payload includes 12 bits of data (that is, x0 to x11). This data consists of a packet address A (all or part of a0 to a3), packet data Da (all or part of da0 to da6), packet data Db (all or part of db0 to db3), and a stop bit S (s).
 That is, as shown in FIG. 324, the 3 bits (x0, x1, x2) represent (da0, s, db0), and the 3 bits (x3, x4, x5) represent (da1, a0 or da6, db1). Further, the 3 bits (x6, x7, x8) represent (da2, a1 or da5, db2), and the 3 bits (x9, x10, x11) represent (da3, a2 or da4, a3 or db3).
 Note that the 12-bit data shown in FIG. 324 is the same as the data shown in FIG. 215. That is, the codes w1, w2, w3, and w4 shown in FIG. 215 correspond to the 3-bit groups (x0, x1, x2), (x3, x4, x5), (x6, x7, x8), and (x9, x10, x11), respectively.
 Bits x4, x7, x10, and x11 are used for either the packet address or the packet data according to the packet division rule.
 FIGS. 325 to 332 are diagrams illustrating processing for dividing a PHY frame into packets. The processing shown in FIGS. 325 to 332 is similar to the packet generation processing shown in FIGS. 216 to 226, but differs in that the packets generated by the division do not include parity. In each box shown in FIGS. 325 to 332, the number on the second row from the top indicates the bit size, and the number on the third row from the top indicates the bit value (0 or 1).
 FIG. 325 is a diagram illustrating processing for fitting a PHY frame into one packet. That is, FIG. 325 illustrates processing for fitting the 7 bits of data included in the PHY frame into one packet without dividing the PHY frame.
 Specifically, of the 7 bits of the PHY frame, the 4-bit packet data Da(0) and the 3-bit packet data Db(0) are placed in packet 0 together with a 1-bit stop bit and a 4-bit packet address. The stop bit indicates "1", and the packet address indicates "0000".
 FIG. 326 is a diagram illustrating processing for dividing a PHY frame into two packets.
 Of the 18 bits of the PHY frame, the 7-bit packet data Da(0) and the 4-bit packet data Db(0) are placed in packet 0 together with a 1-bit stop bit. That stop bit indicates "0". Also, of the 18 bits of the PHY frame, the 4-bit packet data Da(1) and the 3-bit packet data Db(1) are placed in packet 1 together with a 1-bit stop bit and a 4-bit packet address. That stop bit indicates "1", and the packet address indicates "1000".
 FIG. 327 is a diagram illustrating processing for dividing a PHY frame into three packets.
 Of the 27 bits of the PHY frame, the 6-bit packet data Da(0) and the 4-bit packet data Db(0) are placed in packet 0 together with a 1-bit stop bit and a 1-bit packet address. That stop bit indicates "0", and the packet address indicates "0". Also, of the 27 bits of the PHY frame, the 6-bit packet data Da(1) and the 4-bit packet data Db(1) are placed in packet 1 together with a 1-bit stop bit and a 1-bit packet address. That stop bit indicates "0", and the packet address indicates "1". Furthermore, of the 27 bits of the PHY frame, the 4-bit packet data Da(2) and the 3-bit packet data Db(2) are placed in packet 2 together with a 1-bit stop bit and a 4-bit packet address. That stop bit indicates "1", and the packet address indicates "0100".
 FIG. 328 is a diagram illustrating processing for dividing a PHY frame into four packets.
 Of the 34 bits of the PHY frame, the 5-bit packet data Da(0) and the 4-bit packet data Db(0) are placed in packet 0 together with a 1-bit stop bit and a 2-bit packet address. That stop bit indicates "0", and the packet address indicates "00". Also, the 5-bit packet data Da(1) and the 4-bit packet data Db(1) are placed in packet 1 together with a 1-bit stop bit and a 2-bit packet address. That stop bit indicates "0", and the packet address indicates "10". Similarly, the 5-bit packet data Da(2) and the 4-bit packet data Db(2) are placed in packet 2 together with a 1-bit stop bit and a 2-bit packet address. That stop bit indicates "0", and the packet address indicates "01". Furthermore, of the 34 bits of the PHY frame, the 4-bit packet data Da(3) and the 3-bit packet data Db(3) are placed in packet 3 together with a 1-bit stop bit and a 4-bit packet address. That stop bit indicates "1", and the packet address indicates "1100".
 FIG. 329 is a diagram illustrating processing for dividing a PHY frame into five packets.
 Of the 43 bits of the PHY frame, the 5-bit packet data Da(0) and the 4-bit packet data Db(0) are placed in packet 0 together with a 1-bit stop bit and a 2-bit packet address. That stop bit indicates "0", and the packet address indicates "00". Similarly, each of packets 1 to 3 contains 5-bit packet data Da and 4-bit packet data Db together with a 1-bit stop bit and a 2-bit packet address. The stop bits of those packets indicate "0". Furthermore, of the 43 bits of the PHY frame, the 4-bit packet data Da(4) and the 3-bit packet data Db(4) are placed in packet 4 together with a 1-bit stop bit and a 4-bit packet address. That stop bit indicates "1", and the packet address indicates "0010".
 FIG. 330 is a diagram illustrating processing for dividing a PHY frame into N packets (N = 6, 7, or 8).
 Of the (8N − 1) bits of the PHY frame, the 4-bit packet data Da(0) and the 4-bit packet data Db(0) are placed in packet 0 together with a 1-bit stop bit and a 3-bit packet address. That stop bit indicates "0", and the packet address indicates "000". Similarly, each of packets 1 to (N − 2) contains 4-bit packet data Da and 4-bit packet data Db together with a 1-bit stop bit and a 3-bit packet address. The stop bits of those packets indicate "0". Furthermore, of the (8N − 1) bits of the PHY frame, the 4-bit packet data Da(N − 1) and the 3-bit packet data Db(N − 1) are placed in packet (N − 1) together with a 1-bit stop bit and a 4-bit packet address. That stop bit indicates "1".
 FIG. 331 is a diagram illustrating processing for dividing a PHY frame into nine packets.
 Of the 71 bits of the PHY frame, the 4-bit packet data Da(0) and the 4-bit packet data Db(0) are placed in packet 0 together with a 1-bit stop bit and a 3-bit packet address. That stop bit indicates "0", and the packet address indicates "000". Similarly, each of packets 1 to 7 contains 4-bit packet data Da and 4-bit packet data Db together with a 1-bit stop bit and a 3-bit packet address. The stop bits of those packets indicate "0". Furthermore, of the 71 bits of the PHY frame, the 4-bit packet data Da(8) and the 3-bit packet data Db(8) are placed in packet 8 together with a 1-bit stop bit and a 4-bit packet address. That stop bit indicates "1", and the packet address indicates "0001".
 FIG. 332 is a diagram illustrating processing for dividing a PHY frame into N packets (N = 10 to 16).
 Of the 7N bits of the PHY frame, the 4-bit packet data Da(0) and the 3-bit packet data Db(0) are placed in packet 0 together with a 1-bit stop bit and a 4-bit packet address. That stop bit indicates "0", and the packet address indicates "0000". Similarly, each of packets 1 to (N − 2) contains 4-bit packet data Da and 3-bit packet data Db together with a 1-bit stop bit and a 4-bit packet address. The stop bits of those packets indicate "0". Furthermore, of the 7N bits of the PHY frame, the 4-bit packet data Da(N − 1) and the 3-bit packet data Db(N − 1) are placed in packet (N − 1) together with a 1-bit stop bit and a 4-bit packet address. That stop bit indicates "1".
 When transmitting a large amount of data, such as data (a PHY frame) exceeding 112 bits or stream data, the transmitter sets the stop bit of packet 15 to "0" instead of "1". The transmitter then stores the portion of that large amount of data that could not be included in packets 0 to 15 in packets newly arranged starting from packet 0, and transmits them. In other words, the transmitter stores the data that could not be included in packets 0 to 15 in packets whose packet addresses again start from "0000", and transmits them.
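 The following is a rough, non-normative sketch (Python; helper names such as split_phy_frame are illustrative) of the N = 10 to 16 division rule described above. It simplifies the assignment of frame bits to Da(i) and Db(i) to a sequential split, and it assumes the address bits are the packet index written LSB first, which appears consistent with the addresses shown in the figures (e.g., "1000" for packet 1 and "0001" for packet 8); both points should be checked against FIGS. 325 to 332.

```python
def split_phy_frame(frame_bits, n_packets):
    """Split a 7*N-bit PHY frame into N packets (N = 10 to 16): each packet holds
    4 bits of Da, 3 bits of Db, one stop bit, and a 4-bit packet address."""
    assert 10 <= n_packets <= 16 and len(frame_bits) == 7 * n_packets
    packets = []
    for i in range(n_packets):
        chunk = frame_bits[7 * i: 7 * (i + 1)]
        packets.append({
            "Da": chunk[:4],                              # 4-bit packet data Da(i)
            "Db": chunk[4:],                              # 3-bit packet data Db(i)
            "stop": 1 if i == n_packets - 1 else 0,       # stop bit "1" only on the last packet
            "address": [(i >> b) & 1 for b in range(4)],  # 4-bit address, LSB first (assumed)
        })
    return packets
```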
 Like the PHY frame in mode 1, the PHY frame in mode 2 has no MHR, and the MFR is optional.
 (Summary of Embodiment 24)
 The visible light signal generation method according to Embodiment 24 is illustrated by the flowchart of FIG. 230A.
 That is, this visible light signal generation method is a method for generating a visible light signal transmitted by changes in the luminance of a light source provided in a transmitter, and includes steps SD1 to SD3.
 In step SD1, a preamble is generated, which is data in which a first luminance value and a second luminance value, which are luminance values different from each other, appear alternately along the time axis.
 In step SD2, a first payload is generated by determining, in data in which the first and second luminance values appear alternately along the time axis, the durations for which the first and second luminance values each continue, according to a first scheme that depends on the signal to be transmitted.
 Finally, in step SD3, a visible light signal is generated by combining the preamble and the first payload.
 For example, as shown in FIGS. 316 to 318, the first and second luminance values are Bright (High) and Dark (Low), and the first payload is the PHY payload (PHY payload A or PHY payload B). By transmitting a visible light signal generated in this way, the number of received packets can be increased and the reliability can be enhanced, as shown in FIGS. 191 to 193. As a result, communication among a variety of devices can be enabled.
 This visible light signal generation method may further generate a second payload whose brightness has a complementary relationship with the brightness expressed by the first payload, by determining, in data in which the first and second luminance values appear alternately along the time axis, the durations for which the first and second luminance values each continue, according to a second scheme that depends on the signal to be transmitted. In this case, in generating the visible light signal, the visible light signal is generated by combining the preamble with the first and second payloads in the order of the first payload, the preamble, and the second payload.
 For example, as shown in FIGS. 316 and 317, the first and second luminance values are Bright (High) and Dark (Low), and the first and second payloads are PHY payload A and PHY payload B.
 Because the brightness of the first payload and the brightness of the second payload have a complementary relationship, the brightness can thus be kept constant regardless of the signal to be transmitted. Furthermore, since the first payload and the second payload are data obtained by modulating the same signal to be transmitted according to different schemes, a receiver that receives only one of the payloads can demodulate that payload into the signal to be transmitted. In addition, the header (SHR), which is the preamble, is arranged between the first payload and the second payload. Therefore, if the receiver receives only the latter part of the first payload, the header, and only the leading part of the second payload, it can demodulate them into the signal to be transmitted. The reception efficiency of the visible light signal can therefore be increased.
 For example, the preamble is a header for the first and second payloads, and in that header, the luminance values appear in the order of the first luminance value for a first duration and then the second luminance value for a second duration. Here, the first duration is 100 μs and the second duration is 90 μs. That is, as shown in FIG. 319, the pattern of durations (pulse widths) of the pulses included in the header (SHR) in mode 1 of packet PWM is defined.
 Also, the preamble is a header for the first and second payloads, and in that header, the luminance values appear in the order of the first luminance value for a first duration, the second luminance value for a second duration, the first luminance value for a third duration, and the second luminance value for a fourth duration. Here, the first duration is 100 μs, the second duration is 90 μs, the third duration is 90 μs, and the fourth duration is 100 μs. That is, as shown in FIG. 319, the pattern of durations (pulse widths) of the pulses included in the header (SHR) in mode 2 of packet PWM is defined.
 Because the header patterns for mode 1 and mode 2 of packet PWM are defined in this way, the receiver can appropriately receive the first and second payloads in the visible light signal.
 The signal to be transmitted consists of 6 bits from the first bit x0 to the sixth bit x5, and in each of the first and second payloads, the luminance values appear in the order of the first luminance value for a third duration and then the second luminance value for a fourth duration. Here, when the parameter y_k is expressed as y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4 (where k is 0 or 1), in generating the first payload, each of the third and fourth durations in the first payload is determined according to the first scheme, namely the duration P_k = 120 + 30 × (7 − y_k) [μs]. In generating the second payload, each of the third and fourth durations in the second payload is determined according to the second scheme, namely the duration P_k = 120 + 30 × y_k [μs]. That is, as shown in FIG. 316, in mode 1 of packet PWM, the signal to be transmitted is modulated as the durations (pulse widths) of the pulses included in each of the first payload (PHY payload A) and the second payload (PHY payload B).
 The signal to be transmitted consists of 12 bits from the first bit x0 to the twelfth bit x11, and in each of the first and second payloads, the luminance values appear in the order of the first luminance value for a fifth duration, the second luminance value for a sixth duration, the first luminance value for a seventh duration, and the second luminance value for an eighth duration. Here, when the parameter y_k is expressed as y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4 (where k is 0, 1, 2, or 3), in generating the first payload, each of the fifth to eighth durations in the first payload is determined according to the first scheme, namely the duration P_k = 120 + 30 × (7 − y_k) [μs]. In generating the second payload, each of the fifth to eighth durations in the second payload is determined according to the second scheme, namely the duration P_k = 120 + 30 × y_k [μs]. That is, as shown in FIG. 317, in mode 2 of packet PWM, the signal to be transmitted is modulated as the durations (pulse widths) of the pulses included in each of the first payload (PHY payload A) and the second payload (PHY payload B).
 In this way, in modes 1 and 2 of packet PWM, the signal to be transmitted is modulated as the pulse widths of the pulses, so that the receiver can appropriately demodulate the visible light signal into the signal to be transmitted based on those pulse widths.
 また、プリアンブルは、第1のペイロードに対するヘッダであり、そのヘッダでは、第1の時間長の第1の輝度値、第2の時間長の第2の輝度値、第3の時間長の第1の輝度値、第4の時間長の第2の輝度値の順で、それぞれの輝度値が現れる。ここで、その第1の時間長は、50μ秒であり、第2の時間長は、40μ秒であり、第3の時間長は、40μ秒であり、第4の時間長は、50μ秒である。つまり、図319に示すように、パケットPWMのモード3におけるヘッダ(SHR)に含まれる各パルスの時間長(パルス幅)のパターンが定義される。 The preamble is a header for the first payload, and in the header, the first luminance value having the first time length, the second luminance value having the second time length, and the first luminance having the third time length. The luminance values appear in the order of the luminance value of the second and the second luminance value of the fourth time length. Here, the first time length is 50 μsec, the second time length is 40 μsec, the third time length is 40 μsec, and the fourth time length is 50 μsec. is there. That is, as shown in FIG. 319, a pattern of time length (pulse width) of each pulse included in the header (SHR) in mode 3 of the packet PWM is defined.
 このように、パケットPWMのモード3のヘッダのパターンが定義されるため、受信機は、可視光信号における第1のペイロードを適切に受信することができる。 Thus, since the pattern of the header of the mode 3 of the packet PWM is defined, the receiver can appropriately receive the first payload in the visible light signal.
 また、送信対象の信号は、第1のビットxから第3nのビットx3n-1までの3nビットからなり(nは2以上の整数)、第1のペイロードの時間長は、それぞれ第1または第2の輝度値が継続する第1~第nの時間長からなる。ここで、パラメータyが、y=x3k+x3k+1×2+x3k+2×4として表される場合(kは0~(n-1)までの整数)、第1のペイロードの生成では、第1のペイロードにおける第1~第nの時間長のそれぞれを、第1の方式である時間長P=100+20×y[μ秒]にしたがって決定する。つまり、図318に示すように、パケットPWMのモード3では、送信対象の信号が、第1のペイロード(PHYペイロード)に含まれる各パルスの時間長(パルス幅)として変調される。 The signal to be transmitted consists of 3n bits from the first bit x 0 through bit x 3n-1 of the 3n (n is an integer of 2 or more), the time length of the first payload, a respective one Alternatively, it consists of first to nth time lengths in which the second luminance value continues. Here, when the parameter y k is expressed as y k = x 3k + x 3k + 1 × 2 + x 3k + 2 × 4 (k is an integer from 0 to (n−1)), the first payload is generated. Then, each of the first to n-th time lengths in the first payload is determined according to the first method, the time length P k = 100 + 20 × y k [μsec]. In other words, as shown in FIG. 318, in the mode 3 of the packet PWM, the signal to be transmitted is modulated as the time length (pulse width) of each pulse included in the first payload (PHY payload).
As described above, in mode 3 of packet PWM, the signal to be transmitted is modulated as the pulse width of each pulse, so the receiver can appropriately demodulate the visible light signal into the signal to be transmitted on the basis of those pulse widths.

FIG. 333A is a flowchart showing another method of generating a visible light signal according to Embodiment 24. This method generates a visible light signal to be transmitted through luminance changes of a light source included in a transmitter, and includes steps SE1 to SE3.

In step SE1, a preamble is generated; the preamble is data in which first and second luminance values, which differ from each other, appear alternately along the time axis.

In step SE2, a first payload is generated as data in which the first and second luminance values appear alternately along the time axis, by determining each interval from the appearance of the first luminance value to the appearance of the next first luminance value according to a scheme that depends on the signal to be transmitted.

In step SE3, a visible light signal is generated by combining the preamble and the first payload.
FIG. 333B is a block diagram showing the configuration of another signal generation device according to Embodiment 24. This signal generation device E10 generates a visible light signal to be transmitted through luminance changes of a light source included in a transmitter, and includes a preamble generation unit E11, a payload generation unit E12, and a combining unit E13. The signal generation device E10 executes the processing of the flowchart shown in FIG. 333A.

That is, the preamble generation unit E11 generates a preamble, which is data in which first and second luminance values, which differ from each other, appear alternately along the time axis.

The payload generation unit E12 generates a first payload as data in which the first and second luminance values appear alternately along the time axis, by determining each interval from the appearance of the first luminance value to the appearance of the next first luminance value according to a scheme that depends on the signal to be transmitted.

The combining unit E13 generates a visible light signal by combining the preamble and the first payload.

For example, as shown in FIGS. 320 to 322, the first and second luminance values are Bright (High) and Dark (Low), and the first payload is a PHY payload. By transmitting a visible light signal generated in this way, the number of received packets can be increased and the reliability can be improved, as shown in FIGS. 191 to 193. As a result, communication between a wide variety of devices becomes possible.
For example, the time length of the first luminance value in each of the preamble and the first payload is 10 μs or less.

This makes it possible to keep the average luminance of the light source low while performing visible light communication.

The preamble is a header for the first payload, and the time length of the header includes three intervals, each from the appearance of the first luminance value to the appearance of the next first luminance value. Here, each of the three intervals is 160 μs. That is, as shown in FIG. 323, the pattern of intervals between the pulses included in the header (SHR) in mode 1 of packet PPM is defined. Each of these pulses is, for example, a pulse having the first luminance value.

The preamble is a header for the first payload, and the time length of the header includes three intervals, each from the appearance of the first luminance value to the appearance of the next first luminance value. Here, the first of the three intervals is 160 μs, the second is 180 μs, and the third is 160 μs. That is, as shown in FIG. 323, the pattern of intervals between the pulses included in the header (SHR) in mode 2 of packet PPM is defined.

The preamble is a header for the first payload, and the time length of the header includes three intervals, each from the appearance of the first luminance value to the appearance of the next first luminance value. Here, the first of the three intervals is 80 μs, the second is 90 μs, and the third is 80 μs. That is, as shown in FIG. 323, the pattern of intervals between the pulses included in the header (SHR) in mode 3 of packet PPM is defined.
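For reference, the header interval patterns described above can be summarized in a small lookup, as in the hedged sketch below; the interval values restate the text, while the function name and the matching tolerance are illustrative assumptions only.

```python
# Minimal sketch: packet-PPM header (SHR) interval patterns per mode, in microseconds.
PPM_SHR_INTERVALS_US = {
    1: (160, 160, 160),  # mode 1
    2: (160, 180, 160),  # mode 2
    3: (80, 90, 80),     # mode 3
}

def matches_ppm_header(intervals_us, mode, tolerance_us=10):
    """Return True if three measured pulse-to-pulse intervals match the SHR pattern of `mode`."""
    expected = PPM_SHR_INTERVALS_US[mode]
    return len(intervals_us) == 3 and all(
        abs(m - e) <= tolerance_us for m, e in zip(intervals_us, expected)
    )

print(matches_ppm_header([161, 179, 158], mode=2))  # True
```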
Because the header patterns of mode 1, mode 2, and mode 3 of packet PPM are defined in this way, the receiver can appropriately receive the first payload in the visible light signal.

The signal to be transmitted consists of 6 bits, from the first bit x_0 to the sixth bit x_5, and the time length of the first payload includes two intervals, each from the appearance of the first luminance value to the appearance of the next first luminance value. Here, when a parameter y_k is expressed as y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4 (where k is 0 or 1), the generation of the first payload determines each of the two intervals in the first payload according to the above-described scheme, namely the interval P_k = 180 + 30 × y_k [μs]. That is, as shown in FIG. 320, in mode 1 of packet PPM, the signal to be transmitted is modulated as the intervals between the pulses included in the first payload (PHY payload).

The signal to be transmitted consists of 12 bits, from the first bit x_0 to the twelfth bit x_11, and the time length of the first payload includes four intervals, each from the appearance of the first luminance value to the appearance of the next first luminance value. Here, when the parameter y_k is expressed as y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4 (where k is 0, 1, 2, or 3), the generation of the first payload determines each of the four intervals in the first payload according to the above-described scheme, namely the interval P_k = 180 + 30 × y_k [μs]. That is, as shown in FIG. 321, in mode 2 of packet PPM, the signal to be transmitted is modulated as the intervals between the pulses included in the first payload (PHY payload).

The signal to be transmitted consists of 3n bits, from the first bit x_0 to the 3n-th bit x_(3n−1) (where n is an integer of 2 or more), and the time length of the first payload includes n intervals, each from the appearance of the first luminance value to the appearance of the next first luminance value. Here, when the parameter y_k is expressed as y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4 (where k is an integer from 0 to n − 1), the generation of the first payload determines each of the n intervals in the first payload according to the above-described scheme, namely the interval P_k = 100 + 20 × y_k [μs]. That is, as shown in FIG. 322, in mode 3 of packet PPM, the signal to be transmitted is modulated as the intervals between the pulses included in the first payload (PHY payload).
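The three packet-PPM payload mappings differ only in the number of 3-bit groups and in the interval rule, so they can be captured in one hedged sketch; the function name and the LSB-first grouping of the bits are assumptions.

```python
# Minimal sketch: packet-PPM payload intervals.
# Mode 1: 6 bits, P_k = 180 + 30*y_k; mode 2: 12 bits, P_k = 180 + 30*y_k;
# mode 3: 3n bits, P_k = 100 + 20*y_k.
def ppm_payload_intervals(x, mode):
    n = len(x) // 3
    assert len(x) == 3 * n
    assert (mode, n) in {(1, 2), (2, 4)} or (mode == 3 and n >= 2)
    base, step = (100, 20) if mode == 3 else (180, 30)
    intervals = []
    for k in range(n):
        y = x[3 * k] + x[3 * k + 1] * 2 + x[3 * k + 2] * 4  # y_k in 0..7
        intervals.append(base + step * y)  # pulse-to-pulse interval in microseconds
    return intervals

print(ppm_payload_intervals([1, 0, 0, 0, 1, 0], mode=1))  # [210, 240]
```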
As described above, in mode 1, mode 2, and mode 3 of packet PPM, the signal to be transmitted is modulated as the intervals between pulses, so the receiver can appropriately demodulate the visible light signal into the signal to be transmitted on the basis of those intervals.

The visible light signal generation method may further generate a footer for the first payload, and the generation of the visible light signal may combine that footer immediately after the first payload. That is, as shown in FIGS. 318 and 322, in mode 3 of packet PWM and packet PPM, a footer (SFT) is transmitted following the first payload (PHY payload). Because the end of the first payload can then be clearly identified by the footer, visible light communication can be performed efficiently.

In the generation of the visible light signal, when no footer is transmitted, a header for the signal following the signal to be transmitted may be combined in place of the footer. That is, in mode 3 of packet PWM and packet PPM, instead of the footer (SFT) shown in FIGS. 318 and 322, the header (SHR) for the next first payload is transmitted following the first payload (PHY payload). The end of the first payload can then be clearly identified by the header for the next first payload, and because no footer is transmitted, visible light communication can be performed even more efficiently.
The configuration of the signal generation device according to Embodiment 24 is shown in the block diagram of FIG. 230B.

That is, the signal generation device D10 according to Embodiment 24 generates a visible light signal to be transmitted through luminance changes of a light source included in a transmitter, and includes a preamble generation unit D11, a data generation unit D12, and a combining unit D13.

The preamble generation unit D11 generates a preamble, which is data in which first and second luminance values, which differ from each other, appear alternately along the time axis.

The data generation unit D12 generates a first payload as data in which the first and second luminance values appear alternately along the time axis, by determining the time length during which each of the first and second luminance values continues according to a first scheme that depends on the signal to be transmitted.

The combining unit D13 generates a visible light signal by combining the preamble and the first payload.

By transmitting the visible light signal generated by the signal generation device D10, the number of received packets can be increased and the reliability can be improved, as shown in FIGS. 191 to 193. As a result, communication between a wide variety of devices becomes possible.
In each of the above embodiments and modifications, each component may be configured by dedicated hardware or may be realized by executing a software program suitable for that component. Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. For example, the program causes a computer to execute the visible light signal generation methods shown by the flowcharts of FIGS. 230A and 333A.

The visible light signal generation method according to one or more aspects has been described above on the basis of the above embodiments and modifications, but the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceivable to those skilled in the art to the present embodiment, and forms constructed by combining components of different embodiments and modifications, may also be included within the scope of the present invention without departing from the gist of the present invention.
(Embodiment 25)

In this embodiment, a decoding method and an encoding method for visible light signals, among other things, are described.
FIG. 334 is a diagram showing the format of a MAC frame in MPM.

The format of a MAC (medium access control) frame in MPM (Mirror Pulse Modulation) consists of an MHR (medium access control header) and an MSDU (medium access control service-data unit). The MHR field includes a sequence number subfield. The MSDU includes the frame payload and has a variable length. The bit length of the MPDU (medium access control protocol-data unit), which consists of the MHR and the MSDU, is set as macMpmMpduLength.

MPM is the modulation scheme of Embodiment 20 and Embodiment 24; it modulates the information or signal to be transmitted as shown, for example, in FIGS. 188 to 189B, FIGS. 197 to 230B, and FIGS. 315 to 332.

FIG. 335 is a flowchart showing the processing operation of an encoding device that generates a MAC frame in MPM. Specifically, FIG. 335 shows how the bit length of the sequence number subfield is determined. The encoding device is provided, for example, in the above-described transmitter or transmission device that transmits a visible light signal.

The sequence number subfield contains a frame sequence number (also simply called a sequence number). The bit length of the sequence number subfield is set as macMpmSnLength. When the bit length of the sequence number subfield is set to a variable length, the first bit of the sequence number subfield is used as a last frame flag. That is, in this case, the sequence number subfield includes the last frame flag and a bit string indicating the sequence number. The last frame flag is set to 1 in the last frame and to 0 in all other frames; in other words, it indicates whether the frame being processed is the last frame. This last frame flag corresponds to the stop bit described above, and the sequence number corresponds to the address described above.
First, the encoding device determines whether SN is set to a variable length (step S101a). Here, SN is the bit length of the sequence number subfield. That is, the encoding device determines whether macMpmSnLength indicates 0xf. When macMpmSnLength indicates 0xf, SN is a variable length; when macMpmSnLength indicates a value other than 0xf, SN is a fixed length. When the encoding device determines that SN is not set to a variable length, that is, that SN is set to a fixed length (N in step S101a), it sets SN to the value indicated by macMpmSnLength (step S102a). In this case, the encoding device does not use the last frame flag (LFF).

On the other hand, when the encoding device determines that SN is set to a variable length (Y in step S101a), it determines whether the frame being processed is the last frame (step S103a). When the encoding device determines that the frame being processed is the last frame (Y in step S103a), it sets SN to 5 bits (step S104a). In this case, the encoding device sets the last frame flag, the first bit of the sequence number subfield, to 1.

When the encoding device determines that the frame being processed is not the last frame (N in step S103a), it determines which of the values 1 to 15 the sequence number of the last frame takes (step S105a). The sequence number is an integer assigned to each frame in ascending order starting from 0. In the case of N in step S103a, the number of frames is 2 or more; therefore, the sequence number of the last frame can take any value from 1 to 15, excluding 0.

When the encoding device determines in step S105a that the sequence number of the last frame is 1, it sets SN to 1 bit (step S106a). In this case, the encoding device sets the last frame flag, the first bit of the sequence number subfield, to 0.

For example, when the sequence number of the last frame is 1, the sequence number subfield of the last frame is expressed as (1, 1), consisting of the last frame flag (1) and the sequence number value (1). In this case, the encoding device sets the bit length of the sequence number subfield of the frame being processed to 1 bit; that is, it generates a sequence number subfield consisting of only the last frame flag (0).

When the encoding device determines in step S105a that the sequence number of the last frame is 2, it sets SN to 2 bits (step S107a). In this case as well, the encoding device sets the last frame flag to 0.

For example, when the sequence number of the last frame is 2, the sequence number subfield of the last frame is expressed as (1, 0, 1), consisting of the last frame flag (1) and the sequence number value (2). The sequence number is indicated by a bit string in which the leftmost bit is the LSB (least significant bit) and the rightmost bit is the MSB (most significant bit); therefore, the sequence number value (2) is written as the bit string (0, 1). Thus, when the sequence number of the last frame is 2, the encoding device sets the bit length of the sequence number subfield of the frame being processed to 2 bits; that is, it generates a sequence number subfield consisting of the last frame flag (0) and one bit, (0) or (1), indicating the sequence number.
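As a concrete illustration of this LSB-first layout, the following hedged sketch builds a sequence number subfield from a last-frame flag and a sequence number; the helper name, the list-of-bits representation, and the explicit value-bit count are assumptions, not part of the specification.

```python
# Minimal sketch: a variable-length sequence number subfield laid out as
# [last frame flag] followed by the sequence number, LSB first, in `value_bits` bits.
def sequence_number_subfield(last_frame_flag, sequence_number, value_bits):
    bits = [last_frame_flag]
    for i in range(value_bits):
        bits.append((sequence_number >> i) & 1)  # leftmost value bit is the LSB
    return bits

# The example from the text: last frame flag 1 and sequence number 2 written as (0, 1).
print(sequence_number_subfield(1, 2, 2))  # [1, 0, 1]
```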
When the encoding device determines in step S105a that the sequence number of the last frame is 3 or 4, it sets SN to 3 bits (step S108a). In this case as well, the encoding device sets the last frame flag to 0.

When the encoding device determines in step S105a that the sequence number of the last frame is an integer from 5 to 8, it sets SN to 4 bits (step S109a). In this case as well, the encoding device sets the last frame flag to 0.

When the encoding device determines in step S105a that the sequence number of the last frame is an integer from 9 to 15, it sets SN to 5 bits (step S110a). In this case as well, the encoding device sets the last frame flag to 0.
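Putting steps S101a to S110a together, the bit-length decision on the encoder side can be sketched as follows; this is a non-normative sketch in which the 0xf-means-variable convention and the value ranges come from the text, while the function name and argument names are assumptions.

```python
# Minimal sketch: encoder-side determination of SN, the sequence number subfield bit length.
def encoder_sn_length(mac_mpm_sn_length, is_last_frame, last_frame_sequence_number):
    if mac_mpm_sn_length != 0xF:          # fixed length (steps S101a/S102a)
        return mac_mpm_sn_length
    if is_last_frame:                     # step S104a
        return 5
    n = last_frame_sequence_number        # steps S105a-S110a
    if n == 1:
        return 1
    if n == 2:
        return 2
    if n in (3, 4):
        return 3
    if 5 <= n <= 8:
        return 4
    return 5                              # 9 <= n <= 15

print(encoder_sn_length(0xF, False, 6))   # 4
```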
FIG. 336 is a flowchart showing the processing operation of a decoding device that decodes a MAC frame in MPM. Specifically, FIG. 336 shows how the bit length of the sequence number subfield is determined. The decoding device is provided, for example, in the above-described receiver or receiving device that receives a visible light signal.

The decoding device determines whether SN is set to a variable length (step S201a); that is, it determines whether macMpmSnLength indicates 0xf. When the decoding device determines that SN is not set to a variable length, that is, that SN is set to a fixed length (N in step S201a), it sets SN to the value indicated by macMpmSnLength (step S202a). In this case, the decoding device does not use the last frame flag (LFF).

On the other hand, when the decoding device determines that SN is set to a variable length (Y in step S201a), it determines whether the value of the last frame flag of the frame to be decoded is 1 or 0 (step S203a); that is, it determines whether the frame to be decoded is the last frame. When the decoding device determines that the value of the last frame flag is 1 (1 in step S203a), it sets SN to 5 bits (step S204a).

When the decoding device determines that the value of the last frame flag is 0 (0 in step S203a), it determines which of the values 1 to 15 is indicated by the bit string from the second bit to the fifth bit of the sequence number subfield of the last frame (step S205a). The last frame is a frame that has a last frame flag indicating 1 and that was generated from the same source as the frame to be decoded. Each source is identified by its position in the captured image. A source is divided into a plurality of frames (that is, packets), as shown, for example, in FIGS. 325 to 332; the last frame is therefore the final one of the plurality of frames generated by dividing one source. The value indicated by the bit string from the second bit to the fifth bit of the sequence number subfield is the value of the sequence number.
When the decoding device determines in step S205a that the value indicated by the bit string is 1, it sets SN to 1 bit (step S206a). For example, when the sequence number subfield of the last frame is the 2-bit string (1, 1), the last frame flag is 1 and the sequence number of the last frame, that is, the value indicated by the bit string, is 1. In this case, the decoding device sets the bit length of the sequence number subfield of the frame to be decoded to 1 bit; that is, it interprets the sequence number subfield of the frame to be decoded as (0).

When the decoding device determines in step S205a that the value indicated by the bit string is 2, it sets SN to 2 bits (step S207a). For example, when the sequence number subfield of the last frame is the 3-bit string (1, 0, 1), the last frame flag is 1 and the sequence number of the last frame, that is, the value indicated by the bit string (0, 1), is 2. In this bit string, the leftmost bit is the LSB (least significant bit) and the rightmost bit is the MSB (most significant bit).

In this case, the decoding device sets the bit length of the sequence number subfield of the frame to be decoded to 2 bits; that is, it interprets the sequence number subfield of the frame to be decoded as (0, 0) or (0, 1).

When the decoding device determines in step S205a that the value indicated by the bit string is 3 or 4, it sets SN to 3 bits (step S208a).

When the decoding device determines in step S205a that the value indicated by the bit string is an integer from 5 to 8, it sets SN to 4 bits (step S209a).

When the decoding device determines in step S205a that the value indicated by the bit string is an integer from 9 to 15, it sets SN to 5 bits (step S210a).
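The decoder-side decision mirrors the encoder. The following hedged sketch again assumes an LSB-first list-of-bits representation of the last frame's subfield; the function name and argument names are assumptions.

```python
# Minimal sketch: decoder-side determination of SN from the last frame's subfield.
def decoder_sn_length(mac_mpm_sn_length, frame_subfield, last_frame_subfield=None):
    if mac_mpm_sn_length != 0xF:                    # fixed length (steps S201a/S202a)
        return mac_mpm_sn_length
    if frame_subfield[0] == 1:                      # this frame is the last frame (S204a)
        return 5
    # Value of bits 2..5 of the last frame's subfield, interpreted LSB first (S205a).
    value_bits = last_frame_subfield[1:5]
    n = sum(bit << i for i, bit in enumerate(value_bits))
    if n == 1:
        return 1
    if n == 2:
        return 2
    if n in (3, 4):
        return 3
    if 5 <= n <= 8:
        return 4
    return 5                                        # 9 <= n <= 15

print(decoder_sn_length(0xF, [0, 1], last_frame_subfield=[1, 0, 1, 0, 0]))  # 2
```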
FIG. 337 is a diagram showing the attributes of the MAC PIB.

The attributes of the MAC PIB (personal-area-network information base) include macMpmSnLength and macMpmMpduLength. macMpmSnLength is an integer value in the range 0x0 to 0xf and indicates the bit length of the sequence number subfield. Specifically, when macMpmSnLength is an integer value in the range 0x0 to 0xe, that value indicates the fixed bit length of the sequence number subfield; when macMpmSnLength is 0xf, it indicates that the bit length of the sequence number subfield is variable.

macMpmMpduLength is an integer value in the range 0x00 to 0xff and indicates the bit length of the MPDU.
FIG. 338 is a diagram for explaining the MPM dimming methods.

MPM has a dimming function. The MPM dimming methods include, for example, (a) an analog dimming method, (b) a PWM dimming method, (c) a VPPM dimming method, and (d) a field-insertion dimming method, as shown in FIG. 338.

In the analog dimming method, a visible light signal is transmitted by changing the luminance, for example as shown in (a2). To darken the visible light signal, the overall luminance of the visible light signal is lowered, for example as shown in (a1). Conversely, to brighten the visible light signal, the overall luminance of the visible light signal is raised, for example as shown in (a3).

In the PWM dimming method, a visible light signal is transmitted by changing the luminance, for example as shown in (b2). To darken the visible light signal, the luminance is lowered for only a brief period within each period in which the high-luminance light shown in (b2) is output, for example as shown in (b1). Conversely, to brighten the visible light signal, the luminance is raised for only a brief period within each period in which the low-luminance light shown in (b2) is output, for example as shown in (b3). This brief period must be less than 1/3 of the original pulse width and less than 50 μs.
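As a small worked illustration of this constraint, a dimming notch of duration d inserted into a pulse of width w is permissible only when d < w/3 and d < 50 μs. The sketch below is non-normative and the function name is an assumption.

```python
# Minimal sketch: check the PWM-dimming constraint on the inserted brief period.
def pwm_dimming_notch_allowed(pulse_width_us, notch_us):
    return notch_us < pulse_width_us / 3 and notch_us < 50

print(pwm_dimming_notch_allowed(120, 30))   # True  (30 < 40 and 30 < 50)
print(pwm_dimming_notch_allowed(120, 45))   # False (45 >= 120/3)
```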
In the VPPM dimming method, a visible light signal is transmitted by changing the luminance, for example as shown in (c2). To darken the visible light signal, the falling edge of the luminance is advanced, for example as shown in (c1). Conversely, to brighten the visible light signal, the falling edge of the luminance is delayed, for example as shown in (c3). The VPPM scheme can be used only with the PPM mode of the PHY in MPM.

In the field-insertion dimming method, a visible light signal including a plurality of PPDUs (physical-layer data units) is transmitted, for example as shown in (d2). To darken the visible light signal, a dimming field whose luminance is lower than that of the PPDUs is inserted between the PPDUs, for example as shown in (d1). Conversely, to brighten the visible light signal, a dimming field whose luminance is higher than that of the PPDUs is inserted between the PPDUs, for example as shown in (d3).
FIG. 339 is a diagram showing the attributes of the PHY PIB.

The attributes of the PHY (physical layer) PIB include phyMpmMode, phyMpmPlcpHeaderMode, phyMpmPlcpCenterMode, phyMpmSymbolSize, phyMpmOddSymbolBit, phyMpmEvenSymbolBit, phyMpmSymbolOffset, and phyMpmSymbolUnit.
phyMpmMode is 0 or 1 and indicates the PHY mode of MPM. Specifically, when phyMpmMode is 0, it indicates that the PHY mode is the PWM mode, and when it is 1, it indicates that the PHY mode is the PPM mode.
phyMpmPlcpHeaderMode is an integer value in the range 0x0 to 0xf and indicates the PLCP (Physical Layer Conversion Protocol) header subfield mode and the PLCP footer subfield mode.

phyMpmPlcpCenterMode is an integer value in the range 0x0 to 0xf and indicates the PLCP center subfield mode.

phyMpmSymbolSize is an integer value in the range 0x0 to 0xf and indicates the number of symbols in the payload subfield. Specifically, when phyMpmSymbolSize is 0x0, it indicates that the number of symbols is variable. This number of symbols is referred to as N.
phyMpmOddSymbolBit is an integer value in the range 0x0 to 0xf, indicates the bit length contained in each odd-numbered symbol of the payload subfield, and is referred to as M_odd.

phyMpmEvenSymbolBit is an integer value in the range 0x0 to 0xf, indicates the bit length contained in each even-numbered symbol of the payload subfield, and is referred to as M_even.

phyMpmSymbolOffset is an integer value in the range 0x00 to 0xff, indicates the symbol offset value of the payload subfield, and is referred to as W_1.

phyMpmSymbolUnit is an integer value in the range 0x00 to 0xff, indicates the symbol unit value of the payload subfield, and is referred to as W_2.
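For readability, these PHY PIB attributes can be gathered into a single configuration record, as in the hedged sketch below; the class name, field names, and the example values are illustrative assumptions only, while the 0x0-means-variable convention for phyMpmSymbolSize is taken from the text.

```python
# Minimal sketch: a container for the MPM PHY PIB attributes used by the PLCP.
from dataclasses import dataclass

@dataclass
class MpmPhyPib:
    phy_mpm_mode: int              # 0 = PWM mode, 1 = PPM mode
    phy_mpm_symbol_size: int       # N; 0x0 means the number of symbols is variable
    phy_mpm_odd_symbol_bit: int    # M_odd, bits per odd-numbered symbol
    phy_mpm_even_symbol_bit: int   # M_even, bits per even-numbered symbol
    phy_mpm_symbol_offset: int     # W_1, symbol value offset
    phy_mpm_symbol_unit: int       # W_2, symbol value unit

pib = MpmPhyPib(phy_mpm_mode=0, phy_mpm_symbol_size=4,
                phy_mpm_odd_symbol_bit=3, phy_mpm_even_symbol_bit=3,
                phy_mpm_symbol_offset=120, phy_mpm_symbol_unit=30)
print(pib)
```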
FIG. 340 is a diagram for explaining MPM.

MPM consists only of a PSDU (PHY service data unit) field. The PSDU field contains the MPDU converted by the MPM PLCP.

The MPM PLCP converts the MPDU into five subfields, as shown in FIG. 340: a PLCP header subfield, a front payload subfield, a PLCP center subfield, a back payload subfield, and a PLCP footer subfield. The PHY mode of MPM is set as phyMpmMode.

As shown in FIG. 340, the MPM PLCP includes a bit rearrangement unit 301a, a duplication unit 302a, a front conversion unit 303a, and a back conversion unit 304a.

Here, (x_0, x_1, x_2, ...) are the bits contained in the MPDU, L_SN is the bit length of the sequence number subfield, and N is the number of symbols in each payload subfield. The bit rearrangement unit 301a rearranges (x_0, x_1, x_2, ...) into (y_0, y_1, y_2, ...) according to the following (Equation 1).
(Equation 1)
By this rearrangement, each bit contained in the sequence number subfield at the head of the MPDU is moved backward by L_SN positions. The duplication unit 302a duplicates the MPDU after the bit rearrangement.

The front payload subfield and the back payload subfield each consist of N symbols. Here, M_odd is the bit length contained in each odd-numbered symbol, M_even is the bit length contained in each even-numbered symbol, W_1 is the symbol value offset (the offset value described above), and W_2 is the symbol value unit (the unit value described above). N, M_odd, M_even, W_1, and W_2 are set by the PHY PIB shown in FIG. 339.

The front conversion unit 303a and the back conversion unit 304a convert the payload bits (y_0, y_1, y_2, ...) of the rearranged MPDU into z_i according to the following (Equation 2) to (Equation 5).
(Equations 2 to 5)
The front conversion unit 303a uses z_i to calculate the i-th symbol (that is, the symbol value) of the front payload subfield according to the following (Equation 6).
(Equation 6)
The back conversion unit 304a uses z_i to calculate the i-th symbol (that is, the symbol value) of the back payload subfield according to the following (Equation 7).
(Equation 7)
The symbol values calculated by (Equation 6) and (Equation 7) correspond, for example, to the time lengths D_R1 to D_R4 and D_L1 to D_L4 shown in FIG. 188.

FIG. 341 is a diagram showing the PLCP header subfield.

As shown in FIG. 341, the PLCP header subfield consists of four symbols in the PWM mode and of three symbols in the PPM mode.

FIG. 342 is a diagram showing the PLCP center subfield.

As shown in FIG. 342, the PLCP center subfield consists of four symbols in the PWM mode and of three symbols in the PPM mode.
FIG. 343 is a diagram showing the PLCP footer subfield.

As shown in FIG. 343, the PLCP footer subfield consists of four symbols in the PWM mode and of three symbols in the PPM mode.

FIG. 344 is a diagram showing the waveform of the PWM mode of the PHY in MPM.

In the PWM mode, a symbol must be transmitted as one of two light-intensity states, namely a bright state or a dark state. In the PWM mode of the PHY in MPM, a symbol value corresponds to a duration in microseconds. For example, as shown in FIG. 344, the first symbol value corresponds to the duration of the first bright state, and the second symbol value corresponds to the duration of the following dark state. In the example shown in FIG. 344, the first state of each subfield is a bright state, but it may instead be a dark state.
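The relation between symbol values and the transmitted waveform can be sketched as follows; this is non-normative, and the representation of the waveform as (state, duration) pairs and the function name are assumptions.

```python
# Minimal sketch: expand PWM-mode symbol values into a (state, duration_us) waveform.
# Symbols alternate between bright and dark states, starting with `first_state`.
def pwm_waveform(symbol_values_us, first_state="bright"):
    state = first_state
    waveform = []
    for duration in symbol_values_us:
        waveform.append((state, duration))
        state = "dark" if state == "bright" else "bright"
    return waveform

print(pwm_waveform([330, 120, 240, 150]))
# [('bright', 330), ('dark', 120), ('bright', 240), ('dark', 150)]
```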
FIG. 345 is a diagram showing the waveform of the PPM mode of the PHY in MPM.

In the PPM mode, as shown in FIG. 345, a symbol value expresses, in microseconds, the time from the start of one bright state to the start of the next bright state. The duration of the bright state must be shorter than 90% of the symbol value.
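A corresponding hedged sketch for the PPM mode, where each symbol value is a pulse-start-to-pulse-start interval and the bright portion is kept under 90% of that interval; the fixed bright-time fraction used below is an illustrative choice, not a value from the text.

```python
# Minimal sketch: expand PPM-mode symbol values into (state, duration_us) pairs.
# Each symbol spans the time from one bright-state start to the next bright-state start.
def ppm_waveform(symbol_values_us, bright_fraction=0.5):
    assert bright_fraction < 0.9  # bright time must stay below 90% of the symbol value
    waveform = []
    for interval in symbol_values_us:
        bright = interval * bright_fraction
        waveform.append(("bright", bright))
        waveform.append(("dark", interval - bright))
    return waveform

print(ppm_waveform([210, 240]))
# [('bright', 105.0), ('dark', 105.0), ('bright', 120.0), ('dark', 120.0)]
```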
In both modes, the transmitter may transmit only some of the symbols. However, the transmitter must transmit all the symbols of the PLCP center subfield and at least N further symbols, each of which is a symbol contained in either the front payload subfield or the back payload subfield.
(Summary of Embodiment 25)

FIG. 346 is a flowchart showing an example of the decoding method according to Embodiment 25. The flowchart shown in FIG. 346 corresponds to the flowchart shown in FIG. 336.
This decoding method is a method for decoding a visible light signal composed of a plurality of frames and, as shown in FIG. 346, includes step S310b, step S320b, and step S330b. Each of the plurality of frames includes a sequence number and a frame payload.

In step S310b, a variable-length determination process is performed that determines, on the basis of macSnLength, which is information for determining the bit length of the subfield in which the sequence number is stored in the frame to be decoded, whether the bit length of that subfield is a variable length.

In step S320b, the bit length of the subfield is determined on the basis of the result of the variable-length determination process. Then, in step S330b, the frame to be decoded is decoded on the basis of the determined bit length of the subfield.

Here, the determination of the bit length of the subfield in step S320b includes steps S321b to S324b.

That is, when the variable-length determination process of step S310b determines that the bit length of the subfield is not a variable length, the bit length of the subfield is set to the value indicated by macSnLength (step S321b).

On the other hand, when the variable-length determination process of step S310b determines that the bit length of the subfield is a variable length, a final determination process is performed that determines whether the frame to be decoded is the last frame of the plurality of frames (step S322b). When the frame is determined to be the last frame (Y in step S322b), the bit length of the subfield is set to a predetermined value (step S323b). When the frame is determined not to be the last frame (N in step S322b), the bit length of the subfield is determined on the basis of the value of the sequence number of the last frame (step S324b).
As shown in FIG. 346, the bit length of the subfield in which the sequence number is stored (specifically, the sequence number subfield) can thus be determined appropriately regardless of whether that bit length is a fixed length or a variable length.

In the final determination process of step S322b, whether the frame to be decoded is the last frame may be determined on the basis of a last frame flag that indicates whether the frame to be decoded is the last frame. Specifically, in the final determination process of step S322b, the frame to be decoded may be determined to be the last frame when the last frame flag indicates 1, and determined not to be the last frame when the last frame flag indicates 0. For example, the last frame flag may be contained in the first bit of the subfield.

In this way, as shown in step S203a of FIG. 336, whether the frame to be decoded is the last frame can be determined appropriately.

More specifically, in the determination of the bit length of the subfield in step S320b, when the final determination process of step S322b determines that the frame to be decoded is the last frame, the bit length of the subfield may be set to 5 bits, which is the predetermined value described above. That is, as shown in step S204a of FIG. 336, the bit length SN of the subfield is determined to be 5 bits.

In the determination of the bit length of the subfield in step S320b, when the final determination process of step S322b determines that the frame to be decoded is not the last frame, the bit length of the subfield may be set to 1 bit when the value of the sequence number of the last frame is 1, to 2 bits when that value is 2, to 3 bits when that value is 3 or 4, to 4 bits when that value is an integer from 5 to 8, and to 5 bits when that value is an integer from 9 to 15. That is, as shown in steps S206a to S210a of FIG. 336, the bit length SN of the subfield is determined to be one of 1 to 5 bits.
FIG. 347 is a flowchart showing an example of the encoding method according to Embodiment 25. The flowchart shown in FIG. 347 corresponds to the flowchart shown in FIG. 335.

This encoding method is a method for encoding information to be encoded into a visible light signal composed of a plurality of frames and, as shown in FIG. 347, includes step S410a, step S420a, and step S430a. Each of the plurality of frames includes a sequence number and a frame payload.

In step S410a, a variable-length determination process is performed that determines, on the basis of macSnLength, which is information for determining the bit length of the subfield in which the sequence number is stored in the frame being processed, whether the bit length of that subfield is a variable length.

In step S420a, the bit length of the subfield is determined on the basis of the result of the variable-length determination process. Then, in step S430a, part of the information to be encoded is encoded into the frame being processed on the basis of the determined bit length of the subfield.

Here, the determination of the bit length of the subfield in step S420a includes steps S421a to S424a.
That is, when the variable-length determination process of step S410a determines that the bit length of the subfield is not a variable length, the bit length of the subfield is set to the value indicated by macSnLength (step S421a).

On the other hand, when the variable-length determination process of step S410a determines that the bit length of the subfield is a variable length, a final determination process is performed that determines whether the frame being processed is the last frame of the plurality of frames (step S422a). When the frame is determined to be the last frame (Y in step S422a), the bit length of the subfield is set to a predetermined value (step S423a). When the frame is determined not to be the last frame (N in step S422a), the bit length of the subfield is determined on the basis of the value of the sequence number of the last frame (step S424a).

As shown in FIG. 347, the bit length of the subfield in which the sequence number is stored (specifically, the sequence number subfield) can thus be determined appropriately regardless of whether that bit length is a fixed length or a variable length.

The decoding device in this embodiment includes a processor and a memory, and a program that causes the processor to execute the decoding method shown in FIG. 346 is recorded in the memory. The encoding device in this embodiment includes a processor and a memory, and a program that causes the processor to execute the encoding method shown in FIG. 347 is recorded in the memory. The program in this embodiment is a program that causes a computer to execute the decoding method shown in FIG. 346 or the encoding method shown in FIG. 347.
(Embodiment 26)

In this embodiment, a transmission method for transmitting a light ID by means of a visible light signal is described. The transmitter and the receiver in this embodiment may have the same functions and configurations as the transmitters (or transmission devices) and receivers (or reception devices) in each of the above embodiments.
FIG. 348 is a diagram showing an example in which the receiver according to this embodiment displays an AR image.

The receiver 200 in this embodiment is a receiver including an image sensor and a display 201, and is configured, for example, as a smartphone. By imaging a subject with the image sensor, the receiver 200 acquires a captured display image Pa, which is the normal captured image described above, and a decoding image, which is the visible light communication image or bright line image described above.

Specifically, the image sensor of the receiver 200 images the transmitter 100. The transmitter 100 has the form of, for example, a light bulb, and includes a glass bulb 141 and a light emitting unit 142 that sways, glowing like a flame, inside the glass bulb 141. The light emitting unit 142 emits light when one or more light emitting elements (for example, LEDs) provided in the transmitter 100 are turned on. The transmitter 100 changes in luminance by causing the light emitting unit 142 to blink, and transmits a light ID (light identification information) by that luminance change. This light ID is the visible light signal described above.

The receiver 200 acquires the captured display image Pa in which the transmitter 100 appears by imaging the transmitter 100 with the normal exposure time, and acquires the decoding image by imaging the transmitter 100 with a communication exposure time shorter than the normal exposure time. The normal exposure time is the exposure time in the normal imaging mode described above, and the communication exposure time is the exposure time in the visible light communication mode described above.

The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to a server and acquires from the server an AR image P42 and recognition information corresponding to the light ID. The receiver 200 recognizes, as a target region, the region of the captured display image Pa that corresponds to the recognition information. The receiver 200 then superimposes the AR image P42 on the target region and displays, on the display 201, the captured display image Pa on which the AR image P42 is superimposed.
 例えば、受信機200は、図245に示す例と同様に、認識情報にしたがって、送信機100が映し出されている領域の左上にある領域を対象領域として認識する。その結果、例えば妖精を示すAR画像P42は、送信機100の周りを飛んでいるように表示される。 For example, similarly to the example shown in FIG. 245, the receiver 200 recognizes the area at the upper left of the area where the transmitter 100 is projected as the target area according to the recognition information. As a result, for example, the AR image P <b> 42 showing a fairy is displayed so as to fly around the transmitter 100.
FIG. 349 is a diagram illustrating another example of the captured display image Pa on which the AR image P42 is superimposed.
As shown in FIG. 349, the receiver 200 displays the captured display image Pa on which the AR image P42 is superimposed on the display 201.
Here, the above-described recognition information indicates that the range of the captured display image Pa having a luminance equal to or higher than a threshold is the reference region. The recognition information further indicates that the target region lies in a predetermined direction with respect to the reference region and is separated from the center (or centroid) of the reference region by a predetermined distance.
Therefore, when the light emitting unit 142 of the transmitter 100 being captured by the receiver 200 sways, the AR image P42 superimposed on the target region of the captured display image Pa also moves in synchronization with the movement of the light emitting unit 142, as shown in FIG. 349. That is, when the light emitting unit 142 sways, the image 142a of the light emitting unit 142 appearing in the captured display image Pa also sways. This image 142a is the range having a luminance equal to or higher than the above-described threshold, that is, the reference region. Since the reference region moves, the receiver 200 moves the target region so that the distance between the reference region and the target region is maintained at the predetermined distance, and superimposes the AR image P42 on the moving target region. As a result, when the light emitting unit 142 sways, the AR image P42 superimposed on the target region of the captured display image Pa moves in synchronization with the movement of the light emitting unit 142. Note that the center position of the reference region may also move when the light emitting unit 142 is deformed. Therefore, even when the light emitting unit 142 is deformed, the AR image P42 may move so that the distance from the center position of the moving reference region is maintained at the predetermined distance.
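The tracking described in the preceding paragraph can be pictured as recomputing, for every frame, the centroid of the pixels brighter than the threshold and placing the AR image at a fixed offset from that centroid. The following Python sketch illustrates only that reading; the function name place_ar_image, the grayscale-frame assumption, and the arguments are hypothetical and not part of the embodiment.

```python
import numpy as np

def place_ar_image(frame, threshold, offset):
    """Return a hypothetical AR anchor point: the centroid of the bright
    (reference) region shifted by a fixed offset vector, in the spirit of
    the FIG. 349 description. `frame` is assumed to be a 2-D grayscale
    array; `offset` is the predetermined (dx, dy) displacement."""
    ys, xs = np.nonzero(frame >= threshold)   # pixels of the reference region
    if len(xs) == 0:
        return None                           # no reference region in this frame
    centroid = np.array([xs.mean(), ys.mean()])
    return centroid + np.asarray(offset)      # anchor of the target region

# Toy frame: a small bright patch whose centroid drives the AR anchor.
frame = np.zeros((8, 8))
frame[2:4, 5:7] = 255
print(place_ar_image(frame, 128, offset=(-3, -1)))   # [2.5 1.5]
```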
In the above example, the receiver 200 recognizes the target region based on the recognition information and superimposes the AR image P42 on the target region, but it may also make the AR image P42 oscillate about the target region. That is, the receiver 200 vibrates the AR image P42, for example in the vertical direction, according to a function expressing amplitude as a function of time. The function is, for example, a trigonometric function such as a sine wave.
The receiver 200 may also change the size of the AR image P42 according to the size of the range having a luminance equal to or higher than the above-described threshold. That is, the receiver 200 enlarges the AR image P42 as the area of the bright region in the captured display image Pa increases, and conversely shrinks the AR image P42 as the area of the bright region decreases.
Alternatively, the receiver 200 may enlarge the AR image P42 as the average luminance within the range having a luminance equal to or higher than the threshold increases, and shrink it as the average luminance decreases. Instead of the size of the AR image P42, the transparency of the AR image P42 may be changed according to the average luminance.
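If one wanted to prototype the size scaling just described, a clipped linear rule such as the following could serve. The text states only the monotonic relationship, so every number and name here is a hypothetical choice, not a value from the embodiment.

```python
def ar_scale(bright_area_px, reference_area_px=10000.0,
             min_scale=0.5, max_scale=2.0):
    """Illustrative rule: the AR image grows with the area of the bright
    region and shrinks as that area shrinks, clamped to a sane range."""
    scale = bright_area_px / reference_area_px
    return min(max(scale, min_scale), max_scale)

print(ar_scale(5000))    # 0.5  (small bright region -> smaller AR image)
print(ar_scale(20000))   # 2.0  (large bright region -> larger AR image)
```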
In the example shown in FIG. 349, every pixel within the image 142a of the light emitting unit 142 has a luminance equal to or higher than the threshold, but some pixels may be below the threshold. In other words, the range corresponding to the image 142a and having a luminance equal to or higher than the threshold may be ring-shaped. In this case as well, the range having a luminance equal to or higher than the threshold is identified as the reference region, and the AR image P42 is superimposed on a target region separated from the center (or centroid) of the reference region by the predetermined distance.
FIG. 350 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
As shown in FIG. 350, for example, the transmitter 100 is configured as a lighting device, and transmits a light ID by changing its luminance while illuminating a figure 143 consisting of, for example, three circles drawn on a wall. Because the figure 143 is illuminated by the light from the transmitter 100, its luminance changes in the same way as that of the transmitter 100, so the figure 143 also transmits the light ID.
The receiver 200 captures the figure 143 illuminated by the transmitter 100 and thereby acquires the captured display image Pa and the decoding image in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the figure 143. The receiver 200 transmits the light ID to the server and acquires the AR image P43 and recognition information corresponding to the light ID from the server. The receiver 200 recognizes, as a target region, the region of the captured display image Pa indicated by the recognition information; for example, it recognizes the region in which the figure 143 appears as the target region. The receiver 200 then superimposes the AR image P43 on the target region and displays the captured display image Pa with the AR image P43 superimposed on the display 201. The AR image P43 is, for example, a character's face image.
Here, the figure 143 consists of three circles as described above and therefore has few geometric features. It is consequently difficult to appropriately select and acquire an AR image corresponding to the figure 143 from among the many images stored in the server using only a captured image of the figure 143. In this embodiment, however, the receiver 200 acquires the light ID and acquires the AR image P43 corresponding to that light ID from the server. Therefore, even if many images are stored in the server, the AR image P43 corresponding to the light ID can be appropriately selected from among them and acquired as the AR image corresponding to the figure 143.
FIG. 351 is a flowchart showing the operation of the receiver 200 in the present embodiment.
The receiver 200 in this embodiment first acquires a plurality of AR image candidates (step S541). For example, the receiver 200 acquires the plurality of AR image candidates from the server through wireless communication other than visible light communication (such as BTLE or Wi-Fi). Next, the receiver 200 captures a subject (step S542). Through this capture, the receiver 200 acquires the captured display image Pa and the decoding image as described above. However, when the subject is a photograph of the transmitter 100, no light ID is transmitted from the subject, so the receiver 200 cannot acquire a light ID even by decoding the decoding image.
The receiver 200 therefore determines whether it has been able to acquire a light ID, that is, whether it has received a light ID from the subject (step S543).
If it determines that no light ID has been received (No in step S543), the receiver 200 determines whether the AR display flag set in the receiver itself is 1 (step S544). The AR display flag indicates whether an AR image may be displayed based only on the captured display image Pa even when no light ID has been acquired. When the AR display flag is 1, it indicates that an AR image may be displayed based only on the captured display image Pa; when the AR display flag is 0, it indicates that an AR image must not be displayed based only on the captured display image Pa.
If it determines that the AR display flag is 1 (Yes in step S544), the receiver 200 selects, as the AR image, the candidate corresponding to the captured display image Pa from among the plurality of AR image candidates acquired in step S541 (step S545). That is, the receiver 200 extracts a feature amount from the captured display image Pa and selects the candidate associated with the extracted feature amount as the AR image.
The receiver 200 then superimposes the selected candidate, that is, the AR image, on the captured display image Pa and displays it (step S546).
On the other hand, if it determines that the AR display flag is 0 (No in step S544), the receiver 200 does not display an AR image.
If it determines in step S543 that a light ID has been received (Yes in step S543), the receiver 200 selects, as the AR image, the candidate associated with that light ID from among the plurality of AR image candidates acquired in step S541 (step S547). The receiver 200 then superimposes the selected candidate, that is, the AR image, on the captured display image Pa and displays it (step S546).
In the above example, the AR display flag is set in the receiver 200, but it may instead be set in the server. In that case, the receiver 200 queries the server in step S544 as to whether the AR display flag is 1 or 0.
This makes it possible to control, by means of the AR display flag, whether the receiver 200 displays an AR image when it has captured an image but has not received a light ID.
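The branching of FIG. 351 can be summarized as a small decision function. The sketch below is a simplified, hypothetical rendering of steps S543 to S547; the dictionaries and the ar_display_flag argument stand in for data the receiver would hold after steps S541 and S542, and none of the names come from the embodiment.

```python
def choose_ar_image(light_id, candidates_by_id, candidates_by_feature,
                    image_features, ar_display_flag):
    """Decision logic of FIG. 351 (steps S543-S547) as a pure function."""
    if light_id is not None:                       # step S543: a light ID was received
        return candidates_by_id.get(light_id)      # step S547
    if ar_display_flag == 1:                       # step S544
        return candidates_by_feature.get(image_features)  # step S545
    return None                                    # AR display suppressed

# Example: the flag only matters when no light ID is available.
candidates_by_id = {"id42": "fairy.png"}
candidates_by_feature = {"three_circles": "face.png"}
print(choose_ar_image(None, candidates_by_id, candidates_by_feature,
                      "three_circles", ar_display_flag=0))   # None
print(choose_ar_image("id42", candidates_by_id, candidates_by_feature,
                      "three_circles", ar_display_flag=0))   # fairy.png
```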
FIG. 352 is a diagram for explaining the operation of the transmitter 100 in the present embodiment.
For example, the transmitter 100 is configured as a projector. The intensity of the light emitted from the projector and reflected by the screen varies with factors such as aging of the projector's light source or the distance from the light source to the screen. When the light intensity is low, the light ID transmitted from the transmitter 100 becomes difficult for the receiver 200 to receive.
The transmitter 100 in this embodiment therefore adjusts a parameter for driving the light source so as to suppress the change in light intensity caused by each of these factors. This parameter is at least one of the value of the current supplied to the light source to make it emit light and the light emission time (more specifically, the light emission time per unit time). For example, the larger the current value and the longer the light emission time, the greater the intensity of the light from the light source.
That is, the transmitter 100 adjusts the parameter so that the more the light source has aged, the stronger its light becomes. Specifically, the transmitter 100 includes a timer and adjusts the parameter so that the light of the light source is strengthened as the usage time of the light source measured by the timer grows longer. In other words, the longer the usage time, the more the transmitter 100 raises the current value of the light source or lengthens the light emission time. Alternatively, the transmitter 100 detects the intensity of the light emitted from the light source and adjusts the parameter so that the detected light intensity does not decrease; that is, the smaller the detected light intensity, the more the transmitter 100 strengthens the light.
The transmitter 100 also adjusts the parameter so that the longer the irradiation distance from the light source to the screen, the stronger the light of the light source. Specifically, the transmitter 100 detects the intensity of the light that has been emitted and reflected by the screen, and adjusts the parameter so that the smaller the detected light intensity, the stronger the light of the light source. That is, the smaller the detected light intensity, the more the transmitter 100 raises the current value of the light source or lengthens the light emission time. In this way, the parameter is adjusted so that the intensity of the reflected light is constant regardless of the irradiation distance. Alternatively, the transmitter 100 may detect the irradiation distance from the light source to the screen with a ranging sensor and adjust the parameter so that the longer the detected irradiation distance, the stronger the light of the light source.
The transmitter 100 also adjusts the parameter so that the blacker the color of the screen, the stronger the light of the light source. Specifically, the transmitter 100 detects the color of the screen by capturing an image of the screen, and adjusts the parameter so that the blacker the detected color, the stronger the light of the light source. That is, the blacker the detected color, the more the transmitter 100 raises the current value of the light source or lengthens the light emission time. In this way, the parameter is adjusted so that the intensity of the reflected light is constant regardless of the color of the screen.
The transmitter 100 also adjusts the parameter so that the stronger the external light, the stronger the light of the light source. Specifically, the transmitter 100 detects the difference between the brightness of the screen when the light source is on and illuminating it and the brightness of the screen when the light source is off and not illuminating it. The transmitter 100 then adjusts the parameter so that the smaller this brightness difference, the stronger the light of the light source. That is, the smaller the brightness difference, the more the transmitter 100 raises the current value of the light source or lengthens the light emission time. In this way, the parameter is adjusted so that the S/N ratio of the light ID is constant regardless of the external light. Alternatively, when the transmitter 100 is configured, for example, as an LED display, it may detect the intensity of sunlight and adjust the parameter so that the stronger the sunlight, the stronger the light of the light source.
Note that the parameter adjustment described above may be performed when the user performs an operation. For example, the transmitter 100 includes a calibration button and performs the above parameter adjustment when the user presses the calibration button. Alternatively, the transmitter 100 may perform the above parameter adjustment periodically.
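As one way to picture the compensation described above (keeping the reflected intensity roughly constant against aging, irradiation distance, screen color, and external light), the sketch below scales the drive current in proportion to the shortfall in the measured reflected intensity. The proportional form, the function name, and all numbers are assumptions for illustration only, not the embodiment's control law.

```python
def adjust_drive_current(base_current_ma, measured_reflected, target_reflected,
                         max_current_ma=200.0):
    """Scale the drive current so the reflected intensity stays near a target.
    All arguments are hypothetical stand-ins for sensor readings."""
    if measured_reflected <= 0:
        return max_current_ma                     # no reflection detected: drive at maximum
    scaled = base_current_ma * (target_reflected / measured_reflected)
    return min(scaled, max_current_ma)            # never exceed the allowed peak current

# A weaker reflection (farther or darker screen) leads to a larger drive current.
print(adjust_drive_current(100.0, measured_reflected=0.5, target_reflected=1.0))  # 200.0
print(adjust_drive_current(100.0, measured_reflected=1.0, target_reflected=1.0))  # 100.0
```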
FIG. 353 is a diagram for explaining another operation of the transmitter 100 in the present embodiment.
For example, the transmitter 100 is configured as a projector and projects the light from its light source onto a screen through a front member. When the projector is a liquid crystal projector, the front member is a liquid crystal panel; when the projector is a DLP (registered trademark) projector, the front member is a DMD (Digital Mirror Device). That is, the front member is a member that adjusts the luminance of the image pixel by pixel. The light source emits light toward the front member and switches the intensity of that light between High and Low. The light source also adjusts its time-averaged brightness by adjusting the High time per unit time.
Here, when the transmittance of the front member is, for example, 100%, the light source is dimmed so that the image projected from the projector onto the screen does not become too bright. That is, the light source shortens the High time per unit time.
In this case, when transmitting a light ID by luminance change, the light source widens the pulse width of the light ID.
On the other hand, when the transmittance of the front member is, for example, 20%, the light source is brightened so that the image projected from the projector onto the screen does not become too dark. That is, the light source lengthens the High time per unit time.
In this case, when transmitting a light ID by luminance change, the light source narrows the pulse width of the light ID.
In this way, the pulse width of the light ID is widened when the light source is dim and narrowed when the light source is bright, so the transmission of the light ID can be prevented from making the light from the light source too weak or too bright.
In the above example the transmitter 100 is a projector, but it may instead be configured as a large LED display. A large LED display includes pixel switches and a common switch, as shown in FIGS. 173, 175, and 180B. The image is expressed by turning the pixel switches on and off, and the light ID is transmitted by turning the common switch on and off. In this case, functionally, the pixel switches correspond to the front member and the common switch corresponds to the light source. When the average luminance produced by the pixel switches is high, the pulse width of the light ID produced by the common switch may be shortened.
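A minimal sketch of the relation just described: the light-ID pulse is widened when the front member passes more light (so the source runs dimmer) and narrowed when it passes less (so the source runs brighter). The linear form and the microsecond figures are assumptions, not values given in the embodiment.

```python
def light_id_pulse_width(transmittance, base_width_us=100.0):
    """Illustrative pulse-width rule: wider pulses at high front-member
    transmittance (dim source), narrower pulses at low transmittance."""
    transmittance = min(max(transmittance, 0.2), 1.0)   # clamp to the quoted 20%-100% range
    return base_width_us * transmittance

print(light_id_pulse_width(1.0))   # 100.0 us at 100% transmittance (dim source, wide pulse)
print(light_id_pulse_width(0.2))   # 20.0 us at 20% transmittance (bright source, narrow pulse)
```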
FIG. 354 is a diagram for explaining another operation of the transmitter 100 in the present embodiment. Specifically, FIG. 354 shows the relationship between the dimming degree of the transmitter 100, configured as a spotlight with a dimming function, and the current (specifically, the peak current value) supplied to the light source of the transmitter 100.
The transmitter 100 accepts a dimming degree designated for the light source it includes, and causes the light source to emit light at that designated dimming degree. The dimming degree is the ratio of the average luminance of the light source to its maximum average luminance. The average luminance is not the instantaneous luminance but the time-averaged luminance. The dimming degree is adjusted by adjusting the value of the current supplied to the light source or by adjusting the time during which the luminance of the light source is Low. The time during which the luminance of the light source is Low may be the time during which the light source is turned off.
Here, when transmitting a transmission target signal as a light ID, the transmitter 100 generates an encoded signal by encoding the transmission target signal in a predetermined mode. The transmitter 100 then transmits the encoded signal as a light ID (that is, a visible light signal) by changing the luminance of its light source according to the encoded signal.
For example, when the designated dimming degree is between 0% and x3 (%) inclusive, the transmitter 100 generates the encoded signal by encoding the transmission target signal in a PWM mode with a duty ratio of 35%. x3 (%) is, for example, 50%. In this embodiment, the PWM mode with a duty ratio of 35% is also called the first mode, and the above x3 is also called the first value.
That is, when the designated dimming degree is between 0% and x3 (%) inclusive, the transmitter 100 adjusts the dimming degree of the light source by means of the peak current value while keeping the duty ratio of the visible light signal at 35%.
When the designated dimming degree is greater than x3 (%) and less than or equal to 100%, the transmitter 100 generates the encoded signal by encoding the transmission target signal in a PWM mode with a duty ratio of 65%. In this embodiment, the PWM mode with a duty ratio of 65% is also called the second mode.
That is, when the designated dimming degree is greater than x3 (%) and less than or equal to 100%, the transmitter 100 adjusts the dimming degree of the light source by means of the peak current value while keeping the duty ratio of the visible light signal at 65%.
In this way, the transmitter 100 in this embodiment accepts the dimming degree designated for the light source as the designated dimming degree. When the designated dimming degree is less than or equal to the first value, the transmitter 100 transmits a signal encoded in the first mode by luminance change while causing the light source to emit light at the designated dimming degree. When the designated dimming degree is greater than the first value, the transmitter 100 transmits a signal encoded in the second mode by luminance change while causing the light source to emit light at the designated dimming degree. Specifically, the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode.
Here, because the duty ratio of the second mode is larger than that of the first mode, the rate of change of the peak current with respect to the dimming degree in the second mode can be made smaller than the rate of change of the peak current with respect to the dimming degree in the first mode.
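The effect of the duty ratio on the peak current can be made concrete with an idealized relation in which the dimming degree is proportional to the duty ratio times the peak current. The function below and its 100 mA scale are assumptions chosen only so that the outputs roughly line up with the example figures quoted in the surrounding text (about 143 mA and 154 mA); it does not reproduce the actual curve of FIG. 354.

```python
def required_peak_current(dimming_percent, duty_percent, full_scale_ma=100.0):
    """Idealized model: time-averaged luminance (dimming degree) is assumed
    proportional to duty ratio times peak current, so the required peak
    current scales as dimming / duty."""
    return full_scale_ma * (dimming_percent / duty_percent)

# The same 50% dimming degree needs a much larger peak current at 35% duty
# than at 65% duty, which is why the mode switch lowers the peak current.
print(required_peak_current(50, 35))   # ~142.9 mA in the first mode
print(required_peak_current(50, 65))   # ~76.9 mA in the second mode
print(required_peak_current(100, 65))  # ~153.8 mA in the second mode
```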
Moreover, when the designated dimming degree exceeds x3 (%), the mode is switched from the first mode to the second mode, so the peak current can drop instantaneously at that point. That is, when the designated dimming degree is x3 (%), the peak current is y3 (mA), but as soon as the designated dimming degree exceeds x3 (%) even slightly, the peak current can be held down to y2 (mA). Here y3 (mA) is, for example, 143 mA, and y2 (mA) is, for example, 100 mA. As a result, the peak current can be prevented from exceeding y3 (mA) in order to increase the dimming degree, and degradation of the light source caused by a large current can be suppressed.
When the designated dimming degree exceeds x4 (%), the peak current becomes larger than y3 (mA) even in the second mode. However, if the designated dimming degree exceeds x4 (%) only infrequently, degradation of the light source can still be suppressed. In this embodiment, the above x4 is also called the second value. In the example shown in FIG. 354, x4 (%) is less than 100%, but it may be 100%.
That is, in the transmitter 100 of this embodiment, the peak current value of the light source for transmitting a signal encoded in the second mode by luminance change when the designated dimming degree is greater than the first value and less than or equal to the second value is smaller than the peak current value of the light source for transmitting a signal encoded in the first mode by luminance change when the designated dimming degree is equal to the first value.
Thus, by switching the mode in which the signal is encoded, the peak current value of the light source when the designated dimming degree is greater than the first value and less than or equal to the second value becomes smaller than the peak current value of the light source when the designated dimming degree is equal to the first value. Therefore, even as the designated dimming degree is increased, a large peak current can be prevented from flowing through the light source. As a result, degradation of the light source can be suppressed.
Furthermore, when the designated dimming degree is greater than or equal to x1 (%) and less than x2 (%), the transmitter 100 in this embodiment transmits the signal encoded in the first mode by luminance change while causing the light source to emit light at the designated dimming degree, and keeps the peak current value constant with respect to changes in the designated dimming degree. x2 (%) is smaller than x3 (%). In this embodiment, the above x2 is also called the third value.
That is, when the designated dimming degree is smaller than x2 (%), the transmitter 100 makes the light source emit light at that decreasing designated dimming degree by lengthening the time during which the light source is turned off as the designated dimming degree decreases, while keeping the peak current value constant. Specifically, the transmitter 100 lengthens the period at which each of the plurality of encoded signals is transmitted while keeping the duty ratio of the encoded signal at 35%. This lengthens the time during which the light source is off, that is, the off period. As a result, the dimming degree can be reduced while the peak current value is kept constant. Moreover, because the peak current value is kept constant even when the designated dimming degree is small, the visible light signal (that is, the light ID) transmitted by the luminance change is made easier for the receiver 200 to receive.
Here, the transmitter 100 determines the time during which the light source is turned off so that one period, obtained by adding the time for transmitting the encoded signal by luminance change and the time during which the light source is turned off, does not exceed 10 milliseconds. For example, if the off time is too long and one period exceeds 10 milliseconds, the luminance change of the light source used to transmit the encoded signal may be perceived by the human eye as flicker. In this embodiment, the off time is therefore determined so that one period does not exceed 10 milliseconds, which prevents the flicker from being perceived.
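A small sketch of the 10-millisecond constraint described above: given the duration of one encoded packet, it returns the longest allowable off time. The 3 ms packet length in the example is a hypothetical figure, not one stated in the embodiment.

```python
def max_off_time_ms(signal_time_ms, period_limit_ms=10.0):
    """Upper bound on the off period so that signal time + off time stays
    within one period of at most 10 ms (the flicker limit cited above)."""
    return max(period_limit_ms - signal_time_ms, 0.0)

# Example: with a 3 ms encoded packet, the light may stay off at most 7 ms,
# so the packet occupies only 30% of the period without visible flicker.
packet_ms = 3.0
off_ms = max_off_time_ms(packet_ms)
print(off_ms, packet_ms / (packet_ms + off_ms))   # 7.0 0.3
```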
Furthermore, even when the designated dimming degree is smaller than x1 (%), the transmitter 100 transmits the signal encoded in the first mode by luminance change while causing the light source to emit light at the designated dimming degree. In this case, the transmitter 100 makes the light source emit light at that decreasing designated dimming degree by reducing the peak current value as the designated dimming degree decreases. x1 (%) is smaller than x2 (%). In this embodiment, the above x1 is also called the fourth value.
As a result, even when the designated dimming degree is smaller still, the light source can be made to emit light appropriately at that designated dimming degree.
In the example shown in FIG. 354, the maximum peak current value in the first mode (that is, y3 (mA)) is smaller than the maximum peak current value in the second mode (that is, y4 (mA)), but the two may be the same. In that case, the transmitter 100 encodes the transmission target signal in the first mode up to a dimming degree x3a (%) that is larger than x3 (%). When the designated dimming degree is x3a (%), the transmitter 100 makes the light source emit light at the same peak current value as the maximum peak current value in the second mode (that is, y4 (mA)). In this case, x3a becomes the first value. Note that the maximum peak current value in the second mode is the peak current value when the designated dimming degree is at its maximum, that is, 100%.
That is, in this embodiment, the peak current value of the light source when the designated dimming degree is equal to the first value may be the same as the peak current value of the light source when the designated dimming degree is at its maximum. In this case, the range of dimming degrees over which the light source emits light with a peak current of y3 (mA) or more is widened, so the light ID can be made easier for the receiver 200 to receive over a wide range of dimming degrees. In other words, because a large peak current can be passed through the light source even in the first mode, a signal transmitted by the luminance change of that light source can be made easier for the receiver to receive. Note that in this case the period during which a large peak current flows becomes longer, so the light source degrades more easily.
FIG. 355 is a diagram showing a comparative example for explaining how easily the light ID is received in the present embodiment.
In this embodiment, as shown in FIG. 354, the first mode is used when the dimming degree is small and the second mode is used when the dimming degree is large. The first mode is a mode in which the peak current increases sharply even for a small increase in dimming degree, and the second mode is a mode in which the increase in peak current is kept small even for a large increase in dimming degree. Therefore, the second mode prevents a large peak current from flowing through the light source, so degradation of the light source can be suppressed. Furthermore, because the first mode passes a large peak current through the light source even when the dimming degree is small, the light ID can easily be received by the receiver 200.
On the other hand, if the second mode were also used when the dimming degree is small, then, as shown in FIG. 355, the peak current value would also be small at small dimming degrees, making it difficult for the receiver 200 to receive the light ID.
Therefore, the transmitter 100 in this embodiment can achieve both suppression of light source degradation and ease of reception of the light ID.
When the peak current value of the light source exceeds a fifth value, the transmitter 100 may stop transmitting the signal by the luminance change of the light source. The fifth value may be, for example, y3 (mA).
This makes it possible to further suppress degradation of the light source.
As in the example shown in FIG. 352, the transmitter 100 may also measure the usage time of the light source. When the usage time is equal to or longer than a predetermined time, the transmitter 100 may transmit the signal by luminance change using a parameter value that makes the light source emit light at a dimming degree greater than the designated dimming degree. In this case, the parameter value may be the peak current value or the time during which the light source is turned off. This prevents the light ID from becoming difficult for the receiver 200 to receive due to the light source degrading over time.
Alternatively, the transmitter 100 may measure the usage time of the light source and, when the usage time is equal to or longer than the predetermined time, make the pulse width of the light source current larger than when the usage time is shorter than the predetermined time. As above, this prevents the light ID from becoming difficult to receive due to degradation of the light source.
In the above embodiment, the transmitter 100 switches between the first mode and the second mode according to the designated dimming degree, but the mode may instead be switched in response to an operation by the user. That is, when the user operates a switch, the transmitter 100 switches from the first mode to the second mode or, conversely, from the second mode to the first mode. The transmitter 100 may also notify the user when the mode is switched. For example, the transmitter 100 may notify the user of the mode switch by emitting a sound, blinking the light source at a period visible to a person, or lighting a notification LED. Furthermore, the transmitter 100 may notify the user not only of a mode switch but also of a change in the relationship between the peak current and the dimming degree at the time that relationship changes. Such a time is, for example, the point at which the dimming degree shown in FIG. 354 changes from x1 (%) or the point at which it changes from x2 (%).
FIG. 356A is a flowchart showing the operation of the transmitter 100 in the present embodiment.
The transmitter 100 first accepts the dimming degree designated for the light source as the designated dimming degree (step S551). Next, the transmitter 100 transmits a signal by the luminance change of the light source (step S552). Specifically, when the designated dimming degree is less than or equal to the first value, the transmitter 100 transmits a signal encoded in the first mode by luminance change while causing the light source to emit light at the designated dimming degree. When the designated dimming degree is greater than the first value, the transmitter 100 transmits a signal encoded in the second mode by luminance change while causing the light source to emit light at the designated dimming degree. Here, the peak current value of the light source for transmitting a signal encoded in the second mode by luminance change when the designated dimming degree is greater than the first value and less than or equal to the second value is smaller than the peak current value of the light source for transmitting a signal encoded in the first mode by luminance change when the designated dimming degree is equal to the first value.
FIG. 356B is a block diagram showing a configuration of the transmitter 100 in the present embodiment.
The transmitter 100 includes an accepting unit 551 and a transmitting unit 552. The accepting unit 551 accepts the dimming degree designated for the light source as the designated dimming degree (step S551). The transmitting unit 552 transmits a signal by the luminance change of the light source. Specifically, when the designated dimming degree is less than or equal to the first value, the transmitting unit 552 transmits a signal encoded in the first mode by luminance change while causing the light source to emit light at the designated dimming degree. When the designated dimming degree is greater than the first value, the transmitting unit 552 transmits a signal encoded in the second mode by luminance change while causing the light source to emit light at the designated dimming degree. Here, the peak current value of the light source for transmitting a signal encoded in the second mode by luminance change when the designated dimming degree is greater than the first value and less than or equal to the second value is smaller than the peak current value of the light source for transmitting a signal encoded in the first mode by luminance change when the designated dimming degree is equal to the first value.
As a result, as shown in FIG. 354, by switching the mode in which the signal is encoded, the peak current value of the light source when the designated dimming degree is greater than the first value and less than or equal to the second value becomes smaller than the peak current value of the light source when the designated dimming degree is equal to the first value. Therefore, even as the designated dimming degree is increased, a large peak current can be prevented from flowing through the light source. As a result, degradation of the light source can be suppressed.
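The mode selection performed in step S552 can be expressed as a small helper, shown below as a sketch. The 35% and 65% duty ratios and the 50% example threshold are the values quoted for FIG. 354; the function name and everything else are illustrative assumptions.

```python
def select_mode_and_duty(designated_dimming, first_value=50.0):
    """Choose the encoding mode from the designated dimming degree,
    in the manner of FIG. 356A, step S552."""
    if designated_dimming <= first_value:
        return "first_mode", 0.35    # PWM mode with 35% duty ratio
    return "second_mode", 0.65       # PWM mode with 65% duty ratio

for dimming in (30, 50, 51, 100):
    print(dimming, select_mode_and_duty(dimming))
```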
FIG. 357 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
By capturing a subject with its image sensor, the receiver 200 acquires a captured display image Pk, which is the above-described normal captured image, and a decoding image, which is the above-described visible light communication image or bright line image.
Specifically, the image sensor of the receiver 200 captures the transmitter 100, which is configured as signage, and a person 21 standing next to the transmitter 100. The transmitter 100 is the transmitter in each of the above embodiments, and includes one or more light emitting elements (for example, LEDs) and a translucent plate 144 that is translucent like frosted glass. The one or more light emitting elements emit light inside the transmitter 100, and their light passes through the translucent plate 144 and is emitted to the outside. As a result, the translucent plate 144 of the transmitter 100 glows brightly. This transmitter 100 changes in luminance by blinking its one or more light emitting elements, and transmits a light ID (light identification information) by that luminance change. This light ID is the above-described visible light signal.
Here, the message "Please hold your smartphone here" is written on the translucent plate 144. The user of the receiver 200 therefore has the person 21 stand next to the transmitter 100 and instructs the person 21 to rest an arm on top of the transmitter 100. The user then points the camera (that is, the image sensor) of the receiver 200 at the person 21 and the transmitter 100 and captures an image. The receiver 200 captures the transmitter 100 and the person 21 with the normal exposure time to acquire the captured display image Pk in which they appear. Furthermore, the receiver 200 captures the transmitter 100 and the person 21 with a communication exposure time shorter than the normal exposure time to acquire the decoding image.
The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to the server and acquires the AR image P44 and recognition information corresponding to the light ID from the server. The receiver 200 recognizes, as a target region, the region of the captured display image Pk indicated by the recognition information; for example, it recognizes the region in which the signage serving as the transmitter 100 appears as the target region.
The receiver 200 then superimposes the AR image P44 on the captured display image Pk so that the target region is covered by the AR image P44, and displays the captured display image Pk on the display 201. For example, the receiver 200 acquires an AR image P44 showing a soccer player. In this case, because the AR image P44 is superimposed so as to cover the target region of the captured display image Pk, the captured display image Pk can be displayed as if the soccer player were actually standing next to the person 21. As a result, the person 21 can appear in a photograph together with the soccer player even though the soccer player is not actually there; more specifically, the person 21 can appear in the photograph with an arm resting on the soccer player's shoulder.
(Embodiment 27)
In this embodiment, a transmission method for transmitting a light ID by a visible light signal will be described. The transmitter and the receiver in this embodiment may have the same functions and configurations as the transmitter (or transmission device) and the receiver (or reception device) in each of the above embodiments.
 図358は、本実施の形態における送信機100の動作を説明するための図である。具体的には、図358は、調光機能付きスポットライトとして構成された送信機100の調光度と、その送信機100の光源に入力される電流(具体的にはピーク電流の値)との関係を示す。 FIG. 358 is a diagram for explaining the operation of the transmitter 100 in the present embodiment. Specifically, FIG. 358 shows the dimming degree of the transmitter 100 configured as a spotlight with dimming function and the current (specifically, the peak current value) input to the light source of the transmitter 100. Show the relationship.
 本実施の形態における送信機100は、指定された調光度が0%以上x14(%)以下である場合には、デューティ比35%のPWMモードで送信対象信号を符号化することによって符号化信号を生成する。つまり、送信機100は、指定される調光度が0%からx14(%)に変化する場合には、可視光信号のデューティ比を35%に維持しながら、ピーク電流の値を増加することによって、その指定された調光度で光源を発光させる。なお、デューティ比35%のPWMモードは、実施の形態26と同様、第1のモードともいい、上述のx14を第1の値ともいう。例えば、x14(%)は、50~60%の範囲内の値である。 When the designated dimming degree is 0% or more and x14 (%) or less, the transmitter 100 according to the present embodiment encodes a transmission target signal in a PWM mode with a duty ratio of 35%. Is generated. In other words, when the designated dimming level changes from 0% to x14 (%), the transmitter 100 increases the peak current value while maintaining the duty ratio of the visible light signal at 35%. The light source is caused to emit light at the specified dimming degree. Note that the PWM mode with a duty ratio of 35% is also referred to as a first mode, as in the twenty-sixth embodiment, and the above-described x14 is also referred to as a first value. For example, x14 (%) is a value within the range of 50 to 60%.
 また、送信機100は、指定された調光度がx13(%)以上100%以下である場合には、デューティ比65%のPWMモードで送信対象信号を符号化することによって符号化信号を生成する。つまり、送信機100は、指定される調光度が100%からx13(%)に変化する場合には、可視光信号のデューティ比を65%に維持しながら、ピーク電流の値を抑えることによって、その指定された調光度で光源を発光させる。なお、デューティ比65%のPWMモードは、実施の形態26と同様、第2のモードともいい、上述のx13を第2の値ともいう。ここで、x13(%)は、x14(%)よりも小さい値であって、例えば、40~50%の範囲内の値である。 Further, when the designated dimming degree is not less than x13 (%) and not more than 100%, the transmitter 100 generates an encoded signal by encoding the transmission target signal in the PWM mode with a duty ratio of 65%. . That is, when the specified dimming level changes from 100% to x13 (%), the transmitter 100 suppresses the peak current value while maintaining the duty ratio of the visible light signal at 65%. The light source is caused to emit light at the specified dimming degree. Note that the PWM mode with a duty ratio of 65% is also referred to as a second mode, as in the twenty-sixth embodiment, and the above x13 is also referred to as a second value. Here, x13 (%) is a value smaller than x14 (%), for example, a value within a range of 40 to 50%.
 このように、本実施の形態では、指定される調光度が増加する場合には、PWMモードは、調光度x14(%)において、デューティ比35%のPWMモードからデューティ比65%のPWMモードに切り替えられる。一方、指定される調光度が減少する場合には、PWMモードは、調光度x14(%)よりも小さい調光度x13(%)において、デューティ比65%のPWMモードからデューティ比35%のPWMモードに切り替えられる。つまり、本実施の形態では、指定される調光度が増加する場合と、指定される調光度が減少する場合とで、PWMモードが切り替えられる調光度が異なる。以下、PWMモードが切り替えられる調光度を、切り替え点という。 As described above, in this embodiment, when the specified dimming degree increases, the PWM mode is changed from the PWM mode with the duty ratio of 35% to the PWM mode with the duty ratio of 65% at the dimming degree x14 (%). Can be switched. On the other hand, when the specified dimming degree decreases, the PWM mode is a dimming degree x13 (%) smaller than the dimming degree x14 (%), and the PWM mode from the PWM mode having a duty ratio of 65% to the PWM mode having a duty ratio of 35%. Can be switched to. That is, in the present embodiment, the dimming degree at which the PWM mode is switched is different when the designated dimming degree is increased and when the designated dimming degree is reduced. Hereinafter, the dimming degree at which the PWM mode is switched is referred to as a switching point.
 Therefore, in the present embodiment, frequent switching of the PWM mode can be suppressed. In the example shown in FIG. 354 of Embodiment 26, the switching point of the PWM mode is 50% and is the same whether the designated dimming level is increasing or decreasing. As a result, in the example of FIG. 354, if the designated dimming level is repeatedly raised and lowered around 50%, the PWM mode is switched frequently between the 35% duty ratio PWM mode and the 65% duty ratio PWM mode. In the present embodiment, however, the switching point differs between increasing and decreasing dimming levels, so such frequent switching of the PWM mode can be suppressed.
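 The two-threshold behavior described here is a form of hysteresis. The following Python sketch assumes x13 = 45% and x14 = 55% (these values and the class and function names are illustrative, not taken from the embodiment) and shows how a controller could keep the current PWM mode and change it only when the appropriate threshold is crossed in the appropriate direction.

```python
# Hysteresis between the two PWM modes (a sketch; the x13/x14 values are assumed).
DUTY_FIRST_MODE = 0.35   # first mode: 35% duty ratio
DUTY_SECOND_MODE = 0.65  # second mode: 65% duty ratio
X13 = 45.0  # switching point when the dimming level decreases (assumed value, %)
X14 = 55.0  # switching point when the dimming level increases (assumed value, %)

class PwmModeSelector:
    def __init__(self):
        self.mode = DUTY_FIRST_MODE  # start in the first mode (low dimming level)

    def update(self, dimming_percent: float) -> float:
        """Return the duty ratio to use for the given designated dimming level."""
        if self.mode == DUTY_FIRST_MODE and dimming_percent >= X14:
            # increasing past x14: switch from the first mode to the second mode
            self.mode = DUTY_SECOND_MODE
        elif self.mode == DUTY_SECOND_MODE and dimming_percent <= X13:
            # decreasing past x13: switch from the second mode to the first mode
            self.mode = DUTY_FIRST_MODE
        return self.mode

selector = PwmModeSelector()
for d in [30, 50, 56, 50, 46, 44, 50]:
    print(d, selector.update(d))
# Around 50% the mode no longer toggles on every step, unlike a single 50% threshold.
```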
 Also, in the present embodiment, as in the example shown in FIG. 354 of Embodiment 26, the PWM mode with the smaller duty ratio is used when the designated dimming level is small, and conversely the PWM mode with the larger duty ratio is used when the designated dimming level is large.
 Therefore, when the designated dimming level is large, the PWM mode with the larger duty ratio is used, so the rate of change of the peak current with respect to the dimming level can be kept small, and the light source can be driven to a high dimming level with a small peak current. For example, in a PWM mode with a duty ratio as small as 35%, the light source cannot be driven at a dimming level of 100% unless the peak current is 250 mA. In the present embodiment, however, a PWM mode with a large duty ratio such as 65% is used for high dimming levels, so the light source can be driven at a dimming level of 100% with a smaller peak current of, for example, 154 mA. In other words, it is possible to avoid shortening the life of the light source by passing an excessive current through it.
 Also, when the designated dimming level is small, the PWM mode with the smaller duty ratio is used, so the rate of change of the peak current with respect to the dimming level can be made large. As a result, the visible light signal can be transmitted with a large peak current while the light source emits light at a low dimming level. The larger the input current, the brighter the light source emits. Therefore, when the visible light signal is transmitted with a large peak current, it is easier for the receiver 200 to receive it. In other words, the range of dimming levels over which a visible light signal receivable by the receiver 200 can be transmitted can be extended down to smaller dimming levels. For example, as shown in FIG. 358, the receiver 200 can receive a visible light signal transmitted with a peak current of Ia (mA) or more. In this case, in a PWM mode with a large duty ratio such as 65%, the range of dimming levels over which a receivable visible light signal can be transmitted is x12 (%) or more. In a PWM mode with a small duty ratio such as 35%, however, that range can be extended down to x11 (%), which is smaller than x12 (%).
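 The widening of the receivable range can be illustrated numerically. The sketch below assumes, as a rough first-order model, that at a fixed duty ratio the dimming level scales linearly with the peak current, and it reuses the example full-scale currents from above (250 mA at 35% duty and 154 mA at 65% duty for 100% dimming); the receiver threshold Ia and the function name are assumptions, not values given in the embodiment.

```python
# Rough model: at a fixed duty ratio, the dimming level scales linearly with peak current.
FULL_SCALE_CURRENT_MA = {0.35: 250.0, 0.65: 154.0}  # current needed for 100% dimming

def min_receivable_dimming(duty: float, ia_ma: float) -> float:
    """Smallest dimming level (%) at which the peak current still reaches Ia."""
    return 100.0 * ia_ma / FULL_SCALE_CURRENT_MA[duty]

IA_MA = 80.0  # assumed receiver sensitivity threshold Ia
x12 = min_receivable_dimming(0.65, IA_MA)  # ~51.9%: lower limit in the 65% duty mode
x11 = min_receivable_dimming(0.35, IA_MA)  # ~32.0%: lower limit in the 35% duty mode
print(f"x12 = {x12:.1f}%, x11 = {x11:.1f}%  (x11 < x12)")
```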
 Thus, by switching the PWM mode in this way, the life of the light source can be extended and visible light signals can be transmitted over a wide range of dimming levels.
 FIG. 359A is a flowchart showing a transmission method according to the present embodiment.
 The transmission method in the present embodiment is a method of transmitting a signal by a change in luminance of a light source, and includes a reception step S561 and a transmission step S562. In reception step S561, the transmitter 100 accepts the dimming level designated for the light source as the designated dimming level. In transmission step S562, the transmitter 100 transmits, by luminance change, a signal encoded in the first mode or the second mode while causing the light source to emit light at the designated dimming level. Here, the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode. Also, in transmission step S562, when the designated dimming level is changed from a small value to a large value, the transmitter 100 switches the mode used for encoding the signal from the first mode to the second mode when the designated dimming level reaches the first value. Furthermore, when the designated dimming level is changed from a large value to a small value, the transmitter 100 switches the mode used for encoding the signal from the second mode to the first mode when the designated dimming level reaches the second value. Here, the second value is smaller than the first value.
 For example, the first mode and the second mode are the PWM mode with a duty ratio of 35% and the PWM mode with a duty ratio of 65% shown in FIG. 358, respectively. Also, the first value and the second value are x14 (%) and x13 (%) shown in FIG. 358, respectively.
 Thereby, the designated dimming level at which switching between the first mode and the second mode is performed (that is, the switching point) differs depending on whether the designated dimming level is increasing or decreasing. Therefore, frequent switching between these modes, that is, so-called chattering, can be suppressed. As a result, the operation of the transmitter 100 that transmits the signal can be stabilized. Also, the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode. Therefore, as in the transmission method shown in FIG. 354, the larger the designated dimming level, the more a large peak current can be prevented from flowing through the light source. As a result, deterioration of the light source can be suppressed, and because the deterioration of the light source is suppressed, communication between various devices can be carried out over a long period. Also, when the designated dimming level is small, the first mode, which has the smaller duty ratio, is used. Therefore, the peak current described above can be increased, and a signal that is easy for the receiver 200 to receive can be transmitted as a visible light signal.
 Also, in transmission step S562, when switching from the first mode to the second mode is performed, the transmitter 100 changes the peak current of the light source for transmitting the encoded signal by luminance change from a first current value to a second current value smaller than the first current value. Furthermore, when switching from the second mode to the first mode is performed, the transmitter 100 changes the peak current from a third current value to a fourth current value larger than the third current value. Here, the first current value is larger than the fourth current value, and the second current value is larger than the third current value.
 For example, the first current value, the second current value, the third current value, and the fourth current value are the current value Ie, the current value Ic, the current value Ib, and the current value Id shown in FIG. 358, respectively.
 This makes it possible to switch appropriately between the first mode and the second mode.
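 The current transitions at the two switching points can be added to the hysteresis sketch above. The fragment below only encodes the transitions and ordering constraints stated here (Ie > Id and Ic > Ib); the concrete milliampere values and the function name are chosen purely for illustration, and the full dimming-to-current curve of FIG. 358 is not reproduced.

```python
# Peak-current change at the mode switching points (the Ib..Ie values are illustrative).
I_B, I_C, I_D, I_E = 90.0, 120.0, 110.0, 160.0  # assumed mA values with Ie > Id and Ic > Ib

def peak_current_on_switch(old_duty: float, new_duty: float) -> float:
    """Peak current applied immediately after a mode switch."""
    if (old_duty, new_duty) == (0.35, 0.65):
        # first mode -> second mode at x14: drop from the first value Ie
        # to the smaller second value Ic
        return I_C
    if (old_duty, new_duty) == (0.65, 0.35):
        # second mode -> first mode at x13: rise from the third value Ib
        # to the larger fourth value Id
        return I_D
    raise ValueError("not a mode switch")
```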
 FIG. 359B is a block diagram showing a configuration of the transmitter 100 in the present embodiment.
 The transmitter 100 in the present embodiment is a transmitter that transmits a signal by a change in luminance of a light source, and includes a reception unit 561 and a transmission unit 562. The reception unit 561 accepts the dimming level designated for the light source as the designated dimming level. The transmission unit 562 transmits, by luminance change, a signal encoded in the first mode or the second mode while causing the light source to emit light at the designated dimming level. Here, the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode. Also, when the designated dimming level is changed from a small value to a large value, the transmission unit 562 switches the mode used for encoding the signal from the first mode to the second mode when the designated dimming level reaches the first value. Furthermore, when the designated dimming level is changed from a large value to a small value, the transmission unit 562 switches the mode used for encoding the signal from the second mode to the first mode when the designated dimming level reaches the second value. Here, the second value is smaller than the first value.
 Such a transmitter 100 implements the transmission method of the flowchart shown in FIG. 359A.
 FIG. 360 is a diagram showing an example of a detailed configuration of a visible light signal in the present embodiment.
 Such a visible light signal is a PWM-mode signal, as in FIG. 188, (b) of FIG. 189A, FIG. 197, FIG. 212, FIG. 316, and FIG. 317.
 A packet of the visible light signal consists of an L data part, a preamble, and an R data part. The L data part and the R data part each correspond to a payload.
 The preamble corresponds to the preambles in FIG. 188, (b) of FIG. 189A, FIG. 197, and FIG. 212, and to the SHR in FIG. 316 and FIG. 317. Specifically, the preamble alternates between High and Low luminance values along the time axis. That is, the preamble shows the High luminance value for a time length C0, the Low luminance value for the next time length C1, the High luminance value for the next time length C2, and the Low luminance value for the next time length C3. The time lengths C0 and C3 are, for example, 100 μs. The time lengths C1 and C2 are, for example, 90 μs, which is 10 μs shorter than C0 and C3.
 The L data part corresponds to the data L in FIG. 188, (b) of FIG. 189A, FIG. 197, and FIG. 212, and to the PHY payload A in FIG. 316 and FIG. 317. Specifically, the L data part alternates between High and Low luminance values along the time axis and is placed immediately before the preamble. That is, the L data part shows the High luminance value for a time length D'0, the Low luminance value for the next time length D'1, the High luminance value for the next time length D'2, and the Low luminance value for the next time length D'3. The time lengths D'0 to D'3 are determined according to formulas that depend on the signal to be transmitted: D'0 = W0 + W1 × (3 − y0), D'1 = W0 + W1 × (7 − y1), D'2 = W0 + W1 × (3 − y2), and D'3 = W0 + W1 × (7 − y3). Here, the constant W0 is, for example, 110 μs, and the constant W1 is, for example, 30 μs. The variables y0 and y2 are each an integer from 0 to 3 represented by 2 bits, and the variables y1 and y3 are each an integer from 0 to 7 represented by 3 bits. The variables y0 to y3 are the signal to be transmitted. In FIG. 360 to FIG. 363, the symbol "*" is used to denote multiplication.
 The R data part corresponds to the data R in FIG. 188, (b) of FIG. 189A, FIG. 197, and FIG. 212, and to the PHY payload B in FIG. 316 and FIG. 317. Specifically, the R data part alternates between High and Low luminance values along the time axis and is placed immediately after the preamble. That is, the R data part shows the High luminance value for a time length D0, the Low luminance value for the next time length D1, the High luminance value for the next time length D2, and the Low luminance value for the next time length D3. The time lengths D0 to D3 are determined according to formulas that depend on the signal to be transmitted: D0 = W0 + W1 × y0, D1 = W0 + W1 × y1, D2 = W0 + W1 × y2, and D3 = W0 + W1 × y3.
 Here, the L data part and the R data part are complementary with respect to brightness. That is, if the L data part is bright, the R data part is dark, and conversely, if the L data part is dark, the R data part is bright. In other words, the sum of the time length of the L data part and the time length of the R data part is constant regardless of the signal to be transmitted. Consequently, the time-averaged brightness of the visible light signal transmitted from the light source can be kept constant regardless of the signal to be transmitted.
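 The formulas above can be checked with a short calculation. The following Python sketch, using the example constants W0 = 110 μs and W1 = 30 μs (the function names are illustrative), computes both data parts for an arbitrary payload (y0, y1, y2, y3) and confirms that their combined duration is the same for every payload.

```python
W0, W1 = 110, 30  # example constants in microseconds

def l_data_durations(y):
    """D'0..D'3 of the L data part (High, Low, High, Low)."""
    y0, y1, y2, y3 = y
    return (W0 + W1 * (3 - y0), W0 + W1 * (7 - y1),
            W0 + W1 * (3 - y2), W0 + W1 * (7 - y3))

def r_data_durations(y):
    """D0..D3 of the R data part (High, Low, High, Low)."""
    return tuple(W0 + W1 * yk for yk in y)

for y in [(0, 0, 0, 0), (3, 7, 3, 7), (1, 4, 2, 6)]:
    total = sum(l_data_durations(y)) + sum(r_data_durations(y))
    print(y, total)  # always 8*W0 + 20*W1 = 1480 us, independent of the payload
```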
 Also, by changing the ratio of 3 to 7 in D'0 = W0 + W1 × (3 − y0), D'1 = W0 + W1 × (7 − y1), D'2 = W0 + W1 × (3 − y2), and D'3 = W0 + W1 × (7 − y3), the duty ratio of the PWM mode can be changed. The ratio of 3 to 7 corresponds to the ratio of the maximum value of the variables y0 and y2 to the maximum value of the variables y1 and y3. For example, when this ratio is 3:7, a PWM mode with a small duty ratio is selected, and conversely, when the ratio is 7:3, a PWM mode with a large duty ratio is selected. Therefore, by adjusting this ratio, the PWM mode can be switched between the 35% duty ratio PWM mode and the 65% duty ratio PWM mode shown in FIG. 354 and FIG. 358. The preamble may also be used to notify the receiver 200 of which PWM mode is currently in use. For example, the transmitter 100 notifies the receiver 200 of the selected PWM mode by including in the packet a preamble whose pattern is associated with that PWM mode. The preamble pattern is varied by means of the time lengths C0, C1, C2, and C3.
 However, with the visible light signal shown in FIG. 360, the packet contains two data parts, so transmitting the packet takes time. For example, when the transmitter 100 is a DLP projector, the transmitter 100 projects the red, green, and blue images in a time-division manner. Here, the transmitter 100 desirably transmits the visible light signal while projecting the red image, because the visible light signal transmitted at that time has a red wavelength and is therefore easy for the receiver 200 to receive. The period during which the red image is continuously projected is, for example, 1.5 ms; this period is hereinafter referred to as the red projection period. It is difficult to transmit a packet consisting of the L data part, the preamble, and the R data part described above within such a short red projection period.
 This suggests a packet that has only the R data part of the two data parts.
 FIG. 361 is a diagram showing another example of a detailed configuration of the visible light signal in the present embodiment.
 Unlike the example shown in FIG. 360, the visible light signal packet shown in FIG. 361 does not include the L data part. Instead, the packet includes invalid data and an average luminance adjustment part.
 The invalid data alternates between High and Low luminance values along the time axis. That is, the invalid data shows the High luminance value for a time length A0 and the Low luminance value for the next time length A1. The time length A0 is, for example, 100 μs, and the time length A1 is given by, for example, A1 = W0 − W1. Such invalid data indicates that the packet does not include an L data part.
 The average luminance adjustment part alternates between High and Low luminance values along the time axis. That is, it shows the High luminance value for a time length B0 and the Low luminance value for the next time length B1. The time length B0 is given by, for example, B0 = 100 + W1 × ((3 − y0) + (3 − y2)), and the time length B1 is given by, for example, B1 = W1 × ((7 − y1) + (7 − y3)).
 With such an average luminance adjustment part, the average luminance of the packet can be made constant regardless of the signals y0 to y3 to be transmitted. That is, the total time length during which the luminance value is High in the packet (the total ON time) can be made A0 + C0 + C2 + D0 + D2 + B0 = 790, and the total time length during which the luminance value is Low (the total OFF time) can be made A1 + C1 + C3 + D1 + D3 + B1 = 910.
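 A quick check of these sums, again with the example constants W0 = 110 μs and W1 = 30 μs (so A0 = C0 = C3 = 100 μs, A1 = 80 μs, and C1 = C2 = 90 μs), confirms that they do not depend on the payload:

```python
W0, W1 = 110, 30
A0, A1 = 100, W0 - W1              # invalid data
C0, C1, C2, C3 = 100, 90, 90, 100  # preamble

def on_off_times(y):
    y0, y1, y2, y3 = y
    D = [W0 + W1 * yk for yk in y]          # R data part
    B0 = 100 + W1 * ((3 - y0) + (3 - y2))   # average luminance adjustment part
    B1 = W1 * ((7 - y1) + (7 - y3))
    on = A0 + C0 + C2 + D[0] + D[2] + B0
    off = A1 + C1 + C3 + D[1] + D[3] + B1
    return on, off

print(on_off_times((0, 0, 0, 0)))  # (790, 910)
print(on_off_times((3, 7, 3, 7)))  # (790, 910)
```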
 However, even with this visible light signal configuration, the effective time length E1, which is a part of the total time length E0 of the packet, cannot be shortened. The effective time length E1 is the time from when the first High luminance value appears in the packet until the last High luminance value ends, and it is the time the receiver 200 needs in order to demodulate or decode the packet of the visible light signal. Specifically, E1 = A0 + A1 + C0 + C1 + C2 + C3 + D0 + D1 + D2 + D3 + B0. The total time length E0 is E0 = E1 + B1.
 That is, even with the visible light signal configured as shown in FIG. 361, the effective time length E1 is at most 1700 μs, so it is difficult for the transmitter 100 to transmit one packet continuously for that effective time length E1 within the red projection period described above.
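 Extending the check above, the sketch below computes E1 and E0 for the FIG. 361 packet and sweeps all payloads to find the worst case, which reaches the 1700 μs mentioned here (same assumed example constants as before).

```python
W0, W1 = 110, 30
A0, A1 = 100, W0 - W1
C0, C1, C2, C3 = 100, 90, 90, 100

def packet_lengths(y):
    """Effective time length E1 and total time length E0 of the FIG. 361 packet."""
    y0, y1, y2, y3 = y
    D = [W0 + W1 * yk for yk in y]
    B0 = 100 + W1 * ((3 - y0) + (3 - y2))
    B1 = W1 * ((7 - y1) + (7 - y3))
    E1 = A0 + A1 + C0 + C1 + C2 + C3 + sum(D) + B0
    return E1, E1 + B1

worst_E1 = max(packet_lengths((y0, y1, y2, y3))[0]
               for y0 in range(4) for y2 in range(4)
               for y1 in range(8) for y3 in range(8))
print(worst_E1)  # 1700 us, which does not fit in the 1.5 ms red projection period
```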
 Therefore, in order to shorten the effective time length E1 and keep the average luminance of the packet constant regardless of the signal to be transmitted, one can consider adjusting not only the time lengths of the High and Low luminance values but also the High luminance value itself.
 FIG. 362 is a diagram showing another example of a detailed configuration of the visible light signal in the present embodiment.
 In the visible light signal packet shown in FIG. 362, unlike the example shown in FIG. 361, the time length B0 of the High luminance value of the average luminance adjustment part is fixed at the shortest value of 100 μs regardless of the signal to be transmitted, in order to shorten the effective time length E1. Instead, in the packet shown in FIG. 362, the High luminance value is adjusted according to the variables y0 and y2 contained in the signal to be transmitted, that is, according to the time lengths D0 and D2. For example, when the time lengths D0 and D2 are short, the transmitter 100 adjusts the High luminance value to a large value, as shown in (a) of FIG. 362. When the time lengths D0 and D2 are long, the transmitter 100 adjusts the High luminance value to a small value, as shown in (b) of FIG. 362. Specifically, when the time lengths D0 and D2 are each at the shortest value W0 (for example, 110 μs), the High luminance value is 100% brightness, and when D0 and D2 are each at the maximum value W0 + 3W1 (for example, 200 μs), the High luminance value is 77.2% brightness.
 In such a visible light signal packet, the total time length during which the luminance value is High (the total ON time) is, for example, A0 + C0 + C2 + D0 + D2 + B0 = 610 to 790, and the total time length during which the luminance value is Low (the total OFF time) is A1 + C1 + C3 + D1 + D3 + B1 = 910.
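 The 100% and 77.2% figures follow if the High level is scaled so that the product of the total ON time and the High luminance stays at the value of the shortest case (610 μs at 100%). The sketch below derives the scaling under that assumption; the constant and function names are illustrative.

```python
W0, W1 = 110, 30
ON_FIXED = 100 + 100 + 90 + 100  # A0 + C0 + C2 + B0, with B0 fixed at 100 us
ON_MIN = ON_FIXED + 2 * W0       # 610 us, reached when D0 = D2 = W0

def high_luminance(y0: int, y2: int) -> float:
    """High level (fraction of full brightness) keeping ON time x luminance constant."""
    on_time = ON_FIXED + (W0 + W1 * y0) + (W0 + W1 * y2)
    return ON_MIN / on_time

print(high_luminance(0, 0))  # 1.000 -> 100% when D0 = D2 = 110 us
print(high_luminance(3, 3))  # ~0.772 -> 77.2% when D0 = D2 = 200 us
```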
 However, with the visible light signal shown in FIG. 362, the shortest total time length E0 and the shortest effective time length E1 of the packet can be made shorter than in the example shown in FIG. 361, but the maximum time lengths cannot be shortened.
 Therefore, in order to shorten the effective time length E1 and keep the average luminance of the packet constant regardless of the signal to be transmitted, one can consider using either an L data part or an R data part as the data part included in the packet, depending on the signal to be transmitted.
 FIG. 363 is a diagram showing another example of a detailed configuration of the visible light signal in the present embodiment.
 In the visible light signal shown in FIG. 363, unlike the examples shown in FIG. 360 to FIG. 362, a packet containing the L data part and a packet containing the R data part are used selectively according to the sum of the variables y0 to y3, which constitute the signal to be transmitted, in order to shorten the effective time length.
 That is, when the sum of the variables y0 to y3 is 7 or more, the transmitter 100 generates a packet containing only the L data part of the two data parts, as shown in (a) of FIG. 363; this packet is hereinafter referred to as an L packet. When the sum of the variables y0 to y3 is 6 or less, the transmitter 100 generates a packet containing only the R data part of the two data parts, as shown in (b) of FIG. 363; this packet is hereinafter referred to as an R packet.
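 In code, the selection is a single threshold test on the payload sum (a sketch; the threshold 6 is the value used in this example and is revisited later in connection with FIG. 364, and the function name is illustrative):

```python
def packet_type(y, threshold: int = 6) -> str:
    """Choose the packet type from the payload sum (L if the sum exceeds the threshold)."""
    return "L" if sum(y) > threshold else "R"

print(packet_type((3, 7, 3, 7)))  # 'L'  (sum 20 > 6, i.e. 7 or more)
print(packet_type((1, 2, 0, 3)))  # 'R'  (sum 6 <= 6)
```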
 As shown in (a) of FIG. 363, the L packet includes an average luminance adjustment part, an L data part, a preamble, and invalid data.
 The average luminance adjustment part of the L packet shows only the Low luminance value, for a time length B'0, without showing the High luminance value. The time length B'0 is given by, for example, B'0 = 100 + W1 × (y0 + y1 + y2 + y3 − 7).
 The invalid data of the L packet alternates between High and Low luminance values along the time axis. That is, it shows the High luminance value for a time length A'0 and the Low luminance value for the next time length A'1. The time length A'0 is given by A'0 = W0 − W1, for example 80 μs, and the time length A'1 is, for example, 150 μs. Such invalid data indicates that the packet containing it does not include an R data part.
 In such an L packet, the total time length E'0 is E'0 = 5W0 + 12W1 + 4b + 230 = 1540 μs regardless of the signal to be transmitted. The effective time length E'1 depends on the signal to be transmitted and is in the range of 900 to 1290 μs. Also, while the total time length E'0 is a constant 1540 μs, the total time length during which the luminance value is High (the total ON time) varies with the signal to be transmitted in the range of 490 to 670 μs. Therefore, in the L packet as well, as in the example shown in FIG. 362, the transmitter 100 varies the High luminance value in the range of 100% to 73.1% according to the total ON time, that is, according to the time lengths D0 and D2.
 As in the example shown in FIG. 361, the R packet includes invalid data, a preamble, an R data part, and an average luminance adjustment part, as shown in (b) of FIG. 363.
 Here, in the R packet shown in (b) of FIG. 363, the time length B0 of the High luminance value in the average luminance adjustment part is fixed at the shortest value of 100 μs regardless of the signal to be transmitted, in order to shorten the effective time length E1. Also, the time length B1 of the Low luminance value in the average luminance adjustment part is given by, for example, B1 = W1 × (6 − (y0 + y1 + y2 + y3)) in order to keep the total time length E0 constant. Furthermore, in the R packet shown in (b) of FIG. 363 as well, the High luminance value is adjusted according to the variables y0 and y2 contained in the signal to be transmitted, that is, according to the time lengths D0 and D2.
 In such an R packet, the total time length E0 is E0 = 4W0 + 6W1 + 4b + 260 = 1280 μs regardless of the signal to be transmitted. The effective time length E1 depends on the signal to be transmitted and is in the range of 1100 to 1280 μs. Also, while the total time length E0 is a constant 1280 μs, the total time length during which the luminance value is High (the total ON time) varies with the signal to be transmitted in the range of 610 to 790 μs. Therefore, in the R packet as well, as in the example shown in FIG. 362, the transmitter 100 varies the High luminance value in the range of 80.3% to 62.1% according to the total ON time, that is, according to the time lengths D0 and D2.
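 The stated luminance ranges for both packet types are consistent with scaling the High level so that the product of the total ON time and the High luminance stays roughly constant across packets (about 490 μs at full brightness). The sketch below illustrates that reading; it is an interpretation of the numbers above under an assumed reference value, not a formula given in the text.

```python
REFERENCE_ON_US = 490.0  # assumed constant: ON time x luminance of the shortest-ON case

def high_luminance_for(on_time_us: float) -> float:
    """Approximate High level keeping ON time x luminance near the reference."""
    return REFERENCE_ON_US / on_time_us

# L packet: ON time 490-670 us -> 100% down to ~73.1%
print(high_luminance_for(490), high_luminance_for(670))
# R packet: ON time 610-790 us -> ~80.3% down to ~62.0% (stated as 62.1%)
print(high_luminance_for(610), high_luminance_for(790))
```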
 Thus, with the visible light signal shown in FIG. 363, the maximum effective time length of a packet can be shortened. Therefore, the transmitter 100 can transmit one packet continuously for its effective time length E1 or E'1 within the red projection period described above.
 Here, in the example shown in FIG. 363, the transmitter 100 generates an L packet when the sum of the variables y0 to y3 is 7 or more, and generates an R packet when the sum is 6 or less. In other words, since the sum of the variables y0 to y3 is an integer, the transmitter 100 generates an L packet when the sum is greater than 6 and an R packet when the sum is 6 or less. That is, in this example, the threshold for switching the packet type is 6. However, the threshold for switching the packet type is not limited to 6 and may be any value from 3 to 10.
 FIG. 364 is a diagram showing the relationship between the sum of the variables y0 to y3 and the total time length and effective time length. The total time length shown in FIG. 364 is the larger of the total time length E0 of the R packet and the total time length E'0 of the L packet. The effective time length shown in FIG. 364 is the larger of the maximum value of the effective time length E1 of the R packet and the maximum value of the effective time length E'1 of the L packet. In the example shown in FIG. 364, the constants W0, W1, and b are W0 = 110 μs, W1 = 15 μs, and b = 100 μs, respectively.
 As shown in FIG. 364, the total time length varies with the sum of the variables y0 to y3 and is smallest when the sum is about 10. The effective time length also varies with the sum of the variables y0 to y3 and is smallest when the sum is about 3.
 Therefore, the threshold for switching the packet type may be set in the range of 3 to 10, depending on whether the total time length or the effective time length is to be shortened.
 FIG. 365A is a flowchart showing a transmission method according to the present embodiment.
 The transmission method in the present embodiment is a method of transmitting a visible light signal by a change in luminance of a light emitter, and includes a determination step S571 and a transmission step S572. In determination step S571, the transmitter 100 determines a luminance change pattern by modulating a signal. In transmission step S572, the transmitter 100 transmits the visible light signal by changing, in accordance with the determined pattern, the luminance of red expressed by the light source included in the light emitter. Here, the visible light signal includes data, a preamble, and a payload. In the data, a first luminance value and a second luminance value smaller than the first luminance value appear along the time axis, and the time length for which at least one of the first and second luminance values continues is less than or equal to a first predetermined value. In the preamble, the first and second luminance values appear alternately along the time axis. In the payload, the first and second luminance values appear alternately along the time axis, the time length for which each of them continues is greater than the first predetermined value, and these time lengths are determined in accordance with the signal and a predetermined scheme.
 For example, the data, the preamble, and the payload are the invalid data, the preamble, and the L data part or the R data part shown in (a) and (b) of FIG. 363, respectively. Also, for example, the first predetermined value is 100 μs.
 Thereby, as shown in (a) and (b) of FIG. 363, the visible light signal includes one payload (that is, the L data part or the R data part) whose waveform is determined in accordance with the signal being modulated, and does not include two payloads. Therefore, the visible light signal, that is, a packet of the visible light signal, can be shortened. As a result, even if the emission period of the red light expressed by the light source included in the light emitter is short, for example, a packet of the visible light signal can be transmitted within that emission period.
 Also, in the payload, the luminance values may appear in the order of the first luminance value for a first time length, the second luminance value for a second time length, the first luminance value for a third time length, and the second luminance value for a fourth time length. In this case, in transmission step S572, when the sum of the first time length and the third time length is smaller than a second predetermined value, the transmitter 100 makes the current flowing through the light source larger than when the sum of the first time length and the third time length is larger than the second predetermined value. Here, the second predetermined value is larger than the first predetermined value; for example, the second predetermined value is larger than 220 μs.
 Thereby, as shown in FIG. 362 and FIG. 363, when the sum of the first time length and the third time length is small, the current value of the light source is made large, and when that sum is large, the current value of the light source is made small. Therefore, the average luminance of the packet consisting of the data, the preamble, and the payload can be kept constant regardless of the signal.
 Also, in the payload, the luminance values may appear in the order of the first luminance value for a first time length D0, the second luminance value for a second time length D1, the first luminance value for a third time length D2, and the second luminance value for a fourth time length D3. In this case, when the sum of the four parameters yk (k = 0, 1, 2, 3) obtained from the signal is less than or equal to a third predetermined value, each of the first to fourth time lengths D0 to D3 is determined according to Dk = W0 + W1 × yk (where W0 and W1 are integers of 0 or more). For example, as shown in (b) of FIG. 363, the third predetermined value is 3.
 Thereby, as shown in (b) of FIG. 363, a payload with a short waveform can be generated in accordance with the signal while keeping each of the first to fourth time lengths D0 to D3 at W0 or more.
 Also, when the sum of the four parameters yk (k = 0, 1, 2, 3) is less than or equal to the third predetermined value, the data, the preamble, and the payload may be transmitted in the order of data, preamble, payload in transmission step S572. In the example shown in (b) of FIG. 363, the payload is the R data part.
 Thereby, as shown in (b) of FIG. 363, the data (that is, the invalid data) can inform the receiver 200 that receives the packet that the packet of the visible light signal containing that data does not include an L data part.
 Also, when the sum of the four parameters yk (k = 0, 1, 2, 3) is larger than the third predetermined value, each of the first to fourth time lengths D0 to D3 may be determined according to D0 = W0 + W1 × (A − y0), D1 = W0 + W1 × (B − y1), D2 = W0 + W1 × (A − y2), and D3 = W0 + W1 × (B − y3) (where A and B are each integers of 0 or more).
 Thereby, as shown in (a) of FIG. 363, a payload with a short waveform can be generated in accordance with the signal even when the above-described sum is large, while keeping each of the first to fourth time lengths D0 to D3 (that is, the first to fourth time lengths D'0 to D'3) at W0 or more.
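 Putting the two cases together, the payload durations can be computed by one small function. This is a sketch under the parameter values used earlier in this embodiment (W0 = 110 μs, W1 = 30 μs, A = 3, B = 7, and a third predetermined value of 3 as in the paragraphs above); only the formulas stated here are implemented, and the function name is illustrative.

```python
W0, W1 = 110, 30   # example constants (microseconds)
A, B = 3, 7        # maximum values of y0/y2 and of y1/y3
THRESHOLD = 3      # third predetermined value (example)

def payload_durations(y):
    """Return (D0, D1, D2, D3) for the single payload of the packet."""
    y0, y1, y2, y3 = y
    if sum(y) <= THRESHOLD:
        # R-data-style payload, sent in the order data, preamble, payload
        return tuple(W0 + W1 * yk for yk in y)
    # L-data-style payload, sent in the order payload, preamble, data
    return (W0 + W1 * (A - y0), W0 + W1 * (B - y1),
            W0 + W1 * (A - y2), W0 + W1 * (B - y3))

print(payload_durations((1, 1, 0, 1)))  # sum 3  -> (140, 140, 110, 140)
print(payload_durations((3, 7, 3, 7)))  # sum 20 -> (110, 110, 110, 110)
```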
 Also, when the sum of the four parameters yk (k = 0, 1, 2, 3) is larger than the third predetermined value, the data, the preamble, and the payload may be transmitted in the order of payload, preamble, data in transmission step S572. In the example shown in (a) of FIG. 363, the payload is the L data part.
 Thereby, as shown in (a) of FIG. 363, the data (that is, the invalid data) can inform the receiving device that receives the packet that the packet of the visible light signal containing that data does not include an R data part.
 Also, the light emitter may have a plurality of light sources including a red light source, a blue light source, and a green light source, and in transmission step S572 the visible light signal may be transmitted using only the red light source among the plurality of light sources.
 Thereby, the light emitter can display an image using the red, blue, and green light sources, and can transmit a visible light signal at a wavelength that is easy for the receiver 200 to receive.
 The light emitter may be, for example, a DLP projector. As described above, a DLP projector may have a plurality of light sources including a red light source, a blue light source, and a green light source, but it may also have only one light source. That is, a DLP projector may include one light source, a DMD (Digital Micromirror Device), and a color wheel disposed between the light source and the DMD. In this case, the DLP projector transmits packets of the visible light signal during the period in which red light is output, among the red, blue, and green light output in a time-division manner from the light source to the DMD via the color wheel.
 FIG. 365B is a block diagram showing a configuration of the transmitter 100 in the present embodiment.
 The transmitter 100 in the present embodiment is a transmitter that transmits a visible light signal by a change in luminance of a light emitter, and includes a determination unit 571 and a transmission unit 572. The determination unit 571 determines a luminance change pattern by modulating a signal. The transmission unit 572 transmits the visible light signal by changing, in accordance with the determined pattern, the luminance of red expressed by the light source included in the light emitter. Here, the visible light signal includes data, a preamble, and a payload. In the data, a first luminance value and a second luminance value smaller than the first luminance value appear along the time axis, and the time length for which at least one of the first and second luminance values continues is less than or equal to a first predetermined value. In the preamble, the first and second luminance values appear alternately along the time axis. In the payload, the first and second luminance values appear alternately along the time axis, the time length for which each of them continues is greater than the first predetermined value, and these time lengths are determined in accordance with the signal and a predetermined scheme.
 Such a transmitter 100 implements the transmission method of the flowchart shown in FIG. 365A.
 The transmission method of the present invention can be used, for example, in a transmission device that transmits a visible light signal from a display, a luminaire, or the like, and in particular in a transmission device that transmits a visible light signal from, for example, a spotlight.
 100 Transmission device
 551 Reception unit
 552 Transmission unit

Claims (15)

  1.  A transmission method for transmitting a signal by a change in luminance of a light source, the method comprising:
     an accepting step of accepting a dimming level designated for the light source as a designated dimming level; and
     a transmitting step of, when the designated dimming level is less than or equal to a first value, transmitting the signal encoded in a first mode by the luminance change while causing the light source to emit light at the designated dimming level, and, when the designated dimming level is greater than the first value, transmitting the signal encoded in a second mode by the luminance change while causing the light source to emit light at the designated dimming level,
     wherein, when the designated dimming level is greater than the first value and less than or equal to a second value, a value of a peak current of the light source for transmitting the signal encoded in the second mode by the luminance change is smaller than a value of the peak current of the light source for transmitting the signal encoded in the first mode by the luminance change when the designated dimming level is the first value.
  2.  The transmission method according to claim 1,
     wherein, when the designated dimming level is smaller than a third value, the signal encoded in the first mode is transmitted by the luminance change while the light source is caused to emit light at the designated dimming level, and the value of the peak current is maintained at a constant value with respect to changes in the designated dimming level, and
     the third value is smaller than the first value.
  3.  The transmission method according to claim 2,
     wherein, when the designated dimming level is smaller than the third value, the light source is caused to emit light at the decreasing designated dimming level by lengthening the time for which the light source is turned off as the designated dimming level decreases, while the value of the peak current is maintained at a constant value.
  4.  The transmission method according to claim 1,
     wherein, when the designated dimming level is smaller than a fourth value, the signal encoded in the first mode is transmitted by the luminance change while the light source is caused to emit light at the designated dimming level, and the light source is caused to emit light at the decreasing designated dimming level by reducing the value of the peak current as the designated dimming level decreases, and
     the fourth value is smaller than the second value.
  5.  The transmission method according to claim 3,
     wherein the time for which the light source is turned off is determined such that one period, obtained by adding the time for transmitting the signal by the luminance change and the time for which the light source is turned off, does not exceed 10 milliseconds.
  6.  The transmission method according to claim 1,
     wherein the value of the peak current of the light source when the designated dimming level is the first value is the same as the value of the peak current of the light source when the designated dimming level is the maximum value.
  7.  The transmission method according to claim 1,
     wherein a duty ratio of the signal encoded in the second mode is larger than a duty ratio of the signal encoded in the first mode.
  8.  The transmission method according to claim 1,
     wherein, when the value of the peak current of the light source exceeds a fifth value, transmission of the signal by the luminance change of the light source is stopped.
  9.  The transmission method according to claim 1,
     wherein a usage time of the light source is measured, and
     when the usage time is equal to or longer than a predetermined time, the signal is transmitted by the luminance change using a parameter value for causing the light source to emit light at a dimming level larger than the designated dimming level.
  10.  The transmission method according to claim 1,
     wherein a usage time of the light source is measured, and
     when the usage time is equal to or longer than a predetermined time, a pulse width of the current of the light source is made larger than when the usage time is shorter than the predetermined time.
  11.  A transmission method for transmitting a signal by a change in luminance of a light source, the method comprising:
     an accepting step of accepting a dimming level designated for the light source as a designated dimming level; and
     a transmitting step of transmitting the signal encoded in a first mode or a second mode by the luminance change while causing the light source to emit light at the designated dimming level,
     wherein a duty ratio of the signal encoded in the second mode is larger than a duty ratio of the signal encoded in the first mode,
     in the transmitting step, when the designated dimming level is changed from a small value to a large value, the mode used for encoding the signal is switched from the first mode to the second mode when the designated dimming level is a first value, and when the designated dimming level is changed from a large value to a small value, the mode used for encoding the signal is switched from the second mode to the first mode when the designated dimming level is a second value, and
     the second value is smaller than the first value.
  12.  In the transmitting step,
      when the switching from the first mode to the second mode is performed, a peak current of the light source for transmitting the encoded signal by the luminance change is changed from a first current value to a second current value smaller than the first current value,
      when the switching from the second mode to the first mode is performed, the peak current is changed from a third current value to a fourth current value larger than the third current value, and
      the first current value is larger than the fourth current value, and the second current value is larger than the third current value,
      The transmission method according to claim 11.
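For illustration only (not part of the claims): example numbers consistent with the peak-current ordering of claim 12, under the simplifying assumption that the average luminance is roughly the duty ratio multiplied by the peak current; all values are hypothetical.

```python
# Hypothetical numeric sketch: peak currents around the two mode switches of
# claim 12. At the upward switch (dimming at the first value) the peak current
# drops because the second mode has a larger duty ratio; at the downward switch
# (dimming at the smaller second value) it rises again. Values are assumptions
# derived from duty ratios of 0.60 and 0.85 and dimming thresholds of 0.60 and 0.50.

I1 = 1.00  # A, first mode just before switching up
I2 = 0.71  # A, second mode just after switching up
I3 = 0.59  # A, second mode just before switching down
I4 = 0.83  # A, first mode just after switching down

assert I2 < I1              # switching from the first to the second mode lowers the peak current
assert I4 > I3              # switching from the second to the first mode raises the peak current
assert I1 > I4 and I2 > I3  # currents at the lower dimming degree are smaller
```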
  13.  A program for causing a computer to execute the transmission method according to claim 1 or 11.
  14.  A transmission device that transmits a signal by a luminance change of a light source, the device including:
      a reception unit that accepts a dimming degree designated for the light source as a specified dimming degree; and
      a transmission unit that, when the specified dimming degree is equal to or less than a first value, transmits, by the luminance change, the signal encoded in a first mode while causing the light source to emit light at the specified dimming degree, and, when the dimming degree is greater than the first value, transmits, by the luminance change, the signal encoded in a second mode while causing the light source to emit light at the specified dimming degree,
      wherein, when the specified dimming degree is greater than the first value and equal to or less than a second value, a value of a peak current of the light source for transmitting, by the luminance change, the signal encoded in the second mode
      is smaller than a value of the peak current of the light source for transmitting, by the luminance change, the signal encoded in the first mode when the specified dimming degree is the first value.
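For illustration only (not part of the claims): a minimal Python sketch of why the second mode can use a lower peak current in claim 14, under the simplifying assumption that the dimming degree is approximately the duty ratio multiplied by the normalized peak current; duty ratios and the threshold are hypothetical.

```python
# Hypothetical sketch: for a dimming degree just above the first value, the
# device encodes in the higher-duty-ratio second mode and therefore needs a
# lower peak current than the first mode needs at the first value itself.
# Duty ratios, the threshold and the full-scale current are assumptions.

DUTY = {1: 0.60, 2: 0.85}   # assumed duty ratio of the encoded signal per mode
FIRST_VALUE = 0.60          # assumed threshold on the specified dimming degree


def select_mode(dimming: float) -> int:
    return 1 if dimming <= FIRST_VALUE else 2


def peak_current(dimming: float, full_scale_a: float = 1.0) -> float:
    """Peak current needed so that duty ratio x peak current matches the dimming degree."""
    mode = select_mode(dimming)
    return full_scale_a * dimming / DUTY[mode]


# Just above the first value (second mode), less peak current is needed than
# the first mode needs at the first value:
assert peak_current(0.62) < peak_current(FIRST_VALUE)
```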
  15.  A transmission device that transmits a signal by a luminance change of a light source, the device including:
      a reception unit that accepts a dimming degree designated for the light source as a specified dimming degree; and
      a transmission unit that transmits, by the luminance change, the signal encoded in a first mode or a second mode while causing the light source to emit light at the specified dimming degree,
      wherein a duty ratio of the signal encoded in the second mode is greater than a duty ratio of the signal encoded in the first mode, and
      the transmission unit
      switches the mode used for encoding the signal from the first mode to the second mode at a time when the specified dimming degree is a first value, in a case where the specified dimming degree is changed from a smaller value to a larger value, and
      switches the mode used for encoding the signal from the second mode to the first mode at a time when the specified dimming degree is a second value, in a case where the specified dimming degree is changed from a larger value to a smaller value,
      the second value being smaller than the first value.
PCT/JP2017/040032 2016-11-10 2017-11-07 Transmission method, transmission device, and program WO2018088380A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2018550203A JP7023239B2 (en) 2016-11-10 2017-11-07 Transmission method, transmitter, and program
CN201780069560.4A CN110114988B (en) 2016-11-10 2017-11-07 Transmission method, transmission device, and recording medium
US16/408,537 US10819428B2 (en) 2016-11-10 2019-05-10 Transmitting method, transmitting apparatus, and program

Applications Claiming Priority (20)

Application Number Priority Date Filing Date Title
JP2016-220024 2016-11-10
JP2016220024 2016-11-10
US201662434644P 2016-12-15 2016-12-15
JP2016243825 2016-12-15
JP2016-243825 2016-12-15
US62/434644 2016-12-15
US201762446632P 2017-01-16 2017-01-16
US62/446632 2017-01-16
US201762457382P 2017-02-10 2017-02-10
US62/457382 2017-02-10
US201762466534P 2017-03-03 2017-03-03
US62/466534 2017-03-03
US201762467376P 2017-03-06 2017-03-06
US62/467376 2017-03-06
JP2017-080664 2017-04-14
JP2017080664 2017-04-14
JP2017-080595 2017-04-14
JP2017080595 2017-04-14
US201762558629P 2017-09-14 2017-09-14
US62/558629 2017-09-14

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/408,537 Continuation US10819428B2 (en) 2016-11-10 2019-05-10 Transmitting method, transmitting apparatus, and program

Publications (1)

Publication Number Publication Date
WO2018088380A1 true WO2018088380A1 (en) 2018-05-17

Family

ID=62110315

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/040032 WO2018088380A1 (en) 2016-11-10 2017-11-07 Transmission method, transmission device, and program

Country Status (3)

Country Link
JP (1) JP7023239B2 (en)
TW (1) TWI736702B (en)
WO (1) WO2018088380A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3393132A4 (en) * 2015-12-17 2019-01-16 Panasonic Intellectual Property Corporation of America Display method and display device
US10263701B2 (en) 2015-11-12 2019-04-16 Panasonic Intellectual Property Corporation Of America Display method, non-transitory recording medium, and display device
US10389446B2 (en) 2014-11-14 2019-08-20 Panasonic Intellectual Property Corporation Of America Reproduction method for reproducing contents
US10819428B2 (en) 2016-11-10 2020-10-27 Panasonic Intellectual Property Corporation Of America Transmitting method, transmitting apparatus, and program
CN112383366A (en) * 2020-11-12 2021-02-19 广州通导信息技术服务有限公司 Frequency spectrum monitoring method and device of digital fluorescence spectrum and storage medium
CN113268400A (en) * 2021-04-27 2021-08-17 新华三信息技术有限公司 Synchronous flashing method and device for indicator lamp and server
WO2021248341A1 (en) * 2020-06-10 2021-12-16 京东方科技集团股份有限公司 Optical communication apparatus, system and method
TWI769471B (en) * 2020-07-02 2022-07-01 黑快馬股份有限公司 Automatic panning shot system and automatic panning shot method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230142456A1 (en) * 2016-10-21 2023-05-11 Panasonic Intellectual Property Corporation Of America Transmission device, reception device, communication system, transmission method, reception method, and communication method
TWI706385B (en) * 2019-10-21 2020-10-01 大陸商南京深視光點科技有限公司 Vehicle optical signal transmission and reception system and its implementation method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09312612A (en) * 1996-05-24 1997-12-02 Sharp Corp Light emitting circuit for optical communication
JP2008206087A (en) * 2007-02-22 2008-09-04 Matsushita Electric Works Ltd Visible optical communication system
JP2010056644A (en) * 2008-08-26 2010-03-11 Panasonic Electric Works Co Ltd Visible light communication system
JP2011198524A (en) * 2010-03-17 2011-10-06 Mitsubishi Electric Lighting Corp Lighting device
JP2015173508A (en) * 2009-09-18 2015-10-01 インターデイジタル パテント ホールディングス インコーポレイテッド Method and device for lighting control, which include rate control for visible light communication (vlc)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8446481B1 (en) * 2012-09-11 2013-05-21 Google Inc. Interleaved capture for high dynamic range image acquisition and synthesis
SG11201505027UA (en) * 2012-12-27 2015-07-30 Panasonic Ip Corp America Information communication method
WO2015075937A1 (en) * 2013-11-22 2015-05-28 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Information processing program, receiving program, and information processing device
JP2015184778A (en) * 2014-03-20 2015-10-22 コニカミノルタ株式会社 Augmented reality display system, augmented reality information generation device, augmented reality display device, server, augmented reality information generation program, augmented reality display program, and data structure of augmented reality information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09312612A (en) * 1996-05-24 1997-12-02 Sharp Corp Light emitting circuit for optical communication
JP2008206087A (en) * 2007-02-22 2008-09-04 Matsushita Electric Works Ltd Visible optical communication system
JP2010056644A (en) * 2008-08-26 2010-03-11 Panasonic Electric Works Co Ltd Visible light communication system
JP2015173508A (en) * 2009-09-18 2015-10-01 インターデイジタル パテント ホールディングス インコーポレイテッド Method and device for lighting control, which include rate control for visible light communication (vlc)
JP2011198524A (en) * 2010-03-17 2011-10-06 Mitsubishi Electric Lighting Corp Lighting device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10389446B2 (en) 2014-11-14 2019-08-20 Panasonic Intellectual Property Corporation Of America Reproduction method for reproducing contents
US10263701B2 (en) 2015-11-12 2019-04-16 Panasonic Intellectual Property Corporation Of America Display method, non-transitory recording medium, and display device
US10951309B2 (en) 2015-11-12 2021-03-16 Panasonic Intellectual Property Corporation Of America Display method, non-transitory recording medium, and display device
EP3393132A4 (en) * 2015-12-17 2019-01-16 Panasonic Intellectual Property Corporation of America Display method and display device
US10504584B2 (en) 2015-12-17 2019-12-10 Panasonic Intellectual Property Corporation Of America Display method and display device
US10819428B2 (en) 2016-11-10 2020-10-27 Panasonic Intellectual Property Corporation Of America Transmitting method, transmitting apparatus, and program
WO2021248341A1 (en) * 2020-06-10 2021-12-16 京东方科技集团股份有限公司 Optical communication apparatus, system and method
TWI769471B (en) * 2020-07-02 2022-07-01 黑快馬股份有限公司 Automatic panning shot system and automatic panning shot method
CN112383366A (en) * 2020-11-12 2021-02-19 广州通导信息技术服务有限公司 Frequency spectrum monitoring method and device of digital fluorescence spectrum and storage medium
CN113268400A (en) * 2021-04-27 2021-08-17 新华三信息技术有限公司 Synchronous flashing method and device for indicator lamp and server
CN113268400B (en) * 2021-04-27 2022-07-12 新华三信息技术有限公司 Synchronous flashing method and device for indicator lamp and server

Also Published As

Publication number Publication date
TWI736702B (en) 2021-08-21
JPWO2018088380A1 (en) 2019-10-03
JP7023239B2 (en) 2022-02-21
TW201830892A (en) 2018-08-16

Similar Documents

Publication Publication Date Title
JP6876617B2 (en) Display method and display device
US10521668B2 (en) Display method and display apparatus
JP6876615B2 (en) Display method, program and display device
US10530486B2 (en) Transmitting method, transmitting apparatus, and program
US10819428B2 (en) Transmitting method, transmitting apparatus, and program
JP7023239B2 (en) Transmission method, transmitter, and program
JP6122233B1 (en) Visible light signal generation method, signal generation apparatus, and program
JP6842413B2 (en) Signal decoding methods, signal decoding devices and programs
JP7134094B2 (en) Transmission method, transmission device and program
WO2016136256A1 (en) Signal generation method, signal generation device and program
WO2016098355A1 (en) Transmission method, transmission device and program
WO2018110373A1 (en) Transmission method, transmission device, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17868678

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018550203

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17868678

Country of ref document: EP

Kind code of ref document: A1