WO2013054664A1 - Information processing device, information processing method, program, and electronic apparatus - Google Patents

Information processing device, information processing method, program, and electronic apparatus

Info

Publication number
WO2013054664A1
Authority
WO
WIPO (PCT)
Prior art keywords: light, unit, skin, wavelength, led
Application number
PCT/JP2012/075042
Other languages: French (fr), Japanese (ja)
Inventor
諭司 三谷
信広 西条
Original Assignee
ソニー株式会社 (Sony Corporation)
Application filed by ソニー株式会社 (Sony Corporation)
Publication of WO2013054664A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 Detection arrangements using opto-electronic means

Definitions

  • The present disclosure relates to an information processing apparatus, an information processing method, a program, and an electronic device, and particularly relates to an information processing apparatus, an information processing method, a program, and an electronic device that distinguish skin from objects other than skin so that the position and movement of an object that is skin can be recognized.
  • This skin proximity switch detects the proximity of an object and determines whether the detected object is skin (for example, a human fingertip). The skin proximity switch is switched to either the on state or the off state in response to determining that the detected object is skin.
  • The present disclosure has been made in view of such a situation, and makes it possible to recognize the position and movement of an object that is skin by distinguishing skin from objects other than skin.
  • An information processing apparatus according to the first aspect of the present disclosure includes a first irradiation unit that irradiates light of a first wavelength, a second irradiation unit that irradiates light of a second wavelength different from the first wavelength, a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength, a skin detection unit that detects, based on the first and second detection signals, whether or not the object is skin, and a generation unit that generates recognition information for recognizing at least one of the position or movement of the object detected as skin, based on at least one of the first or second detection signals.
  • Another light receiving unit having the same configuration as the light receiving unit can be further provided, and the generation unit can generate the recognition information based on at least one of the first or second detection signals generated for each of the plurality of light receiving units.
  • The first irradiation unit can be arranged at the same distance from each of the plurality of light receiving units, and the generation unit can generate the recognition information based on the first detection signal generated for each of the plurality of light receiving units.
  • The light receiving unit can be disposed at the same distance from the first irradiation unit and from the second irradiation unit, and the skin detection unit can detect whether or not the object is skin based on the first and second detection signals generated by the light receiving unit.
  • Another irradiation unit configured in the same manner as the irradiation unit can be further provided. In this case, the light receiving unit can generate the first detection signal for each of the irradiation units that irradiate the light of the first wavelength at different timings and generate the second detection signal for each of the irradiation units that irradiate the light of the second wavelength at different timings, and the generation unit can generate the recognition information based on at least one of the first or second detection signals generated for each of the irradiation units.
  • When the interval between the first irradiation units in a first direction is longer than the interval between the second irradiation units, the generation unit can generate, based on the first detection signals generated by the light receiving unit for the plurality of irradiation units, the recognition information for recognizing at least one of the position or movement of the object in a second direction perpendicular to the first direction.
  • A plurality of sensors each having the irradiation unit and the light receiving unit, the irradiation unit of each sensor irradiating a different irradiation range, can be further provided. In this case, the skin detection unit can detect whether or not the object is skin based on the first and second detection signals generated for each of the plurality of sensors, and the generation unit can generate the recognition information based on at least one of the first or second detection signals generated for each of the plurality of sensors.
  • A proximity detection unit that detects, based on the first detection signal, whether or not the object has entered a predetermined detection range can be further provided, and the skin detection unit can detect whether or not the object is skin based on the first and second detection signals in response to the detection that the object has entered the detection range.
  • An object detection unit that detects, based on the first detection signal, whether or not the object exists within a predetermined detection range can be further provided, and the information processing apparatus can treat the object as skin if it is detected that the object exists within the detection range.
  • a signal generation unit that generates an output signal having a magnitude according to the position of the object can be further provided, and the generation unit can generate the recognition information based on the output signal.
  • The first wavelength λ1 and the second wavelength λ2, which is longer than the first wavelength λ1, can satisfy 640 nm ≤ λ1 ≤ 1000 nm and 900 nm ≤ λ2 ≤ 1100 nm.
  • The first irradiation unit can irradiate invisible light of the first wavelength λ1, and the second irradiation unit can irradiate invisible light of the second wavelength λ2.
  • The light receiving unit can be provided with a visible light cut filter that blocks visible light from entering the light receiving unit.
  • An information processing method according to the first aspect of the present disclosure is an information processing method of an information processing apparatus including a first irradiation unit that irradiates light of a first wavelength, a second irradiation unit that irradiates light of a second wavelength different from the first wavelength, and a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength. The information processing method includes a skin detection step of detecting, based on the first and second detection signals, whether or not the object is skin, and a generation step of generating recognition information for recognizing at least one of the position or movement of the object detected as skin, based on at least one of the first or second detection signals.
  • The program according to the first aspect of the present disclosure causes a computer of an information processing apparatus, the apparatus including an irradiation unit having a first irradiation unit that irradiates light of a first wavelength and a second irradiation unit that irradiates light of a second wavelength different from the first wavelength, and a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength, to function as: a skin detection unit that detects, based on the first and second detection signals, whether or not the object is skin; and a generation unit that generates recognition information for recognizing at least one of the position or movement of the object detected as skin, based on at least one of the first or second detection signals.
  • In the first aspect of the present disclosure, whether or not the object is skin is detected based on the first and second detection signals, and recognition information for recognizing at least one of the position or movement of the object detected as skin is generated based on at least one of the first or second detection signals.
  • An electronic apparatus according to a second aspect of the present disclosure includes a first irradiation unit that irradiates light of a first wavelength, a second irradiation unit that irradiates light of a second wavelength different from the first wavelength, a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength, a skin detection unit that detects, based on the first and second detection signals, whether or not the object is skin, a generation unit that generates recognition information for recognizing at least one of the position or movement of the object detected as skin, based on at least one of the first or second detection signals, and a processing unit that performs corresponding processing according to a recognition result based on the recognition information.
  • In the second aspect of the present disclosure, whether or not the object is skin is detected based on the first and second detection signals, recognition information for recognizing at least one of the position or movement of the object detected as skin is generated based on at least one of the first or second detection signals, and corresponding processing is performed according to a recognition result based on the recognition information.
  • FIG. 5 is a diagram illustrating an example of an output result output from a sensor when an object moves as illustrated in FIG. 4.
  • FIG. 6 is a diagram showing a first example of a skin detection signal obtained from the output result of a sensor when an object moves as shown in FIG. 4.
  • FIG. 24 is a diagram illustrating an example of an output result output from a PD when an object moves as illustrated in FIG.
  • FIG. 24 is a block diagram showing a detailed configuration example of the digital photo frame of FIG.
  • FIG. 24 shows a detailed configuration example of the control unit of FIG.
  • FIG. 35 is a diagram showing an example of the output result of the PD when an object moves from left to right in the figure. A block diagram showing a detailed configuration example of the digital photo frame of FIG. A block diagram showing a detailed configuration example of the control unit of FIG.
  • FIG. 4 is a second diagram for explaining a distance between two sensors shown in FIG. 1.
  • FIG. 22 is a diagram showing an example of the digital photo frame as viewed from the lower side in the figure.
  • FIG. 33 is a diagram showing an example of the digital photo frame as viewed from the lower side in the figure.
  • A first diagram for explaining adjustment of the output of the LED.
  • A second diagram for explaining adjustment of the output of the LED.
  • First embodiment: an example having a plurality of sensors, each including a PD and an LED unit
  • Second embodiment: an example having one LED unit and a plurality of PDs
  • Third embodiment: an example having a plurality of LED units and one PD
  • Modifications and others
  • FIG. 1 shows a configuration example of a digital photo frame 1 according to the first embodiment.
  • the digital photo frame 1 has a display screen 1a for displaying a still image (for example, an image taken as a photograph) or a moving image.
  • In the following, the user's hand or the like is described as the recognition target whose position or movement is to be recognized; however, the recognition target is not limited to the user's hand or the like, and any portion of human skin may be used.
  • the digital photo frame 1 uses the sensors 21 1 and 21 2 to recognize the position and movement of the user's hand, etc., and changes the content of still images and moving images displayed on the display screen 1a based on the recognition result, for example.
  • The sensors 21 1 and 21 2 are used to determine whether or not an object close to the digital photo frame 1 is human skin, and to recognize at least one of the position or movement of only an object determined to be human skin.
  • the sensor 21 1 or 21 2 is simply referred to as the sensor 21.
  • the Z direction refers to the normal direction of the display screen 1a in FIG.
  • FIG. 2 shows an example of an output result output from the sensor 21 when the object 41 moves so as to approach the sensor 21.
  • the fan-shaped figure surrounding the object 41 indicates the detection range of the sensor 21.
  • The horizontal axis represents time t, and the vertical axis represents the output result V from the sensor 21. The same applies to A of FIG. 3 and B of FIG. 3 described later.
  • FIG. 3 shows an example of an output result output from the sensor 21 when the object 41 moves away from the sensor 21.
  • The digital photo frame 1 recognizes the position and movement of the object 41 in the Z direction based on the output results from the sensor 21 as shown in FIGS. 2 and 3, and changes the display contents of the display screen 1a according to the recognition result.
  • FIG. 4 shows an example when the digital photo frame 1 is viewed from the lower side in FIG.
  • FIG. 5 shows an example of output results respectively output from the sensor 21 1 and the sensor 21 2 when the object 41 moves as shown in FIG.
  • A of FIG. 5 shows the output result from the sensor 21 1, and B of FIG. 5 shows the output result from the sensor 21 2.
  • The horizontal axis represents time t, and the vertical axis represents the output result V from the sensor 21.
  • As shown in FIG. 4, the object 41 approaches from the left side of the sensor 21 1, passes through the vicinity directly above the sensor 21 1, moves away from the sensor 21 1, and moves toward the sensor 21 2 present on the right side of the sensor 21 1.
  • In this case, the output result from the sensor 21 1 increases as the object 41 approaches, and reaches its maximum when the object 41 passes near the sensor 21 1. Then, as the object 41 passes the sensor 21 1 and moves away, the output result from the sensor 21 1 decreases.
  • After passing directly above the sensor 21 1, the object 41 approaches from the left side of the sensor 21 2, passes through the vicinity directly above the sensor 21 2, moves away from the sensor 21 2, and moves to the right side of the sensor 21 2.
  • In this case, the output result from the sensor 21 2 increases as the object 41 approaches, and reaches its maximum when the object 41 passes near the sensor 21 2. Then, as the object 41 passes the sensor 21 2 and moves away, the output result from the sensor 21 2 decreases.
  • Therefore, when the object 41 moves as shown in FIG. 4, upward convex peaks appear in the output results in the order of the sensor 21 1 and the sensor 21 2, as shown in A of FIG. 5 and B of FIG. 5.
  • Accordingly, the movement of the object 41 can be recognized from the timing at which the maximum appears in the output result from the sensor 21 1 and the timing at which the maximum appears in the output result from the sensor 21 2.
  • In the digital photo frame 1, whether or not the object 41 within the detection range of the sensor 21 is human skin is also determined based on the output result from the sensor 21, and a skin detection signal representing the determination result is generated.
  • The position and movement of the object 41 are then recognized based on the output results from the sensors 21 1 and 21 2.
  • the skin detection signal generation method will be described in detail with reference to FIG.
  • FIG. 6 shows an example of a skin detection signal generated in the digital photo frame 1 when the object 41 moves as shown in FIG.
  • The skin detection signal shown in A of FIG. 6 represents whether or not skin is detected within the detection range of the sensor 21 1. The skin detection signal shown in B of FIG. 6 represents whether or not skin is detected within the detection range of the sensor 21 2.
  • The skin detection signal is set to ON (for example, 1) when the object 41 as human skin exists within the detection range of the sensor 21, and is set to OFF (for example, 0) when no object 41 as human skin exists within the detection range of the sensor 21.
  • The horizontal axis represents time t, and the vertical axis represents whether or not skin has been detected.
  • As shown in FIG. 4, the object 41 passes through the detection range of the sensor 21 2 after passing through the detection range of the sensor 21 1.
  • Therefore, the skin detection signals are turned ON in the order of the skin detection signal shown in A of FIG. 6 and the skin detection signal shown in B of FIG. 6.
  • The digital photo frame 1 may recognize the position, movement, and the like of the object 41 based on the skin detection signals as shown in A of FIG. 6 and B of FIG. 6, instead of the output results from the sensor 21 1 and the sensor 21 2.
  • That is, the movement of the object 41 can be recognized from the timing at which the skin detection signal shown in A of FIG. 6 is turned ON and the timing at which the skin detection signal shown in B of FIG. 6 is turned ON.
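  • As a concrete illustration of this timing-based recognition, the following minimal sketch infers the horizontal direction of a swipe from the times at which each sensor's output peaks or its skin detection signal turns ON. It is not code from the patent, and it assumes the sensor 21 1 is mounted to the left of the sensor 21 2, as in FIG. 4 and FIG. 7.

        # Minimal sketch: infer the swipe direction from per-sensor event times.
        # t_event_1 / t_event_2 are the times at which each sensor's output V
        # peaked (or at which its skin detection signal turned ON); None means
        # the object never entered that sensor's detection range.
        def recognize_swipe(t_event_1, t_event_2):
            if t_event_1 is None or t_event_2 is None:
                return None                 # seen by at most one sensor
            if t_event_1 < t_event_2:
                return "left_to_right"      # sensor 21 1 fired first (FIG. 4)
            if t_event_2 < t_event_1:
                return "right_to_left"      # sensor 21 2 fired first (FIG. 7)
            return None                     # simultaneous: direction ambiguous

        # Example: events at t = 0.8 s and t = 1.1 s -> "left_to_right"
        print(recognize_swipe(0.8, 1.1))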
  • FIG. 7 shows another example when the digital photo frame 1 is viewed from the lower side in FIG.
  • FIG. 8 shows an example of output results respectively output from the sensor 21 1 and the sensor 21 2 when the object 41 moves as shown in FIG.
  • A of FIG. 8 shows the output result from the sensor 21 1, and B of FIG. 8 shows the output result from the sensor 21 2.
  • The horizontal axis represents time t, and the vertical axis represents the output result V from the sensor 21.
  • As shown in FIG. 7, after passing directly above the sensor 21 2, the object 41 approaches from the right side of the sensor 21 1, passes through the vicinity directly above the sensor 21 1, and moves away from the sensor 21 1 to its left.
  • In this case, the output result from the sensor 21 1 increases as the object 41 approaches the sensor 21 1 after passing directly above the sensor 21 2, and reaches its maximum when the object 41 passes near the sensor 21 1. Then, as the object 41 passes the sensor 21 1 and moves away, the output result from the sensor 21 1 decreases.
  • As shown in FIG. 7, the object 41 approaches from the right side of the sensor 21 2, passes through the vicinity directly above the sensor 21 2, moves away from the sensor 21 2, and moves toward the sensor 21 1 present on the left side of the sensor 21 2.
  • In this case, the output result from the sensor 21 2 increases as the object 41 approaches, and reaches its maximum when the object 41 passes near the sensor 21 2. Then, as the object 41 passes the sensor 21 2 and moves away, the output result from the sensor 21 2 decreases.
  • Therefore, when the object 41 moves as shown in FIG. 7, upward convex peaks appear in the output results in the order of the sensor 21 2 and the sensor 21 1, as shown in A of FIG. 8 and B of FIG. 8.
  • Accordingly, the movement of the object 41 can be recognized from the timing at which the maximum appears in the output result from the sensor 21 1 and the timing at which the maximum appears in the output result from the sensor 21 2.
  • FIG. 9 shows an example of a skin detection signal obtained from the output result of the sensor 21 when the object 41 moves as shown in FIG.
  • A of FIG. 9 and B of FIG. 9 are configured similarly to A of FIG. 6 and B of FIG. 6. That is, the skin detection signal shown in A of FIG. 9 indicates whether or not skin is detected within the detection range of the sensor 21 1, and the skin detection signal shown in B of FIG. 9 indicates whether or not skin is detected within the detection range of the sensor 21 2.
  • As shown in FIG. 7, the object 41 passes through the detection range of the sensor 21 1 after passing through the detection range of the sensor 21 2.
  • Therefore, the skin detection signals are turned ON in the order of the skin detection signal shown in B of FIG. 9 and the skin detection signal shown in A of FIG. 9.
  • The digital photo frame 1 may recognize the position, movement, and the like of the object 41 based on the skin detection signals as shown in A of FIG. 9 and B of FIG. 9, instead of the output results from the sensors 21 1 and 21 2.
  • That is, the movement of the object 41 can be recognized from the timing at which the skin detection signal shown in A of FIG. 9 is turned ON and the timing at which the skin detection signal shown in B of FIG. 9 is turned ON.
  • FIG. 10 shows a detailed configuration example of the digital photo frame 1.
  • The digital photo frame 1 includes a CPU (Central Processing Unit) 61, a ROM (Read Only Memory) 62, a RAM (Random Access Memory) 63, a bus 64, an input / output interface 65, a control unit 66 including a plurality of sensors 21 1 to 21 N, a display unit 67 having the display screen 1a, a storage unit 68, and a drive 69.
  • In the digital photo frame 1 shown in FIG. 1, two sensors 21 1 and 21 2 are provided; however, as shown in FIG. 10, three or more sensors 21 1 to 21 N may be provided.
  • the CPU 61 performs various processes by executing programs stored in the ROM 62 and the storage unit 68, for example.
  • the CPU 61 detects whether or not the object 41 close to the digital photo frame 1 is skin based on the skin detection signal supplied from the control unit 66 via the input / output interface 65 and the bus 64. .
  • When the CPU 61 detects that the object 41 is skin, the CPU 61 recognizes the position, movement, and the like of the object 41 as skin based on the gesture recognition information supplied from the control unit 66 via the input / output interface 65 and the bus 64.
  • The gesture recognition information is information for recognizing the position and movement of the object 41. As the gesture recognition information, for example, the output results from the sensors 21 1 and 21 2, the skin detection signals, and the like are employed. The same applies to the second and third embodiments described later.
  • the CPU 61 performs corresponding processing according to the recognition result based on the gesture recognition information.
  • That is, for example, the CPU 61 reads out a plurality of still images held in the storage unit 68 via the bus 64 and the input / output interface 65, and sequentially displays the read still images on the display screen 1a in a predetermined order.
  • When the CPU 61 recognizes the movement of the object 41 as shown in FIG. 2, the CPU 61 controls the display unit 67 via the bus 64 and the input / output interface 65 to enlarge the still image currently being displayed among the plurality of still images and display it on the display screen 1a.
  • When the CPU 61 recognizes the movement of the object 41 as shown in FIG. 3, the CPU 61 controls the display unit 67 via the bus 64 and the input / output interface 65 to reduce the enlarged still image being displayed to its original size and display it on the display screen 1a.
  • When the CPU 61 recognizes the movement of the object 41 as illustrated in FIG. 4, the CPU 61 controls the display unit 67 via the bus 64 and the input / output interface 65 to display, on the display screen 1a, the still image to be displayed next among the plurality of still images.
  • When the CPU 61 recognizes the movement of the object 41 as shown in FIG. 7, the CPU 61 controls the display unit 67 via the bus 64 and the input / output interface 65 to display, on the display screen 1a, the still image displayed immediately before among the plurality of still images.
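  • As a rough illustration of this correspondence between recognized gestures and display operations, the sketch below maps gesture labels to photo-frame actions. The gesture names and the ui object's methods are hypothetical placeholders for illustration, not identifiers from the patent.

        # Minimal sketch of the "corresponding processing" step.
        # The gesture labels and the ui object's methods are illustrative only.
        def handle_gesture(gesture, ui):
            if gesture == "approach":          # movement as in FIG. 2
                ui.zoom_in_current_image()
            elif gesture == "recede":          # movement as in FIG. 3
                ui.restore_original_size()
            elif gesture == "left_to_right":   # movement as in FIG. 4
                ui.show_next_image()
            elif gesture == "right_to_left":   # movement as in FIG. 7
                ui.show_previous_image()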
  • the CPU 61 controls the control unit 66, the display unit 67, the drive 69, and the like.
  • the ROM 62 holds (stores) programs executed by the CPU 61 and other data in advance.
  • The RAM 63 is, for example, a working memory used for the processing performed by the CPU 61; it holds data as instructed by the CPU 61 and supplies data whose readout is instructed by the CPU 61 to the CPU 61.
  • the bus 64 is connected to the CPU 61, the ROM 62, the RAM 63, and the input / output interface 65, and relays data exchange.
  • the input / output interface 65 is connected to the bus 64, the control unit 66, the display unit 67, the storage unit 68, and the drive 69, and relays data exchange.
  • The control unit 66 includes the sensors 21 n, generates a skin detection signal and gesture recognition information for each sensor 21 n based on the output result from the sensor 21 n, and supplies them to the CPU 61 via the input / output interface 65 and the bus 64. Details of the control unit 66 will be described with reference to FIG. 11.
  • the display unit 67 displays, for example, a still image supplied from the CPU 61 via the bus 64 and the input / output interface 65 on the display screen 1a.
  • the storage unit 68 includes, for example, a hard disk, and stores programs executed by the CPU 61 and various data.
  • the drive 69 drives a removable medium 70 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and acquires programs, data, and the like recorded therein.
  • the acquired program and data are transferred to and stored in the storage unit 68 as necessary.
  • A recording medium for recording (storing) a program that is installed in a computer and can be executed by the computer is configured by the removable medium 70, which is a package medium made up of a magnetic disk (including a flexible disk), an optical disk (a CD-ROM (Compact Disc-Read Only Memory) or a DVD (Digital Versatile Disc)), a magneto-optical disk (including an MD (Mini Disc)), or a semiconductor memory, or by the ROM 62 in which the program is temporarily or permanently stored, the hard disk constituting the storage unit 68, and the like.
  • The recording of the program on the recording medium can be performed using a wired or wireless communication medium such as a local area network, the Internet, or digital satellite broadcasting, via an interface such as a router or a modem connected as necessary.
  • FIG. 11 shows a detailed configuration of the control unit 66.
  • control unit 66 includes a processing unit 91, a current control unit 92n, a timing control unit 93n, a gain control unit 94n, and an AD (Analog / Digital) conversion unit 95n.
  • The sensor 21n includes an LED (Light Emitting Diode) driver 111n, an LED 112an, an LED 112bn, a lens 113an, a lens 113bn, a PD (Photo Detector) 114n, and a lens 115n.
  • the number of the LEDs 112an and the number of the LEDs 112bn are not limited to one each, and can be plural.
  • the processing unit 91 controls the current control unit 92n to instruct the current control unit 92n to supply current to the LED 112an and the LED 112bn.
  • the processing unit 91 also controls the timing control unit 93n to instruct the timing control unit 93n to turn on and off the LEDs 112an and turn on and off the LEDs 112bn.
  • the current control unit 92n controls the LED driver 111n so that the current instructed from the processing unit 91 flows.
  • the timing control unit 93n controls the LED driver 111n so as to turn on and off at the timing instructed by the processing unit 91.
  • the LED driver 111n repeats turning on only the LED 112an, turning on only the LED 112bn, and turning off the LED 112an and the LED 112bn according to control from the current control unit 92n and the timing control unit 93n.
  • the processing unit 91 controls the gain control unit 94n. Thereby, the degree of gain adjustment by the gain control process performed in the PD 114n is adjusted.
  • The processing unit 91 is supplied with a luminance signal Vλ1, a luminance signal Vλ2, and a luminance signal Vλoff from the PD 114n of each sensor 21n via the AD conversion unit 95n.
  • For example, the processing unit 91 generates, based on the luminance signal Vλ1, the luminance signal Vλ2, and the luminance signal Vλoff from the AD conversion unit 95n, a skin detection signal indicating whether or not skin exists within the detection range of the sensor 21n.
  • The processing unit 91 also generates gesture recognition information based on, for example, the luminance signal Vλ1 from the AD conversion unit 95n.
  • the processing unit 91 supplies the generated skin detection signal and gesture recognition information to the CPU 61 via the input / output interface 65 and the bus 64 of FIG.
  • Here, the received light intensity of the reflected light obtained based on the luminance signal Vλ1 is adopted as the gesture recognition information, but, for example, the skin detection signal can also be adopted. The same applies to the second and third embodiments described later.
  • The processing unit 91 may determine, based on the generated skin detection signals, whether or not the object 41 as skin exists in any of the detection ranges of the sensors 21n, and supply the gesture recognition information to the CPU 61 only when it is determined that the object 41 as skin exists.
  • the processing unit 91 may supply only the gesture recognition information to the CPU 61.
  • the CPU 61 performs a corresponding process according to the gesture recognized based on the gesture recognition information from the processing unit 91.
  • Alternatively, the processing unit 91 may recognize the movement of the object 41 based on the generated gesture recognition information and supply recognition result information representing the recognition result to the CPU 61 instead of the gesture recognition information.
  • As the recognition result information, for example, "1" is adopted when the object 41 moves so as to approach the sensor 21n, "2" when the object 41 moves away from the sensor 21n, "3" when the object 41 moves from left to right, and "4" when the object 41 moves from right to left.
  • the processing unit 91 calculates the position of an object (for example, a user's hand) as skin based on the output result from each sensor 21n.
  • When the processing unit 91 supplies the recognition result information to the CPU 61, the CPU 61 performs processing according to the recognition result information from the processing unit 91. The same applies to the second and third embodiments.
  • The processing unit 91 may also supply to the CPU 61 only the OR of the skin detection signals generated for the respective sensors 21n (for example, a signal indicating detection when at least one skin detection signal indicating that skin has been detected exists). The same applies to the second and third embodiments described later.
  • the gain control unit 94n controls the PD 114n and adjusts parameters used for gain control processing performed by the PD 114n. This parameter represents how much the gain of the light reception luminance V obtained by the light reception of the PD 114n is adjusted.
  • That is, the gain is adjusted so that the luminance signal V is decreased when the luminance signal V is large and is increased when the luminance signal V is small.
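  • One plausible way such a gain adjustment could be driven is sketched below; the target level and step size are assumptions for illustration and are not specified in the patent.

        # Minimal sketch of gain control toward a mid-range target level.
        # TARGET and GAIN_STEP are illustrative values, not from the patent.
        TARGET = 0.5       # desired received-light luminance (normalized 0..1)
        GAIN_STEP = 0.05   # fraction by which the gain is nudged each cycle

        def adjust_gain(gain, luminance_v):
            # Decrease the gain when the luminance is large, increase it when small.
            if luminance_v > TARGET:
                gain *= (1.0 - GAIN_STEP)
            elif luminance_v < TARGET:
                gain *= (1.0 + GAIN_STEP)
            return gain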
  • The AD conversion unit 95n performs AD conversion on the outputs V from the PD 114n (the luminance signal Vλ1, the luminance signal Vλ2, and the luminance signal Vλoff) and supplies the outputs V after AD conversion to the processing unit 91.
  • the LED driver 111n controls lighting and extinguishing of the LED 112an and lighting and extinguishing of the LED 112bn according to an instruction from the timing control unit 93n.
  • the LED driver 111n controls the current flowing to the LED 112an and the current flowing to the LED 112bn in accordance with an instruction from the current control unit 92n.
  • a lens 113an is provided in front of the LED 112an.
  • The LED 112an irradiates light of the wavelength λ1 (for example, infrared light of 870 [nm]) within the detection range of the sensor 21n according to control from the LED driver 111n.
  • That is, the LED 112an is turned on (irradiates the light of the wavelength λ1) and turned off by causing the current instructed by the current control unit 92n to flow at the timing instructed by the timing control unit 93n.
  • a lens 113bn is provided on the front surface of the LED 112bn.
  • The LED 112bn irradiates light of the wavelength λ2, which is longer than the wavelength λ1 (for example, infrared light of 950 [nm]), within the detection range of the sensor 21n according to control from the LED driver 111n.
  • the LED 112an and the LED 112bn are provided with a lens 113an and a lens 113bn, respectively. Therefore, the LED 112an and the LED 112bn can irradiate the irradiation light uniformly within the detection range of the sensor 21n without causing uneven irradiation.
  • The combination (λ1, λ2) of the wavelength λ1 and the wavelength λ2 is determined in advance based on, for example, the spectral reflection characteristics with respect to human skin.
  • The spectral reflection characteristics with respect to human skin will be described in detail with reference to FIG. 12.
  • the LED 112bn is repeatedly lit and extinguished by the current instructed by the current controller 92n flowing in accordance with the timing instructed by the timing controller 93n.
  • When the LED 112bn is turned on, light of the wavelength λ2 is emitted from the LED 112bn.
  • a lens 115n is provided on the front surface of the PD 114n.
  • the PD 114n receives reflected light from an object within the detection range of the sensor 21n when the LED 112an is turned on.
  • That is, the PD 114n receives the reflected light from the object 41 irradiated with the light of the wavelength λ1 when the LED 112an is turned on.
  • The PD 114n performs gain control processing on the received light luminance Vλ1 obtained by the light reception, and outputs the received light luminance Vλ1 after processing to the AD conversion unit 95n.
  • The PD 114n also receives the reflected light from the object 41 irradiated with the light of the wavelength λ2 when the LED 112bn is turned on.
  • The PD 114n performs gain control processing on the received light luminance Vλ2 obtained by the light reception, and outputs the received light luminance Vλ2 after processing to the AD conversion unit 95n.
  • the PD 114n receives reflected light from the object 41 irradiated with external light other than the irradiation light when the LED 112an and the LED 112bn are turned off.
  • The PD 114n performs gain control processing on the received light luminance Vλoff obtained by the light reception, and outputs the received light luminance Vλoff after processing to the AD conversion unit 95n.
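  • Putting these steps together, the sketch below shows one way the per-sensor measurement cycle could be driven in software. The led_driver and pd objects are hypothetical stand-ins for the LED driver 111n (controlling the LED 112an and the LED 112bn) and the PD 114n; they are not interfaces defined in the patent.

        # Minimal sketch of one measurement cycle for a sensor 21n.
        def measure_cycle(led_driver, pd):
            led_driver.set(led_a=True, led_b=False)   # irradiate wavelength λ1 only
            v_lambda1 = pd.read()                     # reflection of λ1 (+ ambient)

            led_driver.set(led_a=False, led_b=True)   # irradiate wavelength λ2 only
            v_lambda2 = pd.read()                     # reflection of λ2 (+ ambient)

            led_driver.set(led_a=False, led_b=False)  # both LEDs off
            v_off = pd.read()                         # ambient (external) light only

            return v_lambda1, v_lambda2, v_off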
  • FIG. 12 shows spectral reflection characteristics with respect to human skin.
  • The horizontal axis indicates the wavelength of the irradiation light irradiated on the human skin, and the vertical axis indicates the reflectance of the irradiation light irradiated on the human skin.
  • the reflectance of reflected light obtained by irradiating human skin with light of 870 [nm] as infrared rays is about 63%.
  • Further, the reflectance of the reflected light obtained by irradiating human skin with light of 950 [nm] as infrared rays is about 50%.
  • Therefore, the combination (λ1, λ2) is, for example, (870, 950).
  • This combination is a combination in which the reflectance when human skin is irradiated with the light of the wavelength λ1 is larger than the reflectance when it is irradiated with the light of the wavelength λ2.
  • Therefore, the luminance value represented by the luminance signal Vλ1 obtained from the reflected light from the object 41 as skin is a relatively large value, whereas the luminance value represented by the luminance signal Vλ2 obtained from the reflected light from the object 41 as skin is a relatively small value.
  • The luminance signal Vλoff is used to remove the influence of external light other than the irradiation light from the LED 112an and the LED 112bn, thereby improving the accuracy of skin detection.
  • If a visible light cut filter that cuts (blocks) visible light is provided on the front surface of the PD 114n, the influence of visible light as external light can be removed, and the accuracy of the skin detection signal can be further improved. The same applies to the PDs described in the second and third embodiments.
  • In addition, for objects other than human skin, this combination is one in which the reflectance when irradiating light of the wavelength λ1 is almost the same as the reflectance when irradiating light of the wavelength λ2.
  • Therefore, for an object other than human skin, the difference value represented by the normalized difference signal Rdiff is a relatively small positive or negative value.
  • For example, the processing unit 91 calculates the normalized difference signal Rdiff based on the luminance signal Vλ1, the luminance signal Vλ2, and the luminance signal Vλoff from the AD conversion unit 95n. Then, the processing unit 91 generates the skin detection signal based on whether or not the calculated normalized difference signal Rdiff is equal to or greater than a predetermined threshold value (for example, a threshold value smaller than the value obtained for skin and larger than the value obtained for objects other than skin).
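  • To make the discrimination step concrete, the sketch below computes a normalized difference from the three luminance readings after subtracting the ambient component and compares it with a threshold. The exact normalization and threshold used in the patent are not reproduced here; the formula (Vλ1 minus Vλ2, divided by the ambient-corrected Vλ1) and the threshold value are assumptions for illustration.

        # Minimal sketch of skin discrimination from Vλ1, Vλ2 and Vλoff.
        # THRESHOLD and the normalization are illustrative, not from the patent.
        THRESHOLD = 0.1

        def is_skin(v_lambda1, v_lambda2, v_off):
            a = v_lambda1 - v_off      # reflected component at λ1 (ambient removed)
            b = v_lambda2 - v_off      # reflected component at λ2 (ambient removed)
            if a <= 0:
                return False           # no meaningful reflection received
            r_diff = (a - b) / a       # normalized difference signal Rdiff
            return r_diff >= THRESHOLD

        # Rough worked example using the reflectances of FIG. 12 (about 63% at
        # 870 nm and about 50% at 950 nm): Rdiff ≈ (0.63 - 0.50) / 0.63 ≈ 0.21
        # for skin, whereas a material with nearly equal reflectance at the two
        # wavelengths gives Rdiff close to 0, below the threshold.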
  • Note that the wavelength λ1 is generally in the range of 640 [nm] to 1000 [nm], and the wavelength λ2 is in the range of 900 [nm] to 1100 [nm].
  • However, if the wavelength λ1 is a wavelength in the visible light region, it makes the user feel dazzled or affects the color tone that the user perceives when viewing an image on the display screen 1a.
  • Therefore, it is desirable that the value of the wavelength λ1 be a value in the invisible light region of 800 [nm] or more and less than 900 [nm], and that the value of the wavelength λ2 be a value in the invisible light region of 900 [nm] or more.
  • FIG. 13 shows an example of irradiation timing in each sensor 21n.
  • FIG. 13 shows an example of irradiation timings of the three sensors 21 1 to 21 3 in order to avoid the drawing from becoming complicated.
  • FIG. 13A to 13C show the irradiation timings of the sensors 21 1 to 21 3 , respectively.
  • the horizontal axis represents time t.
  • "LED λ1" indicates a lighting period during which only the LED 112an is lit (a period during which only the light of the wavelength λ1 is irradiated).
  • "LED λ2" indicates a lighting period during which only the LED 112bn is lit (a period during which only the light of the wavelength λ2 is irradiated).
  • "LED off" indicates a turn-off period during which the LED 112an and the LED 112bn are turned off (a period during which neither the light of the wavelength λ1 nor the light of the wavelength λ2 is irradiated).
  • As shown in A of FIG. 13, the sensor 21 1 lights only the LED 112a 1 during the lighting period "LED λ1" and lights only the LED 112b 1 during the lighting period "LED λ2". Then, the LED 112a 1 and the LED 112b 1 are turned off during the turn-off period "LED off".
  • As shown in B of FIG. 13, immediately after the turn-off period "LED off" of the sensor 21 1, the sensor 21 2 lights only the LED 112a 2 during the lighting period "LED λ1" and lights only the LED 112b 2 during the lighting period "LED λ2". Then, the LED 112a 2 and the LED 112b 2 are turned off during the turn-off period "LED off".
  • When the digital photo frame 1 is provided with only the two sensors 21 1 and 21 2, the sensors 21 1 and 21 2 wait for a certain interval (time) and then repeat the processing described with reference to A of FIG. 13 and B of FIG. 13. During this interval (not shown), for example, the processing unit 91 of the control unit 66 generates the skin detection signals and the gesture recognition information and outputs them to the CPU 61.
  • On the other hand, when the digital photo frame 1 is provided with three sensors 21 1 to 21 3, the sensor 21 1 and the sensor 21 2 perform the processing described with reference to A of FIG. 13 and B of FIG. 13, respectively.
  • Then, as shown in C of FIG. 13, immediately after the turn-off period "LED off" of the sensor 21 2, the sensor 21 3 lights only the LED 112a 3 during the lighting period "LED λ1" and lights only the LED 112b 3 during the lighting period "LED λ2".
  • Then, the sensor 21 3 turns off the LED 112a 3 and the LED 112b 3 during the turn-off period "LED off".
  • Thereafter, the sensors 21 1 to 21 3 wait, for example, for a certain interval I and then repeat the processing described with reference to A of FIG. 13 to C of FIG. 13. During the interval I, for example, the processing unit 91 of the control unit 66 generates the skin detection signals and the gesture recognition information and outputs them to the CPU 61.
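  • The sketch below illustrates this kind of time-multiplexed scheduling, in which each sensor performs its λ1 / λ2 / off cycle in turn so that the irradiation periods of different sensors do not overlap, followed by the fixed interval I. It reuses the hypothetical measure_cycle helper from the earlier sketch, and the interval value is an arbitrary illustration.

        import time

        # Minimal sketch of time-multiplexed measurement over N sensors.
        # `sensors` is a list of (led_driver, pd) pairs, one pair per sensor 21n.
        INTERVAL_I = 0.05  # seconds; the fixed interval I (illustrative value)

        def scan_all_sensors(sensors, process):
            while True:
                readings = []
                for led_driver, pd in sensors:
                    # Each sensor runs its cycle in turn, so no two sensors
                    # irradiate at the same time.
                    readings.append(measure_cycle(led_driver, pd))
                process(readings)       # e.g. generate skin detection signals
                time.sleep(INTERVAL_I)  # wait for the fixed interval I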
  • FIG. 14 shows a configuration example of a digital photo frame 131 having three sensors 21 1 to 21 3 arranged in a triangle shape.
  • The digital photo frame 131 is provided with the sensors 21 1 and 21 2 arranged in the left-right direction of the display screen 1a in the drawing, and with the sensor 21 3 below the display screen 1a.
  • the digital photo frame 131 can recognize the movement in the X direction (left and right direction) based on the output results from the sensors 21 1 and 21 2 . Similarly, the digital photo frame 131 can recognize the movement in the Y direction (vertical direction) and the like based on the output results from the sensors 21 1 and 21 3 .
  • output results from the sensors 21 2 and 21 3 can be used instead of the output results from the sensors 21 1 and 21 3 .
  • FIG. 15 shows a configuration example of a digital photo frame 151 having three sensors 21 1 to 21 3 arranged in an L shape.
  • the digital photo frame 151 is provided with a sensor 21 1 at the upper left corner, a sensor 21 2 at the lower left corner, and a sensor 21 3 at the lower right corner.
  • the digital photo frame 151 can recognize the movement in the X direction (left and right direction) based on the output results from the sensors 21 2 and 21 3, and based on the output results from the sensors 21 1 and 21 2 It is possible to recognize movement in the (vertical direction).
  • FIG. 1 illustrates the case where two sensors 21 1 and 21 2 are provided, and FIGS. 14 and 15 illustrate the case where three sensors 21 1 to 21 3 are provided, but the number of sensors 21n is not limited to these.
  • For example, a new sensor 21 4 may be provided between the sensor 21 1 and the sensor 21 2 in the digital photo frame 151 of FIG. 15.
  • In this case, the movement in the Y direction can be recognized based on the output results from the sensors 21 1, 21 2, and 21 4, so that the movement in the Y direction can be recognized more accurately.
  • The digital photo frame may also be provided with a distance sensor (for example, a PD or a capacitance sensor) that outputs different output results depending on the distance to the object 41.
  • In this case, the digital photo frame is provided with a mixture of the sensors 21n and distance sensors. The output from the distance sensor is supplied to the processing unit 91 and used to generate the gesture recognition information. The same applies to the second and third embodiments described later.
  • In this case, the DSP (Digital Signal Processor) functioning as the processing unit 91 can be an inexpensive DSP with a relatively low processing speed, and the manufacturing cost of the digital photo frame 1 can be reduced.
  • This first gesture recognition process is started, for example, when the digital photo frame 1 is powered on.
  • In step S1, the processing unit 91 pays attention to a predetermined sensor 21 n among the plurality of sensors 21 1 to 21 N and sets it as the sensor of interest 21 n.
  • In step S2, the processing unit 91 controls the sensor of interest 21 n via the current control unit 92 n, the timing control unit 93 n, the gain control unit 94 n, and the like to perform Vλ1 acquisition processing in which the luminance signal Vλ1 is generated and output to the AD conversion unit 95 n.
  • Then, the AD conversion unit 95 n performs AD conversion on the luminance signal Vλ1 from the sensor of interest 21 n and supplies the luminance signal Vλ1 after AD conversion to the processing unit 91.
  • In step S3, the processing unit 91 controls the sensor of interest 21 n via the current control unit 92 n, the timing control unit 93 n, the gain control unit 94 n, and the like to perform Vλ2 acquisition processing in which the luminance signal Vλ2 is generated and output to the AD conversion unit 95 n.
  • Then, the AD conversion unit 95 n performs AD conversion on the luminance signal Vλ2 from the sensor of interest 21 n and supplies the luminance signal Vλ2 after AD conversion to the processing unit 91.
  • In step S4, the processing unit 91 controls the sensor of interest 21 n via the current control unit 92 n, the timing control unit 93 n, the gain control unit 94 n, and the like to perform Vλoff acquisition processing in which the luminance signal Vλoff is generated and output to the AD conversion unit 95 n.
  • Then, the AD conversion unit 95 n performs AD conversion on the luminance signal Vλoff from the sensor of interest 21 n and supplies the luminance signal Vλoff after AD conversion to the processing unit 91.
  • In step S5, the processing unit 91 performs skin discrimination processing for generating a skin detection signal indicating whether or not an object that is skin exists within the detection range of the sensor of interest 21 n, based on the luminance signal Vλ1, the luminance signal Vλ2, and the luminance signal Vλoff supplied from the sensor of interest 21 n via the AD conversion unit 95 n. Details of the skin discrimination processing performed by the processing unit 91 will be described in detail with reference to FIG.
  • In step S6, the processing unit 91 generates gesture recognition information based on the luminance signal Vλ1 supplied from the sensor of interest 21 n via the AD conversion unit 95 n.
  • Step S5 and Step S6 may be performed collectively when proceeding from Step S7 to Step S8.
  • As the gesture recognition information, the luminance signal Vλ1 representing the received light intensity of the reflected light received by the PD 114n can be used as it is.
  • Since the received light intensity of the reflected light from the object, that is, the luminance signal Vλ1, varies inversely with the square of the distance from the sensor of interest 21 n to the object, the luminance signal Vλ1 can be used as information reflecting the position (distance) of the object.
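  • As a rough illustration of that relationship, the sketch below estimates a relative distance from the received intensity under an idealized inverse-square model. The calibration constant is an assumption for illustration, and such an estimate is only approximate for real reflections.

        import math

        # Minimal sketch: relative distance from the received reflected-light
        # intensity, assuming an idealized inverse-square law  V ≈ K / d².
        K = 1.0  # illustrative calibration constant (e.g. measured at a known distance)

        def estimate_distance(v_lambda1):
            if v_lambda1 <= 0:
                return float("inf")   # no reflection received: treat as far away
            return math.sqrt(K / v_lambda1)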
  • Alternatively, an average of the luminance value represented by the luminance signal Vλ1 and the luminance value represented by the luminance signal Vλ2 may be employed as the gesture recognition information.
  • The processing unit 91 can also employ the skin detection signal generated in step S5 as the gesture recognition information. Further, the processing unit 91 may generate the gesture recognition information based on the luminance signal Vλ2 supplied from the sensor of interest 21 n via the AD conversion unit 95 n, as in the case of the luminance signal Vλ1.
  • The processing unit 91 supplies the gesture recognition information generated based on at least one of the luminance signal Vλ1 or the luminance signal Vλ2, together with the generated skin detection signal, to the CPU 61 via the input / output interface 65 and the bus 64.
  • In step S7, the processing unit 91 determines whether or not all of the plurality of sensors 21 1 to 21 N have been set as the sensor of interest. If the processing unit 91 determines that not all of the plurality of sensors 21 1 to 21 N have been set as the sensor of interest, the process returns to step S1.
  • In step S1, the processing unit 91 sets a sensor 21 n that has not yet been set as the sensor of interest among the plurality of sensors 21 1 to 21 N as the new sensor of interest 21 n, and thereafter the same processing is performed.
  • In step S7, if the processing unit 91 determines that all of the plurality of sensors 21 1 to 21 N have been set as the sensor of interest, the process proceeds to step S8.
  • In step S8, the CPU 61 determines, based on the skin detection signals for the sensors 21 1 to 21 N supplied from the control unit 66 via the input / output interface 65 and the bus 64, whether or not the object 41 as skin has been detected within the detection range of any of the sensors 21n.
  • step S8 if the CPU 61 determines that the object 41 as skin has not been detected, the process returns to step S1, and thereafter the same process is performed.
  • step S8 if the CPU 61 determines that the object 41 as skin has been detected, the process proceeds to step S9.
  • In step S9, the CPU 61 recognizes (detects) the position, movement, and the like of the object 41 as skin based on the gesture recognition information for the sensors 21 1 to 21 N supplied from the control unit 66 via the input / output interface 65 and the bus 64.
  • step S10 the CPU 61 performs corresponding processing according to the recognition result based on the gesture recognition information.
  • That is, for example, when the CPU 61 recognizes the movement of the object 41 as shown in FIG. 2, the CPU 61 controls the display unit 67 via the bus 64 and the input / output interface 65 to enlarge the still image currently being displayed among the plurality of still images and display it on the display screen 1a.
  • When the CPU 61 recognizes the movement of the object 41 as shown in FIG. 3, the CPU 61 controls the display unit 67 via the bus 64 and the input / output interface 65 to reduce the enlarged still image being displayed to its original size and display it on the display screen 1a.
  • When the CPU 61 recognizes the movement of the object 41 as illustrated in FIG. 4, the CPU 61 controls the display unit 67 via the bus 64 and the input / output interface 65 to display, on the display screen 1a, the still image to be displayed next among the plurality of still images.
  • When the CPU 61 recognizes the movement of the object 41 as shown in FIG. 7, the CPU 61 controls the display unit 67 via the bus 64 and the input / output interface 65 to display, on the display screen 1a, the still image displayed immediately before among the plurality of still images.
  • After the completion of step S10, the process returns to step S1, and thereafter the same processing is repeated. Note that the first gesture recognition process ends, for example, when the digital photo frame 1 is powered off.
  • In step S31, the LED 112a n irradiates the detection range of the sensor of interest 21 n with the light of the wavelength λ1 under the control of the LED driver 111 n.
  • the LED 112b n is turned off in accordance with the control from the LED driver 111 n .
  • In step S32, the PD 114 n receives the reflected light of the light of the wavelength λ1 emitted from the LED 112a n (for example, if the object 41 exists within the detection range of the sensor of interest 21 n, the reflected light of the light of the wavelength λ1 irradiated on the object 41).
  • In step S33, the PD 114 n performs photoelectric conversion on the reflected light received in the process of step S32 and supplies the resulting luminance signal Vλ1 to the AD conversion unit 95 n. The process then returns to step S2 of FIG. 16 and proceeds to step S3.
  • Then, the AD conversion unit 95 n performs AD conversion on the luminance signal Vλ1 from the PD 114 n and supplies the luminance signal Vλ1 after AD conversion to the processing unit 91.
  • In step S3 of FIG. 16, the sensor of interest 21 n performs the Vλ2 acquisition processing in the same manner as the Vλ1 acquisition processing. This Vλ2 acquisition processing includes steps S31' to S33'.
  • That is, in step S31', the LED 112b n irradiates the detection range of the sensor of interest 21 n with the light of the wavelength λ2 according to the control from the LED driver 111 n. In this case, the LED 112a n is turned off according to the control from the LED driver 111 n.
  • In step S32', the PD 114 n receives the reflected light of the light of the wavelength λ2 emitted from the LED 112b n (for example, if the object 41 exists within the detection range of the sensor of interest 21 n, the reflected light of the light of the wavelength λ2 irradiated on the object 41).
  • In step S33', the PD 114 n performs photoelectric conversion on the reflected light received in the process of step S32' and supplies the resulting luminance signal Vλ2 to the AD conversion unit 95 n.
  • Then, the AD conversion unit 95 n performs AD conversion on the luminance signal Vλ2 from the PD 114 n and supplies the luminance signal Vλ2 after AD conversion to the processing unit 91.
  • In the Vλoff acquisition processing, the LED driver 111 n controls the LED 112a n and the LED 112b n so that both the LED 112a n and the LED 112b n are turned off. Therefore, only external light other than the light emitted from the LED 112a n and the LED 112b n is irradiated on the detection range of the sensor of interest 21 n.
  • In step S51, the PD 114 n receives the reflected light of the external light (for example, the reflected light of the external light irradiated on the object 41 when the object 41 exists within the detection range of the sensor of interest 21 n).
  • In step S52, the PD 114 n performs photoelectric conversion on the reflected light received in the process of step S51 and supplies the resulting luminance signal Vλoff to the AD conversion unit 95 n. The process then returns to step S4 of FIG. 16 and proceeds to step S5.
  • Then, the AD conversion unit 95 n performs AD conversion on the luminance signal Vλoff from the PD 114 n and supplies the luminance signal Vλoff after AD conversion to the processing unit 91.
•   In step S73, the processing unit 91 determines whether or not the object 41 as skin exists within the detection range of the sensor of interest 21n based on whether or not the normalized difference signal Rdif is greater than or equal to a predetermined threshold value.
•   In step S74, when the normalized difference signal Rdif is greater than or equal to the predetermined threshold value, the processing unit 91 determines that the object 41 as skin exists within the detection range of the sensor of interest 21n and generates a skin detection signal representing that determination result.
•   Otherwise, the processing unit 91 determines that the object 41 as skin does not exist within the detection range of the sensor of interest 21n and generates a skin detection signal representing that determination result.
•   After the completion of step S74, the process returns to step S5 in FIG. 16, and the subsequent processes are performed.
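•   As an aid to understanding, the following Python sketch reproduces the decision flow of steps S73 and S74 for one sensor. The exact definition of the normalized difference signal Rdif is given elsewhere in this description; the external-light-corrected form and the threshold value used below are assumptions introduced only for illustration, not the patent's exact formula.

```python
def skin_discrimination(v_l1, v_l2, v_loff, threshold=10.0):
    """Decide whether skin is present from one sensor's luminance signals.

    v_l1   : luminance while only the wavelength-λ1 LED is lit
    v_l2   : luminance while only the wavelength-λ2 LED is lit
    v_loff : luminance while both LEDs are off (external light only)

    The Rdif formula and the threshold are illustrative assumptions.
    """
    # Remove the external-light component from both measurements.
    s1 = v_l1 - v_loff
    s2 = v_l2 - v_loff
    if s1 <= 0:
        return False  # no meaningful reflected light at wavelength λ1
    # Normalized difference signal: skin reflects λ1 much more than λ2.
    rdif = (s1 - s2) * 100.0 / s1
    # Steps S73/S74: skin is judged present when Rdif >= threshold.
    return rdif >= threshold
```

•   Calling this function once per sensor of interest yields the same kind of per-sensor skin detection signal that the processing unit 91 generates.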
•   In the first gesture recognition process described above, when the object 41 as skin exists within the detection range of at least one sensor 21n, the position and movement of the object 41 as skin are recognized.
•   If no skin is detected in step S8, the process may return to step S1 after waiting for a predetermined time.
•   In that case, the interval (for example, corresponding to the interval I shown in FIG. 13) at which the processes of steps S1 to S7 are performed until skin is detected in step S8 can be made longer than the interval after skin is detected.
•   In the first gesture recognition process, the Vλ1 acquisition process, the Vλ2 acquisition process, the Vλoff acquisition process, and the skin discrimination process are repeatedly performed as the processing for detecting the object 41 as skin within the detection range of the sensor of interest 21n.
•   Alternatively, a second gesture recognition process may be performed in which the processing for detecting the object 41 as skin within the detection range of the sensor of interest 21n is simplified.
•   That is, after the object 41 as skin has once been detected, for example, only the Vλ1 acquisition process is performed, and whether an object exists within the detection range of the sensor of interest 21n is detected based on the luminance signal Vλ1 obtained by that Vλ1 acquisition process.
•   When an object is detected, the processing is simplified by treating it as if the object 41 as skin had been detected.
  • This second gesture recognition process is started, for example, when the digital photo frame 1 is turned on.
•   In steps S91 to S100, the same processes as in steps S1 to S10 in FIG. 16 are performed.
•   In step S101, the processing unit 91 pays attention to a predetermined sensor 21n among the plurality of sensors 21 1 to 21 N and sets it as the sensor of interest 21n.
•   In step S102, the processing unit 91 controls the sensor of interest 21n via the current control unit 92n, the timing control unit 93n, the gain control unit 94n, and the like to perform the Vλ1 acquisition process.
•   The AD conversion unit 95n performs AD conversion on the luminance signal Vλ1 from the sensor of interest 21n and supplies the AD-converted luminance signal Vλ1 to the processing unit 91.
•   In step S103, the processing unit 91 generates gesture recognition information based on the luminance signal Vλ1 supplied from the sensor of interest 21n via the AD conversion unit 95n, and supplies it, together with the luminance signal Vλ1, to the CPU 61 via the input/output interface 65 and the bus 64.
•   When the luminance signal Vλ1 itself is used as the gesture recognition information, the processing unit 91 supplies only the luminance signal Vλ1, as the gesture recognition information, to the CPU 61 via the input/output interface 65 and the bus 64.
•   In step S104, the processing unit 91 determines whether or not all of the plurality of sensors 21 1 to 21 N have been attended to. If it determines that not all of the sensors 21 1 to 21 N have been attended to, the process returns to step S101.
•   In step S101, the processing unit 91 then sets a sensor 21n that has not yet been attended to among the plurality of sensors 21 1 to 21 N as the new sensor of interest 21n, and thereafter the same processing is performed.
•   In step S104, when the processing unit 91 determines that all of the plurality of sensors 21 1 to 21 N have been attended to, the process proceeds to step S105.
•   In step S105, the CPU 61 determines whether an object is detected within the detection range of any of the sensors 21n, based on whether or not each luminance signal Vλ1 supplied from the control unit 66 via the input/output interface 65 and the bus 64 is equal to or greater than a predetermined threshold value.
•   Note that, in step S105, the CPU 61 may instead determine whether an object is detected within the detection range of the sensor of interest 21n based on whether the luminance signal Vλ2 is equal to or greater than a threshold value. In this case, the Vλ2 acquisition process is performed in step S102.
•   In step S105, when the CPU 61 determines that an object has been detected, the CPU 61 treats the object 41 as skin as having been detected, returns the process to step S99, and thereafter repeats the same processes.
•   In step S105, when the CPU 61 determines that no object is detected, it treats the object 41 as skin as not having been detected, returns the process to step S91, and thereafter repeats the same processes.
•   In the second gesture recognition process, after an object as skin has been detected within the detection range of a sensor 21n, it is only detected whether or not an object exists within the detection range of that sensor 21n, and when an object is detected, it is handled as if the object as skin had been detected.
•   This makes it possible to reduce the burden on the processing unit 91 compared with the case where the Vλ1 acquisition process, the Vλ2 acquisition process, the Vλoff acquisition process, and the skin discrimination process are repeatedly performed regardless of whether an object as skin has been detected.
•   If no skin is detected in step S98 and no object is detected in step S105, the process may return to step S91 after waiting for a predetermined time.
•   In that case, the interval (for example, corresponding to the interval I in FIG. 13) at which the processes of steps S91 to S97 are performed until skin is detected in step S98 can be made longer than the interval after skin is detected.
•   It is thus possible to reduce the load on the control unit 66 by lengthening the interval at which the processing of steps S91 to S97 is performed until skin is detected. Furthermore, after skin is detected, the interval can be shortened, so that the position and movement of the detected skin can be recognized at shorter intervals.
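•   The adaptive interval described above can be pictured as a simple polling loop; the interval values and callback names in the sketch below are hypothetical and only illustrate the idea of polling slowly until skin appears and quickly afterwards.

```python
import time

IDLE_INTERVAL_S = 0.5    # assumed long interval before skin is detected
ACTIVE_INTERVAL_S = 0.05 # assumed short interval after skin is detected

def polling_loop(detect_skin, track_gesture):
    """detect_skin() and track_gesture() stand in for the patent's
    skin-discrimination and gesture-recognition steps (both assumed)."""
    skin_present = False
    while True:
        if not skin_present:
            skin_present = detect_skin()       # full Vλ1/Vλ2/Vλoff cycle
            time.sleep(IDLE_INTERVAL_S)        # long interval: low load
        else:
            skin_present = track_gesture()     # simplified Vλ1-only cycle
            time.sleep(ACTIVE_INTERVAL_S)      # short interval: responsive
```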
•   In the above description, the first gesture recognition process or the second gesture recognition process is performed regardless of whether or not an object exists within the detection range of the sensors 21 1 to 21 N.
•   Alternatively, a proximity object detection process may be performed for starting the first gesture recognition process or the second gesture recognition process only after an object has entered a detection range.
  • This proximity object detection process is started, for example, when the digital photo frame 1 is turned on.
•   In step S121, the processing unit 91 pays attention to a predetermined sensor 21n among the plurality of sensors 21 1 to 21 N and sets it as the sensor of interest 21n.
•   In step S122, the processing unit 91 controls the sensor of interest 21n via the current control unit 92n, the timing control unit 93n, the gain control unit 94n, and the like to perform the Vλ1 acquisition process.
•   The AD conversion unit 95n performs AD conversion on the luminance signal Vλ1 from the sensor of interest 21n and supplies the AD-converted luminance signal Vλ1 to the processing unit 91.
•   In step S123, the processing unit 91 determines whether or not an object has entered the detection range of the sensor of interest 21n based on the luminance signal Vλ1 supplied from the sensor of interest 21n via the AD conversion unit 95n.
•   Specifically, the processing unit 91 determines whether or not an object has entered the detection range of the sensor of interest 21n based on whether the luminance value represented by the luminance signal Vλ1 supplied from the sensor of interest 21n via the AD conversion unit 95n is equal to or greater than a predetermined threshold value.
•   If the processing unit 91 does not detect, based on that determination result, the entry of an object into the detection range of the sensor of interest 21n, the processing unit 91 returns the process to step S121.
•   In step S121, the processing unit 91 then sets a sensor 21n that has not yet been attended to among the plurality of sensors 21 1 to 21 N as the new sensor of interest 21n, and thereafter the same processing is performed.
•   Note that when all of the plurality of sensors 21 1 to 21 N have been attended to and the process of step S121 is performed again, the process of step S121 is performed on the assumption that none of the plurality of sensors 21 1 to 21 N has been attended to yet.
•   If the processing unit 91 detects, based on the determination result of step S123, that an object has entered the detection range of the sensor of interest 21n, the process proceeds to step S124, where the first gesture recognition process or the second gesture recognition process is performed.
•   When the first gesture recognition process is performed in step S124, the first gesture recognition process is terminated as described below, and the proximity object detection process is also terminated.
•   That is, when the CPU 61 determines in step S8 of the first gesture recognition process that the object 41 as skin is not detected, in other words, that the object 41 that has entered the detection range is not skin, the first gesture recognition process is terminated.
•   Then, the proximity object detection process is terminated, and a new proximity object detection process is started.
•   When the second gesture recognition process is performed in step S124, the second gesture recognition process is terminated as follows, and the proximity object detection process is also terminated.
•   That is, when the CPU 61 determines in step S98 of the second gesture recognition process that the object 41 as skin is not detected, in other words, that the object 41 that has entered the detection range is not skin, the second gesture recognition process is terminated. Then, the proximity object detection process is terminated, and a new proximity object detection process is started.
•   Also, when the CPU 61 determines in step S105 of the second gesture recognition process that no object is detected, it determines that the object 41 that had entered the detection range no longer exists within the detection range, and the second gesture recognition process is terminated. Then, the proximity object detection process is terminated, and a new proximity object detection process is started.
•   In steps S121 to S123, it is determined whether or not an object has newly entered the detection range of any of the plurality of sensors 21 1 to 21 N.
•   When it is detected, based on that determination result, that an object has newly entered the detection range of any of the plurality of sensors 21 1 to 21 N, a new first gesture recognition process or second gesture recognition process is performed in step S124.
•   In the proximity object detection process, it is thus determined whether or not an object has entered any of the detection ranges of the sensors 21n, and the first or second gesture recognition process is performed only when, based on that determination result, an object is detected as having entered one of those detection ranges.
•   This makes it possible to reduce the burden on the processing unit 91 compared with the case where the first or second gesture recognition process is performed regardless of whether or not an object has entered the detection range of a sensor 21n.
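•   A minimal sketch of this gating idea is shown below; the sensor interface (acquire_v_lambda1) and the entry threshold are assumptions introduced only for illustration, not the patent's actual API.

```python
def proximity_object_detection(sensors, entry_threshold, run_gesture_recognition):
    """Poll each sensor's λ1 luminance and start gesture recognition
    only when something enters a detection range (cf. steps S121 to S124).

    sensors                 : objects with an acquire_v_lambda1() method (assumed)
    entry_threshold         : luminance regarded as "object present" (assumed)
    run_gesture_recognition : callback for the first or second process
    """
    while True:
        for sensor in sensors:                  # step S121: attend to each sensor in turn
            v_l1 = sensor.acquire_v_lambda1()   # step S122: Vλ1 acquisition only
            if v_l1 >= entry_threshold:         # step S123: has an object entered?
                run_gesture_recognition()       # step S124: full recognition process
                break                           # afterwards, start detection anew
```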
•   In the above description, attention is paid to each sensor 21n in turn in step S121, and each time the luminance signal Vλ1 is acquired from the sensor of interest 21n in step S122, it is determined in step S123 whether or not an object has been detected.
•   Alternatively, the luminance signal Vλ1 may first be acquired for each of the sensors 21 1 to 21 N.
•   Then, in step S123, it may be determined, based on the luminance signals Vλ1 acquired from the respective sensors 21 1 to 21 N, whether an object has entered any of the detection ranges of the sensors 21 1 to 21 N.
•   If it is determined in step S123 that an object has not entered any of the detection ranges of the sensors 21 1 to 21 N, the process may return to step S121 after waiting for a predetermined time.
  • FIG. 22 shows a configuration example of a digital photo frame 171 according to the second embodiment.
  • the digital photo frame 171 is provided with a PD 191 1 and a PD 191 2 in place of the sensor 21 1 and the sensor 21 2 of the digital photo frame 1 according to the first embodiment.
  • the digital photo frame 171 is provided with an LED unit 192 having an LED 112a (FIG. 26) that emits light of wavelength ⁇ 1 and an LED 112b (FIG. 26) that emits light of wavelength ⁇ 2.
•   In the first embodiment, the LED 112an and the LED 112bn built into each sensor 21n irradiate the detection range of that sensor 21n.
•   In contrast, the LED unit 192 in the second embodiment irradiates the range in which the user is supposed to perform a gesture, that is, for example, a range made up of the detection ranges of the plurality of sensors 21 1 to 21 N.
•   This range serves as the detection range for detecting an object.
•   Otherwise, the configuration is the same as that of the digital photo frame 1 (FIG. 1).
•   The LED unit 192 is desirably arranged so that the distance from the LED 112a (FIG. 26) to the PD 191 1, the distance from the LED 112a to the PD 191 2, the distance from the LED 112b (FIG. 26) to the PD 191 1, and the distance from the LED 112b to the PD 191 2 are all the same.
•   For this purpose, the LED unit 192 should be located on the normal to the display screen 1a that passes through the midpoint of the line connecting the PD 191 1 and the PD 191 2. This will be described in detail with reference to FIGS.
•   The digital photo frame 171 recognizes the movement of an object in the X direction (in FIG. 22, the left-right direction in which the PD 191 1 and the PD 191 2 are arranged) based on the output results from the PD 191 1 and the PD 191 2.
  • FIG. 23 shows an example of a state when the digital photo frame 171 is viewed from the lower side in FIG.
  • the LED unit 192 irradiates a range in which the user is supposed to perform a gesture or the like on the front surface of the digital photo frame 171.
  • the LED unit 192 repeatedly turns on the LED 112a, turns on the LED 112b, and turns off the LED 112a and the LED 112b, similarly to the case shown in FIG. 13A.
•   FIG. 24 shows examples of the output results respectively output from the PD 191 1 and the PD 191 2 when the object moves as shown in FIG. 23.
•   A of FIG. 24 shows an example of the output results obtained when the object exists on the left side in FIG. 23. That is, the left side of A of FIG. 24 shows the output result of the PD 191 1, and the center of A of FIG. 24 shows the output result of the PD 191 2.
•   On the left side of A of FIG. 24, the horizontal axis represents time t and the vertical axis represents the output V from the PD 191 1; the same applies to the left sides of B and C of FIG. 24.
•   In the center of A of FIG. 24, the horizontal axis represents time t and the vertical axis represents the output V from the PD 191 2; the same applies to the centers of B and C of FIG. 24.
•   B of FIG. 24 shows an example of the output results obtained when the object exists in the center in FIG. 23. That is, the left side of B of FIG. 24 shows the output result of the PD 191 1, and the center of B of FIG. 24 shows the output result of the PD 191 2.
•   C of FIG. 24 shows an example of the output results obtained when the object exists on the right side in FIG. 23. That is, the left side of C of FIG. 24 shows the output result of the PD 191 1, and the center of C of FIG. 24 shows the output result of the PD 191 2.
•   In this way, the digital photo frame 171 can recognize the movement of the object according to the change in the difference between the two output results.
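•   One simple way to turn the two PD outputs of FIG. 24 into a left-right position estimate is a normalized difference of the two received luminances; the sketch below is an illustrative assumption about how such an estimate could be formed, not the patent's exact algorithm.

```python
def estimate_horizontal_position(v_pd1, v_pd2, eps=1e-6):
    """Estimate where the object is along the left-right axis from the
    outputs of the two PDs (cf. A to C of FIG. 24).

    v_pd1 : output of PD 191_1 (assumed to be the left PD)
    v_pd2 : output of PD 191_2 (assumed to be the right PD)

    Returns a value in [-1, 1]: negative means the object is nearer the
    left PD, positive nearer the right PD, about 0 means centered.
    The normalized-difference form is an assumption for illustration.
    """
    total = v_pd1 + v_pd2
    if total < eps:
        return 0.0  # nothing meaningful received
    return (v_pd2 - v_pd1) / total
```

•   Tracking how this value changes over time then gives the change in the difference from which the left-to-right or right-to-left movement mentioned above can be recognized.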
  • FIG. 25 shows a detailed configuration example of the digital photo frame 171.
  • the digital photo frame 171 is provided with a control unit 211 having a plurality of PDs 191 1 to 191 N in place of the control unit 66 having a plurality of sensors 21 1 to 21 N (FIG. 10).
•   Otherwise, the configuration is the same as that of the digital photo frame 1.
•   The control unit 211 of the digital photo frame 171 in FIG. 22 includes the PD 191 1 and the PD 191 2, but the number of PDs 191n is not limited to two.
  • FIG. 26 shows a detailed configuration example of the control unit 211.
•   The control unit 211 is composed of a processing unit 231, the current control unit 92, the timing control unit 93, the gain control units 94n, the LED unit 192, and the AD conversion units 95n.
  • a lens 232n similar to the lens 115n is provided on the front surface of the PD 191n.
  • the current control unit 92 performs the same processing as the current control unit 92n, and the timing control unit 93 performs the same processing as the timing control unit 93n.
  • the gain control unit 94n and the AD conversion unit 95n perform the same processing as the gain control unit 94n and the AD conversion unit 95n in FIG. 11, respectively.
  • the LED unit 192 includes an LED driver 111, an LED 112a, an LED 112b, a lens 113a, and a lens 113b that are configured in the same manner as the LED driver 111n, LED 112an, LED 112bn, lens 113an, and lens 113bn in FIG.
  • the PD 191n receives the reflected light of the light having the wavelength ⁇ 1 emitted from the LED 112a (for example, the reflected light from the object 41 irradiated with the light having the wavelength ⁇ 1) when the LED 112a is turned on, similarly to the PD 114n.
•   The PD 191n performs gain control processing on the received-light luminance Vλ1 obtained by that light reception and outputs the processed received-light luminance Vλ1 to the AD conversion unit 95n.
•   When the LED 112b is turned on, the PD 191n receives the reflected light of the light of wavelength λ2 emitted from the LED 112b (for example, the reflected light from the object 41 irradiated with the light of wavelength λ2).
•   The PD 191n performs gain control processing on the received-light luminance Vλ2 obtained by that light reception and outputs the processed received-light luminance Vλ2 to the AD conversion unit 95n.
•   When the LED 112a and the LED 112b are turned off, the PD 191n receives the reflected light of external light other than the irradiated light (for example, the reflected light from the object 41 irradiated with the external light).
•   The PD 191n performs gain control processing on the received-light luminance Vλoff obtained by that light reception and outputs the processed received-light luminance Vλoff to the AD conversion unit 95n.
  • the processing unit 231 controls the current control unit 92, the timing control unit 93, and the gain control unit 94n in the same manner as the processing unit 91.
•   The luminance signal Vλ1, the luminance signal Vλ2, and the luminance signal Vλoff are supplied to the processing unit 231 from each PD 191n via the AD conversion unit 95n.
•   Similarly to the processing unit 91, the processing unit 231 generates, based on the luminance signal Vλ1, the luminance signal Vλ2, and the luminance signal Vλoff from the AD conversion unit 95n, a skin detection signal indicating whether or not skin is present within the irradiation range (detection range) of the LED unit 192.
•   The processing unit 231 also generates gesture recognition information, for example based on the luminance signal Vλ1 from the AD conversion unit 95n.
•   Note that, in the same manner as the processing unit 91 in the first embodiment, the processing unit 231 can use the luminance signal itself as the gesture recognition information, or can generate the gesture recognition information based on the luminance signal Vλ2.
  • the processing unit 231 supplies the generated skin detection signal and gesture recognition information to the CPU 61 via the input / output interface 65 and the bus 64 of FIG.
  • This third gesture recognition process is started when the digital photo frame 171 is powered on, for example.
•   In step S141, the processing unit 231 controls the PDs 191n and the LED unit 192 via the current control unit 92, the timing control unit 93, and the gain control unit 94n.
•   Specifically, the processing unit 231 performs a Vλ1 acquisition process in which each PD 191n generates a luminance signal Vλ1 and outputs it to the AD conversion unit 95n.
•   That is, the processing unit 231 instructs the current control unit 92 to supply a current to the LED 112a and the LED 112b, and instructs the timing control unit 93 on the timing of turning the LED 112a and the LED 112b on and off.
•   The current control unit 92 and the timing control unit 93 control the LED driver 111 in accordance with the instructions from the processing unit 231.
•   The LED driver 111 irradiates light of wavelength λ1 by turning on only the LED 112a according to the control from the current control unit 92 and the timing control unit 93.
•   Each PD 191n receives the reflected light resulting from the irradiation with the light of wavelength λ1 and outputs a luminance signal Vλ1, obtained by photoelectrically converting the received reflected light, to the AD conversion unit 95n.
•   Each AD conversion unit 95n performs AD conversion on the luminance signal Vλ1 from the PD 191n and supplies the AD-converted luminance signal Vλ1 to the processing unit 231.
•   In step S142, the processing unit 231 controls the PDs 191n and the LED unit 192 via the current control unit 92, the timing control unit 93, and the gain control unit 94n.
•   Specifically, the processing unit 231 performs a Vλ2 acquisition process in which each PD 191n generates a luminance signal Vλ2 and outputs it to the AD conversion unit 95n.
•   That is, the processing unit 231 instructs the current control unit 92 to supply a current to the LED 112a and the LED 112b, and instructs the timing control unit 93 on the timing of turning the LED 112a and the LED 112b on and off.
•   The current control unit 92 and the timing control unit 93 control the LED driver 111 in accordance with the instructions from the processing unit 231.
•   The LED driver 111 irradiates light of wavelength λ2 by turning on only the LED 112b according to the control from the current control unit 92 and the timing control unit 93.
•   Each PD 191n receives the reflected light resulting from the irradiation with the light of wavelength λ2 and outputs a luminance signal Vλ2, obtained by photoelectrically converting the received reflected light, to the AD conversion unit 95n.
•   Each AD conversion unit 95n performs AD conversion on the luminance signal Vλ2 from the PD 191n and supplies the AD-converted luminance signal Vλ2 to the processing unit 231.
•   In step S143, the processing unit 231 controls the PDs 191n and the LED unit 192 via the current control unit 92, the timing control unit 93, and the gain control unit 94n.
•   Specifically, the processing unit 231 performs a Vλoff acquisition process in which each PD 191n generates a luminance signal Vλoff and outputs it to the AD conversion unit 95n.
•   That is, the processing unit 231 instructs the current control unit 92 to supply a current to the LED 112a and the LED 112b, and instructs the timing control unit 93 on the timing of turning the LED 112a and the LED 112b on and off.
•   The current control unit 92 and the timing control unit 93 control the LED driver 111 in accordance with the instructions from the processing unit 231.
•   The LED driver 111 turns off both the LED 112a and the LED 112b according to the control from the current control unit 92 and the timing control unit 93.
•   Each PD 191n receives the reflected light of the external light and outputs a luminance signal Vλoff, obtained by photoelectrically converting the received reflected light, to the AD conversion unit 95n.
•   Each AD conversion unit 95n performs AD conversion on the luminance signal Vλoff from the PD 191n and supplies the AD-converted luminance signal Vλoff to the processing unit 231.
•   Through the processing in steps S141 to S143 described above, the combination of luminance signals (Vλ1, Vλ2, Vλoff)n generated by each PD 191n is supplied to the processing unit 231 via the AD conversion unit 95n.
•   In step S144, the processing unit 231 performs the same skin discrimination processing as the processing unit 91 based on the combination of luminance signals (Vλ1, Vλ2, Vλoff)n supplied from each PD 191n via the AD conversion unit 95n.
•   By this skin discrimination processing, a skin detection signal Dn is generated for each combination (Vλ1, Vλ2, Vλoff)n.
•   In step S145, the processing unit 231 generates gesture recognition information Jn, for example based on the luminance signal Vλ1 of the combination of luminance signals (Vλ1, Vλ2, Vλoff)n.
•   The processing unit 231 supplies the generated skin detection signals Dn and gesture recognition information Jn to the CPU 61 via the input/output interface 65 and the bus 64.
•   In steps S146 through S148, the same processes as in steps S8 through S10 in FIG. 16 are performed.
  • the third gesture recognition process ends when the digital photo frame 171 is powered off, for example.
•   In the third gesture recognition process described above, when the object 41 as skin exists within the irradiation range (detection range) of the LED unit 192, the position, movement, and the like of the object 41 as skin are recognized.
  • the third gesture recognition process is obtained by replacing steps S1 to S6 in the first gesture recognition process with steps S141 to S145.
•   Therefore, the same modifications as in the case of the first gesture recognition process can be made.
•   For example, if no skin is detected in step S146, the process can return to step S141 after waiting for a predetermined time.
•   In the third gesture recognition process, the Vλ1 acquisition process, the Vλ2 acquisition process, the Vλoff acquisition process, and the skin discrimination process are repeatedly performed as the processing for detecting an object as skin within the irradiation range of the LED unit 192.
•   Alternatively, the processing for detecting the object as skin within the irradiation range of the LED unit 192 may be simplified, as in the second gesture recognition process.
•   That is, after the end of step S148, a Vλ1 acquisition process similar to that in step S141 is performed as the processing corresponding to steps S101 to S104 in FIG. 20, and the processing unit 231 generates gesture recognition information based on the luminance signal Vλ1 generated by that Vλ1 acquisition process.
•   The processing unit 231 supplies the luminance signal Vλ1 and the gesture recognition information generated for each PD 191n to the CPU 61 via the input/output interface 65 and the bus 64.
•   The CPU 61 determines, based on the luminance signal Vλ1 for each PD 191n, whether or not an object is detected within the irradiation range of the LED unit 192.
•   When the CPU 61 determines that an object is detected within the irradiation range of the LED unit 192, the CPU 61 returns the process to step S147, and thereafter the same processes are performed.
•   Furthermore, a process corresponding to the proximity object detection process of FIG. 21 can be performed before the third gesture recognition process.
•   That is, the Vλ1 acquisition process of step S141 in FIG. 27 is performed as the processing corresponding to steps S121 and S122 of FIG. 21.
•   Thereby, the luminance signal Vλ1 for each PD 191n is supplied to the processing unit 231.
•   The processing unit 231 then determines whether or not an object has entered the irradiation range of the LED unit 192.
•   Specifically, the processing unit 231 determines whether or not an object has entered the irradiation range of the LED unit 192 based on whether at least one of the luminance values represented by the luminance signals Vλ1 from the respective PDs 191n is equal to or greater than a predetermined threshold value.
•   If the processing unit 231 does not detect, based on that determination result, the intrusion of an object into the irradiation range of the LED unit 192, the processing unit 231 returns the process to the processing corresponding to steps S121 and S122, and thereafter the same processing is performed.
•   If the processing unit 231 detects, based on the determination result, the intrusion of an object into the irradiation range of the LED unit 192, the process proceeds to the processing corresponding to step S124 in FIG. 21, and the third gesture recognition process is performed.
•   When the third gesture recognition process is performed as the process corresponding to step S124, the third gesture recognition process is terminated as follows, and the process corresponding to the proximity object detection process of FIG. 21 is also terminated.
•   That is, when the CPU 61 determines in step S146 of the third gesture recognition process that the object 41 as skin is not detected, in other words, that the object 41 that has entered the detection range is not skin, the third gesture recognition process is terminated. Then, the process corresponding to the proximity object detection process of FIG. 21 is terminated, and a new process corresponding to the proximity object detection process is started.
•   Note that, when it is determined in the processing corresponding to step S123 that an object has not entered the irradiation range of the LED unit 192, the process may be returned to the processing corresponding to steps S121 and S122 after waiting for a predetermined time.
•   The LED unit 192 is preferably arranged so that the LED 112a and the LED 112b are lined up in the vertical direction in the figure. This is to prevent the positions of the LED 112a and the LED 112b from causing a deviation in the amounts of light received by the PD 191 1 and the PD 191 2.
•   With this arrangement, when the object is present at the center of the line connecting the PD 191 1 and the PD 191 2, the PD 191 1 and the PD 191 2 can both receive the reflected light of the light of wavelength λ1 from the object in (almost) the same amount.
•   Also, in the LED unit 192, as shown in FIG. 28, the distance from the LED 112b to the PD 191 1 and the distance from the LED 112b to the PD 191 2 are the same.
•   Therefore, when the object is present at the center of the line connecting the PD 191 1 and the PD 191 2, the PD 191 1 and the PD 191 2 can both receive the reflected light of the light of wavelength λ2 from the object in (almost) the same amount.
•   Accordingly, the same output results (luminance signals Vλ2) as shown on the left side and in the center of B of FIG. 24 can be obtained from the PD 191 1 and the PD 191 2, so that a gesture or the like can be recognized with high accuracy even when the luminance signal Vλ2 is used.
•   Further, in the LED unit 192, as shown in FIG. 28, the distance from the LED 112a to the PD 191 1 and the distance from the LED 112b to the PD 191 1 are the same.
•   Therefore, the PD 191 1 can receive the reflected light of the light of wavelength λ1 from the object and the reflected light of the light of wavelength λ2 from the object in (almost) the same amounts.
•   Accordingly, the luminance signal Vλ1 and the luminance signal Vλ2 can be generated by the PD 191 1 under equivalent conditions, so that skin can be discriminated with high accuracy.
•   In the LED unit 192, the distance from the LED 112a to the PD 191 2 and the distance from the LED 112b to the PD 191 2 are also the same, so the same can be said for the PD 191 2 as for the PD 191 1.
•   In contrast, the LED unit 192 shown in FIG. 29 is arranged so that the LED 112a and the LED 112b are lined up in the left-right direction in the drawing.
•   In this case, the distance from the LED 112a to the PD 191 1 differs from the distance from the LED 112a to the PD 191 2.
•   Therefore, even when the object is present at the center of the line connecting the PD 191 1 and the PD 191 2, the PD 191 1 and the PD 191 2 receive the reflected light of the light of wavelength λ1 from the object in different amounts.
•   As a result, the desired output results (luminance signals Vλ1) as shown on the left side and in the center of B of FIG. 24 cannot be obtained from the PD 191 1 and the PD 191 2.
•   When the LED unit 192 is placed as shown in FIG. 29, it is therefore desirable to adjust the gain control processing performed in each of the PD 191 1 and the PD 191 2 so that the desired output results shown on the left side and in the center of B of FIG. 24 are obtained from the PD 191 1 and the PD 191 2.
  • FIG. 30 shows an example in which three PDs 191 1 to 191 3 are provided in the digital photo frame 171.
•   In this case, it is desirable to use the output results (luminance signals Vλ1, Vλ2, Vλoff) from the PD 191 1 (or PD 191 2) for which the distance to the LED 112a is the same as the distance to the LED 112b.
•   FIG. 31 shows an example in which four PDs 191 1 to 191 4 are provided in the digital photo frame 171.
•   In this arrangement, the distance from the LED 112a to the PD 191 1 and the distance from the LED 112a to the PD 191 2 are the same.
•   Similarly, the distance from the LED 112b to the PD 191 1 and the distance from the LED 112b to the PD 191 2 are the same.
•   Therefore, a gesture in the vertical direction in the figure can be recognized in the same manner as when the output results from the PD 191 1 and the PD 191 2 are used.
•   Furthermore, in FIG. 31, a gesture in the vertical direction in the drawing may be recognized based on the output results from the PD 191 1 and the PD 191 4, as shown in A to C of FIG. 24.
•   Note that for each of the PDs 191 1 to 191 4, the distance to the LED 112a differs from the distance to the LED 112b.
•   This can be considered in the same manner as the case of the PD 191 1 and the PD 191 2 shown in FIG. 28.
  • FIG. 32 shows an example in which three PDs 191 1 to 191 3 are provided in the digital photo frame 171 and two LED units 192 and 271 are provided.
•   The LED unit 192 irradiates a first irradiation range, and the LED unit 271 irradiates a second irradiation range different from the first irradiation range. Note that the first irradiation range and the second irradiation range may partially overlap.
•   The combination of the PD 191 1, the PD 191 2, and the LED unit 192 having the LED 112a and the LED 112b determines whether or not an object in the first irradiation range is skin, and recognizes a gesture in the horizontal direction in the figure within that irradiation range.
•   The combination of the PD 191 2, the PD 191 3, and the LED unit 271 having the LED 291a and the LED 291b determines whether or not an object in the second irradiation range is skin, and recognizes a gesture in the up-down direction in the figure within that irradiation range.
  • FIG. 33 shows a configuration example of a digital photo frame 331 according to the third embodiment.
  • the digital photo frame 331 has a display screen 1a.
  • a PD 351 is provided on the upper side of the display screen 1a.
  • An LED unit 371 1 and an LED unit 371 2 are provided in the horizontal direction of the PD 351 in the drawing.
  • the PD 351 is configured in the same manner as the PD 191n (eg, PD 191 1 ) in FIG. Further, the LED unit 371 1 and the LED unit 371 2 are configured in the same manner as the LED unit 192 of FIG.
  • the LED unit 371 1 and the LED unit 371 2 each irradiate, for example, an assumed range in which a user's gesture is performed in front of the digital photo frame 331 as an irradiation range.
  • FIG. 34 shows an example of the positional relationship between the PD 351, the LED unit 371 1, and the LED unit 371 2 .
  • the LED unit 371 1 includes an LED 391a 1 that emits light having a wavelength ⁇ 1 and an LED 391b 1 that emits light having a wavelength ⁇ 2.
  • the LED unit 371 2 includes an LED 391a 2 that emits light having a wavelength ⁇ 1, and an LED 391b 2 that emits light having a wavelength ⁇ 2.
•   The LED unit 371 1 is arranged so that the distance from the PD 351 to the LED 391a 1 and the distance from the PD 351 to the LED 391b 1 are the same.
•   Therefore, a relatively accurate skin detection signal can be generated based on the luminance signal Vλ1 generated by the PD 351 while the LED 391a 1 is lit, the luminance signal Vλ2 generated by the PD 351 while the LED 391b 1 is lit, and the luminance signal Vλoff generated by the PD 351 while the LED 391a 1 and the LED 391b 1 are turned off.
•   Similarly, the LED unit 371 2 is arranged so that the distance from the PD 351 to the LED 391a 2 and the distance from the PD 351 to the LED 391b 2 are the same.
•   Therefore, a relatively accurate skin detection signal can also be generated based on the luminance signal Vλ1 generated by the PD 351 while the LED 391a 2 is lit, the luminance signal Vλ2 generated by the PD 351 while the LED 391b 2 is lit, and the luminance signal Vλoff generated by the PD 351 while the LED 391a 2 and the LED 391b 2 are turned off.
•   Note that, if the LED unit 371 2 is arranged on the lower side of the PD 351, a gesture in the vertical direction in the figure can be recognized in addition to a gesture in the horizontal direction in the figure.
  • FIG. 36 shows an example of the output result of the PD 351 when the object moves from left to right in FIG.
•   In FIG. 36, the graph indicated by a solid line shows an example of the output obtained from the PD 351 while the LED 391b 1 is lit; this graph has its maximum value at time t1.
•   The graph indicated by a dotted line shows an example of the output obtained from the PD 351 while the LED 391b 2 is lit; this graph has its maximum value at time t2.
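•   Because the single PD 351 sees the reflections of the two LED units peak at different times (t1 and t2 in FIG. 36), the direction of a horizontal gesture can be inferred from which peak comes first. The sketch below is an illustrative assumption of such an inference, with hypothetical return labels; it is not the patent's exact procedure.

```python
def infer_swipe_direction(output_led1, output_led2):
    """output_led1 / output_led2: lists of (time, value) samples of the
    PD 351 output while LED unit 371_1 and LED unit 371_2 are lit,
    respectively (cf. the solid and dotted graphs of FIG. 36).

    Returns 'toward_unit2' if the peak under LED unit 371_1 comes first
    (t1 < t2), 'toward_unit1' if it comes later, else 'unknown'.
    """
    t1 = max(output_led1, key=lambda s: s[1])[0]  # time of the solid-line maximum
    t2 = max(output_led2, key=lambda s: s[1])[0]  # time of the dotted-line maximum
    if t1 < t2:
        return "toward_unit2"  # e.g. the left-to-right motion of FIG. 36, assuming t1 < t2 there
    if t2 < t1:
        return "toward_unit1"
    return "unknown"
```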
  • FIG. 37 shows a detailed configuration example of the digital photo frame 331.
  • the digital photo frame 331 is provided with a control unit 411 having a plurality of LED units 371 1 to 371 N instead of the control unit 66 (FIG. 10) having the plurality of sensors 21 1 to 21 N.
•   Otherwise, the digital photo frame 331 is configured similarly to the digital photo frame 1.
  • FIG. 38 shows a detailed configuration example of the control unit 411.
  • a lens 352 is provided on the front surface of the PD 351.
•   The current control unit 92n, the timing control unit 93n, and the LED driver 111n in FIG. 38 are configured similarly to the current control unit 92n, the timing control unit 93n, and the LED driver 111n in FIG. 11, and are therefore denoted by the same reference numerals.
  • the gain control unit 94 and the AD conversion unit 95 in FIG. 38 are configured similarly to the gain control unit 94n and the AD conversion unit 95n in FIG. 11, respectively.
  • the PD 351 in FIG. 38 is configured similarly to the PD 114n in FIG.
  • the LED 391an, LED 391bn, lens 392an, and lens 392bn are configured similarly to the LED 112an, LED 112bn, lens 113an, and lens 113bn of FIG.
  • the processing unit 431 controls the current control unit 92n, the timing control unit 93n, and the gain control unit 94 in the same manner as the processing unit 91 in FIG.
  • the LED unit 371n turns on and off the LED 391an and turns on and off the LED 391bn in the same manner as the LED 112an and LED 112bn of the sensor 21n described with reference to FIG.
  • This fourth gesture recognition process is started, for example, when the digital photo frame 331 is turned on.
  • step S161 the processing unit 431 pays attention to a predetermined LED unit 371 n among the plurality of LED units 371 1 to 371 N and sets it as the target LED unit 371 n .
•   In step S162, the processing unit 431 uses the target LED unit 371 n and the PD 351 to perform a Vλ1 acquisition process that generates a luminance signal Vλ1 and outputs it to the AD conversion unit 95.
  • the processing unit 431 instructs the current control unit 92n to supply a current to the LED 391an and the LED 391bn, and instructs the timing control unit 93n to turn on and off the LED 391an and the LED 391bn.
  • the current control unit 92n and the timing control unit 93n control the LED driver 111n according to an instruction from the processing unit 431.
  • the LED driver 111n irradiates light of wavelength ⁇ 1 by turning on only the LED 391an according to the control from the current control unit 92n and the timing control unit 93n.
  • the PD 351 receives reflected light obtained by irradiation with light of wavelength ⁇ 1, and outputs a luminance signal V ⁇ 1 obtained by photoelectrically converting the received reflected light to the AD converter 95.
•   The AD conversion unit 95 performs AD conversion on the luminance signal Vλ1 from the PD 351 and supplies the AD-converted luminance signal Vλ1 to the processing unit 431.
•   In step S163, the processing unit 431 uses the target LED unit 371 n and the PD 351 to perform a Vλ2 acquisition process that generates a luminance signal Vλ2 and outputs it to the AD conversion unit 95.
  • the processing unit 431 instructs the current control unit 92n to supply a current to the LED 391an and the LED 391bn, and instructs the timing control unit 93n to turn on and off the LED 391an and the LED 391bn.
  • the current control unit 92n and the timing control unit 93n control the LED driver 111n according to an instruction from the processing unit 431.
  • the LED driver 111n irradiates light of wavelength ⁇ 2 by turning on only the LED 391bn according to the control from the current control unit 92n and the timing control unit 93n.
  • the PD 351 receives reflected light obtained by irradiation with light of wavelength ⁇ 2, and outputs a luminance signal V ⁇ 2 obtained by photoelectrically converting the received reflected light to the AD conversion unit 95.
•   The AD conversion unit 95 performs AD conversion on the luminance signal Vλ2 from the PD 351 and supplies the AD-converted luminance signal Vλ2 to the processing unit 431.
•   In step S164, the processing unit 431 uses the target LED unit 371 n and the PD 351 to perform a Vλoff acquisition process that generates a luminance signal Vλoff and outputs it to the AD conversion unit 95.
  • the processing unit 431 instructs the current control unit 92n to supply a current to the LED 391an and the LED 391bn, and instructs the timing control unit 93n to turn on and off the LED 391an and the LED 391bn.
  • the current control unit 92n and the timing control unit 93n control the LED driver 111n according to an instruction from the processing unit 431.
  • the LED driver 111n turns off the LED 391an and the LED 391bn according to the control from the current control unit 92n and the timing control unit 93n.
  • the PD 351 receives reflected light of external light, and outputs a luminance signal V ⁇ off obtained by photoelectrically converting the received reflected light to the AD conversion unit 95.
•   The AD conversion unit 95 performs AD conversion on the luminance signal Vλoff from the PD 351 and supplies the AD-converted luminance signal Vλoff to the processing unit 431.
•   Through the processes of steps S162 to S164, the combination of luminance signals (Vλ1, Vλ2, Vλoff)n is supplied from the PD 351 to the processing unit 431 via the AD conversion unit 95.
•   Here, the subscript n of the combination (Vλ1, Vλ2, Vλoff)n corresponds to the subscript n of the target LED unit 371n.
•   In step S165, the processing unit 431 performs the same skin discrimination processing as the processing unit 91 based on the combination of luminance signals (Vλ1, Vλ2, Vλoff)n from the AD conversion unit 95. By this skin discrimination processing, a skin detection signal Dn is generated.
•   In step S166, the processing unit 431 generates gesture recognition information Jn, for example based on the luminance signal Vλ1 of the combination of luminance signals (Vλ1, Vλ2, Vλoff)n.
  • the processing unit 431 supplies the generated skin detection signal and gesture recognition information to the CPU 61 via the input / output interface 65 and the bus 64.
•   In step S167, the processing unit 431 determines whether or not all of the plurality of LED units 371 1 to 371 N have been attended to. If it determines that not all of the plurality of LED units 371 1 to 371 N have been attended to, the process returns to step S161.
  • step S161 the processing unit 431 sets the LED unit 371 n that has not been noticed among the plurality of LED units 371 1 to 371 N as a new noticed LED unit 371 n, and thereafter, the same processing is performed. Done.
  • step S167 when the processing unit 431 determines that all of the plurality of LED units 371 1 to 371 N are focused, the process proceeds to step S168.
•   Note that the skin detection signal and the gesture recognition information are supplied to the CPU 61 via the input/output interface 65 and the bus 64 each time one of the LED units 371 1 to 371 N is attended to.
•   In steps S168 through S170, the same processes as in steps S8 through S10 in FIG. 16 are performed.
•   Thereby, the position, movement, and the like of the object as skin are recognized.
  • the fourth gesture recognition process can be modified in the same manner as the first gesture recognition process.
•   For example, the processing unit 431 determines whether or not all of the LED units 371n have been attended to; other than that, the same processing as the second gesture recognition process shown in FIG. 20 can be performed.
•   Alternatively, a process corresponding to the proximity object detection process shown in FIG. 21 may be performed, and the fourth gesture recognition process may be performed in response to an object having entered the detection range of the PD 351.
•   That is, a Vλ1 acquisition process using a predetermined LED unit 371n and the PD 351 is performed as the processing corresponding to steps S121 and S122 of FIG. 21. Then, as the processing corresponding to step S123 in FIG. 21, the processing unit 431 determines, based on the luminance signal Vλ1 obtained by that Vλ1 acquisition process, whether an object has entered the detection range of the PD 351.
•   When it is detected, based on the determination result corresponding to step S123 in FIG. 21, that an object has not entered the detection range of the PD 351, the process returns to the processing corresponding to steps S121 and S122 in FIG. 21, and the Vλ1 acquisition process is performed again.
•   When it is detected, based on the determination result corresponding to step S123 in FIG. 21, that an object has entered the detection range of the PD 351, a process corresponding to step S124 in FIG. 21, that is, the fourth gesture recognition process, is performed.
•   In the above, a digital photo frame that distinguishes whether an object is skin and recognizes the position and movement of the object as skin has been described.
•   That is, the digital photo frame 1 is provided with the control unit 66 that outputs the skin detection signal, the gesture recognition information, and the like, and the CPU 61 recognizes the movement of the object as skin using the output results from the control unit 66.
•   However, the control unit 66 that outputs the skin detection signal, the gesture recognition information, and the like can also be configured as a single gesture output device.
•   In this case, such a gesture output device is connected to a digital photo frame or the like that does not have the control unit 66.
•   Such a digital photo frame then changes the contents of its display screen and the like according to the output results from the connected gesture output device. The same applies to the second and third embodiments.
  • a first irradiation unit that irradiates light of a first wavelength
  • An irradiation unit including: a second irradiation unit configured to irradiate light having a second wavelength different from the first wavelength;
•   a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength, and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength;
  • a skin detection unit that detects whether or not the object is skin based on the first and second detection signals;
  • An information processing apparatus comprising: a generation unit configured to generate recognition information for recognizing at least one of the position or movement of the object detected as skin based on at least one of the first or second detection signals.
•   The information processing apparatus according to (1), further including another light receiving unit configured similarly to the light receiving unit, wherein the generation unit generates the recognition information based on at least one of the first or second detection signals generated for each of the plurality of light receiving units.
•   The information processing apparatus according to (2), wherein the first irradiation unit is arranged at the same distance from each of the plurality of light receiving units, and the generation unit generates the recognition information based on the first detection signal generated for each of the plurality of light receiving units.
•   The information processing apparatus wherein the light receiving unit is disposed at the same distance from the first irradiation unit and from the second irradiation unit, and the skin detection unit detects whether the object is skin based on the first and second detection signals generated by the light receiving unit.
•   The information processing apparatus wherein, when the interval in a first direction between the first irradiation units is longer than the interval between the second irradiation units, the generation unit generates, based on the first detection signal generated by the light receiving unit for each of the plurality of irradiation units, the recognition information for recognizing at least one of the position or movement of the object in a second direction perpendicular to the first direction.
  • the skin detection unit detects whether the object is skin based on the first and second detection signals generated for each of the plurality of sensors,
  • the information processing apparatus according to (1), wherein the generation unit generates the recognition information based on at least one of the first or second detection signal generated for each of the plurality of sensors.
  • the information processing apparatus according to (1) to (7).
•   The information processing apparatus further including an object detection unit that, in response to detecting that the object is skin, detects whether or not the object exists within a predetermined detection range based on the first detection signal, wherein the object is treated as skin when it is detected that the object exists within the detection range.
  • a signal generation unit that generates an output signal having a magnitude according to the position of the object; The information processing apparatus according to (1) to (9), wherein the generation unit generates the recognition information based on the output signal.
•   The first wavelength λ1 and the second wavelength λ2, which is longer than the first wavelength λ1, satisfy 640 nm ≤ λ1 ≤ 1000 nm and 900 nm ≤ λ2 ≤ 1100 nm.
  • the light receiving unit is provided with a visible light cut filter that blocks visible light incident on the light receiving unit.
  • FIG. 40 shows an outline of processing performed by each sensor 21n.
•   As shown in A of FIG. 40, in the sensor 21 1, the LED 112a 1 is lit in the lighting period "LED λ1", the LED 112b 1 is lit in the lighting period "LED λ2", and the LED 112a 1 and the LED 112b 1 are turned off in the light-off period "LED off".
•   The PD 114 1 outputs a luminance signal Lum#1_λ1 obtained by receiving the reflected light from the object irradiated with the light of wavelength λ1 by the LED 112a 1.
•   Also, as shown in the upper part of C of FIG. 40, the PD 114 1 outputs a luminance signal Lum#1_λ2 obtained by receiving the reflected light from the object irradiated with the light of wavelength λ2 by the LED 112b 1.
•   Similarly, in the sensor 21 2, the LED 112a 2 is lit in its lighting period "LED λ1", the LED 112b 2 is lit in its lighting period "LED λ2", and the LED 112a 2 and the LED 112b 2 are turned off in its light-off period "LED off".
•   Note that the LEDs of the other sensors 21 1 and 21 3 to 21 N are kept off during any of the periods "LED λ1", "LED λ2", and "LED off" of the sensor 21 2 shown in FIG. 40.
•   As shown in the lower part of C of FIG. 40, the PD 114 2 outputs a luminance signal Lum#2_λ1 obtained by receiving the reflected light from the object irradiated with the light of wavelength λ1 by the LED 112a 2.
•   Also, as shown in the lower part of C of FIG. 40, the PD 114 2 outputs a luminance signal Lum#2_λ2 obtained by receiving the reflected light from the object irradiated with the light of wavelength λ2 by the LED 112b 2.
•   The processing unit 91 generates a skin detection signal based on the luminance signal Lum#1_λ1, the luminance signal Lum#1_λ2, and the like output from the PD 114 1 of the sensor 21 1 via the AD conversion unit 95 1.
•   Likewise, the processing unit 91 generates a skin detection signal based on the luminance signal Lum#2_λ1, the luminance signal Lum#2_λ2, and the like output from the PD 114 2 of the sensor 21 2 via the AD conversion unit 95 2.
•   Incidentally, as shown in FIG. 41, it can happen that the luminance signal Lum#1_λ1 output from the PD 114 1 of the sensor 21 1 becomes saturated.
•   In FIG. 41, the upper part of C of FIG. 41 differs from the case of FIG. 40, but the other points are the same.
•   In such a case, an erroneous skin detection signal may be generated; for example, even though the object is skin, a skin detection signal indicating that no skin is detected may be generated.
•   As a result, the digital photo frame 1 may fail to perform processing according to the movement of the user's hand or the like, giving the user the impression that operation by gesture is not possible.
•   Therefore, it is desirable for the processing unit 91 to hold the internal state of each PD 114n in a built-in memory (not shown) and to determine the skin detection state according to the transition of that internal state.
•   FIG. 42 shows an example of how the internal state of the PD 114n (the information representing it) transitions.
  • the internal state “NO_SKIN” represents a state in which no skin part is detected.
  • the internal state “SKIN_DETECT” represents a state in which a skin part is detected.
•   Based on the luminance signal Lum#n_λ1 and the luminance signal Lum#n_λ2 (#n corresponds to the subscript n of the PD 114n) supplied from the PD 114n via the AD conversion unit 95n, the processing unit 91 calculates {(Lum#n_λ1 - Lum#n_λ2) × 100 / Lum#n_λ1}. Note that Lum#n_λ1 and Lum#n_λ2 here denote the luminance values represented by the luminance signal Lum#n_λ1 and the luminance signal Lum#n_λ2, respectively.
•   When the calculated value satisfies the skin determination condition ("skin detection" shown in FIG. 42), the processing unit 91 changes the internal state of the PD 114n from "NO_SKIN" to "SKIN_DETECT".
•   Otherwise, the processing unit 91 does not change the internal state of the PD 114n, which remains "NO_SKIN".
•   On the other hand, the processing unit 91 determines whether or not to change the internal state of the PD 114n from "SKIN_DETECT" to "NO_SKIN" based on at least one of Lum#n_λ1, Lum#n_λ2, or (Lum#n_λ1 + Lum#n_λ2) as a luminance value, and does not change "SKIN_DETECT" to "NO_SKIN" based on the determination by the skin discrimination process.
•   For example, the processing unit 91 changes the internal state of the PD 114n from "SKIN_DETECT" to "NO_SKIN" when Lum#n_λ1 as a luminance value becomes less than a certain threshold value.
•   Otherwise, the processing unit 91 does not change the internal state of the PD 114n, which remains "SKIN_DETECT".
•   In this way, the processing unit 91 determines the internal state of each PD 114n based on the luminance signals supplied from the PD 114n via the AD conversion unit 95n, and outputs a skin detection signal corresponding to the determined internal state of the PD 114n.
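•   The state handling above can be pictured as a small two-state machine per PD. The sketch below follows the transition rule quoted above; the two threshold values are hypothetical parameters, not values from this description.

```python
class SkinState:
    """Two-state skin detection per PD, following the transitions of FIG. 42.

    skin_threshold : threshold on {(Lum_l1 - Lum_l2) * 100 / Lum_l1}
                     for the NO_SKIN -> SKIN_DETECT transition (assumed value)
    min_lum_l1     : minimum Lum_l1 below which SKIN_DETECT falls back to
                     NO_SKIN, guarding against saturation artifacts (assumed value)
    """

    def __init__(self, skin_threshold=10.0, min_lum_l1=5.0):
        self.state = "NO_SKIN"
        self.skin_threshold = skin_threshold
        self.min_lum_l1 = min_lum_l1

    def update(self, lum_l1, lum_l2):
        if self.state == "NO_SKIN":
            # Enter SKIN_DETECT only on the skin-discrimination value.
            if lum_l1 > 0 and (lum_l1 - lum_l2) * 100.0 / lum_l1 >= self.skin_threshold:
                self.state = "SKIN_DETECT"
        else:  # SKIN_DETECT
            # Do not leave SKIN_DETECT on the discrimination value itself;
            # leave it only when the λ1 luminance drops below a threshold.
            if lum_l1 < self.min_lum_l1:
                self.state = "NO_SKIN"
        return self.state == "SKIN_DETECT"
```

•   Holding one such state object per PD, and emitting the skin detection signal from its state rather than from each raw measurement, keeps a momentary saturation of Lum#n_λ1 from flipping the output.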
  • FIG. 43 shows an outline of processing performed by the control unit 211 of the digital photo frame 171 according to the second embodiment.
•   In the control unit 211, the LED 112a is lit in the lighting period "LED λ1", the LED 112b is lit in the lighting period "LED λ2", and the LED 112a and the LED 112b are turned off in the light-off period "LED off".
•   The PD 191 1 outputs a luminance signal Lum#1_λ1 obtained by receiving the reflected light from the object irradiated with the light of wavelength λ1 by the LED 112a.
•   The PD 191 1 also outputs a luminance signal Lum#1_λ2 obtained by receiving the reflected light from the object irradiated with the light of wavelength λ2 by the LED 112b.
•   Similarly, as shown in C of FIG. 43, the PD 191 2 outputs a luminance signal Lum#2_λ1 obtained by receiving the reflected light from the object irradiated with the light of wavelength λ1 by the LED 112a.
•   The PD 191 2 also outputs a luminance signal Lum#2_λ2 obtained by receiving the reflected light from the object irradiated with the light of wavelength λ2 by the LED 112b.
•   The processing unit 231 generates a skin detection signal based on the luminance signal Lum#1_λ1, the luminance signal Lum#1_λ2, and the like output from the PD 191 1 of the control unit 211 via the AD conversion unit 95 1.
•   Likewise, the processing unit 231 generates a skin detection signal based on the luminance signal Lum#2_λ1, the luminance signal Lum#2_λ2, and the like output from the PD 191 2 of the control unit 211 via the AD conversion unit 95 2.
•   As with the processing unit 91, the processing unit 231 can hold the internal state of each PD 191n in a built-in memory (not shown) and decide, according to the transition of that internal state, whether or not to adopt the judgment based on the skin detection signal.
•   The transition of the internal state of the PD 191n is performed by the processing unit 231 in the manner described with reference to FIG. 42. Thereby, an operation error due to saturation of the PD 191n during the operation of the control unit 211 can be prevented.
  • FIG. 44 shows an outline of processing performed by the control unit 411 of the digital photo frame 331 according to the third embodiment.
  • the LED 391a 1 is turned on in the lighting period “LED λ1”, the LED 391b 1 is turned on in the lighting period “LED λ2”, and in the off period “LED off”, the LED 391a 1 and the LED 391b 1 are turned off.
  • the PD 351 receives the reflected light from the object irradiated with the light of the wavelength λ1 by the LED 391a 1, and outputs the luminance signal Lum#1_λ1 obtained thereby.
  • the PD 351 outputs a luminance signal Lum#1_λ2 obtained by receiving the reflected light from the object irradiated with the light of the wavelength λ2 by the LED 391b 1.
  • the LED 391a 2 is turned on in the lighting period “LED λ1”, the LED 391b 2 is turned on in the lighting period “LED λ2”, and in the off period “LED off”, the LED 391a 2 and the LED 391b 2 are turned off.
  • the other LED units 371 1 and 371 3 to 371 N are turned off during all of the periods “LED λ1,” “LED λ2,” and “LED off” shown in FIG.
  • the PD 351 receives the reflected light from the object irradiated with the light of the wavelength λ1 by the LED 391a 2, and outputs the luminance signal Lum#2_λ1 obtained thereby.
  • the PD 351 outputs a luminance signal Lum#2_λ2 obtained by receiving the reflected light from the object irradiated with the light of the wavelength λ2 by the LED 391b 2.
  • the processing unit 431 generates a skin detection signal based on the luminance signal Lum#1_λ1 and the luminance signal Lum#1_λ2 output from the PD 351 of the control unit 411 via the AD conversion unit 95.
  • the processing unit 431 likewise generates a skin detection signal based on the luminance signal Lum#2_λ1 and the luminance signal Lum#2_λ2 output from the PD 351 of the control unit 411 via the AD conversion unit 95.
  • the processing unit 431 holds, in a built-in memory (not shown) or the like, the internal state of the PD 351 for each irradiation by the LED unit 371n, and can decide whether or not to adopt the determination based on the skin detection signal according to the transition of the internal state.
  • the transition of the internal state of the PD 351 for each irradiation by the LED unit 371n is performed by the processing unit 431 in the same manner as described with reference to FIG. Thereby, an operation error due to saturation of the PD 351 during the operation of the control unit 411 can be prevented. (A sketch of one such time-division measurement cycle follows.)
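A rough sketch of one time-division measurement cycle of the third embodiment is given below, assuming N LED units 371n share the single PD 351. The object representation (led_a/led_b with on()/off(), read_pd351()) and the ambient subtraction in the “LED off” period are illustrative assumptions rather than the patent's literal procedure.

```python
# Illustrative sketch (assumed structure) of one measurement cycle:
# the LED units 371-1 .. 371-N are lit one at a time, and for each unit the
# single PD 351 is read during "LED lambda1", "LED lambda2" and "LED off".
def measure_cycle(led_units, read_pd351):
    """led_units: list of (led_a, led_b) pairs; read_pd351(): returns a luminance value."""
    results = []
    for n, (led_a, led_b) in enumerate(led_units, start=1):
        led_a.on()                      # lighting period "LED lambda1"
        lum_l1 = read_pd351()
        led_a.off()
        led_b.on()                      # lighting period "LED lambda2"
        lum_l2 = read_pd351()
        led_b.off()                     # off period "LED off"
        lum_off = read_pd351()          # ambient component
        # subtract the ambient component before any skin determination (assumption)
        results.append((n, lum_l1 - lum_off, lum_l2 - lum_off))
    return results
```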
  • the shortest use distance l is determined in advance, for example when the digital photo frame 1 is manufactured, based on the feel of use of the user's gesture operation.
  • the shortest use distance l is the shortest distance at which the position or movement of the user's hand or the like can be continuously recognized.
  • within the range closer than the shortest use distance l from the digital photo frame 1, there are places where the position or movement of the user's hand or the like cannot be recognized; such a range is a so-called dead zone.
  • the sensors 21 1 and 21 2 are arranged at such positions that the range (area) at the shortest use distance l or more from the digital photo frame 1 can be covered by the detection ranges of the sensors 21 1 and 21 2, respectively.
  • the analog output possible area indicated by hatching represents an area where the position and movement of the user's hand and the like can be continuously recognized reliably by the sensors 21 1 and 21 2 .
  • the distance d between the sensor 21 1 and the sensor 21 2 is obtained by the following equation (1), based on the shortest use distance l and the half angle of view θ (degrees) of the irradiation range (detection range) of the sensor 21n.
  • d = 2 × l / tan(90 − θ)   ... (1)
  • the distance d between the PD 191 1 and the PD 191 2 is also obtained using the above-described equation (1).
  • FIG. 47 shows an example of a state in which the digital photo frame 171 is viewed from the lower side in FIG.
  • the half angle of view of the detection range of the PD 191n is θ. Then, the PD 191 1 and the PD 191 2 are separated from each other by the distance d obtained by the equation (1).
  • the portion (most) of the irradiation range of the LED unit 192 that is separated by a distance equal to or longer than the shortest usable distance l is an analog output possible region.
  • the distance d between the LED unit 371 1 and the LED unit 371 2 is also obtained using the above-described equation (1).
  • FIG. 48 shows an example of the digital photo frame 331 seen from the lower side in FIG.
  • the half angle of view of the irradiation range of the LED unit 371n is θ. Then, the LED unit 371 1 and the LED unit 371 2 are separated from each other by the distance d obtained by the equation (1).
  • a portion (most of) of the detection range of the PD 351 that is separated by a distance equal to or longer than the shortest usable distance l is an analog output possible region.
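For concreteness, equation (1) can be evaluated as follows; the numerical values for l and θ are hypothetical examples, not values given in the disclosure.

```python
# Worked example of equation (1): the spacing d that keeps the analog output
# possible area continuous from the shortest use distance l onward.
import math

def sensor_spacing(l_min, half_angle_deg):
    # d = 2 * l / tan(90 - theta)   ... equation (1)
    return 2.0 * l_min / math.tan(math.radians(90.0 - half_angle_deg))

# e.g. for an assumed shortest use distance of 0.2 m and a half angle of 30 degrees:
# sensor_spacing(0.2, 30.0) is about 0.23 m between sensor 21-1 and sensor 21-2
```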
  • FIG. 49 shows an example of the digital photo frame 1 viewed from the lower side in FIG.
  • a material 501 whose reflectance for the wavelength λ1 and reflectance for the wavelength λ2 are equal is disposed in front of the sensors 21 1 and 21 2.
  • as the material 501, for example, a gray sheet or the mirror surface of a mirror is used.
  • FIG. 50 shows an example of adjusting the outputs of the LEDs of the sensor 21 1 and the sensor 21 2 based on the luminance signals output from the sensor 21 1 and the sensor 21 2.
  • the sensor 21 1 irradiates the material 501 with the light of the wavelength λ1 during the lighting period “LED λ1” shown in A of FIG. 50, and receives the reflected light from the material 501 irradiated with the light of the wavelength λ1.
  • the sensor 21 1 generates a luminance signal Lum#1_λ1 according to the received light, and outputs it to the processing unit 91 via the AD conversion unit 95 1.
  • the sensor 21 1 irradiates the material 501 with the light of the wavelength λ2 during the lighting period “LED λ2” shown in A of FIG. 50, and receives the reflected light from the material 501 irradiated with the light of the wavelength λ2.
  • the sensor 21 1 generates a luminance signal Lum#1_λ2 according to the received light, and outputs it to the processing unit 91 via the AD conversion unit 95 1.
  • the sensor 21 2, in the same manner as the sensor 21 1, generates a luminance signal Lum#2_λ1 and a luminance signal Lum#2_λ2, and outputs them to the processing unit 91 via the AD conversion unit 95 2.
  • that is, the sensor 21 2 irradiates the material 501 with the light of the wavelength λ1 during the lighting period “LED λ1” shown in B of FIG. 50, and receives the reflected light from the material 501 irradiated with the light of the wavelength λ1.
  • the sensor 21 2 generates a luminance signal Lum#2_λ1 according to the received light, and outputs it to the processing unit 91 via the AD conversion unit 95 2.
  • the sensor 21 2 irradiates the material 501 with the light of the wavelength λ2 during the lighting period “LED λ2” shown in B of FIG. 50, and receives the reflected light from the material 501 irradiated with the light of the wavelength λ2.
  • the sensor 21 2 generates a luminance signal Lum#2_λ2 according to the received light, and outputs it to the processing unit 91 via the AD conversion unit 95 2.
  • based on the luminance signal Lum#1_λ1 and the luminance signal Lum#1_λ2 from the sensor 21 1 and the luminance signal Lum#2_λ1 and the luminance signal Lum#2_λ2 from the sensor 21 2, the processing unit 91 adjusts the sensor 21 1 and the sensor 21 2 so that, as shown in D of FIG. 50, Lum#1_λ1, Lum#1_λ2, Lum#2_λ1, and Lum#2_λ2 as luminance values all become equal.
  • that is, the processing unit 91 adjusts the irradiation outputs of the LED 112a 1 and the LED 112b 1 of the sensor 21 1, and of the LED 112a 2 and the LED 112b 2 of the sensor 21 2.
  • as the adjustment of the LED output, for example, adjustment of the current to the LED by a variable resistor connected to the LED, PWM (Pulse Width Modulation) output adjustment for current control, correction by a program, or the like can be adopted.
  • the adjustment of the LED output is similarly performed for the digital photo frame 171 according to the second embodiment and the digital photo frame 331 according to the third embodiment. (A hedged calibration sketch follows.)
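A hedged sketch of such a calibration loop is shown below, modeling the LED output adjustment as a PWM duty value that is nudged until every luminance reading matches a common target; the gain K, the duty representation, and the function names are assumptions for illustration only.

```python
# Sketch of the calibration step with the reference material 501 in front of
# the sensors: trim each LED drive (modeled as a PWM duty in 0..1) until all
# four luminance values agree with a common target.
K = 0.01  # assumed proportional adjustment gain

def calibrate_led(duties, read_lums, target, iterations=100):
    """duties: dict led_name -> PWM duty (0..1); read_lums(duties) -> dict led_name -> luminance."""
    for _ in range(iterations):
        lums = read_lums(duties)  # Lum#1_l1, Lum#1_l2, Lum#2_l1, Lum#2_l2
        for name, lum in lums.items():
            # raise the duty if the reading is below target, lower it if above
            duties[name] = min(1.0, max(0.0, duties[name] + K * (target - lum)))
    return duties
```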
  • FIG. 51 shows an example when an object as skin moves in the left-right direction.
  • consider calculating the position of an object as skin within the range from X1 to X2 shown in FIG.
  • the range from X1 to X2 is, for example, the analog output possible area shown in FIG.
  • X1 is set to 0 and X2 is set to 639.
  • the processing unit 91 can calculate (compute) the position X of the object as skin based on, for example, the luminance signal Lum#1_λ1 supplied from the PD 114 1 of the sensor 21 1 via the AD conversion unit 95 1 and the luminance signal Lum#2_λ1 supplied from the PD 114 2 of the sensor 21 2 via the AD conversion unit 95 2.
  • the processing unit 91 outputs the position X to the CPU 61 as gesture recognition information.
  • alternatively, the CPU 61 may acquire the luminance signal Lum#1_λ1 and the luminance signal Lum#2_λ1 from the processing unit 91 as the gesture recognition information, and recognize the object as skin by calculating its position X based on the acquired gesture recognition information.
  • the position X of the object as the skin can be calculated in the same manner even when three sensors 21 1 to 21 3 are provided.
  • FIG. 53 shows an example in which an object as skin moves in the left-right direction.
  • consider calculating the position of an object as skin within the range from X1 to X3 shown in FIG.
  • the range from X1 to X3 is, for example, the region where analog output is possible when configured as shown in FIG.
  • X1 is set to 0
  • X2 is set to 320
  • X3 is set to 639.
  • the position X of the object as the skin can be calculated by the following equation (3).
  • X = {X1 × L1 / (L1 + L2 + L3)} + {X2 × L2 / (L1 + L2 + L3)} + {X3 × L3 / (L1 + L2 + L3)}   ... (3)
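Equation (3), and its obvious two-sensor counterpart, amounts to a luminance-weighted average of the sensor positions. The sketch below assumes that L1 to L3 are the λ1 luminance values obtained for the sensors located at X1 to X3, which is implied but not stated verbatim above.

```python
# Luminance-weighted position estimate, implementing equation (3) and its
# two-sensor counterpart.
def position_x(xs, lums):
    """xs: sensor positions, e.g. [0, 320, 639]; lums: e.g. [Lum#1_l1, Lum#2_l1, Lum#3_l1]."""
    total = sum(lums)
    if total == 0:
        return None  # nothing detected
    return sum(x * l / total for x, l in zip(xs, lums))

# Two sensors (X1 = 0, X2 = 639):   position_x([0, 639], [lum1, lum2])
# Three sensors (X1, X2, X3):       position_x([0, 320, 639], [lum1, lum2, lum3])
```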
  • FIG. 54 shows an example of how the user performs a click operation on the digital photo frame 1.
  • the user performs a click operation of bringing his / her hand close to the digital photo frame 1 and returning it to the original position.
  • if the digital photo frame 1 recognizes the user's click operation and performs processing according to the recognized click operation, a more intuitive gesture operation can be performed on the digital photo frame 1.
  • note that the same recognition method is used also when the CPU 61 acquires the luminance signal Lum#n_λ1 from the processing unit 91 as the gesture recognition information and recognizes the user's click operation based on the gesture recognition information.
  • the processing unit 91 calculates Lsum by the following equation (4) based on the luminance signal Lum#n_λ1 supplied from each sensor 21n via the AD conversion unit 95n.
  • Lsum = Σ(Lum#n_λ1)   ... (4)
  • the digital photo frame 1 is provided with two sensors 21 1 and 21 2. Therefore, the luminance signal Lum#1_λ1 is supplied from the sensor 21 1 to the processing unit 91 via the AD conversion unit 95 1, and the luminance signal Lum#2_λ1 is supplied from the sensor 21 2 to the processing unit 91 via the AD conversion unit 95 2.
  • the processing unit 91 recognizes (detects) the click operation based on the calculated change in Lsum.
  • a graph indicated by a solid line represents Lsum.
  • a graph indicated by a dotted line represents d (Lsum) obtained by differentiating Lsum.
  • the processing unit 91 differentiates the calculated Lsum, and when the resulting graph of d(Lsum) crosses zero, that is, when d(Lsum) crosses the value 0 as shown in FIG. 55, it recognizes that a click operation by the user has been performed (a sketch of this detection is given below).
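A minimal sketch of this click detection follows; the frame-by-frame differencing used in place of a true derivative, and the choice of the positive-to-negative crossing, are assumptions of this illustration.

```python
# Sketch of click recognition: Lsum is the sum of the lambda1 luminance values
# (equation (4)), d(Lsum) is its frame-to-frame difference, and a click is
# reported when d(Lsum) crosses zero (the positive-to-negative crossing is
# taken here to correspond to the hand's closest approach).
class ClickDetector:
    def __init__(self):
        self.prev_lsum = None
        self.prev_d = 0.0

    def update(self, lums_l1):
        lsum = sum(lums_l1)                      # Lsum = sum(Lum#n_lambda1)  ... (4)
        clicked = False
        if self.prev_lsum is not None:
            d = lsum - self.prev_lsum            # d(Lsum)
            if self.prev_d > 0 and d <= 0:       # zero crossing
                clicked = True
            self.prev_d = d
        self.prev_lsum = lsum
        return clicked
```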
  • however, since Lsum also changes when the user's hand moves in, for example, the X direction or the Y direction, the processing unit 91 may erroneously detect (recognize) the user's click operation.
  • the X direction is a direction perpendicular to the Z direction, which is the normal direction of the display screen 1a, and is a direction in which the sensor 21 1 and the sensor 21 2 in FIG. 1 exist (the left-right direction in FIG. 1).
  • the Y direction is a direction perpendicular to the Z direction and the X direction (the vertical direction in FIG. 1).
  • FIG. 57 is a diagram for explaining a method for more accurately recognizing a user's click operation using the above-described human tendency.
  • the graph indicated by the solid line represents the position X calculated by the processing unit 91.
  • a graph indicated by a dotted line represents whether or not to recognize a click operation by d (Lsum).
  • the horizontal axis represents time
  • the left vertical axis in the figure represents the position X as a value of a graph indicated by a solid line.
  • the vertical axis on the right side in the figure represents the value gate of the graph indicated by the dotted line.
  • a graph indicated by a solid line represents d (Lsum) calculated by the processing unit 91.
  • the horizontal axis represents time
  • the vertical axis represents d (Lsum) as a value of a graph indicated by a solid line.
  • a position X of an object (such as a user's hand) as skin is calculated.
  • the processing unit 91 calculates a dotted line graph (value gate) shown in A of FIG.
  • the processing unit 91 also treats a case where a very small change in the position X (for example, a change within 1 pix) has occurred as a case where the position X does not change.
  • while the position X is changing, the processing unit 91 does not perform the recognition of the click operation by d(Lsum), as shown in FIG.
  • while the position X is not changing, the processing unit 91 performs the recognition of the click operation by d(Lsum), as shown in FIG.
  • in this way, the processing unit 91 can recognize the click operation more accurately. The same applies to the second and third embodiments. (A sketch of this gating is given below.)
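The gating described in the preceding items can be combined with the earlier click-detection sketch as follows; the 1 pix tolerance follows the text, while the class structure and the reuse of the hypothetical ClickDetector and position_x sketches are assumptions.

```python
# Sketch of the gating: the click decision by d(Lsum) is only accepted while
# the calculated position X is effectively stationary (changes within 1 pix
# are treated as no change). Builds on the ClickDetector and position_x
# sketches shown earlier.
POSITION_TOLERANCE = 1.0  # pix

class GatedClickDetector:
    def __init__(self):
        self.detector = ClickDetector()
        self.prev_x = None

    def update(self, lums_l1, xs):
        x = position_x(xs, lums_l1)
        clicked = self.detector.update(lums_l1)
        gate_open = (
            self.prev_x is not None
            and x is not None
            and abs(x - self.prev_x) <= POSITION_TOLERANCE
        )
        self.prev_x = x
        # recognize the click only while the position X is not moving
        return clicked and gate_open
```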
  • 1 digital photo frame, 1a display screen, 21 1 to 21 N sensor, 61 CPU, 62 ROM, 63 RAM, 64 bus, 65 I/O interface, 66 control unit, 67 display unit, 68 storage unit, 69 drive, 91 processing unit, 92 current control unit, 93 timing control unit, 94 gain control unit, 95 AD conversion unit, 111 LED driver, 112a, 112b LED, 113a, 113b lens, 114n PD, 115n lens, 131, 151, 171 digital photo frame, 191 1 to 191 N PD, 192 LED unit, 211 control unit, 231 processing unit, 232 lens, 331 digital photo frame, 351 PD, 352 lens, 371 1 to 371 N LED unit, 391a, 391b LED, 392a, 392b lens, 411 control unit, 431 processing unit

Abstract

The present disclosure pertains to an information processing device, an information processing method, a program, and an electronic apparatus that are able to accurately recognize the position, movement, and the like of an object that is considered to be skin. An LED unit has a first LED, which radiates light of a first wavelength, and a second LED, which radiates light of a second wavelength that differs from the first wavelength. A photodetector (PD) generates a first detection signal in response to having received reflected light from the object to which the light of the first wavelength has been radiated, and generates a second detection signal in response to having received reflected light from the object to which the light of the second wavelength has been radiated. A processing unit detects whether the object is skin on the basis of the first and second detection signals, and generates recognition information for recognizing the position and/or movement of the object detected as skin on the basis of the first and/or second detection signals. The present disclosure can be applied, for example, to an electronic apparatus for recognizing gestures or the like of an object that is considered to be skin.

Description

情報処理装置、情報処理方法、プログラム、及び電子機器Information processing apparatus, information processing method, program, and electronic apparatus
 本開示は、特に、例えば、情報処理装置、情報処理方法、プログラム、及び電子機器に関し、肌と肌以外の物体を区別して、肌としての物体の位置や動きなどを認識できるようにした情報処理装置、情報処理方法、プログラム、及び電子機器に関する。 The present disclosure particularly relates to, for example, an information processing apparatus, an information processing method, a program, and an electronic device, and distinguishes between an object other than skin and an object so that the position and movement of the object as skin can be recognized. The present invention relates to an apparatus, an information processing method, a program, and an electronic device.
 例えば、近接した物体が肌であるときに、オン状態又はオフ状態の一方に切り替えられる肌近接スイッチが存在する(例えば特許文献1参照)。 For example, there is a skin proximity switch that can be switched to one of an on state and an off state when an adjacent object is skin (see, for example, Patent Document 1).
 この肌近接スイッチでは、物体の近接を検知し、検知した物体が肌(例えば人間の指先等)であるか否かを判別する。そして、肌近接スイッチは、検知した物体が肌であると判別したことに対応して、オン状態又はオフ状態の一方に切り替えられる。 This skin proximity switch detects the proximity of an object and determines whether the detected object is skin (for example, a human fingertip). The skin proximity switch is switched to either the on state or the off state in response to determining that the detected object is skin.
WO2010/117006号公報WO 2010/117006
 しかしながら、上述の肌近接スイッチでは、例えば、近接した物体が肌であるか否かを判別することはできるものの、肌として判別した物体の動きなどを検知することは困難である。 However, with the above-described skin proximity switch, for example, although it is possible to determine whether or not an adjacent object is skin, it is difficult to detect the movement of the object determined as skin.
 本開示は、このような状況に鑑みてなされたものであり、肌と肌以外の物体とを区別して、肌としての物体の位置や動きなどを認識できるようにするものである。 The present disclosure has been made in view of such a situation, and makes it possible to recognize the position and movement of an object as skin by distinguishing between the skin and an object other than the skin.
 本開示の第1の側面の情報処理装置は、第1の波長の光を照射する第1の照射部と、前記第1の波長とは異なる第2の波長の光を照射する第2の照射部とを有する照射ユニットと、前記第1の波長の光が照射されている物体からの反射光を受光したことに応じて、第1の検出用信号を生成し、前記第2の波長の光が照射されている前記物体からの反射光を受光したことに応じて、第2の検出用信号を生成する受光部と、前記第1及び第2の検出用信号に基づいて、前記物体が肌であるか否かを検出する肌検出部と、前記第1又は第2の検出用信号の少なくとも一方に基づいて、肌として検出された前記物体の位置又は動きの少なくとも一方を認識するための認識情報を生成する生成部とを含む情報処理装置である。 The information processing apparatus according to the first aspect of the present disclosure includes a first irradiation unit that irradiates light having a first wavelength, and second irradiation that irradiates light having a second wavelength different from the first wavelength. And generating a first detection signal in response to receiving the reflected light from the object irradiated with the light having the first wavelength and the light having the second wavelength. A light receiving unit that generates a second detection signal in response to receiving reflected light from the object irradiated with light, and the object is skinned based on the first and second detection signals. Recognition for recognizing at least one of the position or movement of the object detected as skin based on at least one of the first or second detection signal and the skin detection unit for detecting whether or not An information processing apparatus including a generation unit that generates information.
 前記受光部と同様に構成された他の受光部をさらに設けることができ、前記生成部には、複数の前記受光部毎に生成された前記第1又は第2の検出用信号の少なくとも一方に基づいて、前記認識情報を生成させることができる。 Another light receiving unit having the same configuration as the light receiving unit can be further provided, and the generation unit includes at least one of the first or second detection signals generated for each of the plurality of light receiving units. Based on this, the recognition information can be generated.
 前記第1の照射部は、複数の前記受光部のそれぞれから同一の距離に配置されており、前記生成部には、前記複数の受光部毎に生成された前記第1の検出用信号に基づいて、前記認識情報を生成させることができる。 The first irradiation unit is arranged at the same distance from each of the plurality of light receiving units, and the generation unit is based on the first detection signal generated for each of the plurality of light receiving units. Thus, the recognition information can be generated.
 前記受光部は、前記第1の照射部と前記第2の照射部から、それぞれ同一の距離に配置されており、前記肌検出部には、前記受光部により生成された前記第1及び第2の検出用信号に基づいて、前記物体が肌であるか否かを検出させることができる。 The light receiving unit is disposed at the same distance from the first irradiation unit and the second irradiation unit, respectively, and the skin detection unit includes the first and second generated by the light receiving unit. Based on the detection signal, it can be detected whether or not the object is skin.
 前記照射ユニットと同様に構成された他の照射ユニットをさらに設けることができ、前記受光部には、異なるタイミングで前記第1の波長の光を照射する前記照射ユニット毎に、前記第1の検出用信号を生成させ、異なるタイミングで前記第2の波長の光を照射する前記照射ユニット毎に、前記第2の検出用信号を生成させ、前記生成部には、前記照射ユニット毎に生成された前記第1又は第2の検出用信号の少なくとも一方に基づいて、前記認識情報を生成させることができる。 Another irradiation unit configured in the same manner as the irradiation unit may be further provided, and the first detection is performed for each of the irradiation units that irradiate the light of the first wavelength at different timings in the light receiving unit. And generating the second detection signal for each of the irradiation units that irradiate the light of the second wavelength at different timings, and generating the generated signal for each of the irradiation units. The recognition information can be generated based on at least one of the first or second detection signal.
 前記生成部には、第1の方向における前記第1の照射部どうしの間隔が、前記第2の照射部どうしの間隔よりも長い場合、前記受光部により、前記複数の照射ユニット毎に生成された前記第1の検出用信号に基づいて、前記第1の方向に垂直な第2の方向における前記物体の位置又は動きの少なくとも一方を認識するための前記認識情報を生成させることができる。 When the interval between the first irradiation units in the first direction is longer than the interval between the second irradiation units, the generation unit generates the plurality of irradiation units by the light receiving unit. Based on the first detection signal, the recognition information for recognizing at least one of the position or movement of the object in a second direction perpendicular to the first direction can be generated.
 前記照射ユニットと前記受光部とを有する複数のセンサであって、前記センサ毎に異なる照射範囲を照射する前記照射ユニットを有する前記センサをさらに設けることができ、前記肌検出部には、前記複数のセンサ毎に生成される前記第1及び第2の検出用信号に基づいて、前記物体が肌であるか否かを検出させ、前記生成部には、前記複数のセンサ毎に生成される前記第1又は第2の検出用信号の少なくとも一方に基づいて、前記認識情報を生成させることができる。 A plurality of sensors having the irradiation unit and the light receiving unit, the sensor having the irradiation unit that irradiates a different irradiation range for each sensor may be further provided, and the skin detection unit includes the plurality of sensors. Based on the first and second detection signals generated for each sensor, it is detected whether the object is skin, and the generation unit generates the plurality of sensors. The recognition information can be generated based on at least one of the first and second detection signals.
 前記第1の検出用信号に基づいて、前記物体が予め決められた検知範囲内に侵入したか否かを検出する近接検出部をさらに設けることができ、前記肌検出部には、前記物体が前記検知範囲内に侵入したと検出されたことに対応して、前記第1及び第2の検出用信号に基づいて、前記物体が肌であるか否かを検出させることができる。 A proximity detection unit that detects whether the object has entered a predetermined detection range based on the first detection signal can be further provided, and the skin detection unit includes the object Corresponding to the detection of having entered the detection range, it is possible to detect whether the object is skin based on the first and second detection signals.
 前記物体が肌であることが検出されたことに対応して、前記第1の検出用信号に基づき、前記物体が予め決められた検知範囲内に存在するか否かを検出する物体検出部をさらに設けることができ、前記情報処理装置では、前記物体が前記検知範囲内に存在すると検出された場合、前記物体が肌であるものとして取り扱うようにすることができる。 In response to detecting that the object is skin, an object detection unit that detects whether or not the object exists within a predetermined detection range based on the first detection signal. Further, the information processing apparatus can handle the object as skin if it is detected that the object is within the detection range.
 前記物体の位置に応じた大きさの出力信号を生成する信号生成部をさらに設けることができ、前記生成部には、前記出力信号にも基づいて、前記認識情報を生成させることができる。 A signal generation unit that generates an output signal having a magnitude according to the position of the object can be further provided, and the generation unit can generate the recognition information based on the output signal.
 前記第1の波長λ1、及び前記第1の波長λ1よりも長波長である前記第2の波長λ2は、
 640nm ≦ λ1 ≦ 1000nm
 900nm ≦ λ2 ≦ 1100nm
 を満たすようにすることができる。
The first wavelength λ1 and the second wavelength λ2 that is longer than the first wavelength λ1 are:
640nm ≤ λ1 ≤ 1000nm
900nm ≤ λ2 ≤ 1100nm
Can be met.
 前記第1の照射部には、前記第1の波長λ1の不可視光を照射させ、前記第2の照射部には、前記第2の波長λ2の不可視光を照射させることができる。 The first irradiation unit can be irradiated with invisible light having the first wavelength λ1, and the second irradiation unit can be irradiated with invisible light having the second wavelength λ2.
 前記受光部には、前記受光部に入射される可視光を遮断する可視光カットフィルタが設けられているようにすることができる。 The light receiving unit may be provided with a visible light cut filter that blocks visible light incident on the light receiving unit.
 本開示の第1の側面の情報処理方法は、第1の波長の光を照射する第1の照射部と、前記第1の波長とは異なる第2の波長の光を照射する第2の照射部とを有する照射ユニットと、前記第1の波長の光が照射されている物体からの反射光を受光したことに応じて、第1の検出用信号を生成し、前記第2の波長の光が照射されている前記物体からの反射光を受光したことに応じて、第2の検出用信号を生成する受光部とを含む情報処理装置の情報処理方法であって、前記情報処理装置による、前記第1及び第2の検出用信号に基づいて、前記物体が肌であるか否かを検出する肌検出ステップと、前記第1又は第2の検出用信号の少なくとも一方に基づいて、肌として検出された前記物体の位置又は動きの少なくとも一方を認識するための認識情報を生成する生成ステップとを含む情報処理方法である。 An information processing method according to the first aspect of the present disclosure includes a first irradiation unit that irradiates light having a first wavelength, and a second irradiation that irradiates light having a second wavelength different from the first wavelength. And generating a first detection signal in response to receiving the reflected light from the object irradiated with the light having the first wavelength and the light having the second wavelength. An information processing method of an information processing apparatus including a light receiving unit that generates a second detection signal in response to reception of reflected light from the object that is irradiated by the information processing apparatus, Based on the first and second detection signals, the skin detection step for detecting whether or not the object is skin and the skin based on at least one of the first or second detection signals Recognition for recognizing at least one of the detected position or movement of the object An information processing method comprising: a generation step of generating a broadcast.
 本開示の第1の側面のプログラムは、第1の波長の光を照射する第1の照射部と、前記第1の波長とは異なる第2の波長の光を照射する第2の照射部とを有する照射ユニットと、前記第1の波長の光が照射されている物体からの反射光を受光したことに応じて、第1の検出用信号を生成し、前記第2の波長の光が照射されている前記物体からの反射光を受光したことに応じて、第2の検出用信号を生成する受光部とを含む情報処理装置のコンピュータを、前記第1及び第2の検出用信号に基づいて、前記物体が肌であるか否かを検出する肌検出部と、前記第1又は第2の検出用信号の少なくとも一方に基づいて、肌として検出された前記物体の位置又は動きの少なくとも一方を認識するための認識情報を生成する生成部として機能させるためのプログラムである。 The program according to the first aspect of the present disclosure includes a first irradiation unit that irradiates light having a first wavelength, and a second irradiation unit that irradiates light having a second wavelength different from the first wavelength. The first detection signal is generated in response to receiving the reflected light from the irradiation unit having the irradiation unit and the object irradiated with the light of the first wavelength, and the light of the second wavelength is irradiated A computer of an information processing apparatus including a light receiving unit that generates a second detection signal in response to receiving reflected light from the object that is reflected on the basis of the first and second detection signals. Then, based on at least one of the skin detection unit that detects whether or not the object is skin, and at least one of the first or second detection signal, at least one of the position or movement of the object detected as skin Function as a generator that generates recognition information for recognizing Which is the program.
 本開示の第1の側面によれば、前記第1及び第2の検出用信号に基づいて、前記物体が肌であるか否かが検出され、前記第1又は第2の検出用信号の少なくとも一方に基づいて、肌として検出された前記物体の位置又は動きの少なくとも一方を認識するための認識情報が生成される。 According to the first aspect of the present disclosure, it is detected whether the object is skin based on the first and second detection signals, and at least one of the first or second detection signals. Based on one, recognition information for recognizing at least one of the position or movement of the object detected as skin is generated.
 本開示の第2の側面の電子機器は、第1の波長の光を照射する第1の照射部と、前記第1の波長とは異なる第2の波長の光を照射する第2の照射部とを有する照射ユニットと、前記第1の波長の光が照射されている物体からの反射光を受光したことに応じて、第1の検出用信号を生成し、前記第2の波長の光が照射されている前記物体からの反射光を受光したことに応じて、第2の検出用信号を生成する受光部と、前記第1及び第2の検出用信号に基づいて、前記物体が肌であるか否かを検出する肌検出部と、前記第1又は第2の検出用信号の少なくとも一方に基づいて、肌として検出された前記物体の位置又は動きの少なくとも一方を認識するための認識情報を生成する生成部と、前記認識情報に基づく認識結果に応じて、対応する処理を行う処理部とを含む電子機器である。 An electronic apparatus according to a second aspect of the present disclosure includes a first irradiation unit that irradiates light having a first wavelength, and a second irradiation unit that emits light having a second wavelength different from the first wavelength. And generating a first detection signal in response to receiving the reflected light from the object irradiated with the light of the first wavelength, and the light of the second wavelength A light receiving unit that generates a second detection signal in response to receiving reflected light from the irradiated object, and the object is skinned based on the first and second detection signals. Recognition information for recognizing at least one of the position or movement of the object detected as skin based on at least one of the skin detection unit for detecting whether or not there is the first or second detection signal And a corresponding process according to the recognition result based on the recognition information An electronic device comprising a processing unit that performs.
 本開示の第2の側面によれば、前記第1及び第2の検出用信号に基づいて、前記物体が肌であるか否かが検出され、前記第1又は第2の検出用信号の少なくとも一方に基づいて、肌として検出された前記物体の位置又は動きの少なくとも一方を認識するための認識情報が生成され、前記認識情報に基づく認識結果に応じて、対応する処理が行われる。 According to the second aspect of the present disclosure, it is detected whether the object is skin based on the first and second detection signals, and at least one of the first or second detection signals. Based on one, recognition information for recognizing at least one of the position or movement of the object detected as skin is generated, and corresponding processing is performed according to the recognition result based on the recognition information.
 本開示によれば、肌としての物体の位置や動きなどを精度良く認識することが可能となる。 According to the present disclosure, it is possible to accurately recognize the position and movement of an object as skin.
第1の実施の形態であるデジタルフォトフレームの構成例を示す図である。It is a figure which shows the structural example of the digital photo frame which is 1st Embodiment. 物体の位置や動きを認識する認識方法を説明するための第1の図である。It is a 1st figure for demonstrating the recognition method which recognizes the position and motion of an object. 物体の位置や動きを認識する認識方法を説明するための第2の図である。It is a 2nd figure for demonstrating the recognition method which recognizes the position and motion of an object. 図1において下側からデジタルフォトフレームを見たときの様子の第1の例を示す図である。It is a figure which shows the 1st example of a mode when a digital photo frame is seen from the lower side in FIG. 物体が、図4に示したような動きをした場合に、センサから出力される出力結果の一例を示す図である。FIG. 5 is a diagram illustrating an example of an output result output from a sensor when an object moves as illustrated in FIG. 4. 物体が、図4に示したような動きをした場合、センサの出力結果から得られる肌検出信号の第1の例を示す図である。FIG. 5 is a diagram showing a first example of a skin detection signal obtained from an output result of a sensor when an object moves as shown in FIG. 4. 図1において下側からデジタルフォトフレームを見たときの様子の第2の例を示す図である。It is a figure which shows the 2nd example of a mode when a digital photo frame is seen from the lower side in FIG. 物体が、図7に示したような動きをした場合に、センサから出力される出力結果の一例を示す図である。It is a figure which shows an example of the output result output from a sensor, when an object moves as shown in FIG. 物体が、図7に示したような動きをした場合、センサの出力結果から得られる肌検出信号の第2の例を示す図である。It is a figure which shows the 2nd example of the skin detection signal obtained from the output result of a sensor, when an object moves as shown in FIG. 図1のデジタルフォトフレームの詳細な構成例を示すブロック図である。It is a block diagram which shows the detailed structural example of the digital photo frame of FIG. 図10の制御部の詳細な構成例を示すブロック図である。It is a block diagram which shows the detailed structural example of the control part of FIG. 人間の肌に対する分光反射特性を示す図である。It is a figure which shows the spectral reflection characteristic with respect to human skin. 各センサにおける照射タイミングの一例を示す図である。It is a figure which shows an example of the irradiation timing in each sensor. トライアングル状に配置された3個のセンサを有するデジタルフォトフレームの構成例を示す図である。It is a figure which shows the structural example of the digital photo frame which has three sensors arrange | positioned at triangle shape. L字状に配置された3個のセンサを有するデジタルフォトフレームの構成例を示す図である。It is a figure which shows the structural example of the digital photo frame which has three sensors arrange | positioned at L shape. 図1のデジタルフォトフレームが行う第1のジェスチャ認識処理を説明するためのフローチャートである。It is a flowchart for demonstrating the 1st gesture recognition process which the digital photo frame of FIG. 1 performs. 注目センサが行うVλ1取得処理の詳細を説明するためのフローチャートである。It is a flowchart for demonstrating the detail of the V ( lambda) 1 acquisition process which an attention sensor performs. 注目センサが行うVλoff取得処理の詳細を説明するためのフローチャートである。It is a flowchart for demonstrating the detail of the V ( lambda) off acquisition process which an attention sensor performs. 注目センサの処理部が行う肌判別処理の詳細を説明するためのフローチャートである。It is a flowchart for demonstrating the detail of the skin discrimination | determination process which the process part of an attention sensor performs. 図1のデジタルフォトフレームが行う第2のジェスチャ認識処理を説明するためのフローチャートである。It is a flowchart for demonstrating the 2nd gesture recognition process which the digital photo frame of FIG. 1 performs. 図1のデジタルフォトフレームが行う近接物体検出処理を説明するためのフローチャートである。3 is a flowchart for explaining proximity object detection processing performed by the digital photo frame of FIG. 1. 第2の実施の形態であるデジタルフォトフレームの構成例を示す図である。It is a figure which shows the structural example of the digital photo frame which is 2nd Embodiment. 
図22において下側からデジタルフォトフレームを見たときの様子の一例を示す図である。It is a figure which shows an example when a digital photo frame is seen from the lower side in FIG. 物体が、図23に示したような動きをした場合に、PDから出力される出力結果の一例を示す図である。FIG. 24 is a diagram illustrating an example of an output result output from a PD when an object moves as illustrated in FIG. 図22のデジタルフォトフレームの詳細な構成例を示すブロック図である。It is a block diagram which shows the detailed structural example of the digital photo frame of FIG. 図25の制御部の詳細な構成例を示すブロック図である。It is a block diagram which shows the detailed structural example of the control part of FIG. 図22のデジタルフォトフレームが行う第3のジェスチャ認識処理を説明するためのフローチャートである。It is a flowchart for demonstrating the 3rd gesture recognition process which the digital photo frame of FIG. 22 performs. 2個のPDを結ぶ線分の中心を通る、表示画面と垂直な法線上に存在するLEDユニットの一例を示す第1の図である。It is the 1st figure which shows an example of the LED unit which exists on the normal line perpendicular | vertical to a display screen which passes along the center of the line segment which connects two PD. 2個のPDを結ぶ線分の中心を通る、表示画面と垂直な法線上に存在するLEDユニットの一例を示す第2の図である。It is the 2nd figure which shows an example of the LED unit which exists on the normal line perpendicular | vertical to a display screen which passes along the center of the line segment which connects two PD. デジタルフォトフレームにおいて、3個のPDを設けるようにした場合の一例を示す図である。It is a figure which shows an example at the time of providing 3 PD in a digital photo frame. デジタルフォトフレームにおいて、4個のPDを設けるようにした場合の一例を示す図である。It is a figure which shows an example at the time of providing four PD in a digital photo frame. デジタルフォトフレームにおいて、3個のPDと、2個のLEDユニットを設けるようにした場合の一例を示す図である。It is a figure which shows an example at the time of providing 3 PD and 2 LED units in a digital photo frame. 第3の実施の形態であるデジタルフォトフレームの構成例を示す図である。It is a figure which shows the structural example of the digital photo frame which is 3rd Embodiment. PDと2個のLEDユニットの位置関係の一例を示す第1の図である。It is a 1st figure which shows an example of the positional relationship of PD and two LED units. PDと2個のLEDユニットの位置関係の一例を示す第2の図である。It is a 2nd figure which shows an example of the positional relationship of PD and two LED units. 図35において、図中、左から右方向に物体が動いたときのPDの出力結果の一例を示す図である。In FIG. 35, it is a figure which shows an example of the output result of PD when an object moves to the right direction from the left in the figure. 図33のデジタルフォトフレームの詳細な構成例を示すブロック図である。It is a block diagram which shows the detailed structural example of the digital photo frame of FIG. 図37の制御部の詳細な構成例を示すブロック図である。It is a block diagram which shows the detailed structural example of the control part of FIG. 図33のデジタルフォトフレームが行う第4のジェスチャ認識処理を説明するためのフローチャートである。It is a flowchart for demonstrating the 4th gesture recognition process which the digital photo frame of FIG. 33 performs. 図11の各センサが行う処理の概要の一例を示す第1の図である。It is a 1st figure which shows an example of the outline | summary of the process which each sensor of FIG. 11 performs. 図11の各センサが行う処理の概要の一例を示す第2の図である。It is a 2nd figure which shows an example of the outline | summary of the process which each sensor of FIG. 11 performs. PDの内部状態が遷移する様子の一例を示す図である。It is a figure which shows an example of a mode that the internal state of PD changes. 図26の制御部が行う処理の概要の一例を示す図である。It is a figure which shows an example of the outline | summary of the process which the control part of FIG. 26 performs. 図38の制御部が行う処理の概要の一例を示す図である。It is a figure which shows an example of the outline | summary of the process which the control part of FIG. 38 performs. 
図1に示される2個のセンサ間の距離について説明するための第1の図である。It is a 1st figure for demonstrating the distance between two sensors shown by FIG. 図1に示される2個のセンサ間の距離について説明するための第2の図である。FIG. 4 is a second diagram for explaining a distance between two sensors shown in FIG. 1. 図22において、図中下側からデジタルフォトフレームを見た様子の一例を示す図である。In FIG. 22, it is a figure which shows an example of a mode that the digital photo frame was seen from the lower side in the figure. 図33において、図中下側からデジタルフォトフレームを見た様子の一例を示す図である。In FIG. 33, it is a figure which shows an example of a mode that the digital photo frame was seen from the lower side in the figure. LEDの出力の調整について説明するための第1の図である。It is a 1st figure for demonstrating adjustment of the output of LED. LEDの出力の調整について説明するための第2の図である。It is a 2nd figure for demonstrating adjustment of the output of LED. 肌としての物体の位置を演算する演算方法を説明するための第1の図である。It is a 1st figure for demonstrating the calculation method which calculates the position of the object as skin. 肌としての物体の位置を演算する演算方法を説明するための第2の図である。It is a 2nd figure for demonstrating the calculation method which calculates the position of the object as skin. 肌としての物体の位置を演算する演算方法を説明するための第3の図である。It is a 3rd figure for demonstrating the calculation method which calculates the position of the object as skin. ユーザがクリック動作を行う様子の一例を示す図である。It is a figure which shows an example of a mode that a user performs click operation | movement. Lsumの変化に応じてクリック動作を認識する方法の一例を説明するための図である。It is a figure for demonstrating an example of the method of recognizing click operation | movement according to the change of Lsum. ユーザの認識と、実際の手の動きとが異なることを説明するための図である。It is a figure for demonstrating that a user's recognition differs from an actual hand movement. クリック動作を認識する方法の一例を説明するための他の図である。It is another figure for demonstrating an example of the method of recognizing click action.
 以下、本開示における実施の形態(以下、実施の形態という)について説明する。なお、説明は以下の順序で行う。
1.第1の実施の形態(PD及びLEDユニットを含む複数のセンサを有する場合の一例)
2.第2の実施の形態(1個のLEDユニットと複数のPDを有する場合の一例)
3.第3の実施の形態(複数のLEDユニットと1個のPDを有する場合の一例)
4.変形例
5.その他
Hereinafter, embodiments of the present disclosure (hereinafter referred to as embodiments) will be described. The description will be given in the following order.
1. First embodiment (an example in the case of having a plurality of sensors including PD and LED unit)
2. Second embodiment (an example with one LED unit and multiple PDs)
3. Third embodiment (an example in the case of having a plurality of LED units and one PD)
4. Modifications
5. Others
<1.第1の実施の形態>
[デジタルフォトフレーム1の構成例]
 図1は、第1の実施の形態であるデジタルフォトフレーム1の構成例を示している。
<1. First Embodiment>
[Configuration example of digital photo frame 1]
FIG. 1 shows a configuration example of a digital photo frame 1 according to the first embodiment.
 このデジタルフォトフレーム1は、静止画(例えば、写真として撮像された画像)や動画を表示させる表示画面1aを有している。 The digital photo frame 1 has a display screen 1a for displaying a still image (for example, an image taken as a photograph) or a moving image.
 また、デジタルフォトフレーム1において、図1に示されるように、表示画面1aの左右方向(以下、X方向ともいう)には、ユーザの手等の位置や動き(ジェスチャ)を認識するためのセンサ211及び212が、それぞれ設けられている。 In the digital photo frame 1, as shown in FIG. 1, a sensor for recognizing the position and movement (gesture) of a user's hand or the like in the left-right direction (hereinafter also referred to as the X direction) of the display screen 1a. 21 1 and 21 2 are provided, respectively.
 なお、本開示では、ユーザの手等を、位置や動きを認識する認識対象として説明するが、認識対象としては、ユーザの手等に限定されず、人間の肌部分であれば何でも良い。 In the present disclosure, the user's hand or the like is described as a recognition target for recognizing the position or movement, but the recognition target is not limited to the user's hand or the like, and any human skin portion may be used.
 デジタルフォトフレーム1は、センサ211及び212を用いて、ユーザの手等の位置や動きを認識し、その認識結果に基づいて、例えば表示画面1aに表示させる静止画や動画の内容を変更させる。 The digital photo frame 1 uses the sensors 21 1 and 21 2 to recognize the position and movement of the user's hand, etc., and changes the content of still images and moving images displayed on the display screen 1a based on the recognition result, for example. Let
 なお、センサ211及び212は、デジタルフォトフレーム1に近接した物体が、人間の肌であるか否かを判別し、人間の肌であると判別した物体のみを対象として、位置又は動き等の少なくとも一方を認識するために用いられる。 The sensors 21 1 and 21 2 determine whether or not the object close to the digital photo frame 1 is human skin, and only the object determined to be human skin is subject to position or movement. Used to recognize at least one of
 また、以下において、センサ211及びセンサ212を区別する必要がない場合、センサ211又は212を、それぞれ、単にセンサ21という。 In the following, when it is not necessary to distinguish between the sensor 21 1 and the sensor 21 2 , the sensor 21 1 or 21 2 is simply referred to as the sensor 21.
 次に、図2及び図3を参照して、デジタルフォトフレーム1が、センサ21からの出力結果に基づいて、Z方向における物体41の位置や動きを認識する認識方法の一例を説明する。なお、Z方向とは、図1において、表示画面1aの法線方向をいう。 Next, an example of a recognition method in which the digital photo frame 1 recognizes the position and movement of the object 41 in the Z direction based on the output result from the sensor 21 will be described with reference to FIGS. The Z direction refers to the normal direction of the display screen 1a in FIG.
 図2は、センサ21に物体41が近づくように移動している場合、センサ21から出力される出力結果の一例を示している。 FIG. 2 shows an example of an output result output from the sensor 21 when the object 41 moves so as to approach the sensor 21.
 図2のAに示されるように、物体41が、センサ21に近づくように移動する場合、センサ21に物体41が近づくにつれて、センサ21からの出力Vが指数関数的に大きくなる。 2A, when the object 41 moves so as to approach the sensor 21, the output V from the sensor 21 increases exponentially as the object 41 approaches the sensor 21.
 なお、図2のAにおいて、物体41を囲む扇状の図形は、センサ21の検知範囲を示している。また、図2のBにおいて、横軸は時刻tを表しており、縦軸はセンサ21からの出力結果Vを示している。これらのことは、後述する図3のA及び図3のBについても同様である。 In FIG. 2A, the fan-shaped figure surrounding the object 41 indicates the detection range of the sensor 21. In FIG. 2B, the horizontal axis represents time t, and the vertical axis represents the output result V from the sensor 21. The same applies to A in FIG. 3 and B in FIG. 3 described later.
 図3は、センサ21から物体41が離れるように移動している場合、センサ21から出力される出力結果の一例を示している。 FIG. 3 shows an example of an output result output from the sensor 21 when the object 41 moves away from the sensor 21.
 図3のAに示されるように、物体41が、センサ21から離れるように移動する場合、センサ21から物体41が離れるにつれて、センサ21からの出力Vが指数関数的に小さくなる。 As shown in FIG. 3A, when the object 41 moves away from the sensor 21, the output V from the sensor 21 decreases exponentially as the object 41 moves away from the sensor 21.
 デジタルフォトフレーム1では、図2及び図3に示したようなセンサ21からの出力結果に基づいて、Z方向における物体41の位置や動きを認識し、その認識結果に応じて、表示画面1aの表示内容を変更する。 The digital photo frame 1 recognizes the position and movement of the object 41 in the Z direction on the basis of the output results from the sensor 21 as shown in FIGS. 2 and 3, and according to the recognition result, the display screen 1a. Change the display contents.
 次に、図4乃至図9を参照して、デジタルフォトフレーム1が、センサ21からの出力結果に基づいて、X方向における物体41の位置や動きを認識する認識方法の一例を説明する。 Next, an example of a recognition method in which the digital photo frame 1 recognizes the position and movement of the object 41 in the X direction based on the output result from the sensor 21 will be described with reference to FIGS.
 図4は、図1において図中下側からデジタルフォトフレーム1を見たときの様子の一例を示している。 FIG. 4 shows an example when the digital photo frame 1 is viewed from the lower side in FIG.
 例えば、物体41が、図4において、図中左から右方向に移動した場合、つまり、物体41が、センサ211及びセンサ212の順序で、センサ21上を移動した場合、センサ211及び212からの出力結果に基づいて、物体41の動きが認識される。 For example, when the object 41 moves from left to right in FIG. 4, that is, when the object 41 moves on the sensor 21 in the order of the sensor 21 1 and the sensor 21 2 , the sensor 21 1 and Based on the output result from 21 2 , the motion of the object 41 is recognized.
 次に、図5は、物体41が、図4に示したような動きをした場合に、センサ211及びセンサ212から、それぞれ出力される出力結果の一例を示している。 Next, FIG. 5 shows an example of output results respectively output from the sensor 21 1 and the sensor 21 2 when the object 41 moves as shown in FIG.
 図5のAには、センサ211からの出力結果が示されている。また、図5のBには、センサ212からの出力結果が示されている。なお、図5のA及び図5のBにおいて、横軸は時刻tを表しており、縦軸はセンサ21からの出力結果Vを示している。また、物体41がセンサ21の検知範囲内に存在しない場合、センサ21からは、比較的低い一定値が出力される。 FIG. 5A shows an output result from the sensor 21 1 . In FIG. 5B, the output result from the sensor 21 2 is shown. 5A and 5B, the horizontal axis represents time t, and the vertical axis represents the output result V from the sensor 21. When the object 41 does not exist within the detection range of the sensor 21, a relatively low constant value is output from the sensor 21.
 なお、物体41は、図4に示したように、センサ211の左側から近づき、センサ211の真上付近を通過して、センサ211から離れるように、センサ211の右側に存在するセンサ212に向かって移動する。 Incidentally, the object 41, as shown in FIG. 4, approaching from the left side of the sensor 21 1, passes through the vicinity directly above the sensor 21 1, away from the sensor 21 1 is present on the right side of the sensor 21 1 Move toward sensor 21 2 .
 この場合、センサ211では、図5のAに示されるように、物体41が近づくにつれ、センサ211からの出力結果が増加し、物体41がセンサ211の真上付近を通過する際に、出力結果が最大(極大)となる。そして、物体41がセンサ211を通過して離れるように移動することで、センサ211からの出力結果が減少していく。 In this case, in the sensor 21 1 , as shown in FIG. 5A, as the object 41 approaches, the output result from the sensor 21 1 increases, and when the object 41 passes near the sensor 21 1. The output result is maximized (maximum). Then, when the object 41 is moved away through the sensor 21 1, the output result from the sensor 21 1 is decreased.
 物体41は、図4に示したように、センサ211の真上あたりを通過後、センサ212の左側から近づき、センサ212の真上付近を通過して、センサ212から離れるように、センサ212の右側の方向に移動する。 Object 41, as shown in FIG. 4, after passing through the per directly above the sensor 21 1, approaching from the left side of the sensor 21 2, and passes through the vicinity directly above the sensor 21 2, away from the sensor 21 2 , Move to the right side of the sensor 21 2 .
 この場合、センサ212では、図5のBに示されるように、物体41が近づくにつれ、センサ212からの出力結果が増加し、物体41がセンサ212の真上付近を通過する際に、出力結果が最大(極大)となる。そして、物体41がセンサ212を通過して離れるように移動することで、センサ212からの出力結果が減少していく。 In this case, in the sensor 21 2 , as shown in FIG. 5B, the output result from the sensor 21 2 increases as the object 41 approaches, and when the object 41 passes near the sensor 21 2. The output result is maximized (maximum). Then, when the object 41 is moved away through the sensor 21 2, then the output from the sensor 21 2 decreases.
 すなわち、物体41が、図4に示したような動きをした場合、図5のA及び図5のBに示したように、出力結果として、上に凸な極大部分が、センサ211及びセンサ212の順序で得られる。 That is, the object 41, when the motion as shown in FIG. 4, as shown in B of A and 5 in FIG. 5, as the output result, the convex lobes above, the sensor 21 1 and the sensor In the order of 21 2 .
 このため、デジタルフォトフレーム1では、例えば、センサ211からの出力結果として極大部分が得られたタイミングと、センサ212からの出力結果として極大部分が得られたタイミングに応じて、物体41の動きを認識することができる。 For this reason, in the digital photo frame 1, for example, the object 41 is output in accordance with the timing when the maximum portion is obtained as the output result from the sensor 21 1 and the timing when the maximum portion is obtained as the output result from the sensor 21 2 . Can recognize movement.
 また、デジタルフォトフレーム1では、センサ21からの出力結果に基づいて、センサ21の検知範囲内の物体41が、人間の肌であるか否かの判別も行われ、その判別により得られる判別結果を表す肌検出信号が生成される。 Further, in the digital photo frame 1, it is also determined whether or not the object 41 within the detection range of the sensor 21 is human skin based on the output result from the sensor 21, and the determination result obtained by the determination Is generated.
 そして、デジタルフォトフレーム1では、生成した肌検出信号に基づいて、物体41が肌であることを検出した場合に、センサ211及びセンサ212からの出力結果に基づいて、物体41の位置や動きを認識する。 In the digital photo frame 1, when it is detected that the object 41 is skin based on the generated skin detection signal, the position of the object 41 is determined based on the output results from the sensors 21 1 and 21 2. Recognize movement.
 なお、肌検出信号の生成方法については、図10等を参照して詳述する。 The skin detection signal generation method will be described in detail with reference to FIG.
 次に、図6は、物体41が、図4に示したような動きをした場合、デジタルフォトフレーム1において生成される肌検出信号の一例を示している。 Next, FIG. 6 shows an example of a skin detection signal generated in the digital photo frame 1 when the object 41 moves as shown in FIG.
 図6のAに示される肌検出信号は、センサ211の検知範囲内で肌が検出されたか否かを表す。また、図5のBに示される肌検出信号は、センサ212の検知範囲内で肌が検出されたか否かを表す。 Skin detection signal shown in A of FIG. 6 represents whether the skin is detected within the detection range of the sensor 21 1. Further, the skin detection signal shown in B of FIG. 5 indicates whether or not the skin is detected within the detection range of the sensor 21 2 .
 なお、肌検出信号は、センサ21の検知範囲内に、人間の肌としての物体41が存在する場合にON(例えば1)とされ、センサ21の検知範囲内に、人間の肌としての物体41が存在しない場合にOFF(例えば0)とされる。 Note that the skin detection signal is ON (for example, 1) when an object 41 as human skin exists within the detection range of the sensor 21, and the object 41 as human skin within the detection range of the sensor 21. Is set to OFF (for example, 0) when no exists.
 また、図6のA及び図6のBにおいて、横軸は時刻tを表しており、縦軸は肌が検出されたか否かを表している。 In FIG. 6A and FIG. 6B, the horizontal axis represents time t, and the vertical axis represents whether skin has been detected or not.
 上述したように、物体41は、図4に示したように、センサ211の検知範囲を通過した後、センサ212の検知範囲を通過する。 As described above, the object 41, as shown in FIG. 4, after passing through the detection range of the sensor 21 1, passes through the detection range of the sensor 21 2.
 したがって、図6のAに示される肌検出信号、及び図6のBに示される肌検出信号の順序で、肌検出信号がONとされる。  Therefore, the skin detection signal is turned ON in the order of the skin detection signal shown in A of FIG. 6 and the skin detection signal shown in B of FIG. *
 このため、デジタルフォトフレーム1では、センサ211及びセンサ212からの出力結果に代えて、図6のA及び図6のBに示されるような肌検出信号に基づいて、物体41の位置や動き等を認識するようにしてもよい。 For this reason, in the digital photo frame 1, instead of the output results from the sensor 21 1 and the sensor 21 2 , the position of the object 41 or the like based on the skin detection signal as shown in A of FIG. 6 and B of FIG. You may make it recognize a motion etc.
 すなわち、例えば、デジタルフォトフレーム1では、図6のAに示される肌検出信号においてONとされるタイミングと、図6のBに示される肌検出信号においてONとされるタイミングに応じて、物体41の動きを認識することができる。 That is, for example, in the digital photo frame 1, the object 41 is turned on according to the timing when turned on in the skin detection signal shown in A of FIG. 6 and the timing when turned on in the skin detection signal shown in B of FIG. Can recognize the movement.
 その他、例えば、デジタルフォトフレーム1では、センサ211及びセンサ212からの出力結果と、図6のA及び図6のBに示されるような肌検出信号とに基づいて、物体41の位置や動き等を総合的に判断(認識)するようにしてもよい。 In addition, for example, in the digital photo frame 1, based on the output results from the sensors 21 1 and 21 2 and the skin detection signals as shown in FIG. 6A and FIG. You may make it judge (recognize) movement etc. comprehensively.
 図7は、図1において、図中下側からデジタルフォトフレーム1を見たときの様子の他の一例を示している。 FIG. 7 shows another example when the digital photo frame 1 is viewed from the lower side in FIG.
 例えば、物体41が、図7において、図中右から左方向に移動した場合、つまり、物体41が、センサ212及びセンサ211の順序で、センサ21上を移動した場合、センサ211及び212からの出力結果に基づいて、物体41の動きが認識される。 For example, when the object 41 moves from the right to the left in FIG. 7, that is, when the object 41 moves on the sensor 21 in the order of the sensor 21 2 and the sensor 21 1 , the sensor 21 1 and Based on the output result from 21 2 , the motion of the object 41 is recognized.
 次に、図8は、物体41が、図7に示したような動きをした場合に、センサ211及びセンサ212から、それぞれ出力される出力結果の一例を示している。 Next, FIG. 8 shows an example of output results respectively output from the sensor 21 1 and the sensor 21 2 when the object 41 moves as shown in FIG.
 図8のAには、センサ211からの出力結果が示されている。また、図8のBには、センサ212からの出力結果が示されている。なお、図8のA及び図8のBにおいて、横軸は時刻tを表しており、縦軸はセンサ21からの出力結果Vを示している。 FIG. 8A shows the output result from the sensor 21 1 . 8B shows the output result from the sensor 21 2 . 8A and 8B, the horizontal axis represents the time t, and the vertical axis represents the output result V from the sensor 21.
 なお、物体41は、図7に示したように、センサ212の真上あたりを通過後、センサ211の右側から近づき、センサ211の真上付近を通過して、センサ211から離れるように、センサ211の左側の方向に移動する。 Incidentally, the object 41, as shown in FIG. 7, after passing through the per directly above the sensor 21 2 approaches from the right side of the sensor 21 1, passes through the vicinity directly above the sensor 21 1, away from the sensor 21 1 Thus, the sensor 21 1 moves in the left direction.
この場合、センサ211では、図8のAに示されるように、物体41がセンサ212の真上あたりを通過後、センサ211に近づくにつれ、センサ211からの出力結果が増加し、物体41がセンサ211の真上付近を通過する際に、出力結果が最大(極大)となる。そして、物体41がセンサ211を通過して離れるように移動することで、センサ211からの出力結果が減少していく。 In this case, in the sensor 21 1 , as shown in FIG. 8A, the output result from the sensor 21 1 increases as the object 41 approaches the sensor 21 1 after passing right above the sensor 21 2 . When the object 41 passes in the vicinity of just above the sensor 21 1 , the output result becomes maximum (maximum). Then, when the object 41 is moved away through the sensor 21 1, the output result from the sensor 21 1 is decreased.
 物体41は、図7に示したように、センサ212の右側から近づき、センサ212の真上付近を通過して、センサ212から離れるように、センサ212の左側に存在するセンサ211に向かって移動する。 Object 41, as shown in FIG. 7, approaches from the right side of the sensor 21 2, and passes through the vicinity directly above the sensor 21 2, away from the sensor 21 2, sensor 21 present in the left side of the sensor 21 2 Move towards 1 .
 この場合、センサ212では、図8のBに示されるように、物体41が近づくにつれ、センサ212からの出力結果が増加し、物体41がセンサ212の真上付近を通過する際に、出力結果が最大(極大)となる。そして、物体41がセンサ212を通過して離れるように移動することで、センサ212からの出力結果が減少していく。 In this case, in the sensor 21 2 , as shown in FIG. 8B, as the object 41 approaches, the output result from the sensor 21 2 increases, and when the object 41 passes near the sensor 21 2. The output result is maximized (maximum). Then, when the object 41 is moved away through the sensor 21 2, then the output from the sensor 21 2 decreases.
 すなわち、物体41が、図7に示したような動きをした場合、図8のA及び図8のBに示したように、出力結果として、上に凸な極大部分が、センサ212及びセンサ211の順序で得られる。 That is, the object 41, when the motion as shown in FIG. 7, as shown in B of A and 8 in FIG. 8, as an output result, the convex lobes above, the sensor 21 2 and the sensor In the order of 21 1 .
 このため、デジタルフォトフレーム1では、センサ211からの出力結果として極大部分が得られるタイミングと、センサ212からの出力結果として極大部分が得られるタイミングに応じて、物体41の動きを認識することができる。 For this reason, in the digital photo frame 1, the movement of the object 41 is recognized according to the timing at which the maximum portion is obtained as the output result from the sensor 21 1 and the timing at which the maximum portion is obtained as the output result from the sensor 21 2. be able to.
 なお、物体41が、図7に示したような動きをした場合にも、図6で説明した場合と同様にして、肌検出信号から、物体41の動きを認識することができる。 Even when the object 41 moves as shown in FIG. 7, the movement of the object 41 can be recognized from the skin detection signal in the same manner as in the case described with reference to FIG.
 次に、図9は、物体41が、図7に示したような動きをした場合、センサ21の出力結果から得られる肌検出信号の一例を示している。 Next, FIG. 9 shows an example of a skin detection signal obtained from the output result of the sensor 21 when the object 41 moves as shown in FIG.
 なお、図9のA及び図9のBは、図6のA及び図6のBと同様に構成される。すなわち、図9のAに示される肌検出信号は、センサ211の検知範囲内で肌が検出されたか否かを表す。また、図9のBに示される肌検出信号は、センサ212の検知範囲内で肌が検出されたか否かを表す。 Note that A in FIG. 9 and B in FIG. 9 are configured similarly to A in FIG. 6 and B in FIG. That is, the skin detection signal shown in A of FIG. 9 indicates whether or not skin is detected within the detection range of the sensor 21 1 . Further, the skin detection signal shown in FIG. 9B indicates whether or not the skin is detected within the detection range of the sensor 21 2 .
 上述したように、物体41は、図7に示したように、センサ212の検知範囲を通過した後、センサ211の検知範囲を通過する。 As described above, the object 41 passes through the detection range of the sensor 21 1 after passing through the detection range of the sensor 21 2 , as shown in FIG.
 したがって、図9のBに示される肌検出信号、及び図9のAに示される肌検出信号の順序で、肌検出信号がONとされる。 Therefore, the skin detection signal is turned ON in the order of the skin detection signal shown in B of FIG. 9 and the skin detection signal shown in A of FIG.
 このため、デジタルフォトフレーム1では、センサ211及びセンサ212からの出力結果に代えて、図9のA及び図9のBに示されるような肌検出信号に基づいて、物体41の位置や動き等を認識するようにしてもよい。 Therefore, in the digital photo frame 1, instead of the output results from the sensors 21 1 and 21 2 , the position of the object 41 or the like based on the skin detection signal as shown in A of FIG. 9 and B of FIG. You may make it recognize a motion etc.
 すなわち、例えば、デジタルフォトフレーム1では、図9のAに示される肌検出信号においてONとされるタイミングと、図9のBに示される肌検出信号においてONとされるタイミングに応じて、物体41の動きを認識することができる。 That is, for example, in the digital photo frame 1, the object 41 is turned on according to the timing when turned on in the skin detection signal shown in A of FIG. 9 and the timing when turned on in the skin detection signal shown in B of FIG. Can recognize the movement.
 その他、例えば、デジタルフォトフレーム1では、センサ211及びセンサ212からの出力結果と、図9のA及び図9のBに示されるような肌検出信号とに基づいて、物体41の位置や動き等を総合的に判断(認識)するようにしてもよい。 In addition, for example, in the digital photo frame 1, based on the output results from the sensors 21 1 and 21 2 and the skin detection signals as shown in FIG. 9A and FIG. You may make it judge (recognize) movement etc. comprehensively.
[デジタルフォトフレーム1の詳細な構成例]
 次に、図10は、デジタルフォトフレーム1の詳細な構成例を示している。
[Detailed configuration example of digital photo frame 1]
Next, FIG. 10 shows a detailed configuration example of the digital photo frame 1.
The digital photo frame 1 includes a CPU (Central Processing Unit) 61, a ROM (Read Only Memory) 62, a RAM (Random Access Memory) 63, a bus 64, an input/output interface 65, a control unit 66 having a plurality of sensors 21 1 to 21 N, a display unit 67 having a display screen 1a, a storage unit 68, and a drive 69.
Although the digital photo frame 1 of FIG. 1 is provided with the two sensors 21 1 and 21 2, three or more sensors 21 1 to 21 N may be provided as shown in FIG. 10.
 CPU61は、例えば、ROM62や記憶部68に保持されているプログラムを実行することにより、各種の処理を行う。 The CPU 61 performs various processes by executing programs stored in the ROM 62 and the storage unit 68, for example.
 すなわち、例えば、CPU61は、制御部66から入出力インタフェース65及びバス64を介して供給される肌検出信号に基づいて、デジタルフォトフレーム1に近接した物体41が肌であるか否かを検出する。 That is, for example, the CPU 61 detects whether or not the object 41 close to the digital photo frame 1 is skin based on the skin detection signal supplied from the control unit 66 via the input / output interface 65 and the bus 64. .
When the CPU 61 detects that the object 41 is skin, it recognizes the position, movement, and the like of the object 41 as skin on the basis of the gesture recognition information supplied from the control unit 66 via the input/output interface 65 and the bus 64.
 なお、ジェスチャ認識情報とは、物体41の位置や動き等を認識するための情報を表す。したがって、ジェスチャ認識情報としては、例えばセンサ211及びセンサ212からの出力結果や、肌検出信号等が採用される。このことは、後述する第2及び第3の実施の形態でも同様である。 The gesture recognition information represents information for recognizing the position and movement of the object 41. Therefore, as the gesture recognition information, for example, output results from the sensors 21 1 and 21 2 , skin detection signals, and the like are employed. This is the same in the second and third embodiments described later.
 さらに、例えば、CPU61は、ジェスチャ認識情報に基づく認識結果に応じて、対応する処理を行う。 Further, for example, the CPU 61 performs corresponding processing according to the recognition result based on the gesture recognition information.
Here, it is assumed that the CPU 61 reads out, for example, a plurality of still images already held in the storage unit 68 via the bus 64 and the input/output interface 65, and sequentially displays the read still images on the display screen 1a of the display unit 67 in a predetermined order.
In this case, for example, when the CPU 61 recognizes the movement of the object 41 shown in FIG. 2, it controls the display unit 67 via the bus 64 and the input/output interface 65 so that the still image currently being displayed, among the plurality of still images, is enlarged and displayed on the display screen 1a.
Also, for example, when the CPU 61 recognizes the movement of the object 41 shown in FIG. 3, it controls the display unit 67 via the bus 64 and the input/output interface 65 so that the still image being displayed in enlarged form is reduced to its original size and displayed on the display screen 1a.
Further, for example, when the CPU 61 recognizes the movement of the object 41 shown in FIG. 4, it controls the display unit 67 via the bus 64 and the input/output interface 65 so that the still image to be displayed next, among the plurality of still images, is displayed on the display screen 1a.
Also, for example, when the CPU 61 recognizes the movement of the object 41 shown in FIG. 7, it controls the display unit 67 via the bus 64 and the input/output interface 65 so that the still image displayed immediately before is displayed on the display screen 1a.
 さらに、CPU61は、制御部66、表示部67、及びドライブ69などを制御する。 Further, the CPU 61 controls the control unit 66, the display unit 67, the drive 69, and the like.
 ROM62は、CPU61に実行されるプログラムや、その他のデータなどを予め保持(記憶)している。 The ROM 62 holds (stores) programs executed by the CPU 61 and other data in advance.
 RAM63は、例えば、CPU61が行う処理に用いられるワーキングメモリ(作業用メモリ)であり、CPU61から指示されたデータを保持し、CPU61から読み出しが指示されたデータを、CPU61に供給する。 The RAM 63 is, for example, a working memory (working memory) used for processing performed by the CPU 61, holds data instructed by the CPU 61, and supplies data instructed to be read from the CPU 61 to the CPU 61.
 バス64は、CPU61、ROM62、RAM63及び入出力インタフェース65と相互に接続され、データのやり取りを中継する。入出力インタフェース65は、バス64、制御部66、表示部67、記憶部68、及びドライブ69と相互に接続され、データのやり取りを中継する。 The bus 64 is connected to the CPU 61, the ROM 62, the RAM 63, and the input / output interface 65, and relays data exchange. The input / output interface 65 is connected to the bus 64, the control unit 66, the display unit 67, the storage unit 68, and the drive 69, and relays data exchange.
 制御部66は、各センサ21nを有し、センサ21nからの出力結果に基づいて、各センサ21n毎に、肌検出信号及びジェスチャ認識情報を生成し、入出力インタフェース65及びバス64を介して、CPU61に供給する。なお、制御部66の詳細は、図11を参照して詳述する。 The control unit 66 includes each sensor 21 n , generates a skin detection signal and gesture recognition information for each sensor 21 n based on the output result from the sensor 21 n , and passes through the input / output interface 65 and the bus 64. To the CPU 61. Details of the control unit 66 will be described in detail with reference to FIG.
 表示部67は、例えば、CPU61からバス64及び入出力インタフェース65を介して供給される静止画等を、表示画面1aに表示させる。 The display unit 67 displays, for example, a still image supplied from the CPU 61 via the bus 64 and the input / output interface 65 on the display screen 1a.
 記憶部68は、例えばハードディスクからなり、CPU61が実行するプログラムや各種のデータを記憶する。 The storage unit 68 includes, for example, a hard disk, and stores programs executed by the CPU 61 and various data.
 ドライブ69は、磁気ディスク、光ディスク、光磁気ディスク、或いは半導体メモリ等のリムーバブルメディア70が装着されたとき、それらを駆動し、そこに記録されているプログラムやデータ等を取得する。取得されたプログラムやデータは、必要に応じて記憶部68に転送され、記憶される。 The drive 69 drives a removable medium 70 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and acquires programs, data, and the like recorded therein. The acquired program and data are transferred to and stored in the storage unit 68 as necessary.
As shown in FIG. 10, the recording medium that records (stores) a program to be installed in a computer and made executable by the computer is constituted by the removable medium 70, which is package media made up of a magnetic disk (including a flexible disk), an optical disc (including a CD-ROM (Compact Disc-Read Only Memory) and a DVD (Digital Versatile Disc)), a magneto-optical disc (including an MD (Mini-Disc)), or a semiconductor memory, or by the ROM 62 in which the program is stored temporarily or permanently, the hard disk constituting the storage unit 68, and the like.
The program can be recorded on the recording medium via a wired or wireless communication medium such as a local area network, the Internet, or digital satellite broadcasting, by connecting an interface such as a router or a modem as necessary and using the connected interface.
[制御部66の詳細な構成例]
 次に、図11は、制御部66の詳細な構成を示している。
[Detailed Configuration Example of Control Unit 66]
Next, FIG. 11 shows a detailed configuration of the control unit 66.
In addition to the sensors 21n (n = 1, 2, ..., N), the control unit 66 is composed of a processing unit 91, a current control unit 92n, a timing control unit 93n, a gain control unit 94n, and an AD (Analog/Digital) conversion unit 95n.
 また、センサ21nは、LED(Light Emitting Diode)ドライバ111n、LED112an、LED112bn、レンズ113an、レンズ113bn、PD(Phase detector)114n、及びレンズ115nから構成される。 The sensor 21n includes an LED (Light Emitting Diode) driver 111n, an LED 112an, an LED 112bn, a lens 113an, a lens 113bn, a PD (Phase Detector) 114n, and a lens 115n.
 なお、センサ21nにおいて、LED112anの個数、及びLED112bnの個数は、それぞれ1個に限定されず、複数個とすることができる。 In the sensor 21n, the number of the LEDs 112an and the number of the LEDs 112bn are not limited to one each, and can be plural.
 処理部91は、電流制御部92nを制御して、電流制御部92nに、LED112anやLED112bnに流す電流を指示する。また、処理部91は、タイミング制御部93nを制御して、タイミング制御部93nに、LED112anの点灯及び消灯のタイミング、並びにLED112bnの点灯及び消灯のタイミングを指示する。 The processing unit 91 controls the current control unit 92n to instruct the current control unit 92n to supply current to the LED 112an and the LED 112bn. The processing unit 91 also controls the timing control unit 93n to instruct the timing control unit 93n to turn on and off the LEDs 112an and turn on and off the LEDs 112bn.
 これに対して、電流制御部92nは、処理部91から指示された電流を流させるように、LEDドライバ111nを制御する。また、タイミング制御部93nは、処理部91から指示されたタイミングで点灯や消灯を行わせるように、LEDドライバ111nを制御する。 On the other hand, the current control unit 92n controls the LED driver 111n so that the current instructed from the processing unit 91 flows. In addition, the timing control unit 93n controls the LED driver 111n so as to turn on and off at the timing instructed by the processing unit 91.
 LEDドライバ111nは、電流制御部92n及びタイミング制御部93nからの制御にしたがって、LED112anのみの点灯、LED112bnのみの点灯、LED112an及びLED112bnの消灯を繰り返させる。 The LED driver 111n repeats turning on only the LED 112an, turning on only the LED 112bn, and turning off the LED 112an and the LED 112bn according to control from the current control unit 92n and the timing control unit 93n.
 なお、各センサ21nにおける点灯及び消灯のタイミングについては、後述する図13を参照して詳述する。 Note that the timing of turning on and off each sensor 21n will be described in detail with reference to FIG.
 また、処理部91は、ゲイン制御部94nを制御する。これにより、PD114nにおいて行われるゲインコントロール処理によるゲインの調整の度合いが調整される。 Further, the processing unit 91 controls the gain control unit 94n. Thereby, the degree of gain adjustment by the gain control process performed in the PD 114n is adjusted.
The processing unit 91 is supplied with the luminance signal V λ1, the luminance signal V λ2, and the luminance signal V λoff from the PD 114n of each sensor 21n via the AD conversion unit 95n. On the basis of the luminance signals V λ1, V λ2, and V λoff from the AD conversion unit 95n, the processing unit 91 generates, for example, a skin detection signal indicating whether or not skin exists within the detection range of the sensor 21n.
 また、処理部91は、例えば、AD変換部95nからの輝度信号Vλ1に基づいて、ジェスチャ認識情報を生成する。 Also, the processing unit 91 generates gesture recognition information based on the luminance signal V λ1 from the AD conversion unit 95n, for example.
 処理部91は、生成した肌検出信号及びジェスチャ認識情報を、図10の入出力インタフェース65及びバス64を介して、CPU61に供給する。 The processing unit 91 supplies the generated skin detection signal and gesture recognition information to the CPU 61 via the input / output interface 65 and the bus 64 of FIG.
 なお、ジェスチャ認識情報として、輝度信号Vλ1に基づき生成される、反射光の受光強度などを採用するようにしたが、その他、例えば、肌検出信号を採用することができる。このことは、後述する第2及び第3の実施の形態でも同様である。 Note that, as the gesture recognition information, the received light intensity of the reflected light generated based on the luminance signal V λ1 is adopted, but for example, a skin detection signal can be adopted. This is the same in the second and third embodiments described later.
Incidentally, for example, the processing unit 91 may determine, on the basis of the generated skin detection signals, whether or not the object 41 as skin exists within the detection range of any of the sensors 21n, and supply the gesture recognition information to the CPU 61 only when it determines that the object 41 as skin exists.
 また、このとき、処理部91は、ジェスチャ認識情報のみを、CPU61に供給してもよい。この場合、CPU61は、処理部91からのジェスチャ認識情報に基づき認識されるジェスチャに応じて、対応する処理を行うこととなる。これらのことは、第2及び第3の実施の形態でも同様である。 At this time, the processing unit 91 may supply only the gesture recognition information to the CPU 61. In this case, the CPU 61 performs a corresponding process according to the gesture recognized based on the gesture recognition information from the processing unit 91. The same applies to the second and third embodiments.
Alternatively, for example, the processing unit 91 may recognize the movement and the like of the object 41 on the basis of the generated gesture recognition information, and supply recognition result information representing the recognition result to the CPU 61 instead of the gesture recognition information.
Here, as the recognition result information, for example, "1" is adopted when the object 41 is moving toward the sensor 21n, "2" when the object 41 is moving away from the sensor 21n, "3" when the object 41 is moving from left to right in FIG. 1, and "4" when the object 41 is moving from right to left in FIG. 1.
 また、認識結果情報としては、各センサ21nからの出力結果に基づき算出されるユーザの手等の位置を採用するようにしてもよい。なお、この場合、処理部91は、各センサ21nからの出力結果に基づき、肌としての物体(例えば、ユーザの手等)の位置を算出することとなる。 Further, as the recognition result information, the position of the user's hand or the like calculated based on the output result from each sensor 21n may be adopted. In this case, the processing unit 91 calculates the position of an object (for example, a user's hand) as skin based on the output result from each sensor 21n.
 処理部91が認識結果情報をCPU61に供給する場合、CPU61では、処理部91からの認識結果情報に応じた処理を行うこととなる。このことは、第2及び第3の実施の形態でも同様である。 When the processing unit 91 supplies the recognition result information to the CPU 61, the CPU 61 performs a process according to the recognition result information from the processing unit 91. This is the same in the second and third embodiments.
The processing unit 91 may also supply the CPU 61 with only the OR signal of the skin detection signals generated for the respective sensors 21n (for example, a single skin detection signal indicating that skin has been detected when at least one skin detection signal indicating detection exists). This is the same in the second and third embodiments described later.
 ゲイン制御部94nは、PD114nを制御し、PD114nで行われるゲインコントロール処理に用いられるパラメータを調整する。このパラメータは、PD114nの受光により得られる受光輝度Vのゲインを、どの程度、調整するかを表す。 The gain control unit 94n controls the PD 114n and adjusts parameters used for gain control processing performed by the PD 114n. This parameter represents how much the gain of the light reception luminance V obtained by the light reception of the PD 114n is adjusted.
 これは、物体41がPD114nに近い程に、PD114nが受光する物体41からの反射光の受光強度(受光光量)が大きくなり、その反射光の受光により得られる輝度信号Vも大きくなることによる。 This is because the closer the object 41 is to the PD 114n, the greater the received light intensity (received light amount) of the reflected light from the object 41 received by the PD 114n, and the greater the luminance signal V obtained by receiving the reflected light.
That is, the gain is adjusted so that the luminance signal V becomes smaller when the luminance signal V is large, and becomes larger when the luminance signal V is small.
This prevents the luminance signal V from saturating, and therefore prevents the gradation of the luminance signal V from being lost through saturation, which in turn makes it possible to generate more accurate skin detection signals and gesture recognition information.
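As a rough illustration of this kind of gain control, the following Python sketch models the adjustment loop only; the target window and step factor are assumed example numbers, not values from the disclosure.

```python
TARGET_LOW, TARGET_HIGH = 200, 800     # assumed comfortable window for the signal V

def adjust_gain(gain, luminance, step=0.8):
    # luminance: value of the luminance signal V measured with the current gain.
    # The returned gain is used for the next measurement, keeping V away from
    # saturation (loss of gradation) and away from the noise floor.
    if luminance >= TARGET_HIGH:       # close to saturation -> reduce gain
        return gain * step
    if luminance <= TARGET_LOW:        # weak signal -> increase gain
        return gain / step
    return gain                        # already inside the window

gain = 1.0
for v in [950, 900, 760, 180, 150, 400]:    # example readings
    gain = adjust_gain(gain, v)
    print(f"V={v:4d} -> next gain {gain:.2f}")
```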
The AD conversion unit 95n performs AD conversion on the output result V from the PD 114n (for example, the luminance signal V λ1, the luminance signal V λ2, or the luminance signal V λoff), and supplies the AD-converted output result V to the processing unit 91.
 LEDドライバ111nは、タイミング制御部93nからの指示にしたがって、LED112anの点灯及び消灯、並びにLED112bnの点灯及び消灯のタイミングを制御する。 The LED driver 111n controls lighting and extinguishing of the LED 112an and lighting and extinguishing of the LED 112bn according to an instruction from the timing control unit 93n.
 そして、LEDドライバ111nは、電流制御部92nからの指示にしたがって、LED112anに流す電流、及びLED112bnに流す電流を制御する。 The LED driver 111n controls the current flowing to the LED 112an and the current flowing to the LED 112bn in accordance with an instruction from the current control unit 92n.
A lens 113an is provided in front of the LED 112an. Under the control of the LED driver 111n, the LED 112an irradiates the detection range of the sensor 21n with light of wavelength λ1 (for example, infrared light of 870 [nm]).
That is, for example, the LED 112an is turned on (irradiates light of wavelength λ1) and turned off by passing the current specified by the current control unit 92n through it at the timing specified by the timing control unit 93n.
A lens 113bn is provided in front of the LED 112bn. Under the control of the LED driver 111n, the LED 112bn irradiates the detection range of the sensor 21n with light of wavelength λ2, which is longer than the wavelength λ1 (for example, infrared light of 950 [nm]).
 LED112an及びLED112bnでは、それぞれ、レンズ113an及びレンズ113bnが設けられている。したがって、LED112an及びLED112bnでは、それぞれ、照射むらを生じることなく均一に、センサ21nの検知範囲内に照射光を照射することができる。 The LED 112an and the LED 112bn are provided with a lens 113an and a lens 113bn, respectively. Therefore, the LED 112an and the LED 112bn can irradiate the irradiation light uniformly within the detection range of the sensor 21n without causing uneven irradiation.
 なお、波長λ1と波長λ2との組合せ(λ1,λ2)は、例えば、人間の肌に対する分光反射特性に基づいて予め決定される。人間の肌に対する分光反射特性については、図12を参照して詳述する。 Note that the combination (λ1, λ2) of the wavelength λ1 and the wavelength λ2 is determined in advance based on, for example, the spectral reflection characteristics with respect to human skin. The spectral reflection characteristics with respect to human skin will be described in detail with reference to FIG.
 すなわち、例えば、LED112bnには、タイミング制御部93nにより指示されたタイミングに応じて、電流制御部92nにより指示された電流が流されることにより、LED112bnの点灯及び消灯が繰り返される。そして、LED112bnの点灯時に、波長λ2の光が、LED112bnから照射される。 That is, for example, the LED 112bn is repeatedly lit and extinguished by the current instructed by the current controller 92n flowing in accordance with the timing instructed by the timing controller 93n. When the LED 112bn is turned on, light having a wavelength λ2 is emitted from the LED 112bn.
 PD114nの前面には、レンズ115nが設けられている。PD114nは、LED112anの点灯時に、センサ21nの検知範囲内の物体からの反射光を受光する。 A lens 115n is provided on the front surface of the PD 114n. The PD 114n receives reflected light from an object within the detection range of the sensor 21n when the LED 112an is turned on.
 すなわち、例えば、センサ21nの検知範囲内に物体41が存在する場合、PD114nは、LED112anの点灯時に、波長λ1の光が照射されている物体41からの反射光を受光する。 That is, for example, when the object 41 exists within the detection range of the sensor 21n, the PD 114n receives the reflected light from the object 41 irradiated with the light of the wavelength λ1 when the LED 112an is turned on.
 そして、PD114nは、その受光により得られる受光輝度Vλ1に対してゲインコントロール処理を施し、処理後の受光輝度Vλ1をAD変換部95nに出力する。 Then, the PD 114n performs gain control processing on the light reception luminance V λ1 obtained by the light reception, and outputs the light reception luminance V λ1 after processing to the AD conversion unit 95n.
 また、例えば、センサ21nの検知範囲内に物体41が存在する場合、PD114nは、LED112bnの点灯時に、波長λ2の光が照射されている物体41からの反射光を受光する。 Also, for example, when the object 41 exists within the detection range of the sensor 21n, the PD 114n receives the reflected light from the object 41 irradiated with light of wavelength λ2 when the LED 112bn is turned on.
 そして、PD114nは、その受光により得られる受光輝度Vλ2に対してゲインコントロール処理を施し、処理後の受光輝度Vλ2をAD変換部95nに出力する。 Then, the PD 114n performs gain control processing on the light reception luminance V λ2 obtained by the light reception, and outputs the light reception luminance V λ2 after processing to the AD conversion unit 95n.
 さらに、例えば、センサ21nの検知範囲内に物体41が存在する場合、PD114nは、LED112an及びLED112bnの消灯時に、照射光以外の外光が照射されている物体41からの反射光を受光する。 Further, for example, when the object 41 exists within the detection range of the sensor 21n, the PD 114n receives reflected light from the object 41 irradiated with external light other than the irradiation light when the LED 112an and the LED 112bn are turned off.
 そして、PD114nは、その受光により得られる受光輝度Vλoffに対してゲインコントロール処理を施し、処理後の受光輝度VλoffをAD変換部95nに出力する。 Then, the PD 114n performs gain control processing on the light reception luminance Vλoff obtained by the light reception, and outputs the light reception luminance Vλoff after the processing to the AD conversion unit 95n.
 図12は、人間の肌に対する分光反射特性を示している。 FIG. 12 shows spectral reflection characteristics with respect to human skin.
 なお、この分光反射特性は、人間の肌の色の違い(人種の違い)や状態(日焼け等)に拘らず、一般性があるものである。 It should be noted that this spectral reflection characteristic is general regardless of the difference in human skin color (difference in race) and state (sunburn, etc.).
 図12において、横軸は、人間の肌に照射される照射光の波長を示しており、縦軸は、人間の肌に照射された照射光の反射率を示している。 12, the horizontal axis indicates the wavelength of the irradiation light irradiated on the human skin, and the vertical axis indicates the reflectance of the irradiation light irradiated on the human skin.
It is known that the reflectance of light irradiated onto human skin peaks at around 800 [nm], decreases sharply from around 900 [nm], reaches a local minimum at around 1000 [nm], and then rises again.
 具体的には、例えば、図12に示されるように、人間の肌に対して、赤外線としての870[nm]の光を照射して得られる反射光の反射率は約63パーセントである。また、赤外線としての950[nm]の光を照射して得られる反射光の反射率は約50パーセントである。 Specifically, for example, as shown in FIG. 12, the reflectance of reflected light obtained by irradiating human skin with light of 870 [nm] as infrared rays is about 63%. Moreover, the reflectance of the reflected light obtained by irradiating 950 [nm] light as infrared rays is about 50 percent.
 これは、人間の肌について特有のものであり、人間の肌以外の物体(例えば、衣服等)では、800乃至1000[nm]付近において、反射率の変化は緩やかになっている。また周波数が高くなるほど、反射率が少しずつ大きくなることが多い。 This is peculiar to human skin. In an object other than human skin (for example, clothes), the change in reflectance is moderate in the vicinity of 800 to 1000 [nm]. In addition, as the frequency increases, the reflectance often increases little by little.
 本開示では、例えば、組合せ(λ1,λ2)=(870,950)とされる。この組合せは、人間の肌に対して、波長λ1の光を照射したときの反射率が、波長λ2の光を照射したときの反射率よりも大きくなる組合せである。 In the present disclosure, for example, the combination (λ1, λ2) = (870,950). This combination is a combination in which the reflectance when the human skin is irradiated with the light with the wavelength λ1 is larger than the reflectance when the light with the wavelength λ2 is irradiated.
Therefore, the luminance value represented by the luminance signal V λ1 obtained from the reflected light from the object 41 as skin is a relatively large value, and the luminance value represented by the luminance signal V λ2 obtained from the reflected light from the object 41 as skin is a relatively small value.
For this reason, the difference value represented by the normalized difference signal Rdiff (= 100 × (V λ1 − V λ2) / (V λ1 − V λoff)) is a relatively large positive value α1.
 なお、輝度信号Vλoffは、LED112an及びLED112bnからの照射光以外の外光の影響を除去するために用いており、これにより肌検出の精度を向上することができる。 Note that the luminance signal V λoff is used to remove the influence of external light other than the irradiation light from the LED 112an and the LED 112bn, thereby improving the accuracy of skin detection.
When the influence of external light is slight, the luminance signal V λoff may be omitted and the normalized difference signal Rdiff (= 100 × (V λ1 − V λ2) / V λ1) may be calculated.
In addition, if a visible light cut filter that cuts (blocks) visible light is provided in front of the PD 114n, the influence of visible light as external light can be removed and the accuracy of the skin detection signal can be further improved. The same applies to the PDs described in the second and third embodiments.
The combination (λ1, λ2) = (870, 950) is also a combination for which, for objects other than human skin, the reflectance when light of wavelength λ1 is irradiated is almost the same as the reflectance when light of wavelength λ2 is irradiated.
 このため、正規化差分信号Rdiffが表す差分値は、比較的小さな正または負の値β1となる。 For this reason, the difference value represented by the normalized difference signal Rdiff is a relatively small positive or negative value β1.
Therefore, the processing unit 91 calculates the normalized difference signal Rdiff on the basis of the luminance signals V λ1, V λ2, and V λoff from the AD conversion unit 95n, and generates the skin detection signal on the basis of whether or not the calculated normalized difference signal Rdiff is equal to or greater than a predetermined threshold value (for example, a threshold value smaller than α1 and larger than β1).
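The calculation and threshold decision described above can be summarized in the following Python sketch; it is a minimal illustration of the normalized difference signal Rdiff, with the threshold and the sample luminance values chosen as assumptions for the example rather than taken from the disclosure.

```python
def normalized_difference(v_l1, v_l2, v_off):
    # Rdiff = 100 * (V_lambda1 - V_lambda2) / (V_lambda1 - V_lambdaoff)
    # v_l1, v_l2: luminance with the wavelength-lambda1 (e.g. 870 nm) and
    # wavelength-lambda2 (e.g. 950 nm) LEDs lit; v_off: both LEDs off.
    return 100.0 * (v_l1 - v_l2) / (v_l1 - v_off)

def is_skin(v_l1, v_l2, v_off, threshold=10.0):
    # Skin detection signal: True when Rdiff is at or above the threshold.
    # threshold is an assumed example value between beta1 (non-skin) and
    # alpha1 (skin); in practice it would be fixed experimentally.
    return normalized_difference(v_l1, v_l2, v_off) >= threshold

# Skin reflects lambda1 noticeably more strongly than lambda2; other materials
# reflect the two wavelengths almost equally.
print(is_skin(v_l1=630, v_l2=500, v_off=100))   # skin-like readings  -> True
print(is_skin(v_l1=620, v_l2=610, v_off=100))   # cloth-like readings -> False
```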
Here, the combination (λ1, λ2) is not limited to (λ1, λ2) = (870, 950), and any combination may be used as long as the difference in reflectance is sufficiently large.
Experiments previously conducted by the present inventors have shown that, for accurate skin detection, it is generally desirable to set the wavelength λ1 within the range of 640 [nm] to 1000 [nm] and the wavelength λ2 within the range of 900 [nm] to 1100 [nm].
However, when the wavelength λ1 lies in the visible light region, it can make the user feel dazzled and can affect the color tone the user perceives when viewing an image on the display screen 1a; the value of the wavelength λ1 is therefore desirably 800 [nm] or more, in the invisible light region.
That is, for example, within the above ranges, it is desirable to set the wavelength λ1 to a value in the invisible light region of 800 [nm] or more and less than 900 [nm], and the wavelength λ2 to a value in the invisible light region of 900 [nm] or more.
 次に、図13は、各センサ21nにおける照射タイミングの一例を示している。 Next, FIG. 13 shows an example of irradiation timing in each sensor 21n.
 なお、図13では、図面が煩雑になるのを避けるため、3個のセンサ211乃至213における照射タイミングの一例を示している。 FIG. 13 shows an example of irradiation timings of the three sensors 21 1 to 21 3 in order to avoid the drawing from becoming complicated.
 図13のA乃至図13のCは、それぞれ、センサ211乃至213の照射タイミングを示している。なお、図13のA乃至図13のCにおいて、横軸は時刻tを表している。 13A to 13C show the irradiation timings of the sensors 21 1 to 21 3 , respectively. In FIG. 13A to FIG. 13C, the horizontal axis represents time t.
 また、図13のA乃至図13のCにおいて、「LED λ1」は、LED112anのみを点灯する点灯期間(波長λ1の光のみを照射する期間)を示している。さらに、「LED λ2」は、LED112bnのみを点灯する点灯期間(波長λ2の光のみを照射する期間)を示している。 Further, in FIG. 13A to FIG. 13C, “LED λ1” indicates a lighting period in which only the LED 112an is lit (period in which only light of wavelength λ1 is irradiated). Further, “LED λ2” indicates a lighting period in which only the LED 112bn is lit (a period in which only light of wavelength λ2 is irradiated).
 また、「LED off」は、LED112an及びLED112bnを消灯する消灯期間(波長λ1の光、及び波長λ2の光のいずれも照射しない期間)を示している。 “LED「 off ”indicates a light extinguishing period in which the LED 112an and the LED 112bn are extinguished (period in which neither the light with the wavelength λ1 nor the light with the wavelength λ2 is irradiated).
When the digital photo frame 1 is provided with the sensors 21 1 and 21 2 shown in FIG. 1, the sensors 21 1 and 21 2 repeat the irradiation shown in FIGS. 13A and 13B.
That is, for example, as shown in FIG. 13A, the sensor 21 1 lights only the LED 112a 1 in the lighting period "LED λ1" and lights only the LED 112b 1 in the lighting period "LED λ2". It then turns off the LED 112a 1 and the LED 112b 1 in the extinguishing period "LED off".
Also, for example, as shown in FIG. 13B, the sensor 21 2 lights only the LED 112a 2 in the lighting period "LED λ1" immediately after the extinguishing period "LED off" shown in FIG. 13A, and lights only the LED 112b 2 in the lighting period "LED λ2". It then turns off the LED 112a 2 and the LED 112b 2 in the extinguishing period "LED off".
After the extinguishing period "LED off" shown in FIG. 13B has elapsed, the sensors 21 1 and 21 2 wait for a certain interval (time) and then repeat the processing described with reference to FIGS. 13A and 13B. During that interval (not shown), for example, the processing unit 91 of the control unit 66 generates the skin detection signals and the gesture recognition information, outputs them to the CPU 61, and so on.
Also, for example, when the digital photo frame 1 is provided with three sensors 21 1 to 21 3, the sensors 21 1 and 21 2 each perform the processing described with reference to FIGS. 13A and 13B.
Then, as shown in FIG. 13C, the sensor 21 3 lights only the LED 112a 3 in the lighting period "LED λ1" immediately after the extinguishing period "LED off" shown in FIG. 13B, and lights only the LED 112b 3 in the lighting period "LED λ2". The sensor 21 3 then turns off the LED 112a 3 and the LED 112b 3 in the extinguishing period "LED off".
After the extinguishing period "LED off" shown in FIG. 13C has elapsed, the sensors 21 1 to 21 3 wait, for example, for a certain interval I and then repeat the processing described with reference to FIGS. 13A to 13C. During that interval I, for example, the processing unit 91 of the control unit 66 generates the skin detection signals and the gesture recognition information, outputs them to the CPU 61, and so on.
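The time-division lighting described above might be outlined as in the following Python sketch; light_on and read_pd are hypothetical placeholders for the LED driver and PD readout, and the period and interval values are assumptions, so this is a scheduling outline rather than the actual driver control.

```python
import time

def measure_sensor(sensor_id, light_on, read_pd, period=0.005):
    # One lighting cycle for a single sensor: lambda1 only, lambda2 only, both off.
    # light_on(sensor_id, which) and read_pd(sensor_id) stand in for the LED
    # driver control and the PD readout of the real hardware.
    readings = {}
    for which in ("lambda1", "lambda2", "off"):
        light_on(sensor_id, which)     # "off" turns both LEDs off
        time.sleep(period)             # the PD integrates during this period
        readings[which] = read_pd(sensor_id)
    return readings

def scan_all_sensors(num_sensors, light_on, read_pd, interval=0.02):
    # Sensors are driven one after another so their illumination never overlaps;
    # the wait afterwards corresponds to the interval I in which the skin
    # detection signal and gesture recognition information would be generated.
    results = [measure_sensor(n, light_on, read_pd) for n in range(1, num_sensors + 1)]
    time.sleep(interval)
    return results

# Minimal demo with dummy hardware hooks.
print(scan_all_sensors(3, lambda s, w: None, lambda s: 0))
```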
 次に、図14及び図15を参照して、3個のセンサ211乃至213が設けられたデジタルフォトフレームの一例を説明する。 Next, an example of a digital photo frame provided with three sensors 21 1 to 21 3 will be described with reference to FIGS.
 図14は、トライアングル状に配置された3個のセンサ211乃至213を有するデジタルフォトフレーム131の構成例を示している。 FIG. 14 shows a configuration example of a digital photo frame 131 having three sensors 21 1 to 21 3 arranged in a triangle shape.
As in the case of FIG. 1, this digital photo frame 131 is provided with the sensors 21 1 and 21 2 in the left-right direction of the display screen 1a in the figure, and is additionally provided with a sensor 21 3 below the display screen 1a.
 デジタルフォトフレーム131では、センサ211及び212からの出力結果に基づいて、X方向(左右方向)の動き等を認識できる。また、デジタルフォトフレーム131では、同様にして、センサ211及びセンサ213からの出力結果に基づいて、Y方向(上下方向)の動き等を認識することができる。 The digital photo frame 131 can recognize the movement in the X direction (left and right direction) based on the output results from the sensors 21 1 and 21 2 . Similarly, the digital photo frame 131 can recognize the movement in the Y direction (vertical direction) and the like based on the output results from the sensors 21 1 and 21 3 .
 なお、Y方向の動き等を認識する場合、センサ211及びセンサ213からの出力結果に代えて、例えば、センサ212及びセンサ213からの出力結果を用いることができる。 When recognizing the movement in the Y direction or the like, for example, output results from the sensors 21 2 and 21 3 can be used instead of the output results from the sensors 21 1 and 21 3 .
 次に、図15は、L字状に配置された3個のセンサ211乃至213を有するデジタルフォトフレーム151の構成例を示している。 Next, FIG. 15 shows a configuration example of a digital photo frame 151 having three sensors 21 1 to 21 3 arranged in an L shape.
 このデジタルフォトフレーム151には、その左上の角部分にセンサ211が、左下の角部分にセンサ212が、右下の角部分にセンサ213が、それぞれ設けられている。 The digital photo frame 151 is provided with a sensor 21 1 at the upper left corner, a sensor 21 2 at the lower left corner, and a sensor 21 3 at the lower right corner.
The digital photo frame 151 can recognize movement in the X direction (left-right direction) and the like on the basis of the output results from the sensors 21 2 and 21 3, and can recognize movement in the Y direction (up-down direction) and the like on the basis of the output results from the sensors 21 1 and 21 2.
 以上のように、図1では、2個のセンサ211及び212が設けられた場合を、図14及び図15では、3個のセンサ211乃至213が設けられた場合を説明したが、センサ21nの個数はこれに限定されない。 As described above, FIG. 1 illustrates the case where two sensors 21 1 and 21 2 are provided, and FIGS. 14 and 15 illustrate the case where three sensors 21 1 to 21 3 are provided. The number of sensors 21n is not limited to this.
 すなわち、例えば、図15のデジタルフォトフレーム151において、センサ211とセンサ212との間に、新たなセンサ214を設けるようにしてもよい。 That is, for example, a new sensor 21 4 may be provided between the sensor 21 1 and the sensor 21 2 in the digital photo frame 151 of FIG.
In this case, the digital photo frame 151 can recognize, for example, movement in the Y direction and the like on the basis of the output results from the sensors 21 1, 21 2, and 21 4, and can therefore recognize movement in the Y direction and the like more accurately.
 また、デジタルフォトフレームには、例えば、物体41までの距離に応じて異なる出力結果を出力する距離センサ(例えばPDや静電容量センサ等)を設けるようにしてもよい。 Also, the digital photo frame may be provided with a distance sensor (for example, a PD or a capacitance sensor) that outputs different output results depending on the distance to the object 41, for example.
 この場合、デジタルフォトフレームには、センサ21nと距離センサが混在して設けられる。なお、距離センサからの出力は、処理部91に供給され、ジェスチャ認識情報を生成するために用いられる。このことは、後述する第2及び第3の実施の形態においても同様である。 In this case, the digital photo frame is provided with a mixture of sensors 21n and distance sensors. Note that the output from the distance sensor is supplied to the processing unit 91 and used to generate gesture recognition information. This is the same in the second and third embodiments described later.
When sensors 21n and distance sensors are provided together in the digital photo frame, the processing unit 91 no longer needs to generate a skin detection signal for each sensor replaced by a distance sensor, so the burden required to generate the skin detection signals can be reduced.
Further, since the burden on the processing unit 91 can be reduced, the DSP or the like functioning as the processing unit 91 can be an inexpensive DSP (Digital Signal Processor) or the like with a relatively slow processing speed, making it possible to reduce the manufacturing cost of the digital photo frame 1.
[第1のジェスチャ認識処理の詳細]
 次に、図16を参照して、デジタルフォトフレーム1が行う第1のジェスチャ認識処理について説明する。
[Details of the first gesture recognition process]
Next, the first gesture recognition process performed by the digital photo frame 1 will be described with reference to FIG.
 この第1のジェスチャ認識処理は、例えば、デジタルフォトフレーム1の電源がオンされたときに開始される。 This first gesture recognition process is started, for example, when the digital photo frame 1 is powered on.
 ステップS1において、処理部91は、複数のセンサ211乃至21Nのうち、所定のセンサ21nに注目し、注目センサ21nとする。 In step S1, the processing unit 91 pays attention to a predetermined sensor 21 n among the plurality of sensors 21 1 to 21 N and sets it as the attention sensor 21 n .
Then, in step S2, the processing unit 91 controls the sensor of interest 21 n via the current control unit 92 n, the timing control unit 93 n, the gain control unit 94 n, and so on, and causes it to perform a V λ1 acquisition process of generating the luminance signal V λ1 and outputting it to the AD conversion unit 95 n. The AD conversion unit 95 n performs AD conversion on the luminance signal V λ1 from the sensor of interest 21 n and supplies the AD-converted luminance signal V λ1 to the processing unit 91.
In step S3, the processing unit 91 controls the sensor of interest 21 n via the current control unit 92 n, the timing control unit 93 n, the gain control unit 94 n, and so on, and causes it to perform a V λ2 acquisition process of generating the luminance signal V λ2 and outputting it to the AD conversion unit 95 n. The AD conversion unit 95 n performs AD conversion on the luminance signal V λ2 from the sensor of interest 21 n and supplies the AD-converted luminance signal V λ2 to the processing unit 91.
 なお、注目センサ21nが行うVλ1取得処理及びVλ2取得処理の詳細は、図17を参照して詳述する。 Details of the V λ1 acquisition process and the V λ2 acquisition process performed by the sensor of interest 21 n will be described in detail with reference to FIG.
In step S4, the processing unit 91 controls the sensor of interest 21 n via the current control unit 92 n, the timing control unit 93 n, the gain control unit 94 n, and so on, and causes it to perform a V λoff acquisition process of generating the luminance signal V λoff and outputting it to the AD conversion unit 95 n. The AD conversion unit 95 n performs AD conversion on the luminance signal V λoff from the sensor of interest 21 n and supplies the AD-converted luminance signal V λoff to the processing unit 91.
 なお、注目センサ21nが行うVλoff取得処理の詳細は、図18を参照して詳述する。 Details of the V λoff acquisition process performed by the sensor of interest 21 n will be described in detail with reference to FIG.
In step S5, the processing unit 91 performs skin discrimination processing for generating a skin detection signal indicating whether or not an object as skin exists within the detection range of the sensor of interest 21 n, on the basis of the luminance signal V λ1, the luminance signal V λ2, and the luminance signal V λoff supplied from the sensor of interest 21 n via the AD conversion unit 95 n. Details of the skin discrimination processing performed by the processing unit 91 will be described with reference to FIG. 19.
 ステップS6では、処理部91は、注目センサ21nからAD変換部95nを介して供給された輝度信号Vλ1に基づいて、ジェスチャ認識情報を生成する。 In step S6, the processing unit 91 generates gesture recognition information based on the luminance signal V λ1 supplied from the sensor of interest 21 n via the AD conversion unit 95 n .
 ここで、注目される各センサ21nにおいて行われるステップS2乃至ステップS6の処理のうち、ステップS5及びステップS6の処理を、ステップS7からステップS8に進むときにまとめて行うようにしてもよい。 Here, among the processes of Steps S2 to S6 performed in each sensor 21 n to be noted, the processes of Step S5 and Step S6 may be performed collectively when proceeding from Step S7 to Step S8.
 この場合、各センサ21nで輝度信号が取得される時間差を短くすることができるので、より精度の高い肌検出信号及びジェスチャ認識情報を生成することができるようになる。このことは、後述する他のジェスチャ認識処理(例えば第4のジェスチャ認識処理等)においても同様である。 In this case, since the time difference at which the luminance signal is acquired by each sensor 21 n can be shortened, a more accurate skin detection signal and gesture recognition information can be generated. The same applies to other gesture recognition processes (for example, a fourth gesture recognition process) described later.
 なお、ジェスチャ認識情報としては、PD114nにより受光された反射光の受光強度を表す輝度信号Vλ1をそのまま採用することができる。 As the gesture recognition information, the luminance signal V λ1 representing the received light intensity of the reflected light received by the PD 114n can be used as it is.
Also, for example, since the square of the distance from the sensor of interest 21 n to the object is proportional to the luminance signal V λ1 as the received light intensity of the reflected light from the object, the square root of the luminance signal V λ1 as the received light intensity of the reflected light may be adopted as the gesture recognition information.
Further, for example, the average of the luminance value represented by the luminance signal V λ1 and the luminance value represented by the luminance signal V λ2 may be adopted as the gesture recognition information.
Furthermore, the processing unit 91 can adopt the skin detection signal generated in step S5 as the gesture recognition information. The processing unit 91 may also generate gesture recognition information on the basis of the luminance signal V λ2 supplied from the sensor of interest 21 n via the AD conversion unit 95 n, in the same manner as in the case of the luminance signal V λ1.
 ジェスチャ認識情報については、第2及び第3の実施の形態においても同様のことが言える。 The same can be said for the gesture recognition information in the second and third embodiments.
 処理部91は、生成した肌検出信号とともに、輝度信号Vλ1又は輝度信号Vλ2の少なくとも一方に基づき生成したジェスチャ認識情報を、入出力インタフェース65及びバス64を介して、CPU61に供給する。 The processing unit 91 supplies gesture recognition information generated based on at least one of the luminance signal V λ1 and the luminance signal V λ2 together with the generated skin detection signal to the CPU 61 via the input / output interface 65 and the bus 64.
In step S7, the processing unit 91 determines whether or not all of the plurality of sensors 21 1 to 21 N have been taken as the sensor of interest; if it determines that not all of the sensors 21 1 to 21 N have been taken as the sensor of interest, it returns the processing to step S1.
 そして、ステップS1では、処理部91は、複数のセンサ211乃至21Nのうち、まだ注目していないセンサ21nを、新たな注目センサ21nとし、それ以降、同様の処理が行われる。 In step S1, the processing unit 91 sets a sensor 21 n that has not been focused on among the plurality of sensors 21 1 to 21 N as a new focused sensor 21 n, and thereafter the same processing is performed.
 また、ステップS7では、処理部91は、複数のセンサ211乃至21Nの全てに注目したと判定した場合、処理をステップS8に進める。 In step S7, if the processing unit 91 determines that all of the plurality of sensors 21 1 to 21 N are focused, the process proceeds to step S8.
By the processing of steps S1 to S7 described above, the skin detection signals and the gesture recognition information generated for each of the sensors 21 1 to 21 N are supplied to the CPU 61 via the input/output interface 65 and the bus 64.
In step S8, the CPU 61 determines, on the basis of the skin detection signals supplied from the sensors 21 1 to 21 N of the control unit 66 via the input/output interface 65 and the bus 64, whether or not the object 41 as skin has been detected within the detection range of any of the sensors 21n.
 そして、ステップS8では、CPU61は、肌としての物体41が検出されていないと判定した場合、処理はステップS1に戻り、それ以降同様の処理が行われる。 In step S8, if the CPU 61 determines that the object 41 as skin has not been detected, the process returns to step S1, and thereafter the same process is performed.
In step S8, if the CPU 61 determines that the object 41 as skin has been detected, the processing proceeds to step S9. In step S9, the CPU 61 recognizes (detects) the position, movement, and the like of the object 41 as skin on the basis of the gesture recognition information supplied from the sensors 21 1 to 21 N of the control unit 66 via the input/output interface 65 and the bus 64.
 そして、ステップS10では、CPU61は、ジェスチャ認識情報に基づく認識結果に応じて、対応する処理を行う。 In step S10, the CPU 61 performs corresponding processing according to the recognition result based on the gesture recognition information.
That is, for example, when the CPU 61 recognizes the movement of the object 41 shown in FIG. 2, it controls the display unit 67 via the bus 64 and the input/output interface 65 so that the still image currently being displayed, among the plurality of still images, is enlarged and displayed on the display screen 1a.
Also, for example, when the CPU 61 recognizes the movement of the object 41 shown in FIG. 3, it controls the display unit 67 via the bus 64 and the input/output interface 65 so that the still image being displayed in enlarged form is reduced to its original size and displayed on the display screen 1a.
Further, for example, when the CPU 61 recognizes the movement of the object 41 shown in FIG. 4, it controls the display unit 67 via the bus 64 and the input/output interface 65 so that the still image to be displayed next, among the plurality of still images, is displayed on the display screen 1a.
Also, for example, when the CPU 61 recognizes the movement of the object 41 shown in FIG. 7, it controls the display unit 67 via the bus 64 and the input/output interface 65 so that the still image displayed immediately before is displayed on the display screen 1a.
 ステップS10の終了後、処理はステップS1に戻り、それ以降同様の処理が繰り返される。なお、この第1のジェスチャ認識処理は、例えば、デジタルフォトフレーム1の電源がオフされたときに終了される。 After the completion of step S10, the process returns to step S1, and thereafter the same process is repeated. Note that the first gesture recognition process is ended when, for example, the digital photo frame 1 is powered off.
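Putting steps S1 through S10 together, one pass of the first gesture recognition processing could be outlined as in the following Python sketch; acquire_v, is_skin, recognize_gesture, and handle_gesture are hypothetical stand-ins for the acquisition, skin discrimination, recognition, and display processing described above, not actual APIs.

```python
def gesture_pass(sensors, acquire_v, is_skin, recognize_gesture, handle_gesture):
    # One pass of the first gesture recognition processing (steps S1 to S10).
    #   acquire_v(sensor)          -> (v_l1, v_l2, v_off)  (steps S2 to S4)
    #   is_skin(v_l1, v_l2, v_off) -> bool                 (step S5)
    #   recognize_gesture(info)    -> recognition result   (step S9)
    #   handle_gesture(result)     -> display processing   (step S10)
    skin_flags = {}
    gesture_info = {}
    for sensor in sensors:                             # steps S1 to S7: every sensor
        v_l1, v_l2, v_off = acquire_v(sensor)          # steps S2 to S4
        skin_flags[sensor] = is_skin(v_l1, v_l2, v_off)    # step S5
        gesture_info[sensor] = v_l1                    # step S6: V_lambda1 as the info
    if any(skin_flags.values()):                       # step S8: skin detected anywhere?
        handle_gesture(recognize_gesture(gesture_info))    # steps S9 and S10
```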
[Vλ1取得処理の詳細]
 次に、図17のフローチャートを参照して、図16のステップS2において、注目センサ21nが行うVλ1取得処理の詳細を説明する。
[Details of V λ1 acquisition processing]
Next, details of the V λ1 acquisition process performed by the sensor of interest 21 n in step S2 of FIG. 16 will be described with reference to the flowchart of FIG.
In step S31, the LED 112a n irradiates the detection range of the sensor of interest 21 n with light of wavelength λ1 under the control of the LED driver 111 n. In this case, the LED 112b n is assumed to be turned off under the control of the LED driver 111 n.
In step S32, the PD 114 n receives the reflected light of the light of wavelength λ1 emitted from the LED 112a n (for example, when the object 41 exists within the detection range of the sensor of interest 21 n, the reflected light of the light of wavelength λ1 irradiated onto the object 41).
In step S33, the PD 114 n photoelectrically converts the reflected light received in the processing of step S32 and supplies the resulting luminance signal V λ1 to the AD conversion unit 95 n; the processing then returns to step S2 of FIG. 16 and proceeds to step S3. The AD conversion unit 95 n performs AD conversion on the luminance signal V λ1 from the PD 114 n and supplies the AD-converted luminance signal V λ1 to the processing unit 91.
[Vλ2取得処理の詳細]
 図16のステップS3では、注目センサ21nは、Vλ1取得処理と同様にして、Vλ2取得処理を行う。
[Details of V λ2 acquisition processing]
In step S3 of FIG. 16, the sensor of interest 21 n performs a V λ2 acquisition process in the same manner as the V λ1 acquisition process.
That is, this V λ2 acquisition process consists of steps S31' to S33'. In step S31', the LED 112b n irradiates the detection range of the sensor of interest 21 n with light of wavelength λ2 under the control of the LED driver 111 n. In this case, the LED 112a n is assumed to be turned off under the control of the LED driver 111 n.
In step S32', the PD 114 n receives the reflected light of the light of wavelength λ2 emitted from the LED 112b n (for example, when the object 41 exists within the detection range of the sensor of interest 21 n, the reflected light of the light of wavelength λ2 irradiated onto the object 41).
In step S33', the PD 114 n photoelectrically converts the reflected light received in the processing of step S32' and supplies the resulting luminance signal V λ2 to the AD conversion unit 95 n; the processing then returns to step S3 of FIG. 16 and proceeds to step S4. The AD conversion unit 95 n performs AD conversion on the luminance signal V λ2 from the PD 114 n and supplies the AD-converted luminance signal V λ2 to the processing unit 91.
[Vλoff取得処理の詳細]
 次に、図18のフローチャートを参照して、図16のステップS4において、注目センサ21nが行うVλoff取得処理の詳細を説明する。
[Details of V λoff acquisition processing]
Next, details of the V λoff acquisition process performed by the sensor of interest 21 n in step S4 of FIG. 16 will be described with reference to the flowchart of FIG.
It is assumed that the LED driver 111 n controls the LED 112a n and the LED 112b n so that both the LED 112a n and the LED 112b n are turned off. Therefore, the detection range of the sensor of interest 21 n is irradiated only with external light other than the irradiation light from the LED 112a n and the LED 112b n.
In step S51, the PD 114 n receives the reflected light of the external light (for example, when the object 41 exists within the detection range of the sensor of interest 21 n, the reflected light of the external light irradiated onto the object 41).
In step S52, the PD 114 n photoelectrically converts the reflected light received in the processing of step S51 and supplies the resulting luminance signal V λoff to the AD conversion unit 95 n; the processing then returns to step S4 of FIG. 16 and proceeds to step S5. The AD conversion unit 95 n performs AD conversion on the luminance signal V λoff from the PD 114 n and supplies the AD-converted luminance signal V λoff to the processing unit 91.
[肌判別処理の詳細]
 次に、図19のフローチャートを参照して、図16のステップS5において、注目センサ21nの処理部91が行う肌判別処理の詳細を説明する。
[Details of skin discrimination processing]
Next, with reference to the flowchart of FIG. 19, the detail of the skin discrimination | determination process which the process part 91 of the attention sensor 21 n performs in step S5 of FIG.
In step S71, the processing unit 91 calculates the difference signal Vdif (= V λ1 − V λ2) on the basis of the luminance signal V λ1 and the luminance signal V λ2 from the AD conversion unit 95 n.
In step S72, the processing unit 91 normalizes (divides) the difference signal Vdif (= V λ1 − V λ2) by a value based on at least one of the luminance signal V λ1 or the luminance signal V λ2, that is, for example, by the value (= V λ1 − V λoff).
The normalized difference signal is then multiplied by, for example, 100 to calculate the normalized difference signal Rdif (= 100 × Vdif / (V λ1 − V λoff)).
In step S73, the processing unit 91 determines whether or not the object 41 as skin exists within the detection range of the sensor of interest 21 n, on the basis of whether or not the normalized difference signal Rdif is equal to or greater than a predetermined threshold value.
In step S74, when the normalized difference signal Rdif is equal to or greater than the predetermined threshold value, the processing unit 91 determines that the object 41 as skin exists within the detection range of the sensor of interest 21 n, and generates a skin detection signal representing that determination result.
When the normalized difference signal Rdif is not equal to or greater than the predetermined threshold value, the processing unit 91 determines that the object 41 as skin does not exist within the detection range of the sensor of interest 21 n, and generates a skin detection signal representing that determination result.
 ステップS74の終了後、処理は、図16のステップS5に戻り、それ以降の処理が行われる。 After the completion of step S74, the process returns to step S5 in FIG. 16, and the subsequent processes are performed.
 以上説明したように、第1のジェスチャ認識処理によれば、少なくとも1個のセンサ21nの検知範囲内に、肌としての物体41が存在する場合、肌としての物体41の位置や動き等を認識するようにした。 As described above, according to the first gesture recognition process, when the object 41 as skin exists within the detection range of at least one sensor 21n, the position, movement, and the like of the object 41 as skin are recognized.
 したがって、物体41が肌以外のものである場合に、物体41の位置や動き等を誤って認識し、その認識結果に応じた処理を行う事態を防止することが可能となる。 Therefore, when the object 41 is something other than the skin, it is possible to prevent a situation in which the position or movement of the object 41 is erroneously recognized and processing according to the recognition result is performed.
 なお、第1のジェスチャ認識処理では、ステップS8において、肌が検出されなかった場合、所定の時間だけ待機した上で、処理をステップS1に戻すようにしてもよい。 In the first gesture recognition process, if no skin is detected in step S8, the process may return to step S1 after waiting for a predetermined time.
 この場合、第1のジェスチャ認識処理では、ステップS8において肌が検出されるまでの間、ステップS1乃至ステップS7の処理が行われる間隔(例えば、図13に示される間隔Iに対応)を、肌が検出された後の間隔よりも長くすることができる。 In this case, in the first gesture recognition process, the interval at which the processes in steps S1 to S7 are performed (for example, corresponding to the interval I shown in FIG. 13) can be made longer until skin is detected in step S8 than after skin has been detected.
 よって、肌が検出されるまでの間、ステップS1乃至ステップS7の処理が行われる間隔を長くするようにして、制御部66の負荷を軽減することが可能となる。また、肌が検出された後は、間隔を短くするようにして、より短い間隔(時間)で、検出された肌の位置や動きなどを認識することが可能となる。 Therefore, it is possible to reduce the load on the control unit 66 by increasing the interval at which the processing from step S1 to step S7 is performed until skin is detected. Further, after the skin is detected, the interval can be shortened, and the detected skin position and movement can be recognized at a shorter interval (time).
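 The idea of switching between a longer and a shorter measurement interval could be sketched roughly as follows; the interval values and the callback names are hypothetical and only illustrate the control flow.

```python
import time

# Hypothetical polling intervals; the embodiment does not specify concrete values.
IDLE_INTERVAL = 0.5     # seconds between measurement cycles while no skin is detected
ACTIVE_INTERVAL = 0.05  # seconds between measurement cycles after skin has been detected

def gesture_loop(measure_and_detect_skin, recognize_gesture):
    """measure_and_detect_skin() stands in for steps S1-S7 and returns True when skin is found."""
    while True:
        if measure_and_detect_skin():
            recognize_gesture()          # use the recognition result (steps S9/S10)
            time.sleep(ACTIVE_INTERVAL)  # short interval while skin is present
        else:
            time.sleep(IDLE_INTERVAL)    # long interval to reduce the load on the control unit
```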
 ところで、第1のジェスチャ認識処理では、注目センサ21nの検知範囲から、肌としての物体41を検出するための処理として、Vλ1取得処理、Vλ2取得処理、Vλoff取得処理、及び肌判別処理を繰り返して行うようにした。 Incidentally, in the first gesture recognition process, the Vλ1 acquisition process, the Vλ2 acquisition process, the Vλoff acquisition process, and the skin discrimination process are repeatedly performed as processing for detecting the object 41 as skin from the detection range of the sensor of interest 21n.
 しかしながら、肌としての物体41が検出された場合には、注目センサ21nの検知範囲から、肌としての物体41を検出するための処理を簡略化する第2のジェスチャ認識処理を行うようにしてもよい。 However, a second gesture recognition process may be performed in which, once the object 41 as skin has been detected, the processing for detecting the object 41 as skin from the detection range of the sensor of interest 21n is simplified.
 この第2のジェスチャ認識処理では、肌としての物体41が検出された場合、例えばVλ1取得処理のみを行い、Vλ1取得処理により得られる輝度信号Vλ1に基づいて、注目センサ21nの検知範囲内に物体が存在するか否かを検出する。 In this second gesture recognition process, when the object 41 as skin has been detected, for example, only the Vλ1 acquisition process is performed, and whether or not an object exists within the detection range of the sensor of interest 21n is detected based on the luminance signal Vλ1 obtained by the Vλ1 acquisition process.
 そして、注目センサ21nの検知範囲内に物体が存在する旨が検出されたときには、肌としての物体41が検出されたものとして扱うようにして、処理を簡略化している。 Then, when it is detected that an object exists within the detection range of the sensor of interest 21 n , the processing is simplified by treating the object 41 as skin as having been detected.
[第2のジェスチャ認識処理]
 次に、図20のフローチャートを参照して、デジタルフォトフレーム1が行う第2のジェスチャ認識処理の詳細を説明する。
[Second gesture recognition process]
Next, details of the second gesture recognition process performed by the digital photo frame 1 will be described with reference to the flowchart of FIG.
 この第2のジェスチャ認識処理は、例えばデジタルフォトフレーム1の電源がオンされたときに開始される。 This second gesture recognition process is started, for example, when the digital photo frame 1 is turned on.
 ステップS91乃至ステップS100では、それぞれ、図16のステップS1乃至ステップS10と同様の処理が行われる。 In steps S91 to S100, the same processes as in steps S1 to S10 in FIG. 16 are performed.
 ステップS101では、処理部91は、複数のセンサ211乃至21Nのうち、所定のセンサ21nに注目し、注目センサ21nとする。 In step S101, the processing unit 91 pays attention to a predetermined sensor 21 n among the plurality of sensors 21 1 to 21 N and sets it as the attention sensor 21 n .
 そして、ステップS102において、処理部91は、電流制御部92n、タイミング制御部93n及びゲイン制御部94n等を介して、注目センサ21nを制御し、Vλ1取得処理を行わせる。また、AD変換部95nは、注目センサ21nからの輝度信号Vλ1をAD変換し、AD変換後の輝度信号Vλ1を、処理部91に供給する。 In step S102, the processing unit 91 controls the sensor of interest 21 n via the current control unit 92 n , the timing control unit 93 n, the gain control unit 94 n, and the like to perform the V λ1 acquisition process. In addition, the AD conversion unit 95 n performs AD conversion on the luminance signal V λ1 from the sensor of interest 21 n and supplies the luminance signal V λ1 after AD conversion to the processing unit 91.
 ステップS103では、処理部91は、注目センサ21nからAD変換部95nを介して供給された輝度信号Vλ1に基づいて、ジェスチャ認識情報を生成し、輝度信号Vλ1とともに、入出力インタフェース65及びバス64を介して、CPU61に供給する。 In step S103, the processing unit 91 generates gesture recognition information based on the luminance signal Vλ1 supplied from the sensor of interest 21n via the AD conversion unit 95n, and supplies it, together with the luminance signal Vλ1, to the CPU 61 via the input/output interface 65 and the bus 64.
 なお、ステップS103において、ジェスチャ認識情報が、輝度信号Vλ1である場合、処理部91は、輝度信号Vλ1としてのジェスチャ認識情報のみを、入出力インタフェース65及びバス64を介して、CPU61に供給することとなる。 If the gesture recognition information is the luminance signal Vλ1 itself, in step S103 the processing unit 91 supplies only the gesture recognition information as the luminance signal Vλ1 to the CPU 61 via the input/output interface 65 and the bus 64.
 ステップS104では、処理部91は、複数のセンサ211乃至21Nの全てに注目したか否かを判定し、複数のセンサ211乃至21Nの全てに注目していないと判定した場合、処理をステップS101に戻す。 In step S104, the processing unit 91 determines whether or not all of the plurality of sensors 211 to 21N have been attended to, and when it determines that not all of the plurality of sensors 211 to 21N have been attended to, returns the process to step S101.
 そして、ステップS101では、処理部91は、複数のセンサ211乃至21Nのうち、まだ注目していないセンサ21nを、新たな注目センサ21nとし、それ以降、同様の処理が行われる。 In step S101, the processing unit 91 sets a sensor 21 n that has not been noticed among the plurality of sensors 21 1 to 21 N as a new sensor of interest 21 n, and thereafter the same processing is performed.
 また、ステップS104では、処理部91は、複数のセンサ211乃至21Nの全てに注目したと判定した場合、処理をステップS105に進める。 In step S104, when the processing unit 91 determines that all of the plurality of sensors 21 1 to 21 N are focused, the process proceeds to step S105.
 以上説明したステップS101乃至ステップS104の処理により、CPU61には、注目された各センサ211乃至21Nから入出力インタフェース65及びバス64を介して、センサ211乃至21N毎に生成されたジェスチャ認識情報及び輝度信号Vλ1が供給される。 Through the processing in steps S101 to S104 described above, the gesture recognition information and the luminance signal Vλ1 generated for each of the sensors 211 to 21N are supplied to the CPU 61 from the attended sensors 211 to 21N via the input/output interface 65 and the bus 64.
 ステップS105では、CPU61は、制御部66から入出力インタフェース65及びバス64を介して供給された各輝度信号Vλ1が所定の閾値以上であるか否かに基づいて、各センサ21nのいずれかの検知範囲内に物体が検出されたか否かを判定する。 In step S105, the CPU 61 determines whether or not an object has been detected within the detection range of any of the sensors 21n based on whether or not each luminance signal Vλ1 supplied from the control unit 66 via the input/output interface 65 and the bus 64 is equal to or greater than a predetermined threshold.
 なお、ステップS105において、CPU61は、輝度信号Vλ2が閾値以上であるか否かに基づいて、注目センサ21nの検知範囲内に物体が検出されたか否かを判定するようにしてもよい。この場合、ステップS102において、Vλ2取得処理が行われる。 In step S105, the CPU 61 may determine whether an object is detected within the detection range of the sensor of interest 21 n based on whether the luminance signal V λ2 is equal to or greater than a threshold value. In this case, in step S102, V λ2 acquisition processing is performed.
 そして、ステップS105では、CPU61は、物体が検出されたと判定した場合、肌としての物体41が検出されたものと扱い、処理をステップS99に戻し、それ以降同様の処理を繰り返す。 In step S105, when the CPU 61 determines that an object has been detected, the CPU 61 treats the object 41 as skin as having been detected, returns the process to step S99, and thereafter repeats the same process.
 また、ステップS105において、CPU61は、物体が検出されていないと判定した場合、肌としての物体41が検出されなかったものと扱い、処理をステップS91に戻し、それ以降同様の処理を繰り返す。 In step S105, if the CPU 61 determines that no object is detected, it treats that the object 41 as skin has not been detected, returns the process to step S91, and thereafter repeats the same process.
 なお、この第2のジェスチャ認識処理は、例えばデジタルフォトフレーム1の電源がオフされたときに終了される。 Note that this second gesture recognition process is terminated when, for example, the digital photo frame 1 is powered off.
 以上説明したように、第2のジェスチャ認識処理によれば、センサ21nの検知範囲内に、肌としての物体を検出した後は、センサ21nの検知範囲内に物体が存在するか否かを検出するようにし、物体を検出した場合、肌としての物体を検出したものと取り扱うようにした。 As described above, according to the second gesture recognition process, after an object as skin is detected within the detection range of the sensor 21n, it is detected whether or not the object exists within the detection range of the sensor 21n. When an object is detected, it is handled as if the object as skin was detected.
 したがって、例えば、肌としての物体が検出されたか否かに拘らず、Vλ1取得処理、Vλ2取得処理、Vλoff取得処理、及び肌判別処理を繰り返して行う場合と比較して、処理部91の負担を軽減することが可能となる。 Therefore, compared with a case where, for example, the Vλ1 acquisition process, the Vλ2 acquisition process, the Vλoff acquisition process, and the skin discrimination process are repeatedly performed regardless of whether or not an object as skin has been detected, the burden on the processing unit 91 can be reduced.
 なお、第2のジェスチャ認識処理では、ステップS98において肌が検出されなかった場合、及びステップS105において物体が検出されなかった場合、所定の時間だけ待機した上で、処理をステップS91に戻すようにしてもよい。 In the second gesture recognition process, if no skin is detected in step S98 or no object is detected in step S105, the process may return to step S91 after waiting for a predetermined time.
 この場合、第2のジェスチャ認識処理では、ステップS98において肌が検出されるまでの間、ステップS91乃至ステップS97の処理が行われる間隔(例えば、図13の間隔Iに対応)を、肌が検出された後の間隔よりも長くすることができる。 In this case, in the second gesture recognition process, the interval at which the processes in steps S91 to S97 are performed (for example, corresponding to the interval I in FIG. 13) can be made longer until skin is detected in step S98 than after skin has been detected.
 よって、肌が検出されるまでの間、ステップS91乃至ステップS97の処理が行われる間隔を長くするようにして、制御部66の負荷を軽減することが可能となる。また、肌が検出された後は、間隔を短くするようにして、より短い間隔で、検出された肌の位置や動きなどを認識することが可能となる。 Therefore, it is possible to reduce the load on the control unit 66 by increasing the interval at which the processing from step S91 to step S97 is performed until skin is detected. Further, after the skin is detected, the interval can be shortened, and the detected skin position and movement can be recognized at a shorter interval.
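 A rough sketch of the simplified flow of the second gesture recognition process is given below; the helper callbacks and the threshold value are hypothetical stand-ins for the Vλ1/Vλ2/Vλoff acquisition processes and the skin discrimination processing described above, not part of the embodiment.

```python
# Sketch of the simplified loop of the second gesture recognition process.
# full_skin_detection(), acquire_v_lambda1(), recognize() and the threshold are assumptions.

OBJECT_THRESHOLD = 30.0  # hypothetical luminance threshold for "an object is present"

def second_gesture_recognition(full_skin_detection, acquire_v_lambda1, recognize):
    while True:
        # Full measurement (V_lambda1, V_lambda2, V_lambda_off) plus skin discrimination
        if not full_skin_detection():
            continue                      # steps S91-S98: keep looking for skin
        recognize()                       # steps S99/S100: use the recognition result
        # Once skin has been seen, only the V_lambda1 acquisition is repeated (steps S101-S105).
        # acquire_v_lambda1() is assumed to return one luminance value per sensor.
        while max(acquire_v_lambda1()) >= OBJECT_THRESHOLD:
            recognize()                   # treat the detected object as skin
        # The object has left the detection range; fall back to the full detection loop
```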
 第1の実施の形態では、例えば、センサ211乃至21Nの検知範囲内に物体が存在するか否かに拘らず、第1のジェスチャ認識処理、又は第2のジェスチャ認識処理を行うようにした。 In the first embodiment, for example, the first gesture recognition process or the second gesture recognition process is performed regardless of whether or not an object exists within the detection range of the sensors 211 to 21N.
 しかしながら、その他、例えば、センサ211乃至21Nの検知範囲内に物体が存在するか否かを判定し、検知範囲内に物体が存在すると判定したときに、第1のジェスチャ認識処理、又は第2のジェスチャ認識処理を行う近接物体検出処理を行うようにしてもよい。 However, alternatively, a proximity object detection process may be performed in which, for example, it is determined whether or not an object exists within the detection range of the sensors 211 to 21N, and the first gesture recognition process or the second gesture recognition process is performed when it is determined that an object exists within the detection range.
[デジタルフォトフレーム1が行う近接物体検出処理]
 次に、図21のフローチャートを参照して、デジタルフォトフレーム1が行う近接物体検出処理の詳細を説明する。
[Nearby object detection processing performed by digital photo frame 1]
Next, details of the proximity object detection process performed by the digital photo frame 1 will be described with reference to the flowchart of FIG.
 この近接物体検出処理は、例えばデジタルフォトフレーム1の電源がオンされたときに開始される。 This proximity object detection process is started, for example, when the digital photo frame 1 is turned on.
 ステップS121において、処理部91は、複数のセンサ211乃至21Nのうち、所定のセンサ21nに注目し、注目センサ21nとする。 In step S121, the processing unit 91 pays attention to a predetermined sensor 21 n among the plurality of sensors 21 1 to 21 N and sets it as the attention sensor 21 n .
 そして、ステップS122において、処理部91は、電流制御部92n、タイミング制御部93n及びゲイン制御部94n等を介して、注目センサ21nを制御し、Vλ1取得処理を行わせる。また、AD変換部95nは、注目センサ21nからの輝度信号Vλ1をAD変換し、AD変換後の輝度信号Vλ1を、処理部91に供給する。 In step S122, the processing unit 91 controls the sensor of interest 21 n via the current control unit 92 n , the timing control unit 93 n, the gain control unit 94 n, and the like to perform the V λ1 acquisition process. In addition, the AD conversion unit 95 n performs AD conversion on the luminance signal V λ1 from the sensor of interest 21 n and supplies the luminance signal V λ1 after AD conversion to the processing unit 91.
 ステップS123では、処理部91は、注目センサ21nからAD変換部95nを介して供給された輝度信号Vλ1に基づいて、注目センサ21nの検知範囲内に物体が侵入したか否かを判定する。 In step S123, the processing unit 91 determines whether or not an object has entered the detection range of the sensor of interest 21n based on the luminance signal Vλ1 supplied from the sensor of interest 21n via the AD conversion unit 95n.
 すなわち、例えば、処理部91は、注目センサ21nからAD変換部95nを介して供給された輝度信号Vλ1が表す輝度値が、予め決められた閾値以上となったか否かに基づいて、注目センサ21nの検知範囲内に物体が侵入したか否かを判定する。 That is, for example, the processing unit 91 determines whether or not an object has entered the detection range of the sensor of interest 21n based on whether or not the luminance value represented by the luminance signal Vλ1 supplied from the sensor of interest 21n via the AD conversion unit 95n has become equal to or greater than a predetermined threshold.
 処理部91は、その判定結果に基づいて、注目センサ21nの検知範囲に対する物体の侵入を検出しない場合、処理をステップS121に戻す。 If the processing unit 91 does not detect the entry of an object into the detection range of the sensor of interest 21 n based on the determination result, the processing unit 91 returns the process to step S121.
 そして、ステップS121では、処理部91は、複数のセンサ211乃至21Nのうち、まだ注目していないセンサ21nを、新たな注目センサ21nとし、それ以降、同様の処理が行われる。 In step S121, the processing unit 91 sets the sensor 21 n that has not been noticed among the plurality of sensors 21 1 to 21 N as a new attention sensor 21 n, and thereafter, the same processing is performed.
 なお、ステップS121において、複数のセンサ211乃至21Nの全てが注目された後、さらにステップS121の処理が行われる場合、複数のセンサ211乃至21Nのいずれも、まだ注目されていないものとして、ステップS121の処理が行われることとなる。 In a case where the process of step S121 is performed again after all of the plurality of sensors 211 to 21N have been attended to, the process of step S121 is performed on the assumption that none of the plurality of sensors 211 to 21N has yet been attended to.
 また、ステップS123において、処理部123は、判定結果に基づいて、注目センサ21nの検知範囲に対する物体の侵入を検出した場合、処理をステップS124に進め、第1のジェスチャ認識処理又は第2のジェスチャ認識処理が行われる。 In step S123, when the processing unit 123 detects, based on the determination result, that an object has entered the detection range of the sensor of interest 21n, the process proceeds to step S124, and the first gesture recognition process or the second gesture recognition process is performed.
 なお、ステップS124において、第1のジェスチャ認識処理が行われる場合、以下のようにして第1のジェスチャ認識処理が終了され、近接物体検出処理も終了される。 In step S124, when the first gesture recognition process is performed, the first gesture recognition process is terminated as described below, and the proximity object detection process is also terminated.
 すなわち、例えば、第1のジェスチャ認識処理のステップS8において、CPU61により、肌としての物体41が検出されていないと判定された場合、換言すれば、検知範囲内に侵入した物体41が肌ではないと判定された場合、第1のジェスチャ認識処理は終了される。そして、近接物体検出処理は終了され、新たに近接物体検出処理が開始される。 That is, for example, when the CPU 61 determines in step S8 of the first gesture recognition process that the object 41 as skin has not been detected, in other words, when it is determined that the object 41 that has entered the detection range is not skin, the first gesture recognition process is terminated. The proximity object detection process is then terminated, and a new proximity object detection process is started.
 また、ステップS124において、第2のジェスチャ認識処理が行われる場合、以下のようにして第2のジェスチャ認識処理が終了され、近接物体検出処理も終了される。 In step S124, when the second gesture recognition process is performed, the second gesture recognition process is ended as follows, and the proximity object detection process is also ended.
 すなわち、例えば、第2のジェスチャ認識処理のステップS98において、CPU61により、肌としての物体41が検出されていないと判定された場合、換言すれば、検知範囲内に侵入した物体41が肌ではないと判定された場合、第2のジェスチャ認識処理は終了される。そして、近接物体検出処理は終了され、新たに近接物体検出処理が開始される。 That is, for example, when the CPU 61 determines in step S98 of the second gesture recognition process that the object 41 as skin has not been detected, in other words, when it is determined that the object 41 that has entered the detection range is not skin, the second gesture recognition process is terminated. The proximity object detection process is then terminated, and a new proximity object detection process is started.
 また、例えば、第2のジェスチャ認識処理のステップS105において、CPU61により、物体が検出されなかったと判定された場合、検知範囲内に侵入した物体41が既に検知範囲内に存在しないと判定されたものとし、第2のジェスチャ認識処理は終了される。そして、近接物体検出処理は終了され、新たに近接物体検出処理が開始される。 For example, when the CPU 61 determines in step S105 of the second gesture recognition process that no object has been detected, it is regarded that the object 41 that entered the detection range is no longer present within the detection range, and the second gesture recognition process is terminated. The proximity object detection process is then terminated, and a new proximity object detection process is started.
 新たに開始される近接物体検出処理では、ステップS121乃至ステップS123において、複数のセンサ211乃至21Nのいずれかの検知範囲内に、新たに物体が侵入したか否かが判定される。 In the newly started proximity object detection process, in steps S121 to S123, it is determined whether or not a new object has entered the detection range of any of the plurality of sensors 21 1 to 21 N.
 そして、その判定による判定結果に基づいて、複数のセンサ211乃至21Nのいずれかの検知範囲内に、新たに物体が侵入したことが検出された場合、ステップS124において、新たに、第1のジェスチャ認識処理又は第2のジェスチャ認識処理が行われる。 Then, when it is detected based on the determination result that an object has newly entered the detection range of any of the plurality of sensors 211 to 21N, the first gesture recognition process or the second gesture recognition process is newly performed in step S124.
 以上説明したように、近接物体検出処理によれば、センサ21nのいずれかの検知範囲に物体が侵入したか否かを判定するようにした。そして、その判定結果に基づいて、センサ21nのいずれかの検知範囲に物体が侵入したことを検出した場合に、第1又は第2のジェスチャ認識処理を行うようにした。 As described above, according to the proximity object detection process, it is determined whether or not an object has entered one of the detection ranges of the sensor 21n. Then, based on the determination result, the first or second gesture recognition process is performed when it is detected that an object has entered one of the detection ranges of the sensor 21n.
 したがって、センサ21nのいずれかの検知範囲に物体が侵入したか否かに拘らず、第1又は第2のジェスチャ認識処理を行う場合と比較して、処理部91の負担を軽減することが可能となる。 Therefore, compared with a case where the first or second gesture recognition process is performed regardless of whether or not an object has entered any of the detection ranges of the sensors 21n, the burden on the processing unit 91 can be reduced.
 また、近接物体検出処理では、ステップS121において各センサ21nに順次注目し、ステップS122において注目センサ21nから輝度信号Vλ1が取得される毎に、ステップS123において物体が検出されたか否かを判定するようにした。 In the proximity object detection process, attention is sequentially paid to each sensor 21n in step S121, and each time the luminance signal Vλ1 is acquired from the sensor of interest 21n in step S122, it is determined in step S123 whether or not an object has been detected.
 しかしながら、その他、例えば、ステップS121及びステップS122において、各センサ211乃至21N毎に輝度信号Vλ1を取得するようにしてもよい。 However, for example, in step S121 and step S122, the luminance signal V λ1 may be acquired for each of the sensors 21 1 to 21 N.
 そして、ステップS123では、各センサ211乃至21N毎に取得された輝度信号Vλ1に基づいて、各センサ211乃至21Nのいずれかの検知範囲内に物体が侵入したか否かを判定するようにしてもよい。 Then, in step S123, it may be determined whether or not an object has entered the detection range of any of the sensors 211 to 21N based on the luminance signals Vλ1 acquired for each of the sensors 211 to 21N.
 また、ステップS123において、注目した各センサ211乃至21Nのいずれの検知範囲内にも物体が侵入していないと判定された場合、所定の時間だけ待機して、処理をステップS121に戻すようにしてもよい。 If it is determined in step S123 that no object has entered the detection range of any of the attended sensors 211 to 21N, the process may return to step S121 after waiting for a predetermined time.
 この場合、各センサ211乃至21Nのいずれかの検知範囲内に物体が侵入したか否かを判定するステップS121乃至ステップS123の処理による負荷を軽減することが可能となる。 In this case, it is possible to reduce the load caused by the processing in steps S121 to S123 for determining whether or not an object has entered the detection range of any one of the sensors 21 1 to 21 N.
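 As a rough illustration of steps S121 to S124, the following sketch polls the sensors with only a Vλ1 acquisition and enters gesture recognition once some luminance value crosses a threshold. All names and values here are assumptions made for the sketch, not parameters of the embodiment.

```python
import time

INTRUSION_THRESHOLD = 30.0   # hypothetical luminance threshold for object intrusion
POLL_INTERVAL = 0.2          # hypothetical wait between polling rounds

def proximity_object_detection(acquire_v_lambda1_per_sensor, run_gesture_recognition):
    """acquire_v_lambda1_per_sensor() returns one V_lambda1 value per sensor 21_1..21_N."""
    while True:
        luminances = acquire_v_lambda1_per_sensor()           # steps S121/S122
        if any(v >= INTRUSION_THRESHOLD for v in luminances):  # step S123: intrusion detected?
            run_gesture_recognition()                          # step S124: first or second process
        else:
            time.sleep(POLL_INTERVAL)                          # optional wait to reduce the load
```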
<2.第2の実施の形態>
[デジタルフォトフレーム171の構成例]
 次に、図22は、第2の実施の形態であるデジタルフォトフレーム171の構成例を示している。
<2. Second Embodiment>
[Configuration example of digital photo frame 171]
Next, FIG. 22 shows a configuration example of a digital photo frame 171 according to the second embodiment.
 このデジタルフォトフレーム171には、第1の実施の形態であるデジタルフォトフレーム1のセンサ211及びセンサ212に代えて、PD1911及びPD1912が設けられている。 The digital photo frame 171 is provided with a PD 191 1 and a PD 191 2 in place of the sensor 21 1 and the sensor 21 2 of the digital photo frame 1 according to the first embodiment.
 また、デジタルフォトフレーム171は、波長λ1の光を照射するLED112a(図26)と、波長λ2の光を照射するLED112b(図26)を有するLEDユニット192が設けられている。 Also, the digital photo frame 171 is provided with an LED unit 192 having an LED 112a (FIG. 26) that emits light of wavelength λ1 and an LED 112b (FIG. 26) that emits light of wavelength λ2.
 なお、第1の実施の形態におけるセンサ21nでは、内蔵するLED112a及びLED112bが、センサ21nの検知範囲内を照射するようにしている。 In the sensor 21n in the first embodiment, the built-in LED 112a and LED 112b irradiate the detection range of the sensor 21n.
 これに対して、第2の実施の形態におけるLEDユニット192は、ユーザがジェスチャを行うと想定される範囲、すなわち、例えば、複数のセンサ211乃至21Nそれぞれの検知範囲からなる範囲を照射するようにしている。この範囲が、物体を検知する検知範囲とされる。 On the other hand, the LED unit 192 in the second embodiment irradiates a range in which the user is expected to perform a gesture, that is, for example, a range made up of the respective detection ranges of the plurality of sensors 211 to 21N. This range serves as the detection range for detecting an object.
 それ以外は、デジタルフォトフレーム1(図1)の場合と同様に構成される。 Other than that, the configuration is the same as that of the digital photo frame 1 (FIG. 1).
 なお、LEDユニット192は、LED112a(図26)からPD1911までの距離、LED112a(図26)からPD1912までの距離、LED112b(図26)からPD1911までの距離、及びLED112b(図26)からPD1912までの距離がいずれも同一となるように配置されることが望ましい。 The LED unit 192 is desirably arranged so that the distance from the LED 112a (FIG. 26) to the PD 1911, the distance from the LED 112a (FIG. 26) to the PD 1912, the distance from the LED 112b (FIG. 26) to the PD 1911, and the distance from the LED 112b (FIG. 26) to the PD 1912 are all the same.
 このように配置するためには、LEDユニット192は、PD1911とPD1912を結ぶ線分の中心を通る、表示画面1aと垂直な法線上に存在する必要がある。このことは、図28及び図29を参照して詳述する。 For such an arrangement, the LED unit 192 needs to lie on a normal that passes through the midpoint of the line segment connecting the PD 1911 and the PD 1912 and is perpendicular to the display screen 1a. This will be described in detail with reference to FIGS. 28 and 29.
 次に、図23及び図24を参照して、デジタルフォトフレーム171が、PD1911及びPD1912からの出力結果に基づいて、X方向(図22において、PD1911及びPD1912が存在する左右方向)における物体41の位置や動きを認識する認識方法を説明する。 Next, with reference to FIGS. 23 and 24, a recognition method will be described in which the digital photo frame 171 recognizes the position and movement of the object 41 in the X direction (the left-right direction in FIG. 22 along which the PD 1911 and the PD 1912 are arranged) based on the output results from the PD 1911 and the PD 1912.
 なお、PD1911及びPD1912において、Z方向における物体41の位置や動きを認識する認識方法は、図2及び図3で説明した場合と同様である。 Note that the recognition method for recognizing the position and movement of the object 41 in the Z direction in the PD 191 1 and the PD 191 2 is the same as that described with reference to FIGS.
 図23は、図22において、図中下側からデジタルフォトフレーム171を見たときの様子の一例を示している。 FIG. 23 shows an example of a state when the digital photo frame 171 is viewed from the lower side in FIG.
 なお、LEDユニット192は、図23に示されるように、デジタルフォトフレーム171の前面において、ユーザがジェスチャなどを行うと想定される範囲を照射する。LEDユニット192は、例えば、図13Aに示された場合と同様に、LED112aの点灯、LED112bの点灯、及びLED112a及びLED112bの消灯を繰り返す。 In addition, as shown in FIG. 23, the LED unit 192 irradiates a range in which the user is supposed to perform a gesture or the like on the front surface of the digital photo frame 171. For example, the LED unit 192 repeatedly turns on the LED 112a, turns on the LED 112b, and turns off the LED 112a and the LED 112b, similarly to the case shown in FIG. 13A.
 図23において、例えば、物体が、図中、左から中央、そして右方向に移動した場合、つまり、物体が、PD1911の真上付近からPD1912の真上付近まで移動した場合、PD1911及びPD1912からの出力結果に基づいて、物体の動きが認識される。 In FIG. 23, for example, when the object moves from the left through the center to the right in the figure, that is, when the object moves from near directly above the PD 1911 to near directly above the PD 1912, the movement of the object is recognized based on the output results from the PD 1911 and the PD 1912.
 次に、図24は、物体が、図23に示したような動きをした場合に、PD1911及びPD1912から、それぞれ出力される出力結果の一例を示している。 Next, FIG. 24 shows an example of the output results output from the PD 1911 and the PD 1912 when the object moves as shown in FIG. 23.
 なお、図24のA乃至図24のCにおいては、PD1911及びPD1912からの出力結果として得られる極大部分を、簡略化して記載するようにしている。 In A to C of FIG. 24, the local maxima obtained as the output results from the PD 1911 and the PD 1912 are depicted in a simplified manner.
 図24のAには、図23において、物体が左側に存在する場合に得られる出力結果の一例が示されている。すなわち、図24のA左側にはPD1911の出力結果が、図24のA中央にはPD1912の出力結果が示されている。 A of FIG. 24 shows an example of the output results obtained when the object exists on the left side in FIG. 23. That is, the left part of A in FIG. 24 shows the output result of the PD 1911, and the center of A in FIG. 24 shows the output result of the PD 1912.
 なお、図24のA左側において、横軸は時刻tを表しており、縦軸はPD1911からの出力結果Vを示している。このことは、後述する図24のB左側、及び図24のC左側についても同様である。 In the left part of A in FIG. 24, the horizontal axis represents the time t, and the vertical axis represents the output result V from the PD 1911. The same applies to the left parts of B and C in FIG. 24 described later.
 また、図24のA中央において、横軸は時刻tを表しており、縦軸はPD1912からの出力結果Vを示している。このことは、後述する図24のB中央、及び図24のC中央についても同様である。 In the center of A in FIG. 24, the horizontal axis represents the time t, and the vertical axis represents the output result V from the PD 1912. The same applies to the centers of B and C in FIG. 24 described later.
 さらに、図24のA右側には、PD1911の出力結果(図24のA左側)からPD1912の出力結果(図24のA中央)を差し引いて得られる差分が示されている。 Further, the right part of A in FIG. 24 shows the difference obtained by subtracting the output result of the PD 1912 (the center of A in FIG. 24) from the output result of the PD 1911 (the left part of A in FIG. 24).
 図24のBには、図23において、物体が中央に存在する場合に得られる出力結果の一例が示されている。すなわち、図24のB左側にはPD1911の出力結果が、図24のB中央にはPD1912の出力結果が示されている。 B of FIG. 24 shows an example of the output results obtained when the object exists in the center in FIG. 23. That is, the left part of B in FIG. 24 shows the output result of the PD 1911, and the center of B in FIG. 24 shows the output result of the PD 1912.
 また、図24のB右側には、PD1911の出力結果(図24のB左側)からPD1912の出力結果(図24のB中央)を差し引いて得られる差分が示されている。 The right part of B in FIG. 24 shows the difference obtained by subtracting the output result of the PD 1912 (the center of B in FIG. 24) from the output result of the PD 1911 (the left part of B in FIG. 24).
 図24のCには、図23において、物体が右側に存在する場合に得られる出力結果の一例が示されている。すなわち、図24のC左側にはPD1911の出力結果が、図24のC中央にはPD1912の出力結果が示されている。 C of FIG. 24 shows an example of the output results obtained when the object exists on the right side in FIG. 23. That is, the left part of C in FIG. 24 shows the output result of the PD 1911, and the center of C in FIG. 24 shows the output result of the PD 1912.
 また、図24のC右側には、PD1911の出力結果(図24のC左側)からPD1912の出力結果(図24のC中央)を差し引いて得られる差分が示されている。 The right part of C in FIG. 24 shows the difference obtained by subtracting the output result of the PD 1912 (the center of C in FIG. 24) from the output result of the PD 1911 (the left part of C in FIG. 24).
 例えば、図23に示されるように、物体が左側に存在する場合、図24のA左側及び中央に示されるように、PD1911の出力結果が大となり、PD1912の出力結果が小となる。したがって、図24のA右側に示されるように、差分は正の値となる。 For example, as shown in FIG. 23, when the object exists on the left side, the output result of the PD 1911 is large and the output result of the PD 1912 is small, as shown in the left part and the center of A in FIG. 24. Therefore, the difference is a positive value, as shown in the right part of A in FIG. 24.
 また、図23に示されるように、物体が中央に存在する場合、図24のB左側及び中央に示されるように、PD1911の出力結果と、PD1912の出力結果が(殆ど)同一となる。したがって、図24のB右側に示されるように、差分は0となる。 When the object exists in the center, as shown in FIG. 23, the output result of the PD 1911 and the output result of the PD 1912 are (almost) the same, as shown in the left part and the center of B in FIG. 24. Therefore, the difference is 0, as shown in the right part of B in FIG. 24.
 さらに、図23に示されるように、物体が右側に存在する場合、図24のC左側及び中央に示されるように、PD1911の出力結果が小となり、PD1912の出力結果が大となる。したがって、図24のC右側に示されるように、差分は負の値となる。 Furthermore, when the object exists on the right side, as shown in FIG. 23, the output result of the PD 1911 is small and the output result of the PD 1912 is large, as shown in the left part and the center of C in FIG. 24. Therefore, the difference is a negative value, as shown in the right part of C in FIG. 24.
 すなわち、物体が、図23に示したような動きをした場合、図24のA右側、図24のB右側、及び図24のC右側に示されるように、物体の動きに応じて、差分が減少していく。 That is, when the object moves as shown in FIG. 23, the difference decreases in accordance with the movement of the object, as shown in the right parts of A, B, and C in FIG. 24.
 このため、デジタルフォトフレーム171は、差分の変化に応じて、物体の動きを認識することができる。 For this reason, the digital photo frame 171 can recognize the movement of the object according to the change in the difference.
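 The relation between the sign of the difference and the object position described above could be expressed as the following sketch; the sign convention, the dead-zone margin, and the function names are assumptions made only for illustration.

```python
# Sketch of X-direction estimation from the outputs of PD 191_1 (left) and PD 191_2 (right).
# DEAD_ZONE is a hypothetical margin treating small differences as "center".

DEAD_ZONE = 5.0

def x_position(v_pd1, v_pd2):
    """Classify the object position from the two PD outputs."""
    diff = v_pd1 - v_pd2
    if diff > DEAD_ZONE:
        return "left"      # PD 191_1 output dominates (A in FIG. 24)
    if diff < -DEAD_ZONE:
        return "right"     # PD 191_2 output dominates (C in FIG. 24)
    return "center"        # outputs are (almost) equal (B in FIG. 24)

def x_movement(samples):
    """A monotonically decreasing difference corresponds to a left-to-right movement."""
    diffs = [v1 - v2 for v1, v2 in samples]
    return "left_to_right" if diffs[0] > diffs[-1] else "right_to_left_or_static"

print(x_position(80.0, 20.0))                                  # left
print(x_movement([(80.0, 20.0), (50.0, 50.0), (20.0, 80.0)]))  # left_to_right
```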
[デジタルフォトフレーム171の詳細な構成例]
 次に、図25は、デジタルフォトフレーム171の詳細な構成例を示している。
[Detailed configuration example of the digital photo frame 171]
Next, FIG. 25 shows a detailed configuration example of the digital photo frame 171.
 なお、このデジタルフォトフレーム171は、図10に示されるデジタルフォトフレーム1と同様に構成されている部分について、同一の符号を付すようにしているため、それらの説明は、以下、適宜省略する。 In the digital photo frame 171, the same reference numerals are given to the same parts as those of the digital photo frame 1 shown in FIG.
 すなわち、デジタルフォトフレーム171において、複数のセンサ211乃至21Nを有する制御部66(図10)に代えて、複数のPD1911乃至191Nを有する制御部211が設けられている他は、デジタルフォトフレーム1と同様に構成される。 That is, the digital photo frame 171 is provided with a control unit 211 having a plurality of PDs 191 1 to 191 N in place of the control unit 66 having a plurality of sensors 21 1 to 21 N (FIG. 10). The configuration is the same as that of the photo frame 1.
 なお、図22のデジタルフォトフレーム171の制御部211は、PD1911及び1912を有するものであるが、PD191nの個数は、2個に限定されない。 Note that the control unit 211 of the digital photo frame 171 in FIG. 22 includes PDs 191 1 and 191 2 , but the number of PDs 191 n is not limited to two.
[制御部211の詳細な構成例]
 図26は、制御部211の詳細な構成例を示している。
[Detailed configuration example of the control unit 211]
FIG. 26 shows a detailed configuration example of the control unit 211.
 この制御部211は、PD191n(n=1,2,…,N)の他、処理部231、電流制御部92、タイミング制御部93、ゲイン制御部94n、LEDユニット192、及びAD変換部95nから構成される。なお、PD191nの前面には、レンズ115nと同様のレンズ232nが設けられている。 The control unit 211 includes, in addition to the PDs 191n (n = 1, 2, ..., N), a processing unit 231, a current control unit 92, a timing control unit 93, gain control units 94n, an LED unit 192, and AD conversion units 95n. A lens 232n similar to the lens 115n is provided in front of each PD 191n.
 電流制御部92は、電流制御部92nと同様の処理を行い、タイミング制御部93は、タイミング制御部93nと同様の処理を行う。また、図26において、ゲイン制御部94n及びAD変換部95nは、それぞれ、図11のゲイン制御部94n及びAD変換部95nと同様の処理を行う。 The current control unit 92 performs the same processing as the current control unit 92n, and the timing control unit 93 performs the same processing as the timing control unit 93n. In FIG. 26, the gain control unit 94n and the AD conversion unit 95n perform the same processing as the gain control unit 94n and the AD conversion unit 95n in FIG. 11, respectively.
 さらに、LEDユニット192は、図11におけるLEDドライバ111n、LED112an、LED112bn、レンズ113an、及びレンズ113bnとそれぞれ同様に構成されるLEDドライバ111、LED112a、LED112b、レンズ113a、及びレンズ113bから構成される。 Further, the LED unit 192 includes an LED driver 111, an LED 112a, an LED 112b, a lens 113a, and a lens 113b that are configured in the same manner as the LED driver 111n, LED 112an, LED 112bn, lens 113an, and lens 113bn in FIG.
 PD191nは、PD114nと同様に、LED112aの点灯時に、LED112aから照射される波長λ1の光の反射光(例えば、波長λ1の光が照射されている物体41からの反射光)を受光する。 The PD 191n receives the reflected light of the light having the wavelength λ1 emitted from the LED 112a (for example, the reflected light from the object 41 irradiated with the light having the wavelength λ1) when the LED 112a is turned on, similarly to the PD 114n.
 そして、PD191nは、その受光により得られる受光輝度Vλ1に対してゲインコントロール処理を施し、処理後の受光輝度Vλ1をAD変換部95nに出力する。 Then, the PD 191n performs gain control processing on the received light brightness V λ1 obtained by the light reception, and outputs the processed received light brightness V λ1 to the AD conversion unit 95n.
 また、PD191nは、LED112bの点灯時に、LED112bから照射される波長λ2の光の反射光(例えば、波長λ2の光が照射されている物体41からの反射光)を受光する。 Further, when the LED 112b is turned on, the PD 191n receives the reflected light of the wavelength λ2 emitted from the LED 112b (for example, the reflected light from the object 41 irradiated with the light of wavelength λ2).
 そして、PD191nは、その受光により得られる受光輝度Vλ2に対してゲインコントロール処理を施し、処理後の受光輝度Vλ2をAD変換部95nに出力する。 Then, the PD 191n performs gain control processing on the received light luminance Vλ2 obtained by the light reception and outputs the processed received light luminance Vλ2 to the AD conversion unit 95n.
 さらに、PD191nは、LED112a及びLED112bの消灯時に、照射光以外の外光の反射光(例えば、外光が照射されている物体41からの反射光)を受光する。 Furthermore, when the LED 112a and the LED 112b are turned off, the PD 191n receives reflected light of external light other than the irradiated light (for example, reflected light from the object 41 irradiated with the external light).
 そして、PD191nは、その受光により得られる受光輝度Vλoffに対してゲインコントロール処理を施し、処理後の受光輝度VλoffをAD変換部95nに出力する。 Then, the PD 191n performs gain control processing on the light reception luminance V λoff obtained by the light reception, and outputs the processed light reception luminance V λoff to the AD conversion unit 95n.
 処理部231は、処理部91と同様にして、電流制御部92、タイミング制御部93、及びゲイン制御部94nを制御する。 The processing unit 231 controls the current control unit 92, the timing control unit 93, and the gain control unit 94n in the same manner as the processing unit 91.
 また、処理部231には、各PD191nからAD変換部95nからを介して、輝度信号Vλ1、輝度信号Vλ2、及び輝度信号Vλoffが供給される。 Further, the luminance signal V λ1 , the luminance signal V λ2 , and the luminance signal V λoff are supplied to the processing unit 231 from each PD 191n via the AD conversion unit 95n.
 処理部231は、処理部91と同様にして、例えば、AD変換部95nからの輝度信号Vλ1、輝度信号Vλ2、及び輝度信号Vλoffに基づいて、LEDユニット192の照射範囲(検知範囲)に肌が存在するか否かを表す肌検出信号を生成する。 In the same manner as the processing unit 91, the processing unit 231 generates a skin detection signal indicating whether or not skin exists in the irradiation range (detection range) of the LED unit 192 based on, for example, the luminance signal Vλ1, the luminance signal Vλ2, and the luminance signal Vλoff from the AD conversion units 95n.
 また、処理部231は、例えば、AD変換部95nからの輝度信号Vλ1に基づいて、ジェスチャ認識情報を生成する。なお、処理部231は、第1の実施の形態における処理部91と同様にして、輝度信号Vλ2に基づいて生成したり、肌検出信号を、ジェスチャ認識情報とすることができる。 The processing unit 231 also generates gesture recognition information based on, for example, the luminance signal Vλ1 from the AD conversion units 95n. Note that, in the same manner as the processing unit 91 in the first embodiment, the processing unit 231 can generate the gesture recognition information based on the luminance signal Vλ2, or can use the skin detection signal as the gesture recognition information.
 処理部231は、生成した肌検出信号及びジェスチャ認識情報を、図25の入出力インタフェース65及びバス64を介して、CPU61に供給する。 The processing unit 231 supplies the generated skin detection signal and gesture recognition information to the CPU 61 via the input / output interface 65 and the bus 64 of FIG.
[デジタルフォトフレーム171が行う第3のジェスチャ認識処理]
 次に、図27のフローチャートを参照して、デジタルフォトフレーム171が行う第3のジェスチャ認識処理について説明する。
[Third gesture recognition processing performed by the digital photo frame 171]
Next, the third gesture recognition process performed by the digital photo frame 171 will be described with reference to the flowchart of FIG.
 この第3のジェスチャ認識処理は、例えば、デジタルフォトフレーム171の電源がオンされたときに開始される。 This third gesture recognition process is started when the digital photo frame 171 is powered on, for example.
 ステップS141において、処理部231は、電流制御部92、タイミング制御部93及びゲイン制御部94n等を介して、PD191n及びLEDユニット192を制御する。 In step S141, the processing unit 231 controls the PDs 191n and the LED unit 192 via the current control unit 92, the timing control unit 93, the gain control units 94n, and the like.
 そして、処理部231は、各PD191n毎に、輝度信号Vλ1を生成してAD変換部95nに出力するVλ1取得処理を行わせる。 Then, the processing unit 231 performs a V λ1 acquisition process of generating a luminance signal V λ1 and outputting it to the AD conversion unit 95 n for each PD 191n.
 すなわち、例えば、処理部231は、電流制御部92に、LED112aやLED112bに流す電流を、タイミング制御部93に、LED112aやLED112bの点灯及び消灯のタイミングを指示する。 That is, for example, the processing unit 231 instructs the current control unit 92 to supply a current to the LED 112a and the LED 112b, and instructs the timing control unit 93 to turn on and off the LEDs 112a and 112b.
 これに対して、電流制御部92及びタイミング制御部93は、処理部231からの指示にしたがって、LEDドライバ111を制御する。 In contrast, the current control unit 92 and the timing control unit 93 control the LED driver 111 in accordance with an instruction from the processing unit 231.
 LEDドライバ111は、電流制御部92及びタイミング制御部93からの制御にしたがって、LED112aのみを点灯させることにより、波長λ1の光を照射させる。 The LED driver 111 irradiates light of wavelength λ1 by turning on only the LED 112a according to the control from the current control unit 92 and the timing control unit 93.
 このとき、各PD191nは、それぞれ、波長λ1の光の照射により得られる反射光を受光し、受光した反射光を光電変換して得られる輝度信号Vλ1を、AD変換部95nに出力する。 At this time, each PD 191n receives the reflected light obtained by the irradiation of the light of wavelength λ1 and outputs, to the AD conversion unit 95n, a luminance signal Vλ1 obtained by photoelectrically converting the received reflected light.
 また、AD変換部95nは、それぞれ、PD191nからの輝度信号Vλ1をAD変換し、AD変換後の輝度信号Vλ1を、処理部231に供給する。 Each AD conversion unit 95n performs AD conversion on the luminance signal Vλ1 from the PD 191n and supplies the luminance signal Vλ1 after AD conversion to the processing unit 231.
 ステップS142において、処理部231は、電流制御部92、タイミング制御部93及びゲイン制御部94n等を介して、PD191n及びLEDユニット192を制御する。 In step S142, the processing unit 231 controls the PDs 191n and the LED unit 192 via the current control unit 92, the timing control unit 93, the gain control units 94n, and the like.
 そして、処理部231は、各PD191n毎に、輝度信号Vλ2を生成してAD変換部95nに出力するVλ2取得処理を行わせる。 Then, the processing unit 231 performs a V λ2 acquisition process for generating a luminance signal V λ2 and outputting it to the AD conversion unit 95 n for each PD 191n.
 すなわち、例えば、処理部231は、電流制御部92に、LED112aやLED112bに流す電流を、タイミング制御部93に、LED112aやLED112bの点灯及び消灯のタイミングを指示する。 That is, for example, the processing unit 231 instructs the current control unit 92 to supply a current to the LED 112a and the LED 112b, and instructs the timing control unit 93 to turn on and off the LEDs 112a and 112b.
 これに対して、電流制御部92及びタイミング制御部93は、処理部231からの指示にしたがって、LEDドライバ111を制御する。 In contrast, the current control unit 92 and the timing control unit 93 control the LED driver 111 in accordance with an instruction from the processing unit 231.
 LEDドライバ111は、電流制御部92及びタイミング制御部93からの制御にしたがって、LED112bのみを点灯させることにより、波長λ2の光を照射させる。 The LED driver 111 irradiates light of wavelength λ2 by turning on only the LED 112b according to control from the current control unit 92 and the timing control unit 93.
 このとき、各PD191nは、それぞれ、波長λ2の光の照射により得られる反射光を受光し、受光した反射光を光電変換して得られる輝度信号Vλ2を、AD変換部95nに出力する。 At this time, each PD 191n receives the reflected light obtained by the irradiation of the light of wavelength λ2 and outputs, to the AD conversion unit 95n, a luminance signal Vλ2 obtained by photoelectrically converting the received reflected light.
 また、AD変換部95nは、それぞれ、PD191nからの輝度信号Vλ2をAD変換し、AD変換後の輝度信号Vλ2を、処理部231に供給する。 Each AD conversion unit 95n performs AD conversion on the luminance signal Vλ2 from the PD 191n and supplies the luminance signal Vλ2 after AD conversion to the processing unit 231.
 ステップS143において、処理部231は、電流制御部92、タイミング制御部93及びゲイン制御部94n等を介して、PD191n及びLEDユニット192を制御する。 In step S143, the processing unit 231 controls the PDs 191n and the LED unit 192 via the current control unit 92, the timing control unit 93, the gain control units 94n, and the like.
 そして、処理部231は、各PD191n毎に、輝度信号Vλoffを生成してAD変換部95nに出力するVλoff取得処理を行わせる。 Then, the processing unit 231 performs a V λoff acquisition process of generating a luminance signal V λoff and outputting it to the AD conversion unit 95 n for each PD 191n .
 すなわち、例えば、処理部231は、電流制御部92に、LED112aやLED112bに流す電流を、タイミング制御部93に、LED112aやLED112bの点灯及び消灯のタイミングを指示する。 That is, for example, the processing unit 231 instructs the current control unit 92 to supply a current to the LED 112a and the LED 112b, and instructs the timing control unit 93 to turn on and off the LEDs 112a and 112b.
 これに対して、電流制御部92及びタイミング制御部93は、処理部231からの指示にしたがって、LEDドライバ111を制御する。 In contrast, the current control unit 92 and the timing control unit 93 control the LED driver 111 in accordance with an instruction from the processing unit 231.
 LEDドライバ111は、電流制御部92及びタイミング制御部93からの制御にしたがって、LED112a及びLED112bのいずれも消灯させる。 The LED driver 111 turns off both the LED 112a and the LED 112b according to the control from the current control unit 92 and the timing control unit 93.
 このとき、各PD191nは、それぞれ、外光の反射光を受光し、受光した反射光を光電変換して得られる輝度信号Vλoffを、AD変換部95nに出力する。 At this time, each PD 191n receives the reflected light of the outside light and outputs, to the AD conversion unit 95n, a luminance signal Vλoff obtained by photoelectrically converting the received reflected light.
 また、AD変換部95nは、それぞれ、PD191nからの輝度信号VλoffをAD変換し、AD変換後の輝度信号Vλoffを、処理部231に供給する。 Each AD conversion unit 95n performs AD conversion on the luminance signal Vλoff from the PD 191n and supplies the luminance signal Vλoff after AD conversion to the processing unit 231.
 以上説明したステップS141乃至ステップS143の処理により、PD191nにより生成された輝度信号の組合せ(Vλ1,Vλ2,Vλoff)nが、AD変換部95nを介して処理部231にそれぞれ供給される。 The luminance signal combinations (V λ1 , V λ2 , V λoff ) n generated by the PD 191n are supplied to the processing unit 231 via the AD conversion unit 95n by the processing in steps S141 to S143 described above.
 ステップS144において、処理部231は、PD191nからAD変換部95nを介して供給された輝度信号の組合せ(Vλ1,Vλ2,Vλoff)nに基づいて、処理部91と同様の肌判別処理を行う。この肌判別処理では、組合せ(Vλ1,Vλ2,Vλoff)n毎に、肌検出信号Dnが生成される。 In step S144, the processing unit 231 performs skin discrimination processing similar to that of the processing unit 91 based on the combinations of luminance signals (Vλ1, Vλ2, Vλoff)n supplied from the PDs 191n via the AD conversion units 95n. In this skin discrimination processing, a skin detection signal Dn is generated for each combination (Vλ1, Vλ2, Vλoff)n.
 ステップS145において、処理部231は、例えば、輝度信号の組合せ(Vλ1,Vλ2,Vλoff)nの輝度信号Vλ1に基づいて、ジェスチャ認識情報Jnを生成する。 In step S145, for example, the processing unit 231 generates the gesture recognition information Jn based on the luminance signal V λ1 of the combination of luminance signals (V λ1 , V λ2 , V λoff ) n .
 処理部231は、生成した肌検出信号Dn及びジェスチャ認識情報Jnを、入出力インタフェース65及びバス64を介して、CPU61に供給する。 The processing unit 231 supplies the generated skin detection signal Dn and gesture recognition information Jn to the CPU 61 via the input / output interface 65 and the bus 64.
 ステップS146乃至ステップS148では、図16のステップS8乃至ステップS10と同様の処理が行われる。 In steps S146 through S148, the same processing as in steps S8 through S10 in FIG. 16 is performed.
 なお、第3のジェスチャ認識処理は、例えばデジタルフォトフレーム171の電源がオフされたときに終了される。 Note that the third gesture recognition process ends when the digital photo frame 171 is powered off, for example.
 以上説明したように、第3のジェスチャ認識処理によれば、LEDユニット192の照射範囲(検知範囲)内に、肌としての物体41が存在する場合、肌としての物体41の位置や動き等を認識するようにした。 As described above, according to the third gesture recognition process, when the object 41 as skin exists within the irradiation range (detection range) of the LED unit 192, the position, movement, and the like of the object 41 as skin are recognized.
 したがって、物体41が肌以外のものである場合に、物体41の位置や動き等を誤って認識し、その認識結果に応じた処理を行う事態を防止することが可能となる。 Therefore, when the object 41 is something other than the skin, it is possible to prevent a situation in which the position or movement of the object 41 is erroneously recognized and processing according to the recognition result is performed.
 なお、第3のジェスチャ認識処理は、第1のジェスチャ認識処理におけるステップS1乃至ステップS6を、ステップS141乃至ステップS145に置き換えたものである。 Note that the third gesture recognition process is obtained by replacing steps S1 to S6 in the first gesture recognition process with steps S141 to S145.
 したがって、例えば、第3のジェスチャ認識処理では、第1のジェスチャ認識処理の場合と同様の変形を行うことができる。 Therefore, for example, in the third gesture recognition process, the same deformation as in the case of the first gesture recognition process can be performed.
 すなわち、例えば、第3のジェスチャ認識処理では、ステップS146において、肌が検出されなかった場合、所定の時間だけ待機した上で、処理をステップS141に戻すことができる。 That is, for example, in the third gesture recognition process, if no skin is detected in step S146, the process can return to step S141 after waiting for a predetermined time.
 また、第3のジェスチャ認識処理では、LEDユニット192の照射範囲から、肌としての物体を検出するための処理として、Vλ1取得処理、Vλ2取得処理、Vλoff取得処理、及び肌判別処理を繰り返して行うようにした。 In the third gesture recognition process, the Vλ1 acquisition process, the Vλ2 acquisition process, the Vλoff acquisition process, and the skin discrimination process are repeatedly performed as processing for detecting an object as skin from the irradiation range of the LED unit 192.
 しかしながら、ステップS146において、肌としての物体が検出された場合には、第2のジェスチャ認識処理と同様に、LEDユニット192の照射範囲から、肌としての物体を検出するための処理を簡略化するようにしてもよい。 However, when an object as skin is detected in step S146, the processing for detecting an object as skin from the irradiation range of the LED unit 192 may be simplified, as in the second gesture recognition process.
 この場合、第3ジェスチャ認識処理では、ステップS148の終了後、図20のステップS101乃至ステップS104に相当する処理として、ステップS141と同様のVλ1取得処理が行われ、そのVλ1取得処理により生成された輝度信号Vλ1に基づいて、処理部231が、ジェスチャ認識情報を生成する。 In this case, in the third gesture recognition process, after step S148 ends, a Vλ1 acquisition process similar to that in step S141 is performed as processing corresponding to steps S101 to S104 in FIG. 20, and the processing unit 231 generates gesture recognition information based on the luminance signal Vλ1 generated by that Vλ1 acquisition process.
 また、処理部231は、PD191n毎に生成された輝度信号Vλ1及びジェスチャ認識情報を、入出力インタフェース65及びバス64を介して、CPU61に供給する。 Further, the processing unit 231 supplies the luminance signal V λ1 and gesture recognition information generated for each PD 191n to the CPU 61 via the input / output interface 65 and the bus 64.
 図20のステップS105に相当する処理として、CPU61は、各PD191n毎の輝度信号Vλ1に基づいて、LEDユニット192の照射範囲内に物体が検出されたか否かを判定する。 As processing corresponding to step S105 in FIG. 20, the CPU 61 determines whether or not an object has been detected within the irradiation range of the LED unit 192 based on the luminance signal Vλ1 for each PD 191n.
 そして、CPU61は、LEDユニット192の照射範囲内に物体が検出されたと判定した場合、処理をステップS147に戻し、それ以降同様の処理が行われる。 When the CPU 61 determines that an object is detected within the irradiation range of the LED unit 192, the CPU 61 returns the process to step S147, and thereafter the same process is performed.
 また、CPU61は、LEDユニット192の照射範囲内に物体が検出されていないと判定した場合、処理をステップS141に戻し、それ以降同様の処理が行われる。 Further, when the CPU 61 determines that an object is not detected within the irradiation range of the LED unit 192, the process returns to step S141, and thereafter the same process is performed.
 さらに、第2の実施の形態では、第3のジェスチャ認識処理を行う前に、図21の近接物体検出処理に相当する処理を行なうことができる。 Furthermore, in the second embodiment, a process corresponding to the proximity object detection process of FIG. 21 can be performed before the third gesture recognition process.
 すなわち、例えば、図21のステップS121及びステップS122に相当する処理として、図27のステップS141におけるVλ1取得処理を行う。これにより、処理部231には、PD191n毎の輝度信号Vλ1が供給される。 That is, for example, V λ1 acquisition processing in step S141 in FIG. 27 is performed as processing corresponding to step S121 and step S122 in FIG. As a result, the luminance signal V λ1 for each PD 191n is supplied to the processing unit 231.
 次に、図21のステップS123に相当する処理として、処理部231は、PD191n毎の輝度信号Vλ1に基づいて、LEDユニット192の照射範囲内に物体が侵入したか否かを判定する。 Next, as processing corresponding to step S123 in FIG. 21, the processing unit 231 determines whether or not an object has entered the irradiation range of the LED unit 192 based on the luminance signal Vλ1 for each PD 191n.
 すなわち、例えば、処理部231は、PD191n毎の輝度信号Vλ1がそれぞれ表す輝度値の少なくとも1つが、予め決められた閾値以上となったか否かに基づいて、LEDユニット192の照射範囲内に物体が侵入したか否かを判定する。 That is, for example, the processing unit 231 determines whether or not an object has entered the irradiation range of the LED unit 192 based on whether or not at least one of the luminance values represented by the luminance signals Vλ1 of the PDs 191n has become equal to or greater than a predetermined threshold.
 処理部231は、その判定結果に基づいて、LEDユニット192の照射範囲に対する物体の侵入を検出しない場合、処理を、ステップS121及びステップS122に相当する処理に戻し、それ以降同様の処理が行われる。 If the processing unit 231 does not detect the intrusion of an object into the irradiation range of the LED unit 192 based on the determination result, the processing unit 231 returns the processing to the processing corresponding to step S121 and step S122, and thereafter the same processing is performed. .
 また、処理部123は、判定結果に基づいて、LEDユニット192の照射範囲に対する物体の侵入を検出した場合、処理を、図21のステップS124に相当する処理に進め、第3のジェスチャ認識処理が行われる。 When the processing unit 123 detects, based on the determination result, that an object has entered the irradiation range of the LED unit 192, the process proceeds to the processing corresponding to step S124 in FIG. 21, and the third gesture recognition process is performed.
 なお、ステップS124に相当する処理として、第3のジェスチャ認識処理が行われる場合、以下のようにして第3のジェスチャ認識処理が終了され、図21の近接物体検出処理に相当する処理も終了される。 When the third gesture recognition process is performed as the processing corresponding to step S124, the third gesture recognition process is terminated as follows, and the processing corresponding to the proximity object detection process of FIG. 21 is also terminated.
 すなわち、例えば、第3のジェスチャ認識処理のステップS146において、CPU61により、肌としての物体41が検出されていないと判定された場合、換言すれば、検知範囲内に侵入した物体41が肌ではないと判定された場合、第3のジェスチャ認識処理は終了される。そして、図21の近接物体検出処理に相当する処理は終了され、新たに、近接物体検出処理に相当する処理が開始される。 That is, for example, when the CPU 61 determines in step S146 of the third gesture recognition process that the object 41 as skin has not been detected, in other words, when it is determined that the object 41 that has entered the detection range is not skin, the third gesture recognition process is terminated. The processing corresponding to the proximity object detection process of FIG. 21 is then terminated, and processing corresponding to the proximity object detection process is newly started.
 また、第1の実施の形態の場合と同様にして、ステップS123に相当する処理として、LEDユニット192の照射範囲に物体が侵入していないと判定された場合、所定の時間だけ待機して、処理を、ステップS121及びステップS122に相当する処理に戻すようにしてもよい。 Similarly to the case of the first embodiment, as processing corresponding to step S123, when it is determined that no object has entered the irradiation range of the LED unit 192, the process may return to the processing corresponding to steps S121 and S122 after waiting for a predetermined time.
[LEDユニット192の配置]
 図28は、PD1911とPD1912を結ぶ線分の中心を通る、表示画面1aと垂直な法線上に存在するLEDユニット192の一例を示している。
[Arrangement of LED unit 192]
 FIG. 28 shows an example of the LED unit 192 located on a normal that passes through the midpoint of the line segment connecting the PD 1911 and the PD 1912 and is perpendicular to the display screen 1a.
 なお、LEDユニット192は、図28に示されるように、図中上下方向にLED112a及びLED112bが並ぶように配置されることが望ましい。これは、LED112a及びLED112bの位置に起因して、PD1911とPD1912による受光量に偏りが生じることを防止するためである。 As shown in FIG. 28, the LED unit 192 is desirably arranged so that the LED 112a and the LED 112b are aligned in the vertical direction in the figure. This is to prevent the amounts of light received by the PD 1911 and the PD 1912 from becoming unbalanced due to the positions of the LED 112a and the LED 112b.
 図28では、LED112aからPD1911までの距離と、LED112aからPD1912までの距離が同一となる。 In FIG. 28, the distance from the LED 112a to the PD 1911 and the distance from the LED 112a to the PD 1912 are the same.
 したがって、例えば、物体が、PD1911とPD1912を結ぶ線分の中心に存在する場合、PD1911及びPD1912は、いずれも(殆ど)同一の受光量で、物体から波長λ1の光の反射光を受光することができる。 Therefore, for example, when the object exists at the midpoint of the line segment connecting the PD 1911 and the PD 1912, the PD 1911 and the PD 1912 can both receive the reflected light of the light of wavelength λ1 from the object with (almost) the same amount of received light.
 このため、PD1911及びPD1912から、図24のB左側及び中央に示されるような同一の出力結果(輝度信号Vλ1)を得ることができるので、精度良くジェスチャなどを認識することが可能となる。 Therefore, the same output result (luminance signal Vλ1) as shown in the left part and the center of B in FIG. 24 can be obtained from the PD 1911 and the PD 1912, so that a gesture or the like can be recognized with high accuracy.
 また、LEDユニット192では、図28に示されるように、LED112bからPD1911までの距離と、LED112bからPD1912までの距離が同一となる。 In the LED unit 192, as shown in FIG. 28, the distance from the LED 112b to the PD 1911 and the distance from the LED 112b to the PD 1912 are also the same.
 したがって、例えば、物体が、PD1911とPD1912を結ぶ線分の中心に存在する場合、PD1911及びPD1912は、いずれも(殆ど)同一の受光量で、物体から波長λ2の光の反射光を受光することができる。 Therefore, for example, when the object exists at the midpoint of the line segment connecting the PD 1911 and the PD 1912, the PD 1911 and the PD 1912 can both receive the reflected light of the light of wavelength λ2 from the object with (almost) the same amount of received light.
 このため、PD1911及びPD1912から、図24のB左側及び中央に示されるような同一の出力結果(輝度信号Vλ2)を得ることができるので、波長λ2の光の照射時における出力結果を用いる場合にも、精度良くジェスチャなどを認識することが可能となる。 Therefore, the same output result (luminance signal Vλ2) as shown in the left part and the center of B in FIG. 24 can be obtained from the PD 1911 and the PD 1912, so that a gesture or the like can be recognized with high accuracy even when the output results obtained at the time of irradiation with the light of wavelength λ2 are used.
 さらに、LEDユニット192では、図28に示されるように、LED112aからPD1911までの距離と、LED112bからPD1911までの距離が同一となる。 Furthermore, in the LED unit 192, as shown in FIG. 28, the distance from the LED 112a to the PD 1911 and the distance from the LED 112b to the PD 1911 are the same.
 したがって、例えば、物体が、PD1911とPD1912を結ぶ線分の中心に存在する場合、PD1911は、いずれも(殆ど)同一の受光量で、物体から波長λ1の光の反射光と、波長λ2の光の反射光とを受光することができる。 Therefore, for example, when the object exists at the midpoint of the line segment connecting the PD 1911 and the PD 1912, the PD 1911 can receive the reflected light of the light of wavelength λ1 and the reflected light of the light of wavelength λ2 from the object with (almost) the same amount of received light.
 このような状況で、PD1911において輝度信号Vλ1及び輝度信号Vλ2を生成することができるので、精度良く肌を判別することが可能となる。 In such a situation, the luminance signal V λ1 and the luminance signal V λ2 can be generated in the PD 191 1 , so that the skin can be discriminated with high accuracy.
 また、LEDユニット192では、図28に示されるように、LED112aからPD1912までの距離と、LED112bからPD1912までの距離が同一となるため、PD1912についても、PD1911と同様のことが言える。 In the LED unit 192, as shown in FIG. 28, the distance from the LED 112a to the PD 1912 and the distance from the LED 112b to the PD 1912 are also the same, so the same can be said for the PD 1912 as for the PD 1911.
 次に、図29は、PD1911とPD1912を結ぶ線分の中心を通る、表示画面1aと垂直な法線上に存在するLEDユニット192の他の一例を示している。 Next, FIG. 29 shows another example of the LED unit 192 located on a normal that passes through the midpoint of the line segment connecting the PD 1911 and the PD 1912 and is perpendicular to the display screen 1a.
 なお、図29に示されるLEDユニット192は、図中左右方向にLED112a及びLED112bが並ぶように配置されている。 The LED unit 192 shown in FIG. 29 is arranged so that the LEDs 112a and 112b are arranged in the left-right direction in the drawing.
 この場合、図28に示した場合とは異なり、例えば、LED112aからPD1911までの距離と、LED112aからPD1912までの距離が異なるものとなる。 In this case, unlike the case shown in FIG. 28, for example, the distance from the LED 112a to the PD 1911 and the distance from the LED 112a to the PD 1912 differ.
 したがって、例えば、物体が、PD1911とPD1912を結ぶ線分の中心に存在する場合、PD1911及びPD1912は、それぞれ、異なる受光量で、物体から波長λ1の光の反射光を受光することになってしまう。 Therefore, for example, when the object exists at the midpoint of the line segment connecting the PD 1911 and the PD 1912, the PD 1911 and the PD 1912 end up receiving the reflected light of the light of wavelength λ1 from the object with different amounts of received light.
 この場合、PD1911及びPD1912から、図24のB左側及び中央に示されるような所望の出力結果(輝度信号Vλ1)が得られなくなってしまう。 In this case, the desired output result (luminance signal Vλ1) as shown in the left part and the center of B in FIG. 24 cannot be obtained from the PD 1911 and the PD 1912.
 そこで、LEDユニット192を、図29に示したように配置した場合には、PD1911及びPD1912でそれぞれ行われるゲインコントロール処理により、PD1911及びPD1912から、図24のB左側及び中央に示されるような所望の出力結果が得られるように調整することが望ましい。 Therefore, when the LED unit 192 is arranged as shown in FIG. 29, it is desirable to make adjustments by the gain control processing performed in each of the PD 1911 and the PD 1912 so that the desired output results as shown in the left part and the center of B in FIG. 24 are obtained from the PD 1911 and the PD 1912.
[PDを3個とする場合の構成]
 次に、図30は、デジタルフォトフレーム171において、3個のPD1911乃至1913を設けるようにした場合の一例が示されている。
[Configuration with 3 PDs]
Next, FIG. 30 shows an example in which three PDs 191 1 to 191 3 are provided in the digital photo frame 171.
 図30に示される構成では、PD1911及びPD1912からの出力結果に基づいて、図24のA乃至図24のCに示したようにして、図中左右方向のジェスチャなどを認識することができる。 In the configuration shown in FIG. 30, a gesture or the like in the left-right direction in the figure can be recognized based on the output results from the PD 1911 and the PD 1912, as shown in A to C of FIG. 24.
 また、PD1912及びPD1913からの出力結果に基づいて、PD1911及びPD1912からの出力結果を用いる場合と同様にして、図中上下方向のジェスチャなどを認識することができる。 A gesture or the like in the vertical direction in the figure can also be recognized based on the output results from the PD 1912 and the PD 1913, in the same manner as when the output results from the PD 1911 and the PD 1912 are used.
 なお、PD1912及びPD1913からの出力結果を用いる場合、LED112bから波長λ2の光が照射された物体の反射光を受光することにより出力される出力結果が用いられる。 When the output results from the PD 1912 and the PD 1913 are used, the output results obtained by receiving the reflected light from the object irradiated with the light of wavelength λ2 from the LED 112b are used.
 これは、LED112bからPD1912までの距離と、LED112bからPD1913までの距離が同一であることによる。これにより、PD1912及びPD1913からは、図24のA乃至図24のCに示したような所望の出力結果を得ることができる。 This is because the distance from the LED 112b to the PD 1912 and the distance from the LED 112b to the PD 1913 are the same. As a result, the desired output results as shown in A to C of FIG. 24 can be obtained from the PD 1912 and the PD 1913.
 なお、肌検出信号の生成時には、LED112aまでの距離と、LED112bまでの距離が同一となるPD1911(又はPD1912)からの出力結果(輝度信号Vλ1,Vλ2,Vλoff)を用いることが望ましい。 When generating the skin detection signal, it is desirable to use the output results (luminance signals Vλ1, Vλ2, Vλoff) from the PD 1911 (or the PD 1912), for which the distance to the LED 112a and the distance to the LED 112b are the same.
[Configuration with four PDs]
Next, FIG. 31 shows an example in which four PDs 191_1 to 191_4 are provided in the digital photo frame 171.
In the configuration shown in FIG. 31, gestures in the left-right direction in the drawing and the like can be recognized based on the output results from the PD 191_1 and the PD 191_2, as shown in A of FIG. 24 to C of FIG. 24.
Note that in FIG. 31, the distance from the LED 112a to the PD 191_1 is equal to the distance from the LED 112a to the PD 191_2. Likewise, the distance from the LED 112b to the PD 191_1 is equal to the distance from the LED 112b to the PD 191_2.
Therefore, when the output results from the PD 191_1 and the PD 191_2 are used, either the output results obtained by receiving the reflected light from the object irradiated with the light of the wavelength λ1 from the LED 112a or the output results obtained by receiving the reflected light from the object irradiated with the light of the wavelength λ2 from the LED 112b may be used.
The same applies when gestures in the left-right direction in the drawing and the like are recognized based on the output results from the PD 191_3 and the PD 191_4 in FIG. 31, as shown in A of FIG. 24 to C of FIG. 24.
Also, in FIG. 31, gestures in the up-down direction in the drawing and the like can be recognized based on the output results from the PD 191_2 and the PD 191_3, in the same manner as when the output results from the PD 191_1 and the PD 191_2 are used.
Furthermore, in FIG. 31, gestures in the up-down direction in the drawing and the like may be recognized based on the output results from the PD 191_1 and the PD 191_4, as shown in A of FIG. 24 to C of FIG. 24.
Note that in FIG. 31, for every one of the PDs 191_1 to 191_4, the distance to the LED 112a differs from the distance to the LED 112b.
Therefore, when the skin detection signal is generated, the PD 191_1 and the PD 191_4 are treated as a single first PD, and the PD 191_2 and the PD 191_3 are treated as a single second PD.
The sum of the output result from the PD 191_1 and the output result from the PD 191_4 is then treated as the output result from the first PD, and the sum of the output result from the PD 191_2 and the output result from the PD 191_3 is treated as the output result from the second PD.
In this way, the first PD and the second PD can be regarded in the same manner as the PD 191_1 and the PD 191_2 shown in FIG. 28.
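A minimal sketch of this pairing, assuming each physical PD delivers one (Vλ1, Vλ2, Vλoff) triple per measurement cycle; the data structure and the example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PdReading:
    v_l1: float   # luminance while the wavelength λ1 LED is lit
    v_l2: float   # luminance while the wavelength λ2 LED is lit
    v_off: float  # luminance while both LEDs are off (ambient only)

def combine(a: PdReading, b: PdReading) -> PdReading:
    """Treat two physical PDs as one virtual PD by summing their outputs."""
    return PdReading(a.v_l1 + b.v_l1, a.v_l2 + b.v_l2, a.v_off + b.v_off)

# Example values only: PD 191_1 and PD 191_4 form the first (virtual) PD,
# PD 191_2 and PD 191_3 form the second.
pd1, pd2, pd3, pd4 = (PdReading(0.30, 0.22, 0.05), PdReading(0.28, 0.20, 0.05),
                      PdReading(0.27, 0.21, 0.04), PdReading(0.31, 0.23, 0.05))
first_pd = combine(pd1, pd4)
second_pd = combine(pd2, pd3)
```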
[Configuration with three PDs and two LED units]
Next, FIG. 32 shows an example in which three PDs 191_1 to 191_3 and two LED units, the LED unit 192 and the LED unit 271, are provided in the digital photo frame 171.
In the case shown in FIG. 32, the LED unit 192 irradiates a first irradiation range, and the LED unit 271 irradiates a second irradiation range different from the first irradiation range. Note that the first irradiation range and the second irradiation range may partially overlap.
In the configuration of FIG. 32, the combination of the PD 191_1, the PD 191_2, and the LED unit 192 having the LED 112a and the LED 112b is used to determine whether an object within the first irradiation range is skin and to recognize gestures in the left-right direction in the drawing and the like within the first irradiation range.
In addition, the combination of the PD 191_2, the PD 191_3, and the LED unit 271 having the LED 291a and the LED 291b is used to determine whether an object within the second irradiation range is skin and to recognize gestures in the up-down direction in the drawing and the like within the second irradiation range.
<3. Third Embodiment>
[Configuration example of the digital photo frame 331]
Next, FIG. 33 shows a configuration example of a digital photo frame 331 according to the third embodiment.
The digital photo frame 331 has a display screen 1a. A PD 351 is provided above the display screen 1a, and an LED unit 371_1 and an LED unit 371_2 are provided to the left and right of the PD 351 in the drawing.
The PD 351 is configured in the same manner as the PD 191_n (for example, the PD 191_1) in FIG. 22. The LED unit 371_1 and the LED unit 371_2 are each configured in the same manner as the LED unit 192 in FIG. 22.
Note that the LED unit 371_1 and the LED unit 371_2 each irradiate, as their irradiation range, an assumed range in which a user's gesture is expected to be performed in front of the digital photo frame 331, for example.
Next, FIG. 34 shows an example of the positional relationship between the PD 351, the LED unit 371_1, and the LED unit 371_2.
The LED unit 371_1 has an LED 391a_1 that irradiates light of the wavelength λ1 and an LED 391b_1 that irradiates light of the wavelength λ2.
The LED unit 371_2 has an LED 391a_2 that irradiates light of the wavelength λ1 and an LED 391b_2 that irradiates light of the wavelength λ2.
As shown in FIG. 34, the PD 351 and the LED unit 371_1 are arranged so that the distance from the PD 351 to the LED 391a_1 is equal to the distance from the PD 351 to the LED 391b_1.
Therefore, a relatively accurate skin detection signal can be generated based on the luminance signal Vλ1 generated by the PD 351 while the LED 391a_1 is lit, the luminance signal Vλ2 generated by the PD 351 while the LED 391b_1 is lit, and the luminance signal Vλoff generated by the PD 351 while the LED 391a_1 and the LED 391b_1 are both off.
Furthermore, as shown in FIG. 34, the PD 351 and the LED unit 371_2 are arranged so that the distance from the PD 351 to the LED 391a_2 is equal to the distance from the PD 351 to the LED 391b_2.
Therefore, a relatively accurate skin detection signal can likewise be generated based on the luminance signal Vλ1 generated by the PD 351 while the LED 391a_2 is lit, the luminance signal Vλ2 generated by the PD 351 while the LED 391b_2 is lit, and the luminance signal Vλoff generated by the PD 351 while the LED 391a_2 and the LED 391b_2 are both off.
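As a rough sketch of how one such (Vλ1, Vλ2, Vλoff) triple could yield a skin detection signal: the version below subtracts the ambient reading Vλoff from each lit reading before applying the percentage-difference test described later in the section on PD saturation; the ambient subtraction and the threshold value are assumptions for this example.

```python
# Hedged sketch: derive a skin detection decision from one (Vλ1, Vλ2, Vλoff)
# triple. Skin reflects the λ1 light noticeably more than the λ2 light, while
# most non-skin materials do not. Values and threshold are examples only.

def skin_detected(v_l1: float, v_l2: float, v_off: float,
                  threshold: float = 10.0) -> bool:
    r1 = v_l1 - v_off          # reflected component at wavelength λ1
    r2 = v_l2 - v_off          # reflected component at wavelength λ2
    if r1 <= 0:
        return False           # nothing (or too little) reflected back
    return (r1 - r2) * 100.0 / r1 > threshold

print(skin_detected(0.50, 0.38, 0.05))  # True  -> skin-like reflectance ratio
print(skin_detected(0.50, 0.49, 0.05))  # False -> non-skin-like
```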
In FIG. 34, movement in the left-right direction in the drawing and the like can be recognized by comparing the output result of the PD 351 while the LED 391a_1 is lit with the output result of the PD 351 while the LED 391a_2 is lit. Note that the output results of the PD 351 while the LED 391b_1 is lit and while the LED 391b_2 is lit may instead be compared to recognize movement in the left-right direction in the drawing and the like.
That is, for example, when an object is located roughly directly above the LED unit 371_1, an output result such as that shown on the left of A of FIG. 24 is obtained as the output of the PD 351 while the LED 391a_1 is lit, and an output result such as that shown in the center of A of FIG. 24 is obtained as the output of the PD 351 while the LED 391a_2 is lit.
Also, for example, when an object is located roughly directly above the PD 351, an output result such as that shown on the left of B of FIG. 24 is obtained as the output of the PD 351 while the LED 391a_1 is lit, and an output result such as that shown in the center of B of FIG. 24 is obtained as the output of the PD 351 while the LED 391a_2 is lit.
Furthermore, for example, when an object is located roughly directly above the LED unit 371_2, an output result such as that shown on the left of C of FIG. 24 is obtained as the output of the PD 351 while the LED 391a_1 is lit, and an output result such as that shown in the center of C of FIG. 24 is obtained as the output of the PD 351 while the LED 391a_2 is lit.
The same applies when the LED 391b_1 and the LED 391b_2, which irradiate light of the wavelength λ2, are lit instead of the LED 391a_1 and the LED 391a_2, which irradiate light of the wavelength λ1.
Note that, as shown in FIG. 35, if the LED unit 371_2 is arranged below the PD 351, gestures in the up-down direction in the drawing can be recognized in addition to gestures in the left-right direction in the drawing.
In FIG. 35, when a gesture in the left-right direction in the drawing is recognized, the output result of the PD 351 while the LED 391b_1 is lit and the output result of the PD 351 while the LED 391b_2 is lit are used.
Also, in FIG. 35, when a gesture in the up-down direction in the drawing is recognized, the output result of the PD 351 while the LED 391a_1 is lit and the output result of the PD 351 while the LED 391a_2 is lit are used.
In FIG. 35, different combinations of LEDs are used for recognizing left-right gestures and for recognizing up-down gestures, for the following reason, which is explained with reference to FIG. 36.
FIG. 36 shows an example of the output results of the PD 351 when an object moves from left to right in the drawing of FIG. 35.
In the figure, the graph drawn with a solid line shows an example of the output result obtained from the PD 351 while the LED 391b_1 is lit. This graph reaches its maximum at time t1.
The graph drawn with a dotted line shows an example of the output result obtained from the PD 351 while the LED 391b_2 is lit. This graph reaches its maximum at time t2.
In FIG. 35, the difference between the distance from an object moving from left to right in the drawing to the LED 391b_1 and the distance from that object to the LED 391b_2 is relatively small.
Therefore, as shown in FIG. 36, the time difference |t1 - t2| between the maxima of the output result obtained from the PD 351 while the LED 391b_1 is lit and the output result obtained from the PD 351 while the LED 391b_2 is lit is relatively short.
For this reason, depending on the detection results of the PD 351, output results whose maxima occur at almost the same time may be obtained, and the movement of the object may not be recognized correctly.
Therefore, when recognizing a gesture of an object moving from left to right in the drawing, the combination of the LED 391a_1 and the LED 391a_2 is used. This is because, compared with the combination of the LED 391b_1 and the LED 391b_2, the difference between the distance from the object to the LED 391a_1 and the distance from the object to the LED 391a_2 is relatively large.
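The direction decision itself reduces to comparing the times at which the two per-LED traces peak. Below is a minimal sketch under the assumption that each trace is available as a list of (time, luminance) samples; the minimum peak separation used to reject ambiguous cases is an illustrative parameter.

```python
# Hedged sketch: infer left/right movement from the peak times of the traces
# recorded while LED 391a_1 and LED 391a_2 were lit, respectively. The
# min_separation guard reflects the |t1 - t2| ambiguity discussed above.

def peak_time(trace: list[tuple[float, float]]) -> float:
    """Return the time at which the (time, luminance) trace is largest."""
    return max(trace, key=lambda sample: sample[1])[0]

def horizontal_direction(trace_a1, trace_a2, min_separation: float = 0.02):
    t1, t2 = peak_time(trace_a1), peak_time(trace_a2)
    if abs(t1 - t2) < min_separation:
        return None            # peaks too close together: direction ambiguous
    return "left_to_right" if t1 < t2 else "right_to_left"
```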
[Detailed configuration example of the digital photo frame 331]
Next, FIG. 37 shows a detailed configuration example of the digital photo frame 331.
Note that in the digital photo frame 331, parts configured in the same manner as in the digital photo frame 1 shown in FIG. 10 are given the same reference numerals, and their description is omitted below as appropriate.
That is, the digital photo frame 331 is configured in the same manner as the digital photo frame 1, except that a control unit 411 having a plurality of LED units 371_1 to 371_N is provided in place of the control unit 66 (FIG. 10) having the plurality of sensors 21_1 to 21_N.
[Detailed configuration example of the control unit 411]
FIG. 38 shows a detailed configuration example of the control unit 411.
The control unit 411 includes the LED units 371_n (n = 1, 2, ..., N) as well as the current control units 92_n, the timing control units 93_n, a gain control unit 94, an AD conversion unit 95, the PD 351, and a processing unit 431. A lens 352 is provided on the front surface of the PD 351.
The current control unit 92_n, the timing control unit 93_n, and the LED driver 111_n in FIG. 38 are configured in the same manner as the current control unit 92_n, the timing control unit 93_n, and the LED driver 111_n in FIG. 11, respectively, and are therefore given the same reference numerals.
Furthermore, the gain control unit 94 and the AD conversion unit 95 in FIG. 38 are configured in the same manner as the gain control unit 94_n and the AD conversion unit 95_n in FIG. 11, respectively. The PD 351 in FIG. 38 is configured in the same manner as the PD 114_n in FIG. 11.
The LED 391a_n, the LED 391b_n, the lens 392a_n, and the lens 392b_n are configured in the same manner as the LED 112a_n, the LED 112b_n, the lens 113a_n, and the lens 113b_n in FIG. 11, respectively.
The processing unit 431 controls the current control unit 92_n, the timing control unit 93_n, and the gain control unit 94 in the same manner as the processing unit 91 in FIG. 11.
Note that the LED unit 371_n turns the LED 391a_n on and off and turns the LED 391b_n on and off in the same manner as the LED 112a_n and the LED 112b_n of the sensor 21_n described with reference to FIG. 13.
[Fourth gesture recognition process performed by the digital photo frame 331]
Next, the fourth gesture recognition process performed by the digital photo frame 331 will be described with reference to the flowchart in FIG. 39.
This fourth gesture recognition process is started, for example, when the power of the digital photo frame 331 is turned on.
In step S161, the processing unit 431 selects a predetermined LED unit 371_n from among the plurality of LED units 371_1 to 371_N as the LED unit of interest 371_n.
In step S162, the processing unit 431 causes a Vλ1 acquisition process to be performed in which the luminance signal Vλ1 is generated using the LED unit of interest 371_n and the PD 351 and is output to the AD conversion unit 95.
That is, for example, the processing unit 431 instructs the current control unit 92_n on the currents to be supplied to the LED 391a_n and the LED 391b_n, and instructs the timing control unit 93_n on the timing of turning the LED 391a_n and the LED 391b_n on and off.
In response, the current control unit 92_n and the timing control unit 93_n control the LED driver 111_n in accordance with the instructions from the processing unit 431.
The LED driver 111_n turns on only the LED 391a_n in accordance with the control from the current control unit 92_n and the timing control unit 93_n, thereby irradiating light of the wavelength λ1.
At this time, the PD 351 receives the reflected light produced by the irradiation with the light of the wavelength λ1, and outputs the luminance signal Vλ1 obtained by photoelectrically converting the received reflected light to the AD conversion unit 95.
The AD conversion unit 95 AD-converts the luminance signal Vλ1 from the PD 351 and supplies the AD-converted luminance signal Vλ1 to the processing unit 431.
In step S163, the processing unit 431 causes a Vλ2 acquisition process to be performed in which the luminance signal Vλ2 is generated using the LED unit of interest 371_n and the PD 351 and is output to the AD conversion unit 95.
That is, for example, the processing unit 431 instructs the current control unit 92_n on the currents to be supplied to the LED 391a_n and the LED 391b_n, and instructs the timing control unit 93_n on the timing of turning the LED 391a_n and the LED 391b_n on and off.
In response, the current control unit 92_n and the timing control unit 93_n control the LED driver 111_n in accordance with the instructions from the processing unit 431.
The LED driver 111_n turns on only the LED 391b_n in accordance with the control from the current control unit 92_n and the timing control unit 93_n, thereby irradiating light of the wavelength λ2.
At this time, the PD 351 receives the reflected light produced by the irradiation with the light of the wavelength λ2, and outputs the luminance signal Vλ2 obtained by photoelectrically converting the received reflected light to the AD conversion unit 95.
The AD conversion unit 95 AD-converts the luminance signal Vλ2 from the PD 351 and supplies the AD-converted luminance signal Vλ2 to the processing unit 431.
In step S164, the processing unit 431 causes a Vλoff acquisition process to be performed in which the luminance signal Vλoff is generated using the LED unit of interest 371_n and the PD 351 and is output to the AD conversion unit 95.
That is, for example, the processing unit 431 instructs the current control unit 92_n on the currents to be supplied to the LED 391a_n and the LED 391b_n, and instructs the timing control unit 93_n on the timing of turning the LED 391a_n and the LED 391b_n on and off.
In response, the current control unit 92_n and the timing control unit 93_n control the LED driver 111_n in accordance with the instructions from the processing unit 431.
The LED driver 111_n turns off the LED 391a_n and the LED 391b_n in accordance with the control from the current control unit 92_n and the timing control unit 93_n.
At this time, the PD 351 receives the reflected light of external light and outputs the luminance signal Vλoff obtained by photoelectrically converting the received reflected light to the AD conversion unit 95.
The AD conversion unit 95 AD-converts the luminance signal Vλoff from the PD 351 and supplies the AD-converted luminance signal Vλoff to the processing unit 431.
Through the processing of steps S162 to S164 described above, the combination of luminance signals (Vλ1, Vλ2, Vλoff)_n is supplied from the PD 351 to the processing unit 431 via the AD conversion unit 95. Note that the subscript n of the combination (Vλ1, Vλ2, Vλoff)_n corresponds to the subscript n of the LED unit of interest 371_n.
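A compact sketch of this three-phase acquisition cycle (steps S162 to S164), assuming a hypothetical hardware layer exposed as set_led() and read_pd(); those names, the stub bodies, and the settle delay are assumptions for illustration, not the actual driver interface.

```python
import time

# Hypothetical hardware shims; in the device these would drive the LED driver
# 111_n and read the PD 351 through the AD conversion unit 95.
def set_led(unit: int, led: str, on: bool) -> None:
    pass  # stub

def read_pd() -> float:
    return 0.0  # stub

def acquire_triple(unit: int, settle_s: float = 0.001) -> tuple[float, float, float]:
    """Return (Vλ1, Vλ2, Vλoff) for one LED unit of interest (steps S162-S164)."""
    readings = []
    for lit in ("a", "b", None):            # LED 391a_n on, LED 391b_n on, both off
        set_led(unit, "a", lit == "a")
        set_led(unit, "b", lit == "b")
        time.sleep(settle_s)                # let the optical/analog path settle
        readings.append(read_pd())          # one AD-converted luminance value
    return tuple(readings)
```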
In step S165, the processing unit 431 performs the same skin discrimination processing as the processing unit 91, based on the combination of luminance signals (Vλ1, Vλ2, Vλoff)_n from the AD conversion unit 95. In this skin discrimination processing, a skin detection signal D_n is generated.
In step S166, the processing unit 431 generates gesture recognition information J_n based on, for example, the luminance signal Vλ1 of the combination of luminance signals (Vλ1, Vλ2, Vλoff)_n.
The processing unit 431 supplies the generated skin detection signal and gesture recognition information to the CPU 61 via the input/output interface 65 and the bus 64.
In step S167, the processing unit 431 determines whether all of the plurality of LED units 371_1 to 371_N have been selected as the LED unit of interest. If it determines that not all of the plurality of LED units 371_1 to 371_N have been selected, the process returns to step S161.
Then, in step S161, the processing unit 431 selects an LED unit 371_n that has not yet been selected from among the plurality of LED units 371_1 to 371_N as the new LED unit of interest 371_n, and the same processing is performed thereafter.
If the processing unit 431 determines in step S167 that all of the plurality of LED units 371_1 to 371_N have been selected, the process proceeds to step S168.
Through the processing of steps S161 to S167 described above, each time one of the LED units 371_1 to 371_N is selected as the LED unit of interest, the skin detection signal and the gesture recognition information are supplied from the PD 351 to the CPU 61 via the input/output interface 65 and the bus 64.
In steps S168 to S170, the same processing as in steps S8 to S10 in FIG. 16 is performed.
Note that the fourth gesture recognition process is ended, for example, when the power of the digital photo frame 331 is turned off.
As described above, according to the fourth gesture recognition process, when an object that is skin is present within the detection range of the PD 351 (the common irradiation range irradiated by each of the LED units 371_n), the position, movement, and the like of the object as skin are recognized.
Therefore, when the object is something other than skin, it is possible to prevent a situation in which the position, movement, and the like of the object are erroneously recognized and processing according to that recognition result is performed.
Note that the fourth gesture recognition process can be modified in the same manner as the first gesture recognition process.
For example, when the fourth gesture recognition process is modified like the second gesture recognition process shown in FIG. 20, in the processing corresponding to step S91 and step S101 in FIG. 20, the processing unit 431 selects an LED unit 371_n as the LED unit of interest.
Also, in the processing corresponding to step S97 and step S104 in FIG. 20, the processing unit 431 determines whether all of the LED units 371_n have been selected. Otherwise, the same processing as the second gesture recognition process shown in FIG. 20 is performed.
Furthermore, for example, processing corresponding to the proximity object detection processing shown in FIG. 21 may be performed, and the fourth gesture recognition process may be performed in response to an object entering the detection range of the PD 351.
In this case, as the processing corresponding to step S121 and step S122 in FIG. 21, the Vλ1 acquisition process using a predetermined LED unit 371_n and the PD 351 is performed. Then, as the processing corresponding to step S123 in FIG. 21, the processing unit 431 determines whether an object has entered the detection range of the PD 351 based on the luminance signal Vλ1 obtained by the Vλ1 acquisition process.
If, as the processing corresponding to step S123 in FIG. 21, it is detected from that determination result that no object has entered the detection range of the PD 351, the process returns to the processing corresponding to step S121 and step S122 in FIG. 21, and the Vλ1 acquisition process is performed again.
If, as the processing corresponding to step S123 in FIG. 21, it is detected from that determination result that an object has entered the detection range of the PD 351, the fourth gesture recognition process is performed as the processing corresponding to step S124 in FIG. 21.
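A rough sketch of this proximity-gated variant, reusing the acquire_triple() and skin_detected() helpers sketched earlier; the entry threshold on Vλ1 is an assumed value.

```python
# Hedged sketch: poll only Vλ1 until an object enters the detection range,
# then run the full per-LED-unit cycle. ENTRY_THRESHOLD is an assumption.
ENTRY_THRESHOLD = 0.10

def wait_for_object(unit: int) -> None:
    while True:
        v_l1, _, _ = acquire_triple(unit)      # only Vλ1 is needed at this stage
        if v_l1 > ENTRY_THRESHOLD:             # something reflects enough λ1 light
            return

def gated_recognition(units: list[int]) -> None:
    wait_for_object(units[0])                  # analogue of steps S121 to S123
    for unit in units:                         # analogue of step S124: full cycle
        v_l1, v_l2, v_off = acquire_triple(unit)
        if skin_detected(v_l1, v_l2, v_off):
            pass  # generate gesture recognition information from v_l1 here
```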
<4. Modification>
The series of processes described above can be executed by hardware or by software. In addition, in the present disclosure, the steps describing the series of processes described above include not only processing performed in time series in the described order, but also processing executed in parallel or individually without necessarily being processed in time series.
Furthermore, the embodiments of the present disclosure are not limited to the first to third embodiments described above, and various modifications are possible without departing from the gist of the present disclosure.
In the present disclosure, a digital photo frame that distinguishes whether an object is skin and recognizes the position, movement, and the like of the object as skin has been described. However, other electronic apparatuses, such as tablet PCs and mobile phones, can also be made to distinguish whether an object is skin and to perform processing for recognizing the position, movement, and the like of the object as skin.
In the first embodiment, the digital photo frame 1 is provided with the control unit 66 that outputs the skin detection signal, the gesture recognition information, and the like, and the CPU 61 recognizes the movement and the like of the object as skin based on the output results from the control unit 66.
However, the control unit 66 that outputs the skin detection signal, the gesture recognition information, and the like can also be configured as a single gesture output device.
In this case, the gesture output device is connected to a digital photo frame or the like that does not have the control unit 66, and such a digital photo frame changes the contents of its display screen and the like according to the output results from the connected gesture output device. The same applies to the second and third embodiments.
Note that the present technology can also have the following configurations.
(1)
An information processing apparatus including:
an irradiation unit having a first irradiation unit that irradiates light of a first wavelength, and a second irradiation unit that irradiates light of a second wavelength different from the first wavelength;
a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength, and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength;
a skin detection unit that detects whether or not the object is skin based on the first and second detection signals; and
a generation unit that generates recognition information for recognizing at least one of the position or movement of the object detected as skin, based on at least one of the first or second detection signals.
(2)
The information processing apparatus according to (1), further including another light receiving unit configured in the same manner as the light receiving unit,
wherein the generation unit generates the recognition information based on at least one of the first or second detection signals generated for each of the plurality of light receiving units.
(3)
The information processing apparatus according to (2), wherein
the first irradiation unit is arranged at the same distance from each of the plurality of light receiving units, and
the generation unit generates the recognition information based on the first detection signals generated for each of the plurality of light receiving units.
(4)
The information processing apparatus according to (2), wherein
the light receiving unit is arranged at the same distance from the first irradiation unit and from the second irradiation unit, and
the skin detection unit detects whether or not the object is skin based on the first and second detection signals generated by the light receiving unit.
(5)
The information processing apparatus according to (1), further including another irradiation unit configured in the same manner as the irradiation unit,
wherein the light receiving unit generates the first detection signal for each irradiation unit that irradiates the light of the first wavelength at a different timing, and generates the second detection signal for each irradiation unit that irradiates the light of the second wavelength at a different timing, and
the generation unit generates the recognition information based on at least one of the first or second detection signals generated for each irradiation unit.
(6)
The information processing apparatus according to (5), wherein, when the interval between the first irradiation units in a first direction is longer than the interval between the second irradiation units, the generation unit generates the recognition information for recognizing at least one of the position or movement of the object in a second direction perpendicular to the first direction, based on the first detection signals generated by the light receiving unit for each of the plurality of irradiation units.
(7)
The information processing apparatus according to (1), further including a plurality of sensors each having the irradiation unit and the light receiving unit, the irradiation unit of each sensor irradiating a different irradiation range,
wherein the skin detection unit detects whether or not the object is skin based on the first and second detection signals generated for each of the plurality of sensors, and
the generation unit generates the recognition information based on at least one of the first or second detection signals generated for each of the plurality of sensors.
(8)
The information processing apparatus according to any one of (1) to (7), further including a proximity detection unit that detects whether or not the object has entered a predetermined detection range based on the first detection signal,
wherein the skin detection unit detects whether or not the object is skin based on the first and second detection signals in response to it being detected that the object has entered the detection range.
(9)
The information processing apparatus according to any one of (1) to (8), further including an object detection unit that detects, in response to it being detected that the object is skin, whether or not the object is present within a predetermined detection range based on the first detection signal,
wherein the information processing apparatus treats the object as skin when it is detected that the object is present within the detection range.
(10)
The information processing apparatus according to any one of (1) to (9), further including a signal generation unit that generates an output signal having a magnitude corresponding to the position of the object,
wherein the generation unit generates the recognition information also based on the output signal.
(11)
The information processing apparatus according to any one of (1) to (10), wherein the first wavelength λ1 and the second wavelength λ2, which is longer than the first wavelength λ1, satisfy
640 nm ≤ λ1 ≤ 1000 nm, and
900 nm ≤ λ2 ≤ 1100 nm.
(12)
The information processing apparatus according to (11), wherein
the first irradiation unit irradiates invisible light of the first wavelength λ1, and
the second irradiation unit irradiates invisible light of the second wavelength λ2.
(13)
The information processing apparatus according to any one of (1) to (9), wherein the light receiving unit is provided with a visible light cut filter that blocks visible light incident on the light receiving unit.
<5. Other>
Incidentally, in the digital photo frame 1 (FIG. 1) according to the first embodiment, processing is performed by each sensor 21_n as shown in FIG. 40.
[About PD saturation]
Next, FIG. 40 shows an outline of the processing performed by each sensor 21_n.
For example, in the sensor 21_1, as shown in A of FIG. 40, the LED 112a_1 is lit during the lighting period "LED λ1", the LED 112b_1 is lit during the lighting period "LED λ2", and the LED 112a_1 and the LED 112b_1 are off during the off period "LED off".
Note that at this time, the other sensors 21_2 to 21_N are assumed to be off during all of the periods "LED λ1", "LED λ2", and "LED off" shown in A of FIG. 40.
Correspondingly, in the sensor 21_1, as shown in the upper part of C of FIG. 40, the PD 114_1 outputs a luminance signal Lum#1_λ1 obtained by receiving the reflected light from an object irradiated with the light of the wavelength λ1 by the LED 112a_1.
Also, in the sensor 21_1, as shown in the upper part of C of FIG. 40, the PD 114_1 outputs a luminance signal Lum#1_λ2 obtained by receiving the reflected light from an object irradiated with the light of the wavelength λ2 by the LED 112b_1.
Furthermore, for example, in the sensor 21_2, as shown in B of FIG. 40, the LED 112a_2 is lit during the lighting period "LED λ1", the LED 112b_2 is lit during the lighting period "LED λ2", and the LED 112a_2 and the LED 112b_2 are off during the off period "LED off".
Note that at this time, the other sensors 21_1 and 21_3 to 21_N are assumed to be off during all of the periods "LED λ1", "LED λ2", and "LED off" shown in B of FIG. 40.
Correspondingly, in the sensor 21_2, as shown in the lower part of C of FIG. 40, the PD 114_2 outputs a luminance signal Lum#2_λ1 obtained by receiving the reflected light from an object irradiated with the light of the wavelength λ1 by the LED 112a_2.
Also, in the sensor 21_2, as shown in the lower part of C of FIG. 40, the PD 114_2 outputs a luminance signal Lum#2_λ2 obtained by receiving the reflected light from an object irradiated with the light of the wavelength λ2 by the LED 112b_2.
Then, as shown in D of FIG. 40, the processing unit 91 generates a skin detection signal based on the luminance signal Lum#1_λ1, the luminance signal Lum#1_λ2, and so on, output from the PD 114_1 of the sensor 21_1 via the AD conversion unit 95_1.
Also, as shown in D of FIG. 40, the processing unit 91 generates a skin detection signal based on the luminance signal Lum#2_λ1, the luminance signal Lum#2_λ2, and so on, output from the PD 114_2 of the sensor 21_2 via the AD conversion unit 95_2.
Now, for example, when the user's hand or the like is brought closer to the sensor 21_1, at a certain distance the luminance signal Lum#1_λ1 output from the PD 114_1 of the sensor 21_1 may become saturated, as shown in the upper part of C of FIG. 41.
Note that FIG. 41 is the same as FIG. 40 except for the upper part of C of FIG. 41.
As shown in the upper part of FIG. 41, when the luminance signal Lum#1_λ1 is saturated, the difference between the luminance signal Lum#1_λ1 and the luminance signal Lum#1_λ2 becomes small.
For this reason, when the processing unit 91 generates a skin detection signal based on the saturated luminance signal Lum#1_λ1, the luminance signal Lum#1_λ2, and so on, an erroneous skin detection signal may be generated (for example, a skin detection signal indicating that no skin was detected even though the object is skin).
In this respect, when the user holds a hand in front of the digital photo frame 1 to perform a gesture operation, for example, the user tends to adjust the position of the hand when first holding it up, unconsciously searching for a position at which the hand is detected correctly.
Therefore, it can be said that a situation in which the luminance signal Lum#1_λ1 becomes saturated, as shown in the upper part of FIG. 41, hardly ever occurs.
However, if such a situation does occur during operation, the digital photo frame 1 may fail to perform processing according to the movement of the user's hand or the like, leaving the user with the feeling that the gesture operation was obstructed.
Therefore, it is desirable, for example, that the processing unit 91 hold the internal state of each PD 114_n in a built-in memory (not shown) or the like, and determine the skin detection state according to the transitions of that internal state.
Next, FIG. 42 shows an example of how the internal state of (the information representing) the PD 114_n transitions.
In FIG. 42, the internal state "NO_SKIN" represents a state in which no skin part is detected. When the internal state of the PD 114_n is "NO_SKIN", the processing unit 91 outputs Sensor#n Detect = FALSE, that is, a skin detection signal indicating that no skin has been detected (a skin detection signal set to OFF).
Also, in FIG. 42, the internal state "SKIN_DETECT" represents a state in which a skin part is detected. When the internal state of the PD 114_n is "SKIN_DETECT", the processing unit 91 outputs Sensor#n Detect = TRUE, that is, a skin detection signal indicating that skin has been detected (a skin detection signal set to ON).
The processing unit 91 calculates {(Lum#n_λ1 - Lum#n_λ2) × 100 / Lum#n_λ1} based on the luminance signal Lum#n_λ1 and the luminance signal Lum#n_λ2 supplied from the PD 114_n via the AD conversion unit 95_n (where #n corresponds to the subscript n of the PD 114_n). Here, Lum#n_λ1 and Lum#n_λ2 denote the luminance values represented by the luminance signal Lum#n_λ1 and the luminance signal Lum#n_λ2, respectively.
Then, when the calculated {(Lum#n_λ1 - Lum#n_λ2) × 100 / Lum#n_λ1} exceeds a predetermined threshold (for example, 10), the processing unit 91 judges "skin detected" as shown in FIG. 42 and makes the internal state of the PD 114_n transition from "NO_SKIN" to "SKIN_DETECT".
Note that when the calculated {(Lum#n_λ1 - Lum#n_λ2) × 100 / Lum#n_λ1} does not exceed the predetermined threshold, the processing unit 91 does not make the internal state of the PD 114_n transition, and it remains "NO_SKIN".
Also, the processing unit 91 judges whether to make the internal state of the PD 114_n transition from "SKIN_DETECT" to "NO_SKIN" based on at least one of the luminance values Lum#n_λ1, Lum#n_λ2, or (Lum#n_λ1 + Lum#n_λ2), and does not make the transition from "SKIN_DETECT" to "NO_SKIN" based on the judgment of the skin discrimination processing.
Specifically, for example, as shown in FIG. 42, the processing unit 91 makes the internal state of the PD 114_n transition from "SKIN_DETECT" to "NO_SKIN" when the luminance value Lum#n_λ1 is below a certain threshold.
Note that, for example, when the luminance value Lum#n_λ1 is not below that threshold, the processing unit 91 does not make the internal state of the PD 114_n transition, and it remains "SKIN_DETECT".
For example, the processing unit 91 determines the internal state of the PD 114_n as described above, based on the luminance signals supplied from the PD 114_n via the AD conversion unit 95_n, and then outputs a skin detection signal according to the determined internal state of the PD 114_n.
That is, for example, when the internal state of the PD 114_n is "NO_SKIN", the processing unit 91 outputs Sensor#n Detect = FALSE, that is, a skin detection signal indicating that no skin has been detected (a skin detection signal set to OFF).
Also, for example, when the internal state of the PD 114_n is "SKIN_DETECT", the processing unit 91 outputs Sensor#n Detect = TRUE, that is, a skin detection signal indicating that skin has been detected (a skin detection signal set to ON).
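A minimal sketch of this hysteresis, using the example threshold of 10 for entering SKIN_DETECT; the exit threshold on Lum#n_λ1 is only described as "a certain threshold" above, so the value below is an assumption.

```python
# Hedged sketch of the per-PD internal state held by the processing unit 91.
# ENTER_THRESHOLD follows the example value of 10 above; EXIT_LUM_THRESHOLD
# stands in for the unspecified "certain threshold" on Lum#n_λ1.

class PdState:
    ENTER_THRESHOLD = 10.0      # percent, {(Lum_λ1 - Lum_λ2) * 100 / Lum_λ1}
    EXIT_LUM_THRESHOLD = 0.05   # assumed: object has left / signal has collapsed

    def __init__(self) -> None:
        self.state = "NO_SKIN"

    def update(self, lum_l1: float, lum_l2: float) -> bool:
        """Update the state from one (Lum#n_λ1, Lum#n_λ2) pair; return Detect."""
        if self.state == "NO_SKIN":
            if lum_l1 > 0 and (lum_l1 - lum_l2) * 100.0 / lum_l1 > self.ENTER_THRESHOLD:
                self.state = "SKIN_DETECT"
        else:
            # SKIN_DETECT: leave only when Lum#n_λ1 drops below the exit threshold,
            # not on the discrimination formula (robust against saturation).
            if lum_l1 < self.EXIT_LUM_THRESHOLD:
                self.state = "NO_SKIN"
        return self.state == "SKIN_DETECT"
```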
As described above, since the processing unit 91 holds the internal state of the PD 114_n, operation errors caused by saturation of the PD 114_n during operation of the sensor 21_n can be prevented.
The same applies to the digital photo frame 171 according to the second embodiment and the digital photo frame 331 according to the third embodiment.
Next, FIG. 43 shows an outline of the processing performed by the control unit 211 of the digital photo frame 171 according to the second embodiment.
For example, in the LED unit 192 of the control unit 211, as shown in A of FIG. 43, the LED 112a is lit during the lighting period "LED λ1", the LED 112b is lit during the lighting period "LED λ2", and the LED 112a and the LED 112b are off during the off period "LED off".
Correspondingly, in the control unit 211, as shown in B of FIG. 43, the PD 191_1 outputs a luminance signal Lum#1_λ1 obtained by receiving the reflected light from an object irradiated with the light of the wavelength λ1 by the LED 112a.
Also, in the control unit 211, as shown in B of FIG. 43, the PD 191_1 outputs a luminance signal Lum#1_λ2 obtained by receiving the reflected light from an object irradiated with the light of the wavelength λ2 by the LED 112b.
Furthermore, in the control unit 211, as shown in C of FIG. 43, the PD 191_2 outputs a luminance signal Lum#2_λ1 obtained by receiving the reflected light from an object irradiated with the light of the wavelength λ1 by the LED 112a.
Also, in the control unit 211, as shown in C of FIG. 43, the PD 191_2 outputs a luminance signal Lum#2_λ2 obtained by receiving the reflected light from an object irradiated with the light of the wavelength λ2 by the LED 112b.
Then, as shown in C of FIG. 43, the processing unit 231 generates a skin detection signal based on the luminance signal Lum#1_λ1, the luminance signal Lum#1_λ2, and so on, output from the PD 191_1 of the control unit 211 via the AD conversion unit 95_1.
Also, as shown in C of FIG. 43, the processing unit 231 generates a skin detection signal based on the luminance signal Lum#2_λ1, the luminance signal Lum#2_λ2, and so on, output from the PD 191_2 of the control unit 211 via the AD conversion unit 95_2.
In the processing unit 231, as in the processing unit 91, the internal state of each PD 191_n can be held in a built-in memory (not shown) or the like, and whether to adopt the judgment based on the skin detection signal can be decided according to the transitions of that internal state.
Note that the transitions of the internal state of the PD 191_n are performed by the processing unit 231 as described with reference to FIG. 42. This makes it possible to prevent operation errors caused by saturation of the PD 191_n during operation of the control unit 211.
 次に、図44は、第3の実施の形態であるデジタルフォトフレーム331の制御部411が行う処理の概要を示している。 Next, FIG. 44 shows an outline of processing performed by the control unit 411 of the digital photo frame 331 according to the third embodiment.
 例えば、制御部411のLEDユニット3711において、図44のAに示されるように、点灯期間「LED λ1」でLED391a1が点灯し、点灯期間「LED λ2」でLED391b1が点灯し、消灯期間「LED off」でLED391a1及びLED391b1が消灯する。 For example, the LED unit 371 1 of the control unit 411, as shown in A of FIG. 44, and LED391a 1 at the lighting period "LED .lambda.1" lights, LED391b 1 is turned in the lighting period "LED .lambda.2" off period “LED off” turns off the LEDs 391a 1 and 391b 1 .
 なお、このとき、他のLEDユニット3712乃至371Nは、図44のAに示されるいずれの期間「LED λ1」、「LED λ2」及び「LED off」においても消灯しているものとする。 At this time, it is assumed that the other LED units 371 2 to 371 N are turned off during any of the periods “LED λ1”, “LED λ2”, and “LED off” shown in FIG.
 これに対応して、制御部411において、図44のCに示されるように、PD351は、LED391a1により、波長λ1の光が照射されている物体からの反射光を受光して得られる輝度信号Lum#1_λ1を出力する。 Correspondingly, in the control unit 411, as shown in FIG. 44C, the PD 351 receives the reflected light from the object irradiated with the light of the wavelength λ1 by the LED 391a 1 and obtains the luminance signal. Lum # 1_λ1 is output.
 また、制御部411において、図44のCに示されるように、PD351は、LED391b1により、波長λ2の光が照射されている物体からの反射光を受光して得られる輝度信号Lum#1_λ2を出力する。 In the control unit 411, as shown in C of FIG. 44, PD351, due LED391b 1, a luminance signal Lum # 1_λ2 obtained by receiving the reflected light from an object light of the wavelength λ2 is irradiated Output.
 さらに、例えば、LEDユニット3712において、図44のBに示されるように、点灯期間「LED λ1」でLED391a2が点灯し、点灯期間「LED λ2」でLED391b2が点灯し、消灯期間「LED off」でLED391a2及びLED391b2が消灯する。 Furthermore, for example, in the LED unit 371 2, as shown in B of FIG. 44, and LED391a 2 in the lighting period "LED .lambda.1" lights, LED391b 2 is turned in the lighting period "LED .lambda.2" off period "LED When “off”, the LED 391a 2 and the LED 391b 2 are turned off.
 なお、このとき、他のLEDユニット3711及び3713乃至371Nは、図44のBに示されるいずれの期間「LED λ1」、「LED λ2」及び「LED off」においても消灯しているものとする。 At this time, the other LED units 371 1 and 371 3 to 371 N are turned off during any of the periods “LED λ1,” “LED λ2,” and “LED off” shown in FIG. And
 これに対応して、制御部411において、図44のCに示されるように、PD351は、LED391a2により、波長λ1の光が照射されている物体からの反射光を受光して得られる輝度信号Lum#2_λ1を出力する。 Correspondingly, in the control unit 411, as shown in FIG. 44C, the PD 351 receives the reflected light from the object irradiated with the light of the wavelength λ1 by the LED 391a 2 and obtains the luminance signal. Lum # 2_λ1 is output.
 また、制御部411において、図44のCに示されるように、PD351は、LED391b2により、波長λ2の光が照射されている物体からの反射光を受光して得られる輝度信号Lum#2_λ2を出力する。 In the control unit 411, as shown in C of FIG. 44, PD351, due LED391b 2, a luminance signal Lum # 2_λ2 obtained by receiving the reflected light from an object light of the wavelength λ2 is irradiated Output.
 Then, as shown in C of FIG. 44, the processing unit 431 generates a skin detection signal based on the luminance signals Lum#1_λ1 and Lum#1_λ2 output from the PD 351 of the control unit 411 via the AD conversion unit 95.
 Also, as shown in C of FIG. 44, the processing unit 431 generates a skin detection signal based on the luminance signals Lum#2_λ1 and Lum#2_λ2 output from the PD 351 of the control unit 411 via the AD conversion unit 95.
 In the processing unit 431, as in the processing unit 91, the internal state of the PD 351 for the irradiation of each LED unit 371n can be held in a built-in memory (not shown) or the like, and whether or not to adopt the determination based on the skin detection signal can be decided according to the transition of that internal state.
 Note that the transition of the internal state of the PD 351 for the irradiation of each LED unit 371n is carried out by the processing unit 431 as described with reference to FIG. 42. This prevents operation errors caused by saturation of the PD 351 while the control unit 411 is operating.
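Both of these embodiments gate the skin determination on a per-PD internal state so that readings taken while a photodiode is saturated are not trusted. The sketch below illustrates only that idea; the actual state-transition rules of FIG. 42 are not restated here, and the assumption that saturation is detected by the reading hitting the AD full-scale value is mine.

```python
# Simplified sketch of per-PD saturation gating (not the FIG. 42 rules themselves).
AD_FULL_SCALE = 1023  # assumed 10-bit AD conversion unit

class PdState:
    def __init__(self):
        self.saturated = False

    def update(self, raw_value):
        # Track the internal state from the latest reading.
        self.saturated = raw_value >= AD_FULL_SCALE

    def accept_skin_decision(self):
        # The skin-detection result is only adopted while the PD is not
        # saturated, which prevents operation errors caused by saturation.
        return not self.saturated
```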
[About the arrangement of each part]
 Next, with reference to FIGS. 45 and 46, the distance between the sensor 21_1 and the sensor 21_2 in the digital photo frame 1 according to the first embodiment will be described.
 Note that FIGS. 45 and 46 each show an example of the digital photo frame 1 as viewed from the lower side in FIG. 1.
 The shortest usable distance l is determined in advance, for example when the digital photo frame 1 is manufactured, based on the feel of gesture operation by the user and the like.
 The shortest usable distance l is, for example, the shortest distance at which the position, movement, and the like of the user's hand can be continuously recognized when the user holds a hand up in front of the digital photo frame 1 to perform a gesture operation.
 For this reason, when the user performs a gesture operation at less than the shortest usable distance l from the digital photo frame 1, the position, movement, and the like of the user's hand may not be recognized.
 Also, for example, when the sensor 21_1 and the sensor 21_2 are arranged at the positions shown in FIG. 45, a range in which the position, movement, and the like of the user's hand cannot be recognized (a so-called dead zone) partly arises even within the range at the shortest usable distance l or more from the digital photo frame 1.
 Therefore, the positions of the sensor 21_1 and the sensor 21_2 are determined so that, as shown in FIG. 46, the range (region) at the shortest usable distance l or more from the digital photo frame 1 is covered by the detection ranges of the sensor 21_1 and the sensor 21_2.
 Note that in FIG. 46 the hatched analog-output-capable region represents the region in which the position, movement, and the like of the user's hand can be reliably and continuously recognized by the sensor 21_1 and the sensor 21_2.
 Also, in FIG. 46, the distance d between the sensor 21_1 and the sensor 21_2 is obtained from the shortest usable distance l and the half angle of view θ (degrees) of the irradiation range (detection range) of the sensor 21n by the following equation (1).
 d = 2 × l / tan(90 − θ)   ... (1)
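As a quick check of equation (1), the following sketch computes the spacing d for a given shortest usable distance and half angle of view; the numerical values are illustrative only and not taken from the patent.

```python
# Equation (1): spacing d between the two sensors so that their detection ranges
# jointly cover everything beyond the shortest usable distance l.
import math

def sensor_spacing(l_min, half_angle_deg):
    # d = 2 * l / tan(90° - θ), with θ given in degrees as in the text
    return 2.0 * l_min / math.tan(math.radians(90.0 - half_angle_deg))

# Illustrative values: l = 0.3 m and θ = 30° give d of roughly 0.35 m.
print(round(sensor_spacing(0.3, 30.0), 3))
```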
 Note that in the digital photo frame 171 according to the second embodiment, the distance d between the PD 191_1 and the PD 191_2 is also obtained using equation (1) above.
 Next, FIG. 47 shows an example of the digital photo frame 171 as viewed from the lower side in FIG. 22.
 In the digital photo frame 171, as shown in FIG. 47, the half angle of view of the detection range of each PD 191n is θ, and the PD 191_1 and the PD 191_2 are arranged apart from each other by the distance d obtained from equation (1).
 Note that in FIG. 47, (most of) the portion of the irradiation range of the LED unit 192 that is at the shortest usable distance l or more is the analog-output-capable region.
 Also, in the digital photo frame 331 according to the third embodiment, the distance d between the LED unit 371_1 and the LED unit 371_2 is likewise obtained using equation (1) above.
 Next, FIG. 48 shows an example of the digital photo frame 331 as viewed from the lower side in FIG. 33.
 In the digital photo frame 331, as shown in FIG. 48, the half angle of view of the irradiation range of each LED unit 371n is θ, and the LED unit 371_1 and the LED unit 371_2 are arranged apart from each other by the distance d obtained from equation (1).
 Note that in FIG. 48, (most of) the portion of the detection range of the PD 351 that is at the shortest usable distance l or more is the analog-output-capable region.
[LED adjustment]
 Next, with reference to FIG. 49, an example of adjusting the outputs of the LED 112an and the LED 112bn of each sensor 21n provided in the digital photo frame 1 according to the first embodiment will be described. This adjustment is performed, for example, when the digital photo frame 1 is manufactured.
 Note that FIG. 49 shows an example of the digital photo frame 1 as viewed from the lower side in FIG. 1.
 First, a material 501 whose reflectance for the wavelength λ1 and reflectance for the wavelength λ2 are equal is placed in front of the sensor 21_1 and the sensor 21_2. As the material 501, for example, a gray sheet or the mirror surface of a mirror is used.
 With the material 501 placed in front of the sensor 21_1 and the sensor 21_2 as shown in FIG. 49, the LED outputs of the sensor 21_1 and the sensor 21_2 are adjusted.
 Next, FIG. 50 shows an example of adjusting the LED outputs of the sensor 21_1 and the sensor 21_2 based on the luminance signals output from them.
 During the lighting period "LED λ1" shown in A of FIG. 50, the sensor 21_1 irradiates the material 501 with light of the wavelength λ1 and receives the reflected light from the material 501 irradiated with that light. The sensor 21_1 then generates a luminance signal Lum#1_λ1 according to the received light and outputs it to the processing unit 91 via the AD conversion unit 95_1.
 Also, during the lighting period "LED λ2" shown in A of FIG. 50, the sensor 21_1 irradiates the material 501 with light of the wavelength λ2, receives the reflected light from the material 501, generates a luminance signal Lum#1_λ2 according to the received light, and outputs it to the processing unit 91 via the AD conversion unit 95_1.
 The sensor 21_2 generates a luminance signal Lum#2_λ1 and a luminance signal Lum#2_λ2 in the same manner as the sensor 21_1 and outputs them to the processing unit 91 via the AD conversion unit 95_2.
 That is, for example, during the lighting period "LED λ1" shown in B of FIG. 50, the sensor 21_2 irradiates the material 501 with light of the wavelength λ1, receives the reflected light from the material 501, generates a luminance signal Lum#2_λ1 according to the received light, and outputs it to the processing unit 91 via the AD conversion unit 95_2.
 Also, during the lighting period "LED λ2" shown in B of FIG. 50, the sensor 21_2 irradiates the material 501 with light of the wavelength λ2, receives the reflected light from the material 501, generates a luminance signal Lum#2_λ2 according to the received light, and outputs it to the processing unit 91 via the AD conversion unit 95_2.
 Based on, for example, the luminance signals Lum#1_λ1 and Lum#1_λ2 from the sensor 21_1 and the luminance signals Lum#2_λ1 and Lum#2_λ2 from the sensor 21_2, the processing unit 91 adjusts the sensor 21_1 and the sensor 21_2 so that the luminance values Lum#1_λ1, Lum#1_λ2, Lum#2_λ1, and Lum#2_λ2 all become equal, as shown in C and D of FIG. 50.
 That is, for example, the processing unit 91 adjusts the outputs produced by the irradiation of the LED 112a_1 and the LED 112b_1 of the sensor 21_1 and of the LED 112a_2 and the LED 112b_2 of the sensor 21_2. As methods for adjusting the LED output, for example, adjustment of the current to the LED by a variable resistor connected to the LED, PWM (Pulse Width Modulation) output adjustment for current control, correction by a program, and the like can be adopted.
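As one illustration of the program-based correction mentioned above, the sketch below repeatedly scales a per-LED drive setting until the four measured luminance values converge to a common level. The names measure() and drive are assumptions made for the example; a real adjustment may instead act on a variable resistor or a PWM duty as described in the text.

```python
# Minimal sketch of equalizing the four luminance values by software correction.
def calibrate(measure, drive, iterations=10):
    """measure() returns a dict of luminance values keyed like 'Lum#1_l1';
    drive is a dict with the same keys holding per-LED drive settings."""
    for _ in range(iterations):
        lum = measure()
        target = sum(lum.values()) / len(lum)   # aim at the average level
        for key, value in lum.items():
            if value > 0:
                drive[key] *= target / value    # proportional correction
    return drive
```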
 Note that the LED output adjustment is performed in the same manner for the digital photo frame 171 according to the second embodiment and the digital photo frame 331 according to the third embodiment.
[Calculation of the position of an object as skin]
 Next, a calculation method for calculating the position of an object as skin will be described with reference to FIGS. 51 to 53.
 FIG. 51 shows an example in which an object as skin moves in the left-right direction.
 For example, consider calculating, using the sensor 21_1 and the sensor 21_2 of the digital photo frame 1, the position of an object as skin (indicated by a double circle in FIG. 51) within the range from X1 to X2 shown in FIG. 51.
 Note that the range from X1 to X2 is, for example, the analog-output-capable region shown in FIG. 46, and, for example, X1 is set to 0 and X2 is set to 639.
 In the digital photo frame 1, the processing unit 91 can calculate the position X of the object as skin based on, for example, the luminance signal Lum#1_λ1 supplied from the PD 114_1 of the sensor 21_1 via the AD conversion unit 95_1 and the luminance signal Lum#2_λ1 supplied from the PD 114_2 of the sensor 21_2 via the AD conversion unit 95_2.
 That is, for example, if the luminance value Lum#1_λ1 is denoted L1 and the luminance value Lum#2_λ1 is denoted L2, the processing unit 91 can calculate the position X of the object as skin by the following equation (2).
 X = {X1 × L1 / (L1 + L2)} + {X2 × L2 / (L1 + L2)}   ... (2)
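Equation (2) is simply a luminance-weighted average of the two sensor positions, as the short sketch below shows; the example numbers are illustrative only.

```python
# Equation (2): position X as a weighted average of the sensor positions X1 and X2,
# weighted by the luminance values L1 and L2.
def position_two_sensors(x1, x2, l1, l2):
    total = l1 + l2
    return x1 * l1 / total + x2 * l2 / total

# Illustrative values: X1 = 0, X2 = 639; an object reflecting twice as strongly
# toward sensor 21_2 comes out at X = 426.
print(round(position_two_sensors(0, 639, 1.0, 2.0)))
```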
 In this case, for example, the processing unit 91 outputs the position X to the CPU 61 as gesture recognition information. Note that when the CPU 61 acquires the luminance signal Lum#1_λ1 and the luminance signal Lum#2_λ1 from the processing unit 91 as the gesture recognition information, the CPU 61 may perform the recognition by calculating the position X of the object as skin with equation (2) based on the acquired gesture recognition information.
 The same applies to the second and third embodiments.
 Note that in the digital photo frame 1, the position X of the object as skin can likewise be calculated when three sensors 21_1 to 21_3 are provided as shown in FIG. 52.
 Next, FIG. 53 shows an example in which an object as skin moves in the left-right direction.
 For example, consider calculating, using the sensors 21_1 to 21_3 of the digital photo frame 1, the position of an object as skin (indicated by a double circle in FIG. 53) within the range from X1 to X3 shown in FIG. 53.
 Note that the range from X1 to X3 is, for example, the analog-output-capable region of the configuration shown in FIG. 52, and, for example, X1 is set to 0, X2 to 320, and X3 to 639.
 In this case, if the luminance value Lum#3_λ1 of the luminance signal Lum#3_λ1 supplied from the PD 114_3 of the sensor 21_3 to the processing unit 91 via the AD conversion unit 95_3 is denoted L3, the processing unit 91 can calculate the position X of the object as skin by the following equation (3).
 X = {X1 × L1 / (L1 + L2 + L3)} + {X2 × L2 / (L1 + L2 + L3)} + {X3 × L3 / (L1 + L2 + L3)}   ... (3)
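Equations (2) and (3) generalize naturally to any number of sensors as a luminance-weighted centroid; the sketch below is written under that assumption and is not restating a formula from the patent beyond the two given cases.

```python
# Luminance-weighted centroid over an arbitrary number of sensors,
# covering equations (2) and (3) as special cases.
def position_weighted(positions, luminances):
    total = sum(luminances)
    if total == 0:
        return None  # no reflected light detected
    return sum(x * l for x, l in zip(positions, luminances)) / total

# Three-sensor case matching equation (3), with X1 = 0, X2 = 320, X3 = 639.
print(position_weighted([0, 320, 639], [1.0, 2.0, 1.0]))  # weighted toward X2
```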
 The same applies to the second and third embodiments.
[About the click operation]
 Next, FIG. 54 shows an example in which the user performs a click operation on the digital photo frame 1.
 For example, as shown in FIG. 54, the user performs a click operation of bringing a hand closer to the digital photo frame 1 and then returning it to its original position.
 Correspondingly, if the digital photo frame 1 is configured to recognize the user's click operation and to perform processing according to the recognized click operation, more intuitive gesture operations can be performed on the digital photo frame 1.
 Next, a recognition method by which the processing unit 91 recognizes the user's click operation based on the luminance signals Lum#n_λ1 supplied from the sensors 21n via the AD conversion units 95n will be described.
 Note that the same recognition method is also used when the CPU 61 acquires the luminance signals Lum#n_λ1 from the processing unit 91 as the gesture recognition information and recognizes the user's click operation based on that gesture recognition information.
 That is, for example, the processing unit 91 calculates Lsum by the following equation (4) based on the luminance signals Lum#n_λ1 supplied from the sensors 21n via the AD conversion units 95n.
 Lsum = Σ(Lum#n_λ1)   ... (4)
 In the case shown in FIG. 54, the digital photo frame 1 is provided with the two sensors 21_1 and 21_2, so the luminance signal Lum#1_λ1 is supplied to the processing unit 91 from the sensor 21_1 via the AD conversion unit 95_1, and the luminance signal Lum#2_λ1 is supplied from the sensor 21_2 via the AD conversion unit 95_2.
 The processing unit 91 therefore calculates Lsum (= Lum#1_λ1 + Lum#2_λ1) by equation (4) based on the luminance signal Lum#1_λ1 and the luminance signal Lum#2_λ1.
 The processing unit 91 then recognizes (detects) the click operation based on the change in the calculated Lsum.
 Next, with reference to FIG. 55, an example of the method by which the processing unit 91 recognizes the click operation according to the change in Lsum will be described.
 In FIG. 55, the solid-line graph represents Lsum, and the dotted-line graph represents d(Lsum) obtained by differentiating Lsum.
 For example, the processing unit 91 differentiates the calculated Lsum and recognizes that the user has performed a click operation when the resulting graph of d(Lsum) crosses zero, that is, when d(Lsum) crosses the value 0 as shown in FIG. 55.
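A minimal sketch of this zero-cross test follows. It assumes that Lsum is sampled at regular intervals and that the crossing of interest is from positive to negative, corresponding to the hand ceasing to approach and starting to move back; neither assumption is spelled out in the text above.

```python
# Detect a click as a positive-to-negative zero crossing of the discrete d(Lsum).
def detect_click(lsum_samples):
    d_prev = None
    for i in range(1, len(lsum_samples)):
        d = lsum_samples[i] - lsum_samples[i - 1]   # discrete derivative of Lsum
        if d_prev is not None and d_prev > 0 and d <= 0:
            return i                                # sample index of the zero cross
        d_prev = d
    return None

# A hand approaching and then returning produces a rising-then-falling Lsum.
print(detect_click([10, 14, 20, 27, 26, 19, 12]))  # -> 4
```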
 Incidentally, as in FIG. 56, even when the user intends to move the hand in the left-right direction in the figure, the hand may in practice trace a trajectory like the one shown in FIG. 56, that is, a trajectory similar to that of a click operation.
 For this reason, the processing unit 91 may erroneously detect (recognize) a click operation by the user.
 In general, however, a person tends to stop moving in the X direction and the Y direction before performing a deliberate action such as a click operation. Here, the X direction is the direction perpendicular to the Z direction, which is the normal direction of the display screen 1a, in which the sensor 21_1 and the sensor 21_2 in FIG. 1 are arranged (the left-right direction in FIG. 1). The Y direction is the direction perpendicular to both the Z direction and the X direction (the up-down direction in FIG. 1).
 This tendency can therefore be used to recognize the click operation more accurately.
 Next, FIG. 57 is a diagram for explaining a method of recognizing the user's click operation with higher accuracy by using the human tendency described above.
 In A of FIG. 57, the solid-line graph represents the position X calculated by the processing unit 91, and the dotted-line graph represents whether or not click recognition based on d(Lsum) is performed.
 In A of FIG. 57, the horizontal axis represents time, the left vertical axis represents the position X as the value of the solid-line graph, and the right vertical axis represents the value gate of the dotted-line graph.
 In B of FIG. 57, the solid-line graph represents d(Lsum) calculated by the processing unit 91; the horizontal axis represents time and the vertical axis represents d(Lsum) as the value of the solid-line graph.
 For example, the processing unit 91 calculates the position X of the object as skin (for example, the user's hand) based on the luminance signals (for example, Lum#n_λ1) from the sensors 21n using equation (2) or (3) described above, as shown in A of FIG. 57.
 Then, based on the calculated position X, the processing unit 91 calculates the value gate of the dotted-line graph shown in A of FIG. 57.
 That is, for example, when the calculated position X (the value of the solid-line graph shown in A of FIG. 57) does not change, the processing unit 91 sets the corresponding value gate to High (= 1), which indicates that click recognition based on d(Lsum) is performed.
 Note that the processing unit 91 also treats the position X as unchanged when only a very small change in the position X (for example, a change within 1 pix) occurs.
 Also, for example, when the calculated position X (the value of the solid-line graph shown in A of FIG. 57) changes, the processing unit 91 sets the corresponding value gate to Low (= 0), which indicates that click recognition based on d(Lsum) is not performed.
 For example, when the dotted-line graph in A of FIG. 57 is Low (= 0), the processing unit 91 does not perform click recognition based on d(Lsum), as shown in B of FIG. 57.
 Also, for example, when the dotted-line graph in A of FIG. 57 is High (= 1), the processing unit 91 performs click recognition based on d(Lsum), as shown in B of FIG. 57.
 In this way, the processing unit 91 does not perform click recognition based on d(Lsum) while the value gate of the dotted-line graph shown in A of FIG. 57 is Low, and performs it while the value gate is High.
 As a result, the processing unit 91 can recognize the click operation more accurately. The same can be said for the second and third embodiments.
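Putting the gate and the zero-cross test together gives something like the sketch below. The fixed 1-pixel tolerance follows the text; treating the gate as a per-sample comparison of consecutive X values is an assumption made for the sake of the example.

```python
# Click recognition gated by lateral stillness: the d(Lsum) zero cross is only
# accepted while the position X is (almost) unchanged, as in FIG. 57.
def detect_clicks_gated(x_samples, lsum_samples, x_tolerance=1.0):
    clicks = []
    for i in range(2, len(lsum_samples)):
        gate_high = abs(x_samples[i] - x_samples[i - 1]) <= x_tolerance
        d_prev = lsum_samples[i - 1] - lsum_samples[i - 2]
        d_curr = lsum_samples[i] - lsum_samples[i - 1]
        if gate_high and d_prev > 0 and d_curr <= 0:   # zero cross of d(Lsum)
            clicks.append(i)
    return clicks
```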
 1 digital photo frame, 1a display screen, 21_1 to 21_N sensor, 61 CPU, 62 ROM, 63 RAM, 64 bus, 65 input/output interface, 66 control unit, 67 display unit, 68 storage unit, 69 drive, 91 processing unit, 92 current control unit, 93 timing control unit, 94 gain control unit, 95 AD conversion unit, 111 LED driver, 112a, 112b LED, 113a, 113b lens, 114n PD, 115n lens, 131, 151, 171 digital photo frame, 191_1 to 191_N PD, 192 LED unit, 211 control unit, 231 processing unit, 232 lens, 331 digital photo frame, 351 PD, 352 lens, 371_1 to 371_N LED unit, 391a, 391b LED, 392a, 392b lens, 411 control unit, 431 processing unit

Claims (16)

  1.  An information processing apparatus comprising:
      an irradiation unit including a first irradiation unit that irradiates light of a first wavelength, and a second irradiation unit that irradiates light of a second wavelength different from the first wavelength;
      a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength, and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength;
      a skin detection unit that detects whether or not the object is skin based on the first and second detection signals; and
      a generation unit that generates recognition information for recognizing at least one of the position or movement of the object detected as skin based on at least one of the first or second detection signals.
  2.  The information processing apparatus according to claim 1, further comprising another light receiving unit configured in the same manner as the light receiving unit, wherein the generation unit generates the recognition information based on at least one of the first or second detection signals generated for each of the plurality of light receiving units.
  3.  The information processing apparatus according to claim 2, wherein the first irradiation unit is arranged at the same distance from each of the plurality of light receiving units, and the generation unit generates the recognition information based on the first detection signal generated for each of the plurality of light receiving units.
  4.  The information processing apparatus according to claim 2, wherein the light receiving unit is arranged at the same distance from each of the first irradiation unit and the second irradiation unit, and the skin detection unit detects whether or not the object is skin based on the first and second detection signals generated by the light receiving unit.
  5.  The information processing apparatus according to claim 1, further comprising another irradiation unit configured in the same manner as the irradiation unit, wherein the light receiving unit generates the first detection signal for each irradiation unit that irradiates the light of the first wavelength at a different timing and generates the second detection signal for each irradiation unit that irradiates the light of the second wavelength at a different timing, and the generation unit generates the recognition information based on at least one of the first or second detection signals generated for each irradiation unit.
  6.  The information processing apparatus according to claim 5, wherein, when the interval between the first irradiation units in a first direction is longer than the interval between the second irradiation units, the generation unit generates the recognition information for recognizing at least one of the position or movement of the object in a second direction perpendicular to the first direction, based on the first detection signal generated by the light receiving unit for each of the plurality of irradiation units.
  7.  The information processing apparatus according to claim 1, further comprising a plurality of sensors each having the irradiation unit and the light receiving unit, the irradiation unit of each sensor irradiating a different irradiation range, wherein the skin detection unit detects whether or not the object is skin based on the first and second detection signals generated for each of the plurality of sensors, and the generation unit generates the recognition information based on at least one of the first or second detection signals generated for each of the plurality of sensors.
  8.  The information processing apparatus according to claim 1, further comprising a proximity detection unit that detects whether or not the object has entered a predetermined detection range based on the first detection signal, wherein the skin detection unit detects whether or not the object is skin based on the first and second detection signals in response to detection that the object has entered the detection range.
  9.  The information processing apparatus according to claim 1, further comprising an object detection unit that, in response to detection that the object is skin, detects whether or not the object exists within a predetermined detection range based on the first detection signal, wherein the information processing apparatus treats the object as skin when it is detected that the object exists within the detection range.
  10.  The information processing apparatus according to claim 1, further comprising a signal generation unit that generates an output signal whose magnitude corresponds to the position of the object, wherein the generation unit generates the recognition information also based on the output signal.
  11.  The information processing apparatus according to claim 1, wherein the first wavelength λ1 and the second wavelength λ2, which is longer than the first wavelength λ1, satisfy
      640 nm ≤ λ1 ≤ 1000 nm and
      900 nm ≤ λ2 ≤ 1100 nm.
  12.  The information processing apparatus according to claim 11, wherein the first irradiation unit irradiates invisible light of the first wavelength λ1, and the second irradiation unit irradiates invisible light of the second wavelength λ2.
  13.  The information processing apparatus according to claim 1, wherein the light receiving unit is provided with a visible light cut filter that blocks visible light incident on the light receiving unit.
  14.  An information processing method of an information processing apparatus that includes an irradiation unit having a first irradiation unit that irradiates light of a first wavelength and a second irradiation unit that irradiates light of a second wavelength different from the first wavelength, and a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength, the method comprising, by the information processing apparatus:
      a skin detection step of detecting whether or not the object is skin based on the first and second detection signals; and
      a generation step of generating recognition information for recognizing at least one of the position or movement of the object detected as skin based on at least one of the first or second detection signals.
  15.  A program for causing a computer of an information processing apparatus, the apparatus including an irradiation unit having a first irradiation unit that irradiates light of a first wavelength and a second irradiation unit that irradiates light of a second wavelength different from the first wavelength, and a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength, to function as:
      a skin detection unit that detects whether or not the object is skin based on the first and second detection signals; and
      a generation unit that generates recognition information for recognizing at least one of the position or movement of the object detected as skin based on at least one of the first or second detection signals.
  16.  An electronic apparatus comprising:
      an irradiation unit including a first irradiation unit that irradiates light of a first wavelength, and a second irradiation unit that irradiates light of a second wavelength different from the first wavelength;
      a light receiving unit that generates a first detection signal in response to receiving reflected light from an object irradiated with the light of the first wavelength, and generates a second detection signal in response to receiving reflected light from the object irradiated with the light of the second wavelength;
      a skin detection unit that detects whether or not the object is skin based on the first and second detection signals;
      a generation unit that generates recognition information for recognizing at least one of the position or movement of the object detected as skin based on at least one of the first or second detection signals; and
      a processing unit that performs corresponding processing according to a recognition result based on the recognition information.
PCT/JP2012/075042 2011-10-12 2012-09-28 Information processing device, information processing method, program, and electronic apparatus WO2013054664A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-225365 2011-10-12
JP2011225365A JP2013084228A (en) 2011-10-12 2011-10-12 Information processing device, information processing method, program, and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2013054664A1 true WO2013054664A1 (en) 2013-04-18

Family

ID=48081721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/075042 WO2013054664A1 (en) 2011-10-12 2012-09-28 Information processing device, information processing method, program, and electronic apparatus

Country Status (2)

Country Link
JP (1) JP2013084228A (en)
WO (1) WO2013054664A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11599199B2 (en) * 2019-11-28 2023-03-07 Boe Technology Group Co., Ltd. Gesture recognition apparatus, gesture recognition method, computer device and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6091693B1 (en) * 2016-09-21 2017-03-08 京セラ株式会社 Electronics

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10148640A (en) * 1996-11-18 1998-06-02 Matsushita Electric Ind Co Ltd Method and device for hand movement detection
JP2000305706A (en) * 1999-03-26 2000-11-02 Nokia Mobile Phones Ltd Data inputting device by manual input and electronic device using the same
WO2011001761A1 (en) * 2009-06-30 2011-01-06 ソニー株式会社 Information processing device, information processing method, program, and electronic device


Also Published As

Publication number Publication date
JP2013084228A (en) 2013-05-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12839522

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12839522

Country of ref document: EP

Kind code of ref document: A1