WO2012018090A1 - Area sensor and display device - Google Patents

Area sensor and display device Download PDF

Info

Publication number
WO2012018090A1
WO2012018090A1 (PCT/JP2011/067892)
Authority
WO
WIPO (PCT)
Prior art keywords
light
light receiving
image
receiving element
area sensor
Prior art date
Application number
PCT/JP2011/067892
Other languages
French (fr)
Japanese (ja)
Inventor
Shinichi Miyazaki
Kengo Takahama
Original Assignee
Sharp Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Kabushiki Kaisha
Publication of WO2012018090A1 publication Critical patent/WO2012018090A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0412 Digitisers structurally integrated in a display
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means

Definitions

  • the present invention relates to an area sensor provided with a light receiving element (light sensor) and a display device provided with such an area sensor.
  • Display devices such as liquid crystal display devices have been developed that integrate a touch panel (area sensor) function capable of detecting the touched position when the surface of the display device is touched with an input pen or a human finger.
  • In conventional touch panel integrated display devices, the resistive film method (in which the input position is detected by contact between an upper conductive substrate and a lower conductive substrate when pressed) and the capacitive method (in which the input position is detected from a change in capacitance at the touched location) have been the mainstream.
  • however, such touch panel integrated display devices have disadvantages such as an increase in thickness and the need for a separate manufacturing process.
  • therefore, a touch panel integrated liquid crystal display device has been developed in which a light receiving element (light sensor) is provided for each pixel (or for each of a plurality of pixels) in the image display region of the liquid crystal display device.
  • the liquid crystal display device is provided with a pixel transistor for each pixel.
  • the light receiving element is formed between the two substrates.
  • a display device that includes a light receiving element and an area sensor that detects an input position from the outside is attracting attention.
  • such a device can detect the position of a detection object such as an input pen or a human finger even in a floating (non-contact) state, which is difficult to realize with a resistive touch panel or a capacitive touch panel.
  • the distance of the detection object in the floating state from the display surface of the display device can also be detected.
  • Patent Document 1 describes an information input device including a light receiving unit configured by photodiodes that can obtain distance information of a target object such as a hand with high resolution and extract the three-dimensional shape of the target object.
  • FIG. 19 is a diagram showing a schematic configuration of the information input device.
  • the light emitted from the light emitting means 105 is reflected by the target object 106 and forms an image on the light receiving surface of the reflected light extracting means 108 through the light receiving optical system 107 such as a lens.
  • the reflected light extraction means 108 includes a first light receiving means 109, a second light receiving means 110, and a difference calculation unit 111, and detects the intensity distribution of the light reflected by the target object 106, that is, a reflected light image.
  • the first light receiving means 109 and the second light receiving means 110 are set to receive light at different timings, and the timing control means 112 controls the operation timing so that the light emitting means 105 emits light while the first light receiving means 109 is receiving light and does not emit light while the second light receiving means 110 is receiving light.
  • the first light receiving means 109 thus receives both the reflected light of the light from the light emitting means 105 by the target object 106 and external light such as sunlight and illumination light, while the second light receiving means 110 receives only the external light. Since the two light receiving timings differ but are close to each other, fluctuations in the external light during this interval can be ignored.
  • the difference calculation unit 111 calculates and outputs the difference between the images received by the first light receiving unit 109 and the second light receiving unit 110.
  • the reflected light extraction means 108 sequentially outputs the reflected light amount of each pixel of the reflected light image.
  • the output from the reflected light extraction means 108 is amplified by the amplifier 113, converted into digital data by the A/D converter 114, and stored in the memory 115; at an appropriate timing, the data is read out of the memory 115 and processed by the feature information generation means 116. These controls are performed by the timing control means 112.
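For illustration only (not part of the patent disclosure; the frame sizes, values, and function name are invented), the per-pixel subtraction performed by the difference calculation unit 111 can be sketched as:

```python
def extract_reflected_light(lit_frame, ambient_frame):
    """Per-pixel difference of two closely spaced exposures.

    lit_frame: captured while the light emitting means is on
               (reflected light plus external light).
    ambient_frame: captured while the light emitting means is off
                   (external light only).
    The difference leaves only the reflected-light image, assuming
    the external light does not change between the two exposures.
    """
    return [[max(a - b, 0) for a, b in zip(row_lit, row_amb)]
            for row_lit, row_amb in zip(lit_frame, ambient_frame)]

# Toy 2x3 frames: external light level 10 everywhere,
# reflected light adds 5 at one pixel only.
lit = [[10, 15, 10], [10, 10, 10]]
ambient = [[10, 10, 10], [10, 10, 10]]
print(extract_reflected_light(lit, ambient))  # [[0, 5, 0], [0, 0, 0]]
```

Only the pixel that actually received reflected light survives the subtraction, which is why close timing of the two exposures matters.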
  • the reflected light from the target object 106 decreases significantly as the distance between the target object 106 and the first light receiving means 109 increases.
  • the amount of received light per pixel of the reflected light image decreases in inverse proportion to the square of the distance between the target object 106 and the first light receiving means 109. Therefore, when the target object 106 is placed in front of the information input device, the reflected light from the background becomes small enough to be ignored, and a reflected light image from only the target object 106 can be obtained.
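A minimal numerical sketch of this inverse-square relationship (illustrative only; the function and constants are not from the patent):

```python
def received_light(reflectivity, distance, emitted=1.0):
    """Relative reflected light received per pixel, falling off
    with the square of the distance to the target object."""
    return emitted * reflectivity / distance ** 2

# Doubling the distance quarters the received light:
near = received_light(0.5, 1.0)
far = received_light(0.5, 2.0)
print(near / far)  # 4.0
```

This falloff is why, with the target object held out in front of the device, the background contribution becomes small enough to ignore.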
  • a reflected light image from the hand is obtained.
  • the amount of light received per pixel of the reflected light image is obtained as the amount of reflected light received by the unit light receiving section corresponding to that pixel.
  • the amount of reflected light is affected by the properties of the target object 106 (whether it specularly reflects, scatters, or absorbs light), the orientation of the surface of the target object 106, the distance of the target object 106, and so on.
  • when the entire target object 106 scatters light uniformly, the amount of reflected light is closely related to the distance to the target object 106.
  • since a hand or the like is an object that scatters light uniformly over its entire surface, the reflected light image obtained when a hand is held out reflects the distance of the hand, the inclination of the hand (the distance differing from part to part), and so on. Therefore, by extracting such feature information, various kinds of information can be input and generated.
  • FIG. 20 is a diagram showing a schematic configuration of the display device described in Patent Document 2.
  • the pixels 140 have a matrix arrangement that is continuously arranged in the vertical and horizontal directions.
  • the display area 120 in one pixel 140 is formed in a rectangular shape, and one set of the optical sensors 130R and 130L is arranged in the horizontal direction below the display area 120.
  • the light shielding member 150 is arranged as indicated by hatching in the figure. In other words, the light shielding member 150 has an opening 152 that matches the shape of the display area 120 and a rectangular opening 154 that matches the optical sensors 130R and 130L.
  • the optical sensors 130R and 130L are arranged alternately, with the optical sensor 130R and the optical sensor 130L in this order from the left in the figure, and the opening 154 of the light shielding member 150 is provided so as to straddle the optical sensor 130L and the optical sensor 130R.
  • the light shielding member 150 is provided so that the boundary between the optical sensor 130R and the optical sensor 130L in each pixel 140 and the center line 138 of the rectangular opening 154 in the light shielding member 150 coincide with each other.
  • an opening 154 of one light shielding member 150 is provided so as to straddle part of the optical sensor 130R and part of the optical sensor 130L.
  • the irradiation light from the backlight (not shown) is reflected by the detected object and enters the optical sensors 130R and 130L.
  • the object to be detected is detected by the optical sensors 130R and 130L.
  • of the portions where the amount of light decreases or increases relative to the background (depending on whether the surroundings are bright or dark), the image specified by the optical sensor 130R can be taken as the R image and the image specified by the optical sensor 130L as the L image.
  • FIG. 21 is a diagram illustrating a case where an object to be detected, such as a finger, represented as a sphere approaches the display panel 100.
  • Patent Document 2 describes that, by using the parallax between the R image and the L image, a display device can be realized that determines that the finger has performed a touch operation on the display panel 100 when the parallax becomes smaller than a preset threshold.
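As an illustrative sketch of such a parallax-threshold decision (the centroid-based parallax measure, the threshold value, and the toy images are assumptions, not the method actually claimed):

```python
def horizontal_centroid(image_row):
    """Intensity-weighted horizontal position of a 1-D image line."""
    total = sum(image_row)
    return sum(x * v for x, v in enumerate(image_row)) / total

def is_touching(r_row, l_row, threshold):
    """Declare a touch when the R/L parallax falls below threshold."""
    parallax = abs(horizontal_centroid(r_row) - horizontal_centroid(l_row))
    return parallax < threshold

# Object far from the panel: R and L images are well separated.
r_far = [0, 0, 1, 0, 0, 0, 0, 0]
l_far = [0, 0, 0, 0, 0, 1, 0, 0]
# Object on the panel: R and L images nearly coincide.
r_near = [0, 0, 0, 1, 0, 0, 0, 0]
l_near = [0, 0, 0, 0, 1, 0, 0, 0]
print(is_touching(r_far, l_far, 2.0))   # False (parallax 3)
print(is_touching(r_near, l_near, 2.0)) # True (parallax 1)
```

The parallax shrinks as the object approaches the panel, so comparing it against a threshold separates "hovering" from "touching".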
  • Japanese Patent Laid-Open No. 10-177449 (published June 30, 1998); Japanese Patent Laid-Open No. 2009-9098 (published January 15, 2009); Japanese Patent Laid-Open No. 9-27969 (published January 28, 1997)
  • in the configuration of Patent Document 1, only the fact that the amount of received light per pixel of the reflected light image is inversely proportional to the square of the distance to the target object 106, that is, only the reflection intensity of the target object 106, is used to estimate its distance.
  • however, the reflection intensity of the target object 106 varies with its reflectance; even if target objects 106 are located at the same distance from the first light receiving means 109, the reflection characteristics of a hand, for example, differ depending on race (skin color), skin thickness, blood circulation, and so on. Therefore, in order to estimate the distance with high accuracy, the information input device must be optimized (adjusted) every time the user changes, which is a problem.
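The reflectance dependence can be made concrete with a small sketch (hypothetical model and numbers, not from the patent): inverting a reflectivity-over-distance-squared intensity model with the wrong assumed reflectivity misestimates the distance.

```python
import math

def intensity(reflectivity, distance):
    """Received intensity model: proportional to reflectivity / d**2."""
    return reflectivity / distance ** 2

def distance_from_intensity(i, assumed_reflectivity):
    """Invert the model under an assumed reflectivity."""
    return math.sqrt(assumed_reflectivity / i)

# Two hands at the same true distance 2.0 but different skin reflectivity:
i_pale = intensity(0.8, 2.0)
i_dark = intensity(0.2, 2.0)
# A device calibrated for reflectivity 0.8 gets one right and one wrong:
print(distance_from_intensity(i_pale, 0.8))  # approximately 2.0
print(distance_from_intensity(i_dark, 0.8))  # approximately 4.0 (overestimated)
```

The darker hand at distance 2.0 is reported as roughly twice as far, which is exactly the per-user calibration problem described above.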
  • the configuration of the above-mentioned Patent Document 1 has a problem that it is difficult to estimate the distance because it cannot cope with a momentary fluctuation in external light.
  • in Patent Document 2, as shown in FIG. 20, so that the optical sensor 130R and the optical sensor 130L in each pixel 140 have different directivity characteristics, the opening 154 of one light shielding member 150 is provided so as to straddle part of the optical sensor 130R and part of the optical sensor 130L.
  • the optical sensors 130R and 130L and the light shielding member 150 are provided in stripes at regular intervals so that the boundary between the optical sensor 130R and the optical sensor 130L in each pixel 140 coincides with the center line 138 of the rectangular opening 154 in the light shielding member 150.
  • in each pixel 140, one common opening 154 is formed for the two photosensors 130R and 130L so that they have different directivity characteristics; therefore, the arrangement positions of the optical sensors 130R and 130L are constrained by the arrangement position of the opening 154.
  • the present invention has been made in view of the above problems, and an object thereof is to provide an area sensor that can estimate distance with higher accuracy and in which light receiving elements having different directivity characteristics can be arranged relatively freely, and a display device including such an area sensor.
  • in order to solve the above problems, the area sensor of the present invention has a surface on which a plurality of first light receiving elements and a plurality of second light receiving elements that detect the amount of incident light are formed.
  • the first light receiving element includes a first light receiving portion and a first light shielding member that is formed in a layer above the first light receiving portion and blocks light.
  • the second light receiving element includes a second light receiving portion and a second light shielding member that is formed in a layer above the second light receiving portion and blocks light.
  • so that the first light receiving element can receive a light component from a first direction and can block a light component from a second direction, which is opposite to the direction of the reflected light produced when the light from the first direction is specularly reflected by the surface, the opening of the first light shielding member is formed shifted by a predetermined amount toward the first direction so as to partially overlap the first light receiving portion in plan view; and so that the second light receiving element can receive the light component from the second direction and can block the light component from the first direction, the opening of the second light shielding member is formed shifted by a predetermined amount toward the second direction so as to partially overlap the second light receiving portion in plan view.
  • in the configuration of Patent Document 2 described above, the opening of one light shielding member is provided so as to straddle part of each of two adjacent light receiving elements, and the light receiving elements and the light shielding member are provided at regular intervals so that the boundary between the two light receiving elements coincides with the center line of the opening of the one light shielding member.
  • in the present invention, by contrast, so that the first light receiving element has directivity, the opening of the first light shielding member is formed shifted by a predetermined amount toward the first direction so as to partially overlap the first light receiving portion in plan view, and so that the second light receiving element has directivity, the opening of the second light shielding member is formed shifted by a predetermined amount toward the second direction so as to partially overlap the second light receiving portion in plan view.
  • accordingly, the arrangement position of the second light receiving element does not affect the directivity of the first light receiving element, and the arrangement position of the first light receiving element does not affect the directivity of the second light receiving element.
  • the first light receiving element and the second light receiving element, which have different directivity characteristics, therefore do not necessarily have to be provided at adjacent positions, so an area sensor in which light receiving elements having different directivity characteristics can be arranged relatively freely can be realized.
  • in order to solve the above problems, the display device of the present invention includes the above area sensor, a plurality of pixels for displaying an image are formed in a matrix on the surface on which the first light receiving element and the second light receiving element are formed, and the first light receiving element and the second light receiving element are provided according to the positions of the pixels.
  • according to the above configuration, a display device in which the area sensor is integrated can be realized; therefore, information can be input using the area sensor while viewing the image on the display device.
  • here, "the first light receiving element and the second light receiving element are provided according to the positions of the pixels" includes, for example, the case where the first and second light receiving elements are provided one for each pixel, the case where the first light receiving element is provided in one of two adjacent pixels and the second light receiving element in the other, and the case where light receiving elements are not provided in all pixels but only in pixels of a particular color; however, the arrangement is not limited to these.
  • as described above, in the area sensor of the present invention, the first light receiving element includes the first light receiving portion and the first light shielding member that is formed in a layer above the first light receiving portion and blocks light, and the second light receiving element includes the second light receiving portion and the second light shielding member that is formed in a layer above the second light receiving portion and blocks light.
  • so that the first light receiving element can receive the light component from the first direction and can block the light component from the second direction, which is opposite to the direction of the reflected light produced when the light from the first direction is specularly reflected by the surface, the opening of the first light shielding member is shifted by a predetermined amount toward the first direction so as to partially overlap the first light receiving portion in plan view, and so that the second light receiving element can block the light component from the first direction, the opening of the second light shielding member is shifted by a predetermined amount toward the second direction so as to partially overlap the second light receiving portion in plan view.
  • the display device of the present invention includes the area sensor as described above, and is formed in a matrix on the surface on which the first light receiving element and the second light receiving element are formed. A plurality of pixels for displaying an image are provided, and the first light receiving element and the second light receiving element are provided according to the position of the pixel.
  • FIG. 1 is a diagram showing a schematic configuration of an area sensor integrated liquid crystal display device according to an embodiment of the present invention
  • FIG. 1 shows a formation example of the optical sensor provided in the area sensor integrated liquid crystal display device of one embodiment of the present invention.
  • FIG. 1 shows the schematic circuit configuration in each pixel of the area sensor integrated liquid crystal display device of one embodiment of the present invention.
  • FIG. 1 is a schematic block diagram showing the configuration of the area sensor integrated liquid crystal display device of one embodiment of the present invention.
  • FIG. 6 is a diagram for explaining a case where a parallax is calculated by performing correlation calculation using a binarized left image and right image in the area sensor integrated liquid crystal display device according to the embodiment of the present invention.
  • It is a diagram for explaining a case where, in the area sensor integrated liquid crystal display device of one embodiment of the present invention, a parallax is calculated by matching a 256-gradation left image and right image line by line using a normalized correlation method, and a case where a parallax is calculated by performing a correlation calculation line by line using a left image and a right image in which edges have been detected.
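A simplified sketch of such line-by-line correlation matching (1-D grayscale lines and an exhaustive shift search are assumptions; the embodiment's actual search range and scoring may differ):

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length lines."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def line_parallax(left, right, max_shift):
    """Slide the right line over the left one and keep the shift
    with the highest normalized correlation; a positive result means
    the feature in the left line lies to the right of the one in the
    right line."""
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        overlap_l = left[max(0, s):len(left) + min(0, s)]
        overlap_r = right[max(0, -s):len(right) + min(0, -s)]
        score = ncc(overlap_l, overlap_r)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Toy lines: the same bright blob appears at index 4 (left) and 2 (right).
left = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0]
right = [0, 5, 9, 5, 0, 0, 0, 0, 0, 0]
print(line_parallax(left, right, 3))  # 2
```

Normalized correlation is insensitive to a uniform gain or offset between the two lines, which is why it suits grayscale matching better than raw differencing.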
  • FIG. 9 is a diagram for explaining a method for obtaining a height h from a parallax d and an incident angle ⁇ in an area sensor integrated liquid crystal display device according to an embodiment of the present invention.
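Assuming the two sensor groups view the object along directions inclined symmetrically at plus and minus theta from the panel normal, the two images separate by a parallax d = 2 * h * tan(theta), so the height follows as below (an illustrative sketch under that assumption, not the patent's exact derivation):

```python
import math

def height_from_parallax(d, theta_deg):
    """Height h of the detected object above the sensor surface,
    assuming the symmetric-viewing-angle geometry d = 2*h*tan(theta)."""
    return d / (2.0 * math.tan(math.radians(theta_deg)))

print(height_from_parallax(2.0, 45.0))  # approximately 1.0
```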
  • FIG. 20 is a diagram showing a schematic configuration of an optical sensor provided with an optical element that restricts incident light.
  • an area sensor integrated liquid crystal display device including a light receiving element will be described as an example.
  • however, the present invention is not limited thereto, and can also be applied to other display devices provided with an area sensor, such as an organic EL display device, or to an area sensor alone including light receiving elements.
  • the area sensor integrated liquid crystal display device 1 (also referred to simply as the liquid crystal display device 1) shown in FIG. 3 is provided with a photosensor for each pixel of the liquid crystal display panel 2, although not shown.
  • the image of the detected object 7 (finger) placed on the display surface 2a side of the liquid crystal display panel 2 is extracted and its position is detected.
  • a backlight 3 for irradiating the liquid crystal display panel 2 with light is provided on the back surface 2b side of the liquid crystal display panel 2.
  • the liquid crystal display panel 2 includes an active matrix substrate 4 in which a large number of pixels are arranged in a matrix and a counter substrate 5 disposed so as to face the active matrix substrate 4, with a liquid crystal layer 6 sandwiched between these two substrates 4 and 5.
  • in the present embodiment, a VA mode liquid crystal display panel 2 is used; however, the display mode of the liquid crystal display panel 2 is not particularly limited, and any display mode such as a TN mode or an IPS mode can be applied.
  • a front-side polarizing plate and a back-side polarizing plate are provided outside the liquid crystal display panel 2 so as to sandwich the liquid crystal display panel 2.
  • Each of the polarizing plates serves as a polarizer, and in the present embodiment in which the liquid crystal material in the liquid crystal layer 6 is a vertical alignment type, the polarization direction of the front-side polarizing plate and the polarization direction of the back-side polarizing plate are By arranging them in a crossed Nicols relationship, a normally black mode liquid crystal display device can be realized.
  • on the active matrix substrate 4, a TFT element serving as a switching element for driving each pixel, a pixel electrode electrically connected to the TFT element, the optical sensor, an alignment film, and the like are formed.
  • the counter substrate 5 is formed with a color filter layer, a counter electrode, an alignment film, and the like.
  • the color filter layer can be composed of, for example, colored portions of red (R), green (G), and blue (B) and a black matrix.
  • in the present embodiment, in order to detect the detected object 7 with higher accuracy regardless of the external brightness and without affecting the quality of the displayed image, a backlight having a function of uniformly emitting infrared light, which is invisible light, is used. Since infrared light is invisible, it can be appropriately modulated and pulsed as necessary.
  • in the present embodiment, since a configuration is used in which the infrared light emitted from the backlight 3 is reflected by the detected object 7 and the amount of this light incident on the optical sensor is detected, a black matrix made of carbon black or the like provided on the counter substrate 5 is positioned over the region where the optical sensor is formed.
  • the black matrix made of carbon black or the like blocks visible light and transmits infrared light; therefore, the only light that passes through the black matrix and enters the photosensor is infrared light, and the amount of infrared light reflected by the detected object 7 and incident on the optical sensor can be detected.
  • when the backlight 3 is a backlight that emits visible light for display and does not contain infrared light, an opening of the black matrix made of carbon black or the like provided on the counter substrate 5 is positioned over the region where the photosensor is formed, and the position of the detected object 7 can be detected as follows.
  • when the surroundings are bright, the amount of incident external light differs between the location where the detected object 7 exists and the location where it does not, so the position of the detected object 7 can be detected without using the backlight; when the surroundings are dark, the position of the detected object 7 can be detected by utilizing the fact that the display light is reflected by the detected object 7.
  • FIG. 1 is a diagram showing a schematic configuration of an optical sensor provided in the liquid crystal display device 1 of the present embodiment.
  • FIG. 1(a) shows an optical sensor 8a (first light receiving element) provided in the active matrix substrate 4 and having a directivity characteristic in the upper left direction in the drawing, and FIG. 1(b) shows an optical sensor 8b (second light receiving element) provided in the active matrix substrate 4 and having a directivity characteristic in the upper right direction in the drawing.
  • in the present embodiment, as a photosensor (light receiving element) that has a high sensing speed and is relatively easy to form on the active matrix substrate 4, a PIN diode having a structure in which the P+ layer 9p+, the I layer 9i (first light receiving portion / second light receiving portion), and the N+ layer 9n+ do not overlap one another is used.
  • the present invention is not limited to this.
  • any optical sensor may be used as long as it passes a different current value according to the amount of light received in the light receiving portion provided in the optical sensor.
  • a glass substrate is used as a substrate for constituting the active matrix substrate 4 not shown in FIGS. 1A and 1B, but the present invention is not limited to this.
  • a quartz substrate, a plastic substrate, or the like can be used.
  • on the glass substrate, a light shielding film is formed to prevent the light emitted from the backlight 3 from entering the pixel TFTs and the optical sensors 8a and 8b, which will be described later, formed on its surface.
  • a base coat film is formed on the light shielding film so as to cover the entire surface of the light shielding film and the glass substrate.
  • a film made of an insulating inorganic material such as a silicon oxide film, a silicon nitride film, or a silicon oxynitride film, or a laminated film appropriately combining them can be used.
  • in the present embodiment, a silicon oxide film is used. These films can be deposited by LPCVD, plasma CVD, sputtering, or the like.
  • optical sensors 8a and 8b shown in FIGS. 1A and 1B are formed.
  • the process for forming the optical sensors 8a and 8b on the base coat film is as follows.
  • a non-single-crystal semiconductor thin film that will later become the polycrystalline semiconductor film 9 is formed by LPCVD, plasma CVD, sputtering, or the like.
  • as the non-single-crystal semiconductor thin film, amorphous silicon, polycrystalline silicon, amorphous germanium, polycrystalline germanium, amorphous silicon germanium, polycrystalline silicon germanium, amorphous silicon carbide, polycrystalline silicon carbide, or the like can be used. In this embodiment, amorphous silicon is used.
  • the non-single-crystal semiconductor thin film is crystallized to form a polycrystalline semiconductor film 9.
  • a laser beam, an electron beam, or the like can be used.
  • crystallization is performed using a laser beam.
  • the polycrystalline semiconductor film 9 is patterned by photolithography according to the formation region of the light shielding film.
  • in the region where the optical sensors 8a and 8b are formed, an intrinsic semiconductor layer serving as the light receiving portion, or an I layer 9i, which is a semiconductor layer having a relatively low impurity concentration, is formed in the center of the polycrystalline semiconductor film 9, and on either side of it, a P+ layer 9p+, which is a semiconductor layer having a relatively high P-type impurity concentration, and an N+ layer 9n+, which is a semiconductor layer having a relatively high N-type impurity concentration, are formed.
  • a gate insulating film 10 made of a silicon oxide film or the like is formed, and the polycrystalline semiconductor film 9 is covered with the gate insulating film 10.
  • contact holes penetrating the gate insulating film 10 are formed on the P + layer 9p + and the N + layer 9n +, respectively.
  • a conductive film is formed on the entire surface by sputtering or the like.
  • the conductive film for example, a conductive film made of aluminum or the like can be used.
  • the conductive film is not limited to this; an element selected from Ta, W, Ti, Mo, Al, Cu, Cr, Nd, and the like, or an alloy material or compound material containing such an element as a main component, may be used, and a laminated structure appropriately combining them may be formed as necessary.
  • the conductive film may be formed using a semiconductor film typified by polycrystalline silicon or the like doped with an impurity such as phosphorus or boron. Note that in this embodiment mode, aluminum is used for the conductive film.
  • the conductive film is patterned into a desired shape by etching by photolithography, and becomes metal electrodes (wiring) 11a and 11b electrically connected to the P + layer 9p + and the N + layer 9n +, respectively.
  • a transparent insulating layer 12 is formed on the entire surface so as to cover the gate insulating film 10 and the conductive film.
  • an organic interlayer insulating film made of an acrylic resin is used as the transparent insulating layer 12.
  • a shield electrode layer 13 made of a transparent conductive film such as ITO (Indium Tin Oxide) or IZO (Indium Zinc Oxide) is formed on the transparent insulating layer 12 so as to overlap the P+ layer 9p+, the I layer 9i, and the N+ layer 9n+ in plan view, as shown in FIGS. 1(a) and 1(b).
  • a light shielding member 14 having an opening 14 a is formed on the shield electrode layer 13.
  • in FIG. 1(a), an optical sensor 8a having a directivity characteristic in the upper left direction in the drawing is formed; so that it can receive the light component from the upper left direction in the drawing and can block the light component from the upper right direction in the drawing, which is opposite to the direction of the reflected light produced when the light from the upper left direction is specularly reflected by the surface on which the optical sensor 8a is formed, the opening 14a of the light shielding member 14 is formed shifted by a predetermined amount to the left in the drawing so as to partially overlap, in plan view, the I layer 9i of the polycrystalline semiconductor film 9, which is the light receiving portion of the optical sensor 8a.
  • FIG. 2 (a) is a plan view of the optical sensor 8a shown in FIG. 1 (a).
  • The predetermined amount of the leftward shift is preferably 5% or more and 50% or less of the lateral width (the width in the left-right direction) of the opening 14a of the light shielding member 14.
  • The optical sensor 8b is formed with a directivity characteristic toward the upper right direction in the drawing, so that the light component from the upper right direction in the drawing can be received.
  • The light shielding member 14 can shield the light component from the upper left direction in the drawing, which is the direction opposite to the reflected light produced when light from the upper right direction is specularly reflected by the surface on which the optical sensor 8b is formed.
  • Its opening 14a is therefore formed shifted by a predetermined amount to the right in the drawing so as to partially overlap, in plan view, the I layer 9i of the polycrystalline semiconductor film 9, which is the light receiving portion of the optical sensor 8b.
  • FIG. 2 (b) is a plan view of the optical sensor 8b shown in FIG. 1 (b).
  • The predetermined amount of the rightward shift is likewise preferably 5% or more and 50% or less of the lateral width (the width in the left-right direction) of the opening 14a of the light shielding member 14, as shown in FIG. 2(b).
  • In the above description, the opening 14a of the light shielding member 14 is formed shifted by a predetermined amount with respect to the I layer 9i of the polycrystalline semiconductor film 9.
  • However, the present invention is not limited to this; the opening may instead be formed shifted by a predetermined amount with respect to the portion of the I layer 9i on which no non-transmissive film such as a conductive film is formed, that portion being the substantial light receiving portion of the optical sensor.
  • Since a configuration is used in which the infrared light emitted from the backlight 3 is reflected by the detection object 7 and the incident quantity of infrared light entering the optical sensors 8a and 8b is detected, a black matrix made of carbon black or the like provided on the counter substrate 5 is positioned over the region where the optical sensors 8a and 8b are formed.
  • The material of the light shielding member 14 is not particularly limited as long as it has a function of blocking infrared light.
  • For example, a metal material can be used.
  • The light shielding member 14 is not limited to this, however; any material having a high reflectance in the wavelength region of the infrared light can also be used.
  • If the backlight 3 is a backlight that emits visible light for display and does not contain infrared light,
  • it is necessary to select, as the light shielding member 14, a material that can cut the wavelength region of light to which the optical sensors 8a and 8b are sensitive.
  • In that case, a material that absorbs or reflects light in the wavelength region of 1100 nm or less may be selected for the light shielding member 14.
  • As described above, in the optical sensor 8a, the opening 14a of the light shielding member 14 is shifted by a predetermined amount to the left in the drawing with respect to the I layer 9i, which is the light receiving portion of the polycrystalline semiconductor film 9.
  • In the optical sensor 8b, the opening 14a is formed shifted by a predetermined amount to the right in the drawing so as to partially overlap, in plan view, the I layer 9i of the polycrystalline semiconductor film 9, which is its light receiving portion.
  • the arrangement position of the optical sensor 8a does not affect the directivity of the optical sensor 8b, and the arrangement position of the optical sensor 8b does not affect the directivity of the optical sensor 8a.
  • the arrangement position of the optical sensors 8a and 8b is not particularly limited.
  • The optical sensors 8a and 8b having different directivity characteristics do not necessarily have to be provided at adjacent positions; therefore, they can be arranged relatively freely.
  • In the present embodiment, the first direction is the upper left direction in FIG. 1 and the second direction is the upper right direction in FIG. 1, but photosensors having different directivity characteristics can also be formed in other directions.
  • The first direction and the second direction are not limited to the above directions as long as parallax can be obtained from the images of the detection object obtained using the optical sensors.
  • FIG. 4 shows a formation example of the optical sensors 8a and 8b provided in the liquid crystal display device 1 of the present embodiment.
  • the optical sensors 8a and 8b have a configuration in which the directivity characteristics can be controlled independently for each of the optical sensors 8a and 8b. Therefore, the arrangement positions of the optical sensors 8a and 8b are not particularly limited. As in the prior art, the optical sensors 8a and 8b do not necessarily have to be provided at adjacent positions, so that the optical sensors 8a and 8b having different directivity characteristics can be arranged relatively freely.
  • For example, when the pixel P(1, 1) is provided with the photosensor 8a having directivity in the upper left direction in the drawing,
  • the photosensor 8b having directivity in the upper right direction in the drawing is provided in P(1, 2), the nearest pixel to the right of P(1, 1), and in P(2, 1), the nearest pixel below P(1, 1).
  • In this way, a pixel provided with the photosensor 8b is arranged at the nearest pixel in each of the vertical and horizontal directions of a pixel provided with the photosensor 8a, and a pixel provided with the photosensor 8a is arranged at the nearest pixel in each of the vertical and horizontal directions of a pixel provided with the photosensor 8b,
  • so that the optical sensors 8a and 8b are arranged in a staggered (checkered) pattern.
  • The present invention is not limited to this; although not shown, for example, both photosensors 8a and 8b may be provided in one pixel, the photosensors 8a and 8b may be omitted from some pixels, or the photosensors 8a and 8b may be provided only in pixels relating to a specific color.
  • Alternatively, the optical sensor 8a and the optical sensor 8b can be arranged apart from each other at the upper/lower and left/right ends with the display region interposed between them, or one of the optical sensors 8a and 8b can be provided for every plurality of pixels.
  • That is, the optical sensors 8a and 8b can be provided in consideration of the accuracy of the parallax, described in detail later, obtained from the images of the detection object detected by the optical sensors 8a and 8b having different directivity characteristics, as required for the purpose.
  • FIG. 5 is a diagram showing a schematic circuit configuration in each pixel of the liquid crystal display device 1 of the present embodiment.
  • an optical sensor circuit comprising an optical sensor 8a or an optical sensor 8b, a boosting capacitor 16, and a transistor 17 serving as a sensor output (output amplifier) is provided below each pixel.
  • In each pixel, the scanning signal line GLn to which a scanning signal is supplied from a scanning signal line driving circuit (not shown) (see FIG. 6) is provided above the area where the photosensor circuit is provided,
  • and a data signal line SLn to which a data signal is supplied from a data signal line driving circuit (not shown) (see FIG. 6) is formed so as to intersect it; in the vicinity of the position where the scanning signal line GLn and the data signal line SLn intersect,
  • the pixel TFT 15 is formed.
  • FIG. 5 shows, as an example, four adjacent pixels P(n−1, n−1), P(n−1, n), P(n, n−1), and P(n, n) in the liquid crystal display device 1 having n × n pixels shown in FIG. 4.
  • the auxiliary capacitor Cs is provided, and one end of the auxiliary capacitor Cs is electrically connected to the drain electrode of the pixel TFT 15 and the pixel electrode of the liquid crystal capacitor Clc, and the other end is in the corresponding pixel. It is electrically connected to a storage capacitor bus line CSn formed in parallel with the scanning signal line GLn.
  • The optical sensor circuit is configured as a 1T (one-transistor) type circuit using only the single transistor 17 that plays the role of sensor output, and the transistor 17 functions as a source follower transistor (voltage follower transistor).
  • the drain electrode of the transistor 17 is connected to the AMP power supply bus line Vsn (n is a natural number indicating the pixel column number), and the source electrode is connected to the photosensor output bus line Von.
  • the AMP power supply bus line Vsn and the optical sensor output bus line Von are connected to a sensor readout circuit (not shown) (see FIG. 6).
  • The power supply voltage VDD is applied to the AMP power supply bus line Vsn from the sensor readout circuit.
  • The gate electrode of the transistor 17 is connected to the optical sensor 8a or 8b, which is formed of a PIN diode, and to one end of the boosting capacitor 16.
  • The anode electrode of the PIN diode provided in the optical sensors 8a and 8b (the metal electrode (wiring) 11a in FIGS. 1(a) and 1(b)) is connected to a wiring Vrstn (n is a natural number indicating the row number of the pixel) to which a reset signal RST is sent from a sensor scanning signal line drive circuit (not shown) (see FIG. 6), and the other end of the boosting capacitor 16 is connected to a wiring Vrwn to which an optical sensor row selection signal RWS is sent from the sensor scanning signal line drive circuit (not shown) (see FIG. 6).
  • the photosensor row selection signal RWS has a role of selecting a specific row of photosensor circuits arranged in a matrix and outputting a detection signal from the photosensor circuit in the specific row.
  • First, a high-level reset signal RST is sent to the wiring Vrstn from the sensor scanning signal line drive circuit (not shown) (see FIG. 6).
  • A forward bias is thereby applied to the PIN diodes provided in the optical sensors 8a and 8b, so that the boosting capacitor 16 is charged,
  • and the potential of the gate electrode of the transistor 17 gradually rises, finally reaching the initialization potential.
  • a high-level row selection signal RWS is applied to the other end of the boosting capacitor 16 from a sensor scanning signal line drive circuit (not shown) (see FIG. 6) via the wiring Vrwn.
  • The potential of the gate electrode of the transistor 17 is pushed up through the boosting capacitor 16, so that it becomes the potential obtained by adding the high-level potential of the row selection signal RWS to the detection potential.
  • In this way, a detection signal having a level corresponding to the intensity of the light received by the optical sensors 8a and 8b is generated for each pixel provided with the optical sensors 8a and 8b. Therefore, the detection operation can be performed on an object to be detected placed close to the liquid crystal display device 1.
  • In FIG. 5, the AMP power supply bus line Vsn and the optical sensor output bus line Von are provided for each pixel separately from the data signal line SLn.
  • However, the data signal line SLn and the AMP power supply
  • bus line Vsn can be shared to increase the aperture ratio.
  • For example, the AMP power supply bus line Vsn can be shared with the data signal line SLn−1,
  • and the optical sensor output bus line Von can be shared with the data signal line SLn.
  • The optical sensors 8a and 8b, the pixel TFT 15, the auxiliary capacitor Cs, the boosting capacitor 16, the pixel electrode, and the transistor 17 are preferably formed by the same process;
  • however, the present invention is not limited to this.
  • FIG. 6 is a schematic block diagram showing the configuration of the liquid crystal display device 1.
  • The liquid crystal display device 1 includes a liquid crystal display panel 2, a scanning signal line driving circuit 18, a data signal line driving circuit / sensor reading circuit 19, a sensor scanning signal line driving circuit 20, a control circuit 21,
  • a sensing image processing unit 22, and an interface circuit 28.
  • the scanning signal line driving circuit 18, the data signal line driving circuit / sensor reading circuit 19, and the sensor scanning signal line driving circuit 20 may have a separately created LSI attached to the liquid crystal display panel 2, or the liquid crystal display panel 2. It may be formed monolithically on top.
  • the scanning signal line driving circuit 18 generates a scanning signal for selectively scanning the pixels P (n, n) row by row by using a scanning signal line (not shown).
  • the data signal line driving circuit / sensor reading circuit 19 supplies a data signal to each pixel P (n, n) using a data signal line (not shown).
  • The sensor scanning signal line driving circuit 20 selects and drives the optical sensor circuits row by row, and the data signal line driving circuit / sensor reading circuit 19, using an AMP power supply bus line (not shown),
  • supplies the power supply voltage VDD having a constant potential to the photosensor circuits and reads a detection signal of the object to be detected from the photosensor circuits using a photosensor output bus line (not shown).
  • the control circuit 21 supplies necessary power supply voltages and control signals to the scanning signal line driving circuit 18, the data signal line driving circuit / sensor reading circuit 19, the sensor scanning signal line driving circuit 20, and the sensing image processing unit 22.
  • The sensing image processing unit 22 includes a left image frame memory 23 for storing image data of the detected object obtained from the optical sensor 8a having directivity in the upper left direction, and a right image frame memory 24 for storing image data of the detected object obtained from the optical sensor 8b having directivity in the upper right direction.
  • It further includes a parallax and height detection circuit 25 that calculates the correlation between the left image obtained from the left image frame memory 23 and the right image obtained from the right image frame memory 24, and obtains the parallax between the two images and the height (Z coordinate) of the detected object;
  • a reference image generation circuit 26 that generates a reference image based on the left image obtained from the left image frame memory 23, the right image obtained from the right image frame memory 24, and the parallax obtained from the parallax and height detection circuit 25;
  • and a coordinate extraction circuit 27 that extracts XY coordinates from the reference image.
  • each information obtained by the sensing image processing unit 22 can be taken out to a CPU or the like not shown via the interface circuit 28.
  • FIG. 7 is a diagram for explaining a case where a parallax is calculated by performing a correlation calculation using the binarized left image and right image.
  • FIG. 7A shows a case where a finger is present as an object to be detected on the liquid crystal display panel 2.
  • FIG. 7(b) shows the left image, obtained from the optical sensor 8a having directivity in the upper left direction, binarized so that pixels below a certain threshold are black and pixels at or above the threshold are white (dot pattern).
  • FIG. 7(c) shows the right image, obtained from the optical sensor 8b having directivity in the upper right direction, binarized in the same manner so that pixels below the threshold are black and pixels at or above the threshold are white (dot pattern).
  • The left image is shifted leftward from the actual finger position,
  • and the right image is shifted rightward from the actual finger position. Therefore, there is parallax between the left image and the right image.
  • Using a normalized correlation method, one of the left image and the right image is taken as a reference,
  • and the left image and the right image are matched line by line while shifting one pixel at a time; the correlation value R becomes maximum at a shift of seven pixels.
  • The parallax therefore corresponds to seven pixels,
  • and the parallax distance can be calculated based on the pixel pitch of the liquid crystal display panel 2.
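The line-by-line shift search described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the rows are assumed to be NumPy arrays holding one line of the binarized left and right images, and the maximum search shift is a hypothetical parameter.

```python
import numpy as np

def line_parallax(left_row, right_row, max_shift=16):
    """Find the pixel shift that maximizes the normalized correlation
    between one line of the left image and one line of the right image."""
    best_shift, best_r = 0, -1.0
    n = len(left_row)
    for j in range(max_shift + 1):
        l = left_row[: n - j].astype(float)
        r = right_row[j:].astype(float)
        denom = np.sqrt((l * l).sum() * (r * r).sum())
        if denom == 0:
            continue
        rj = (l * r).sum() / denom  # normalized correlation value R at shift j
        if rj > best_r:
            best_r, best_shift = rj, j
    return best_shift, best_r
```

For the seven-pixel example above, the maximum of R is found at a shift of seven, and the parallax distance follows by multiplying the shift by the pixel pitch.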
  • FIG. 8 is a diagram for explaining a case where the parallax is calculated by matching the left image and the right image of 256 gradations for each line using the normalized correlation method.
  • In this case, the peak of the correlation value R is broad, so it is difficult to calculate the parallax with high accuracy.
  • FIG. 9 is a diagram for explaining a case where the parallax is calculated by performing a correlation operation for each line using the left image and the right image in which the edge is detected.
  • FIG. 9A shows a case where a finger is present as an object to be detected on the liquid crystal display panel 2.
  • FIG. 9(b) is an image (edge image) in which the left image of the detected object obtained from the optical sensor 8a having directivity in the upper left direction has been filtered using a known filter such as a Sobel filter or a Roberts filter to detect the contour line, that is, the edge, of the finger as the detected object.
  • FIG. 9(c) is an image (edge image) in which the right image of the detected object obtained from the optical sensor 8b having directivity in the upper right direction has been filtered in the same manner to detect the contour line, that is, the edge, of the finger as the detected object.
  • The left image is shifted leftward from the actual finger position,
  • and the right image is shifted rightward from the actual finger position. Therefore, there is parallax between the left image and the right image.
  • FIG. 10 is a diagram for explaining a case where the parallax is obtained by performing matching using the normalized correlation method on the edge-detected left image and right image illustrated in FIGS. 9(b) and 9(c).
  • FIG. 10(a) shows the left image in which edges are detected,
  • FIG. 10(b) shows the right image in which edges are detected,
  • and FIG. 10(c) shows that, with the right image as a reference,
  • the left image and the right image are matched one line at a time while shifting the reference right image one pixel at a time; at a shift of seven pixels, the contour lines of the finger as the detected object in the left image and the right image exactly match.
  • When the correlation value R is obtained by matching using the edge-detected left image and right image, no ghost is generated, unlike the case of using the binarized images illustrated in FIG. 7, so the parallax can be calculated with relatively high accuracy.
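As a concrete illustration of the edge-extraction step mentioned above, the following Python sketch applies a 3×3 Sobel filter to obtain an edge-intensity (gradient-magnitude) image. The zero-border handling and the kernel normalization are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def sobel_edges(img):
    """Compute edge intensity (gradient magnitude) with a Sobel filter.
    img: 2-D float array; returns an array of the same shape (zero border)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient kernel
    ky = kx.T                                                   # vertical gradient kernel
    h, w = img.shape
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1 : y + 2, x - 1 : x + 2]
            gx[y, x] = (win * kx).sum()
            gy[y, x] = (win * ky).sum()
    return np.hypot(gx, gy)  # edge intensity = sqrt(gx^2 + gy^2)
```

A vertical step in the input produces a band of high edge intensity along the step and zero response in the flat regions, which is the kind of contour (edge) image used for the matching in FIG. 10.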
  • FIG. 11 is a diagram for explaining a case where parallax is obtained by obtaining an edge direction image of a left image and a right image and performing a correlation calculation including a term relating to the edge direction.
  • FIG. 11A shows a case where a finger is present as an object to be detected on the liquid crystal display panel 2.
  • FIG. 11(b) is an image in which the left image of the detected object obtained from the optical sensor 8a having directivity in the upper left direction has been filtered
  • using a known filter such as a Sobel filter or a Roberts filter,
  • so that the contour line of the finger as the detected object, that is, the edge, is detected, and the direction of the edge is also detected.
  • In FIG. 11(b), the direction of each edge is indicated by a number from 1 to 8.
  • FIG. 11(c) is an image in which the right image of the detected object obtained from the optical sensor 8b having directivity in the upper right direction has been filtered
  • using a known filter such as a Sobel filter or a Roberts filter,
  • so that the contour line of the finger as the detected object, that is, the edge, is detected, and the direction of the edge is also detected.
  • In FIG. 11(c), the direction of each edge is indicated by a number from 1 to 8.
  • ILi is the edge intensity at the i-th pixel of the left image,
  • and IRi+j is the edge intensity at the i-th pixel of the right image shifted to the left by j pixels.
  • wdiri is the COS value of the angle formed between the direction of the i-th edge of the left image and the direction of the i-th edge of the right image shifted to the left by j pixels. Therefore, for example, if the two edge directions are the same, wdiri is 1; if the two edge directions differ by 90 degrees, wdiri is 0; and if the two edge directions differ by 180 degrees, wdiri is −1.
  • Since the correlation operation is weighted by wdiri, a term related to the edge direction, a high correlation value R can be obtained only when the left edge of the left image matches the left edge of the right image and the right edge of the left image matches the right edge of the right image.
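The weighting by wdiri can be sketched as follows. Since the patent's (Equation 1) is not reproduced in this text, the exact form used here — a sum of products of edge intensities weighted by the cosine term — is an assumption, as is the coding of the eight directions in 45-degree steps.

```python
import numpy as np

def direction_weighted_correlation(il, dl, ir, dr, j):
    """Correlation of one line of two edge images, weighted by the cosine of
    the angle between corresponding edge directions (hypothetical form of
    the patent's Equation 1).
    il, ir: edge intensities; dl, dr: edge directions coded 1..8
    (each step assumed to be 45 degrees); j: shift of the right image in pixels."""
    n = len(il) - j
    r = 0.0
    for i in range(n):
        # wdir = cos(angle between the two edge directions)
        angle = np.deg2rad(45 * (dl[i] - dr[i + j]))
        r += il[i] * ir[i + j] * np.cos(angle)
    return r
```

With this weighting, matched directions contribute positively (cos 0 = 1), perpendicular directions contribute nothing, and opposite directions (for example, the left and right contours of the finger) contribute negatively, which sharpens the correlation peak.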
  • FIG. 12 is a diagram for explaining a method of calculating the height of the detected object using the parallax d obtained by the method described above and the incident angle of the reflected light from the detected object (finger) 7 on the optical sensors 8a and 8b, that is, on the I layer 9i in the optical sensors 8a and 8b.
  • The height h from the I layer 9i (not shown) to the detected object (finger) 7 can be obtained from the parallax d obtained for each line and the incident angle θ, which is a set value.
  • The incident angle θ is the angle formed between the perpendicular from the center of the I layer 9i in the optical sensors 8a and 8b and the straight line connecting the center of the I layer 9i and the center of the opening 14a of the light shielding member 14.
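Under the geometry just described, if the left-directed and right-directed sensors view the object symmetrically at ±θ from the normal, the two images separate by a parallax distance of about 2·h·tan θ, so h can be recovered as below. This symmetric ±θ geometry and the pixel-pitch conversion are assumptions made for illustration of FIG. 12, not formulas quoted from the patent.

```python
import math

def height_from_parallax(d_pixels, pixel_pitch_mm, theta_deg):
    """Estimate the height h of the detected object above the sensor plane.
    Assumed geometry: the left- and right-looking sensors view the object
    at +/- theta from the vertical, so the parallax distance is
    d = 2 * h * tan(theta), giving h = d / (2 * tan(theta))."""
    d_mm = d_pixels * pixel_pitch_mm  # convert parallax from pixels to length
    return d_mm / (2.0 * math.tan(math.radians(theta_deg)))
```

For example, a parallax of seven pixels at a 0.1 mm pixel pitch and θ = 45 degrees would correspond to a height of 0.35 mm under these assumptions.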
  • the present invention is not limited to this.
  • The reliability information (reliability) obtained from the parallax and height detection circuit 25 shown in FIG. 6 can be obtained by the following (Equation 2): the square root of the maximum value Rjmax of the correlation value R is normalized by the edge intensity.
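A minimal sketch of such a reliability measure follows, assuming (Equation 2) has the form √Rjmax divided by the line's edge intensity; the exact equation is not reproduced in this text, so the zero-edge-strength guard is an added illustrative convention.

```python
import math

def reliability(r_max, edge_strength):
    """Reliability of a line's parallax estimate: the square root of the
    maximum correlation value Rjmax, normalized by the edge intensity
    (a sketch of the patent's Equation 2, whose exact form is not shown here)."""
    if edge_strength <= 0:
        return 0.0  # no edges on this line: the parallax estimate is unreliable
    return math.sqrt(max(r_max, 0.0)) / edge_strength
```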
  • FIG. 13 is a diagram for explaining a flow for obtaining a correlation function for each pixel.
  • FIG. 14 is a diagram illustrating R (j, k) in FIG. 13.
  • HSZ indicates the number of pixels in the horizontal direction, and VSZ indicates the number of pixels in the vertical direction.
  • FIG. 15 is a reference table for obtaining the value of Wdir, which is a term related to the edge direction.
  • Wdir is a COS value determined by the angle formed by the edge direction of a pixel in the left image and the edge direction of a pixel in the right image.
  • The direction of an edge is described by a number from 1 to 8; when the edge direction at a pixel in the left image is the same as the edge direction at a pixel in the right image,
  • the angle between them is 0 degrees, so Wdir = COS 0 = 1.
  • First, a correlation function between the upper left pixel (0, 0) of the left image and the upper left pixel (0, 0) of the right image is calculated.
  • Then, until j is determined to be equal to or greater than HSZ, the correlation function between the upper left pixel (0, 0) of the left image and all the pixels (0, 0) to (0, HSZ−1) of the top row of the right image is calculated.
  • Next, while shifting the left image by one column in the same row, the correlation function between each of the left image pixels (0, 1) to (0, HSZ−1) and
  • all the pixels (0, 0) to (0, HSZ−1) of the top row of the right image is calculated.
  • Then, while shifting the left image line by line until k is determined to be equal to or greater than VSZ, the correlation function between all the left image pixels and all the right image pixels is calculated in the same manner as described above.
  • In this way, the correlation function for each line of the left image and the right image can be obtained.
  • FIG. 16 is a diagram for explaining a flow for obtaining the maximum value of the correlation function for each line,
  • and FIG. 17 is a diagram showing R(j, k) and the parallax z(k) in FIG. 16.
  • In step S202, until j is determined to be equal to or greater than HSZ, while shifting the left image by one column in the same row,
  • the maximum value of the correlation function between each pixel (0, 1) to (0, HSZ−1) of the left image and all the pixels (0, 0) to (0, HSZ−1) of the top row of the right image is obtained.
  • Next, the maximum value of the correlation function between each pixel (1, 0) to (1, HSZ−1) obtained by shifting the left image down by one row and all of the second-row
  • pixels (1, 0) to (1, HSZ−1) of the right image is obtained.
  • Then, while shifting the left image line by line until k is determined to be equal to or greater than VSZ, the maximum value of the correlation function between all the left image pixels and all the right image pixels is obtained in the same manner as described above.
  • the parallax z (k) can be obtained by obtaining the maximum value of the correlation function for each line.
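The flow of FIGS. 13 and 16 — compute R(j, k) for each shift j, take the per-line maximum, and record the maximizing shift as the parallax z(k) — can be sketched as follows. The unnormalized correlation and the search-range parameter are simplifying assumptions made for this illustration.

```python
import numpy as np

def per_line_parallax(left, right, max_shift=16):
    """For each line k, find the shift j that maximizes the correlation
    R(j, k) between the left and right edge images, giving the parallax z(k)
    (a sketch of the flow of FIGS. 13 and 16)."""
    vsz, hsz = left.shape
    z = np.zeros(vsz, dtype=int)
    for k in range(vsz):
        best_r, best_j = -np.inf, 0
        for j in range(max_shift + 1):
            l = left[k, : hsz - j]
            r = right[k, j:]
            rj = float((l * r).sum())  # unnormalized correlation R(j, k)
            if rj > best_r:
                best_r, best_j = rj, j
        z[k] = best_j
    return z
```

In a full implementation, the per-line reliability of (Equation 2) would decide whether each z(k) is trusted before the height is computed.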
  • A second embodiment of the present invention will be described below based on FIG. 18.
  • The present embodiment differs from the first embodiment in that an optical element that limits the incident angle to the optical sensors 8a and 8b within a certain range is provided; the other configurations are the same as those described in the first embodiment.
  • members having the same functions as those shown in the drawings of the first embodiment are given the same reference numerals, and descriptions thereof are omitted.
  • FIG. 18 is a diagram showing a schematic configuration of an optical sensor provided with an optical element that limits the incident angle within a certain range.
  • In the optical sensor 8c having directivity in the upper left direction in the drawing, an optical element 29a that restricts the incident angle to the I layer 9i, which is the light receiving portion, within a certain range is provided
  • in a layer above the I layer 9i, at a position overlapping the opening 14a of the light shielding member 14 in plan view.
  • Similarly, in the optical sensor 8d having directivity in the upper right direction in the drawing, an optical element 29b that restricts the incident angle to the I layer 9i, which is the light receiving portion, within a certain range is provided
  • in a layer above the I layer 9i, at a position overlapping the opening 14a of the light shielding member 14 in plan view.
  • With this configuration, the liquid crystal display device 1 can reduce the blurring of the incident light entering the I layer 9i, which is the light receiving portion, and can estimate the distance of the detection object with high accuracy.
  • It is preferable that a detection circuit is provided which performs a correlation operation between a first image of the detected object detected by the plurality of first light receiving elements and a second image of the detected object detected by the plurality of second light receiving elements at the same timing, and detects the height of the detected object from the surface based on the calculated parallax between the first image and the second image.
  • According to the above configuration, the height of the detected object from the surface on which the first
  • light receiving element and the second light receiving element are formed can be detected based on the parallax of the images of the detected object detected by the first light receiving element and the second light receiving element having different directivity characteristics.
  • It is preferable that each of the first image and the second image is a binarized image and the parallax is calculated by a correlation operation on the binarized images.
  • Compared with the case where binarized images are used, gray-scale images carry a larger amount of information, and the amount of calculation in the correlation operation is correspondingly larger; it would therefore become necessary to provide hardware and software capable of processing an enormous amount of calculation at high speed.
  • the first image and the second image are edge images obtained by extracting contour lines of the detected object, and the parallax is calculated by correlation calculation of the edge images. It is preferable.
  • the edge image obtained by extracting the contour line of the detection object is used for the calculation of the parallax, the correlation value (similarity) having the highest matching pattern between the images is maximized. Since the portion to be shown can be obtained relatively sharply, an area sensor that can calculate an accurate parallax and can estimate the distance of the detected object with high accuracy can be realized.
  • It is also preferable that the first image and the second image are edge images obtained by extracting the contour lines of the detected object, and that the parallax is calculated by a correlation operation including a term related to the direction of the edges.
  • According to the above configuration, the parallax is calculated by a correlation operation including a term related to the edge direction,
  • which is set so that the correlation value (similarity) increases when the edge directions match.
  • It is preferable that a reference image generation circuit that generates a reference image based on the first image, the second image, and the parallax between the first image and the second image,
  • and a coordinate extraction circuit that extracts coordinates from the reference image are provided.
  • According to the above configuration, the coordinates on the surface on which the first light receiving element and the second light receiving element are formed can also be extracted from the reference image.
  • the area sensor can be used as information input means for three-dimensional coordinates (XYZ coordinates).
  • It is preferable that an optical element for limiting the incident angle to the first light receiving portion and the second light receiving portion within a certain range is provided in a layer above the first light receiving portion and the second light receiving portion, at a position overlapping the opening of the first light shielding member and the opening of the second light shielding member in plan view.
  • According to the above configuration, since such an optical element is provided at a position overlapping the openings of the first and second light shielding members in plan view, the blurring of the incident light entering the first light receiving portion and the second light receiving portion can be reduced,
  • and an area sensor capable of estimating the distance of the object to be detected with high accuracy can be realized.
  • It is preferable that an active matrix substrate having the surface on which the first light receiving element and the second light receiving element are formed, a counter substrate disposed so as to face that surface, and a liquid crystal layer sandwiched between the active matrix substrate and the counter substrate are provided.
  • a liquid crystal display device in which the area sensor is integrated can be realized. Therefore, information can be input using the area sensor while viewing the image of the liquid crystal display device.
  • the present invention can be applied to an area sensor including a light receiving element (light sensor) and a display device including such an area sensor.
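The edge-direction term mentioned in the first item above can be made concrete with a short sketch. This is not the flow given in the patent itself (the actual per-pixel correlation flow and the Wdir lookup table appear later with FIG. 13 and FIG. 15); it is a minimal illustration, assuming per-pixel edge magnitudes and quantized edge directions, and a hypothetical weight `w_dir` that raises the correlation score where the edge directions of the left and right images agree.

```python
import numpy as np

def edge_weighted_correlation(left_edges, left_dirs, right_edges, right_dirs,
                              shift, w_dir=0.5):
    """Correlation of one scan line of the L and R edge images at a given
    horizontal shift, plus a bonus term where edge directions agree.

    left_edges / right_edges : 1-D arrays of edge magnitudes (one line).
    left_dirs  / right_dirs  : quantized edge directions (e.g. 0..3).
    shift                    : candidate parallax in pixels.
    """
    n = len(left_edges) - shift
    l_mag, r_mag = left_edges[shift:], right_edges[:n]
    l_dir, r_dir = left_dirs[shift:], right_dirs[:n]
    base = np.sum(l_mag * r_mag)                      # ordinary correlation
    agree = (l_dir == r_dir) & (l_mag > 0) & (r_mag > 0)
    return base + w_dir * np.sum(agree)               # edge-direction bonus

def estimate_parallax(le, ld, re, rd, max_shift=8):
    """The parallax estimate is the shift that maximizes the score."""
    scores = [edge_weighted_correlation(le, ld, re, rd, s)
              for s in range(max_shift)]
    return int(np.argmax(scores))

# Example: the left line is the right line shifted 2 px to the right
# (wrap-around from np.roll is harmless here because the line ends are zero).
right_e = np.array([0, 0, 3, 3, 0, 0, 0, 0, 0, 0])
right_d = np.array([0, 0, 1, 1, 0, 0, 0, 0, 0, 0])
left_e = np.roll(right_e, 2)
left_d = np.roll(right_d, 2)
parallax = estimate_parallax(left_e, left_d, right_e, right_d)
```

With this kind of weighting, two edges of similar strength but different orientation no longer match as well as two edges of the same orientation, which is what makes the term useful for disambiguating line-by-line matching.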


Abstract

Light sensors (8a, 8b) are provided with: an I layer (9i) constituting a light receiving part; and a light blocking member (14) which blocks light and is formed in a layer above the I layer (9i) constituting the light receiving part. In the light sensor (8a), the opening (14a) of the light blocking member (14) is formed offset by a predetermined amount to the left of the I layer (9i) constituting the light receiving part so as to partially overlap the I layer (9i) in plan view, in such a way that light components from the upper left side in the drawing are received and light components from the upper right side in the drawing are blocked. In the light sensor (8b), the opening (14a) of the light blocking member (14) is formed offset by a predetermined amount to the right of the I layer (9i) constituting the light receiving part so as to partially overlap the I layer (9i) in plan view, in such a way that light components from the upper right side in the drawing are received and light components from the upper left side in the drawing are blocked. It is therefore possible to realize an area sensor and a display device capable of estimating distance with higher precision and allowing relatively free placement of light receiving elements having different directional characteristics.

Description

Area sensor and display device
 The present invention relates to an area sensor provided with light receiving elements (optical sensors) and to a display device provided with such an area sensor.
 Among display devices such as liquid crystal display devices, touch-panel-integrated display devices have been developed that have a touch panel (area sensor) function capable of detecting the touched position when the surface of the display device is touched with an input pen, a human finger, or the like.
 Conventional touch-panel-integrated display devices have mainly been of the resistive film type (in which the input position is detected by the upper and lower conductive substrates coming into contact when pressed) or the capacitive type (in which the input position is detected by sensing the change in capacitance at the touched location).
 However, such touch-panel-integrated display devices have drawbacks such as increased thickness and the need for additional manufacturing steps. In recent years, therefore, touch-panel-integrated liquid crystal display devices have been developed in which light receiving elements (optical sensors) such as photodiodes or phototransistors are provided, for example, for each pixel (or for each group of pixels) in the image display region of the liquid crystal display device.
 In such a configuration, the liquid crystal display device is provided with a pixel transistor for each pixel. Since the light receiving elements are formed between the pair of opposed substrates of the liquid crystal display device, the thickness of the device is not increased; and since the light receiving elements can be formed in the same process as the pixel transistors, no separate manufacturing step needs to be added.
 For these reasons, display devices provided with light receiving elements and equipped with an area sensor that detects an external input position have been attracting attention.
 Furthermore, an area sensor provided with light receiving elements can detect the position of a detection object such as an input pen or a human finger in a floating (non-contact) state, which is difficult to achieve with resistive or capacitive touch panels.
 In addition, an area sensor provided with light receiving elements can also detect the distance of the detection object in the floating state from the display surface of the display device.
 For example, Patent Document 1 describes an information input device provided with light receiving means composed of photodiodes, which can obtain distance information of a target object such as a hand with high resolution and can extract the three-dimensional shape of the target object.
 FIG. 19 is a diagram showing a schematic configuration of the information input device.
 As illustrated, light emitted from the light emitting means 105 is reflected by the target object 106 and forms an image on the light receiving surface of the reflected light extraction means 108 via a light receiving optical system 107 such as a lens.
 The reflected light extraction means 108 includes first light receiving means 109, second light receiving means 110, and a difference calculation unit 111, and detects the intensity distribution of the light reflected by the target object 106, that is, a reflected light image. The first light receiving means 109 and the second light receiving means 110 are set to receive light at different timings; the timing control means 112 controls the operation timing so that the light emitting means 105 emits light while the first light receiving means 109 is receiving light and does not emit light while the second light receiving means 110 is receiving light.
 With this configuration, the first light receiving means 109 receives both the light from the light emitting means 105 reflected by the target object 106 and other external light such as sunlight and illumination light, while the second light receiving means 110 receives only the external light. Although the two receive light at different timings, the timings are close to each other, so fluctuations in the external light during this interval can be ignored.
 Therefore, by taking the difference between the image received by the first light receiving means 109 and the image received by the second light receiving means 110, only the component of the light from the light emitting means 105 reflected by the target object 106 can be extracted. The difference calculation unit 111 calculates and outputs the difference between the images received by the first light receiving means 109 and the second light receiving means 110.
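The two-frame differencing described above can be sketched in a few lines. The array shapes and values below are illustrative only; in the patent this subtraction is performed by the difference calculation unit 111.

```python
import numpy as np

def extract_reflected_light(frame_with_light, frame_ambient_only):
    """Subtract the ambient-only frame (light source off) from the frame
    captured with the light source on; what remains is, to a good
    approximation, only the component reflected by the target object."""
    diff = frame_with_light.astype(np.int32) - frame_ambient_only.astype(np.int32)
    return np.clip(diff, 0, None)  # negative values are noise; clamp to zero

# Example: uniform ambient light plus a bright reflecting object.
ambient = np.full((4, 4), 50, dtype=np.uint8)   # external light only
lit = ambient.copy()
lit[1:3, 1:3] += 100                            # object reflects the emitter
reflected = extract_reflected_light(lit, ambient)
```

Because the ambient-only frame is sampled only a short time apart from the lit frame, the subtraction cancels the external light almost exactly, as the description notes.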
 The reflected light extraction means 108 sequentially outputs the reflected light amount of each pixel of the reflected light image. The output of the reflected light extraction means 108 is amplified by an amplifier 113, converted into digital data by an A/D converter 114, and then stored in a memory 115; at an appropriate timing, the stored data is read out of the memory 115 and processed by the feature information generation means 116. These controls are performed by the timing control means 112.
 The reflected light from the target object 106 decreases significantly as the distance between the target object 106 and the first light receiving means 109 increases. When the surface of the target object 106 scatters light uniformly, the amount of light received per pixel of the reflected light image decreases in inverse proportion to the square of the distance between the target object 106 and the first light receiving means 109. Therefore, when the target object 106 is placed in front of the information input device, the reflected light from the background becomes almost negligibly small, and a reflected light image of only the target object 106 can be obtained.
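The inverse-square relationship above can be inverted to recover a distance from an intensity, up to a calibration constant. The sketch below assumes a uniformly scattering surface; the constant `k`, which absorbs emitter power, optics, and the object's reflectance, is a hypothetical calibration value and not something given in the patent.

```python
def distance_from_intensity(intensity, k=1.0):
    """Invert I = k / d**2 for d, assuming a uniformly scattering surface
    and a known calibration constant k."""
    if intensity <= 0:
        raise ValueError("no reflected light received")
    return (k / intensity) ** 0.5

# Doubling the distance quarters the intensity:
d_near = distance_from_intensity(4.0, k=4.0)   # intensity 4 -> distance 1
d_far = distance_from_intensity(1.0, k=4.0)    # intensity 1 -> distance 2
```

Note that `k` depends on the object's reflectance, which is precisely the limitation of this intensity-only approach discussed later in this description.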
 To give a concrete example, when a hand is brought in front of the information input device, a reflected light image of the hand is obtained. The amount of light received per pixel of the reflected light image is the amount of reflected light received by the unit light receiving portion corresponding to that pixel. The amount of reflected light is affected by the properties of the target object 106 (whether it specularly reflects, scatters, or absorbs light), the orientation of its surface, its distance, and so on; but when the entire target object 106 is an object that scatters light uniformly, the amount of reflected light is closely related to the distance to the target object 106. A hand is such an object, so the reflected light image obtained when a hand is held out reflects the distance of the hand, its inclination (which makes the distance vary locally), and so on. By extracting such feature information, various kinds of information can be input and generated.
 In the configuration of Patent Document 1, by using a logarithmic amplifier as the amplifier 113, the amount of light received per pixel of the reflected light image can be made inversely proportional to the distance to the target object 106. It is stated that this makes effective use of the dynamic range and realizes an information input device that can obtain distance information with high resolution and extract the three-dimensional shape of the target object 106.
 FIG. 20 is a diagram showing a schematic configuration of the display device described in Patent Document 2.
 As illustrated, the pixels 140 are arranged in a matrix, continuously in the vertical and horizontal directions. The display area 120 of each pixel 140 is formed in a rectangular shape, and one pair of optical sensors 130R and 130L is arranged side by side in the horizontal direction below the display area 120.
 The light shielding member 150 is arranged as indicated by hatching in FIG. 20. That is, the light shielding member 150 has openings 152 matching the shape of the display areas 120 and rectangular openings 154 matching the optical sensors 130R and 130L.
 As illustrated, the optical sensors 130R and 130L are arranged alternately. At locations where, viewed from the left in FIG. 20, an optical sensor 130R and an optical sensor 130L are arranged in this order, the light shielding member 150 is provided so as to straddle the optical sensor 130R and the optical sensor 130L.
 On the other hand, at locations where, viewed from the left in FIG. 20, an optical sensor 130L and an optical sensor 130R are arranged in this order, an opening 154 of the light shielding member 150 is provided so as to straddle the optical sensor 130L and the optical sensor 130R.
 The light shielding member 150 is provided so that the boundary between the optical sensor 130R and the optical sensor 130L in each pixel 140 coincides with the center line 138 of the rectangular opening 154 of the light shielding member 150.
 That is, in the above configuration, in order to give different directional characteristics to the optical sensors 130R and 130L of each pixel 140, a single opening 154 of the light shielding member 150 is provided so as to straddle part of the optical sensor 130R and part of the optical sensor 130L.
 In the display device described in Patent Document 2, when the surroundings are relatively bright and a detection object such as a finger is present between the display panel 100 and the driver's seat side or the passenger's seat side, the detection object casts a shadow, that is, a portion darker than the background appears, and the detection object is detected by the optical sensors 130R and 130L.
 On the other hand, when the surroundings are relatively dark, such as at night or when driving through a tunnel, light emitted from a backlight (not shown) is reflected by the detection object and enters the optical sensors 130R and 130L, so the image of the detection object is, conversely, brighter than the background, and the detection object is detected by the optical sensors 130R and 130L.
 With this configuration, whether the surroundings are bright or dark, among the portions where the amount of light has decreased or increased relative to the background, the portion identified by the optical sensors 130R can be taken as an R image and the portion identified by the optical sensors 130L as an L image.
 FIG. 21 is a diagram in which a detection object such as a finger is shown as a sphere and which illustrates the case where the detection object approaches the display panel 100.
 As illustrated, when a finger approaches the display panel 100 from the passenger's seat side, for example, the finger passes through points (a), (b), and (c).
 As the finger approaches the display panel 100 (as the finger moves from (a) to (c)), the R image and the L image approach each other, and when the finger comes into contact with the display panel 100, the R image and the L image almost completely overlap.
 That is, as the finger approaches the display panel 100, the parallax between the R image and the L image decreases.
 Patent Document 2 therefore states that, by using the parallax between the R image and the L image in this way, a display device can be realized that determines that the finger has performed a touch operation on the display panel 100 when the parallax is smaller than a preset threshold.
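The touch decision of Patent Document 2 thus reduces to a threshold test on the measured parallax. The sketch below also includes one plausible way to convert a parallax into a height, assuming the R and L sensors accept light symmetrically at ±θ from the vertical so that d = 2·h·tan θ; the derivation actually used in this description is the one explained with FIG. 12, and the threshold, pixel pitch, and angle here are illustrative values only.

```python
import math

def is_touch(parallax_px, threshold_px=2.0):
    """Patent Document 2's criterion: the finger is judged to be touching
    the panel when the R/L parallax falls below a preset threshold."""
    return parallax_px < threshold_px

def height_from_parallax(parallax_px, pixel_pitch_um, theta_deg):
    """Assumed geometry: the sensors accept light at +/-theta from the
    vertical, so an object at height h yields parallax d = 2*h*tan(theta).
    Returns the height in the same units as pixel_pitch_um."""
    d = parallax_px * pixel_pitch_um          # parallax in micrometres
    return d / (2.0 * math.tan(math.radians(theta_deg)))
```

For example, with a 100 µm pixel pitch and θ = 45°, a 10-pixel parallax corresponds to a height of about 500 µm above the panel.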
Japanese Patent Laid-Open No. 10-177449 (published June 30, 1998)
Japanese Patent Laid-Open No. 2009-9098 (published January 15, 2009)
Japanese Patent Laid-Open No. 9-27969 (published January 28, 1997)
 However, in Patent Document 1, the estimation of the distance of the target object 106 uses only the fact that the amount of light received per pixel of the reflected light image is inversely proportional to the distance to the target object 106, that is, only the reflection intensity of the target object 106.
 Since the reflection intensity of the target object 106 varies with its reflectance, even when, for example, hands are held at the same distance from the first light receiving means 109, their reflection characteristics differ depending on race (skin color), the thickness of the skin, blood circulation, and the like. Consequently, there is the problem that the information input device must be optimized (recalibrated) every time the user changes in order to estimate the distance with high accuracy.
 Moreover, the configuration of Patent Document 1 has the further problem that it cannot cope with momentary fluctuations in external light, which makes distance estimation difficult.
 In Patent Document 2, as shown in FIG. 20, a single opening 154 of the light shielding member 150 is provided so as to straddle part of the optical sensor 130R and part of the optical sensor 130L in order to give the two sensors of each pixel 140 different directional characteristics.
 Therefore, the optical sensors 130R and 130L and the light shielding member 150 are provided in stripes at regular intervals so that the boundary between the optical sensors 130R and 130L in each pixel 140 coincides with the center line 138 of the rectangular opening 154 of the light shielding member 150.
 According to this configuration, since a single common opening 154 is formed for the two optical sensors 130R and 130L of each pixel 140, the positions at which the optical sensors 130R and 130L can be arranged relative to the position of the opening 154 are restricted if the two sensors are to have different directional characteristics.
 It is therefore difficult to arrange the two optical sensors 130R and 130L freely and independently within each pixel 140.
 The present invention has been made in view of the above problems, and an object of the present invention is to provide an area sensor and a display device that can estimate distance with higher accuracy and in which light receiving elements having different directional characteristics can be arranged relatively freely.
 To solve the above problems, an area sensor of the present invention has a surface on which a plurality of pairs of a first light receiving element and a second light receiving element that detect the amount of incident light are formed. The first light receiving element includes a first light receiving portion and a first light shielding member that is formed in a layer above the first light receiving portion and blocks the light; the second light receiving element includes a second light receiving portion and a second light shielding member that is formed in a layer above the second light receiving portion and blocks the light. In the first light receiving element, the opening of the first light shielding member is formed shifted by a predetermined amount toward a first direction so as to partially overlap the first light receiving portion in plan view, so that a light component from the first direction can be received and a light component from a second direction, which is the direction opposite to the direction of the reflected light when light from the first direction is specularly reflected by the surface, can be blocked. In the second light receiving element, the opening of the second light shielding member is formed shifted by a predetermined amount toward the second direction so as to partially overlap the second light receiving portion in plan view, so that the light component from the second direction can be received and the light component from the first direction can be blocked.
 Conventionally, in order to form two light receiving elements having different directional characteristics, an opening of a single light shielding member was provided so as to straddle part of each of two adjacently formed light receiving elements.
 Therefore, the light receiving elements and the light shielding member were provided at regular intervals so that the boundary between the two light receiving elements coincided with the center line of the opening of the single light shielding member.
 With such a configuration, it was difficult to arrange the two light receiving elements freely and independently.
 In contrast, according to the configuration of the present invention, to give the first light receiving element directivity, the opening of the first light shielding member is formed shifted by a predetermined amount toward the first direction so as to partially overlap the first light receiving portion in plan view; and to give the second light receiving element directivity, the opening of the second light shielding member is formed shifted by a predetermined amount toward the second direction so as to partially overlap the second light receiving portion in plan view.
 With this configuration, the position of the second light receiving element does not affect the directivity of the first light receiving element, and the position of the first light receiving element does not affect the directivity of the second light receiving element.
 That is, since the directional characteristics can be controlled independently for each light receiving element in the above configuration, there is no particular restriction on the positions at which the first light receiving element and the second light receiving element are arranged.
 Therefore, according to the above configuration, the first light receiving element and the second light receiving element, which have different directional characteristics, need not necessarily be provided at adjacent positions, so an area sensor in which light receiving elements having different directional characteristics can be arranged relatively freely can be realized.
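The independence argued above can be checked with elementary geometry: with the shielding layer a height `gap` above the light receiving portion, a ray arriving at angle α from the vertical that passes through the opening lands a horizontal distance `gap·tan α` away from it. The sketch below uses hypothetical dimensions; it shows that an opening offset to the left of its own photodiode passes light from the upper left and blocks light from the upper right, regardless of where any other sensor is placed.

```python
import math

def receives(sensor_x0, sensor_x1, opening_x0, opening_x1, gap, angle_deg):
    """Does any ray at the given angle (positive = coming from the upper
    left, travelling down and to the right) pass the opening and hit the
    sensor? Positions are horizontal coordinates; gap is the vertical
    distance between the shielding layer and the light receiving portion."""
    shift = gap * math.tan(math.radians(angle_deg))
    # Footprint of the opening on the sensor plane, displaced by the ray angle:
    lo, hi = opening_x0 + shift, opening_x1 + shift
    return max(lo, sensor_x0) < min(hi, sensor_x1)

# Sensor spans [0, 10]; its opening is offset to the left, spanning [-8, 2];
# the shielding layer sits 10 units above the light receiving portion.
from_upper_left = receives(0, 10, -8, 2, 10, 30)    # passes the opening
from_upper_right = receives(0, 10, -8, 2, 10, -30)  # blocked by the shield
```

Each sensor's directivity depends only on the offset of its own opening, which is why the first and second light receiving elements can be positioned independently of each other.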
 To solve the above problems, a display device of the present invention includes the above area sensor; a plurality of pixels that display an image are formed in a matrix on the surface on which the first light receiving element and the second light receiving element are formed; and the first light receiving element and the second light receiving element are provided according to the positions of the pixels.
 According to the above configuration, a display device in which the area sensor is integrated can be realized. Therefore, information can be input using the area sensor while viewing the image on the display device.
 Note that the first and second light receiving elements being provided "according to the positions of the pixels" includes, for example, the case where the first and second light receiving elements are provided within one pixel; the case where, of two adjacent pixels, the first light receiving element is provided in one and the second light receiving element in the other; and the case where light receiving elements are provided not in all pixels but only in pixels of a particular color; however, the arrangement is not limited to these examples.
 As described above, the area sensor of the present invention is configured such that the first light receiving element includes a first light receiving portion and a first light shielding member formed in a layer above the first light receiving portion to block the light; the second light receiving element includes a second light receiving portion and a second light shielding member formed in a layer above the second light receiving portion to block the light; in the first light receiving element, the opening of the first light shielding member is formed shifted by a predetermined amount toward the first direction so as to partially overlap the first light receiving portion in plan view, so that a light component from the first direction can be received and a light component from the second direction, which is the direction opposite to the direction of the reflected light when light from the first direction is specularly reflected by the surface, can be blocked; and in the second light receiving element, the opening of the second light shielding member is formed shifted by a predetermined amount toward the second direction so as to partially overlap the second light receiving portion in plan view, so that the light component from the second direction can be received and the light component from the first direction can be blocked.
 Also as described above, the display device of the present invention includes the above area sensor; a plurality of pixels that display an image are formed in a matrix on the surface on which the first light receiving element and the second light receiving element are formed; and the first light receiving element and the second light receiving element are provided according to the positions of the pixels.
 Therefore, it is possible to realize an area sensor and a display device that can estimate distance with higher accuracy and in which light receiving elements having different directional characteristics can be arranged relatively freely.
FIG. 1 is a diagram showing a schematic configuration of an optical sensor provided in an area sensor integrated liquid crystal display device according to an embodiment of the present invention.
FIG. 2 is a plan view of the optical sensor shown in FIG. 1.
FIG. 3 is a diagram showing a schematic configuration of the area sensor integrated liquid crystal display device according to the embodiment of the present invention.
FIG. 4 shows a formation example of the optical sensors provided in the area sensor integrated liquid crystal display device according to the embodiment of the present invention.
FIG. 5 is a diagram showing a schematic circuit configuration of each pixel of the area sensor integrated liquid crystal display device according to the embodiment of the present invention.
FIG. 6 is a schematic block diagram showing the configuration of the area sensor integrated liquid crystal display device according to the embodiment of the present invention.
FIG. 7 is a diagram for explaining the case where, in the area sensor integrated liquid crystal display device according to the embodiment of the present invention, a parallax is calculated by performing a correlation calculation using binarized left and right images.
FIG. 8 is a diagram for explaining the case where a parallax is calculated by matching 256-gradation left and right images line by line using the normalized correlation method.
FIG. 9 is a diagram for explaining the case where a parallax is calculated by performing a correlation calculation line by line using edge-detected left and right images.
FIG. 10 is a diagram for explaining the case where the parallax is obtained by matching the edge-detected left and right images shown in FIGS. 9(b) and 9(c) using the normalized correlation method.
FIG. 11 is a diagram for explaining the case where edge direction images of the left and right images are obtained and the parallax is obtained by a correlation calculation including a term related to the edge direction.
FIG. 12 is a diagram for explaining a method of obtaining the height h from the parallax d and the incident angle θ.
FIG. 13 is a diagram for explaining the flow of obtaining a correlation function for each pixel.
FIG. 14 is a diagram showing R(j, k) in FIG. 13.
FIG. 15 is a diagram showing a lookup table for obtaining the value of Wdir, the term related to the edge direction in FIG. 13.
FIG. 16 is a diagram for explaining the flow of obtaining the maximum value of the correlation function for each line.
FIG. 17 is a diagram showing R(j, k) and the parallax z(k) in FIG. 16.
FIG. 18 is a diagram showing a schematic configuration of an optical sensor, provided in the area sensor integrated liquid crystal display device according to the embodiment of the present invention, that includes an optical element limiting the incident angle to within a certain range.
FIG. 19 is a diagram showing a schematic configuration of a conventional information input device.
FIG. 20 is a diagram showing a schematic configuration of a conventional display device described in Patent Document 2.
FIG. 21 is a diagram showing, in the conventional display device of FIG. 20, an object to be detected such as a finger, represented as a sphere, approaching the display panel.
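Several of the figures listed above (FIGS. 7, 8, and 10) concern calculating the parallax by correlating the left and right sensor images. As an illustrative sketch only (the patent's actual matching is defined by the flows of FIGS. 13 and 16, which are not reproduced here), the following fragment finds, for a pair of hypothetical one-dimensional image lines, the horizontal shift that maximizes the normalized correlation coefficient:

```python
def normalized_correlation(a, b):
    """Pearson (normalized) correlation coefficient of two equal-length lines."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return 0.0 if da == 0 or db == 0 else num / (da * db)

def estimate_parallax(left_line, right_line, max_shift):
    """Shift of right_line (in pixels) that best matches left_line."""
    n = len(left_line)
    best_shift, best_score = 0, -2.0
    for s in range(-max_shift, max_shift + 1):
        # Region where left_line[i] and right_line[i + s] both exist.
        lo, hi = max(0, -s), min(n, n - s)
        if hi - lo < 2:
            continue
        score = normalized_correlation(left_line[lo:hi],
                                       right_line[lo + s:hi + s])
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# A bright blob seen 3 pixels further right by the second sensor group:
left  = [0, 0, 10, 200, 220, 10, 0, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 0, 10, 200, 220, 10, 0, 0, 0]
print(estimate_parallax(left, right, 5))  # → 3
```

The shift at the maximum corresponds to the parallax d for that line; repeating this line by line yields the parallax from which, as described for FIG. 12, the height h can then be derived.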
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. However, the dimensions, materials, shapes, relative arrangements, and the like of the components described in these embodiments are merely examples, and the scope of the present invention should not be construed as limited to them.
[Embodiment 1]
In this embodiment, an area sensor integrated liquid crystal display device including light receiving elements (optical sensors) will be described as an example. The present invention is not, however, limited to this, and can of course also be applied to other display devices including the area sensor, such as an organic EL display device, or to an area sensor alone that includes light receiving elements.
First, the configuration of the area sensor integrated liquid crystal display device 1 according to this embodiment will be described with reference to FIG. 3.
The area sensor integrated liquid crystal display device 1 shown in FIG. 3 (hereinafter also simply called the liquid crystal display device 1) includes, although not shown, an optical sensor for each pixel of the liquid crystal display panel 2. With these optical sensors, an image of an object 7 (a finger) placed on the display surface 2a side of the liquid crystal display panel 2 is extracted and its position is detected.
A backlight 3 that irradiates the liquid crystal display panel 2 with light is provided on the back surface 2b side of the liquid crystal display panel 2.
The liquid crystal display panel 2 includes an active matrix substrate 4 on which a large number of pixels are arranged in a matrix, and a counter substrate 5 disposed so as to face it, with a liquid crystal layer 6 sandwiched between the two substrates 4 and 5. Although a VA mode liquid crystal display panel 2 is used in this embodiment, the display mode of the liquid crystal display panel 2 is not particularly limited, and any display mode such as the TN mode or the IPS mode can be applied.
Although not shown, a front-side polarizing plate and a back-side polarizing plate are provided outside the liquid crystal display panel 2 so as to sandwich it.
Each polarizing plate serves as a polarizer. In this embodiment, in which the liquid crystal material of the liquid crystal layer 6 is of the vertical alignment type, a normally black mode liquid crystal display device can be realized by arranging the polarization direction of the front-side polarizing plate and that of the back-side polarizing plate in a crossed Nicols relationship.
Although not shown, TFT elements serving as switching elements for driving the pixels, pixel electrodes electrically connected to the TFT elements, the optical sensors, an alignment film, and the like are formed on the active matrix substrate 4.
On the other hand, although not shown, a color filter layer, a counter electrode, an alignment film, and the like are formed on the counter substrate 5. The color filter layer can be composed of, for example, colored portions of red (R), green (G), and blue (B), and a black matrix.
In this embodiment, the backlight 3 has a function of uniformly emitting invisible infrared light in addition to visible light for display, in order to detect the object 7 with higher accuracy regardless of the external brightness and without affecting the quality of the displayed image. Since the infrared light is invisible, it can be appropriately modulated and emitted as pulses as necessary.
In this embodiment, the infrared light emitted from the backlight 3 is reflected by the object 7 and the amount of this light incident on the optical sensor is detected; therefore, a black matrix made of carbon black or the like provided on the counter substrate 5 is positioned over the region where the optical sensor is formed.
A black matrix made of carbon black or the like blocks visible light and transmits infrared light, so only infrared light passes through the black matrix and reaches the optical sensor. The amount of infrared light reflected by the object 7 and incident on the optical sensor can thus be detected.
When the backlight 3 emits only visible light for display and does not emit infrared light, an opening of the black matrix made of carbon black or the like provided on the counter substrate 5 is positioned over the region where the optical sensor is formed, and the position of the object 7 can be detected as follows.
When the surroundings are relatively bright, the amount of external light incident on the sensors differs between locations where the object 7 is present and locations where it is not, so the position of the object 7 can be detected without using the backlight.
On the other hand, when the surroundings are relatively dark, there is no difference in the amount of incident external light between locations where the object 7 is present and locations where it is not. In this case, the position of the object 7 can be detected by utilizing the fact that light emitted from the backlight is reflected at locations where the object 7 is present.
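The two cases above amount to a mode switch on the ambient brightness: in a bright environment the object is found as a shadow, and in a dark environment as a backlight reflection. The sketch below is only an illustration of that logic; the single threshold and the normalized [0, 1] sensor scale are assumptions, not values from this disclosure:

```python
def detect_object(sensor_values, ambient_bright, threshold=0.3):
    """Per-pixel detection of the object 7 with a visible-light backlight.

    sensor_values are assumed normalized to [0, 1]; the threshold and
    the hard bright/dark switch are illustrative simplifications.
    """
    if ambient_bright:
        # Bright surroundings: the object blocks external light, so the
        # pixels beneath it receive LESS light than uncovered pixels.
        return [v < threshold for v in sensor_values]
    # Dark surroundings: the object reflects the backlight, so the
    # pixels beneath it receive MORE light than uncovered pixels.
    return [v > threshold for v in sensor_values]

print(detect_object([0.9, 0.8, 0.1, 0.9], ambient_bright=True))   # [False, False, True, False]
print(detect_object([0.0, 0.05, 0.7, 0.0], ambient_bright=False)) # [False, False, True, False]
```

In both modes the same pixel is flagged: the dark dip under bright ambient light and the bright peak under backlight reflection both mark the object's position.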
Hereinafter, the optical sensors provided in the liquid crystal display device 1 of this embodiment will be described in detail with reference to FIGS. 1 and 2.
FIG. 1 is a diagram showing a schematic configuration of the optical sensors provided in the liquid crystal display device 1 of this embodiment.
FIG. 1(a) shows an optical sensor 8a (first light receiving element) provided on the active matrix substrate 4 and having a directivity characteristic toward the upper left in the figure, and FIG. 1(b) shows an optical sensor 8b (second light receiving element) provided on the active matrix substrate 4 and having a directivity characteristic toward the upper right in the figure.
As shown in FIG. 1(a), this embodiment uses PIN diodes in which, as illustrated in FIG. 1, the P+ layer 9p+, the I layer 9i (first light receiving portion, second light receiving portion), and the N+ layer 9n+ do not overlap one another, from the viewpoint of relatively easily manufacturing an active matrix substrate 4 provided with optical sensors (light receiving elements) having a high sensing speed; however, the invention is not limited to this.
Any optical sensor may be used as long as it passes a current whose value varies with the amount of light received by its light receiving portion; for example, a CCD, a CMOS sensor, a PN diode, a phototransistor, or the like can also be used.
Hereinafter, a process for forming the optical sensors 8a and 8b on the active matrix substrate 4 will be described.
In this embodiment, a glass substrate is used as the substrate constituting the active matrix substrate 4, which is not shown in FIGS. 1(a) and 1(b); however, the substrate is not limited to this, and a quartz substrate, a plastic substrate, or the like can also be used.
Although not shown, a light shielding film is formed on the surface of the glass substrate on which the pixel TFTs (described later) and the optical sensors 8a and 8b are formed, in order to block light emitted from the backlight 3 from entering the pixel TFTs and the optical sensors 8a and 8b. A base coat film is formed above the light shielding film so as to cover the light shielding film and the entire surface of the glass substrate.
As the base coat film, a film of an insulating inorganic material such as a silicon oxide film, a silicon nitride film, or a silicon oxynitride film, or a laminated film combining these as appropriate, can be used; in this embodiment, a silicon oxide film was used. These films can be deposited by LPCVD, plasma CVD, sputtering, or the like.
The optical sensors 8a and 8b shown in FIGS. 1(a) and 1(b) are formed on the upper surface of the base coat film.
The process for forming the optical sensors 8a and 8b on the base coat film is as follows.
First, on the base coat film in the regions where the light shielding film is formed, non-single-crystal semiconductor thin films, which will later become the polycrystalline semiconductor films 9, are formed by LPCVD, plasma CVD, sputtering, or the like.
As the non-single-crystal semiconductor thin film, amorphous silicon, polycrystalline silicon, amorphous germanium, polycrystalline germanium, amorphous silicon-germanium, polycrystalline silicon-germanium, amorphous silicon carbide, polycrystalline silicon carbide, or the like can be used. In this embodiment, amorphous silicon is used.
Subsequently, the non-single-crystal semiconductor thin film is crystallized to form the polycrystalline semiconductor film 9. A laser beam, an electron beam, or the like can be used for the crystallization; in this embodiment, crystallization was performed using a laser beam.
Next, the polycrystalline semiconductor film 9 is patterned by photolithography in accordance with the regions where the light shielding film is formed.
Then, in the regions where the optical sensors 8a and 8b are to be formed, an I layer 9i, which is an intrinsic semiconductor layer or a semiconductor layer having a relatively low impurity concentration and which serves as the light receiving portion, is formed at the center of the polycrystalline semiconductor film 9; on its two sides, a P+ layer 9p+, a semiconductor layer having a relatively high P-type impurity concentration, and an N+ layer 9n+, a semiconductor layer having a relatively high N-type impurity concentration, are formed, respectively.
Next, a gate insulating film 10 made of a silicon oxide film or the like is formed so as to cover the polycrystalline semiconductor film 9.
Contact holes penetrating the gate insulating film 10 are then formed over the P+ layer 9p+ and the N+ layer 9n+, respectively.
A conductive film is then formed over the entire surface by sputtering or the like.
As the conductive film, for example, a film made of aluminum or the like can be used. However, the conductive film is not limited to this: an element selected from Ta, W, Ti, Mo, Al, Cu, Cr, Nd, and the like, or an alloy material or compound material containing such an element as its main component, may be used, and a laminated structure combining these as appropriate may be formed as necessary. The conductive film may also be formed of a semiconductor film, typified by polycrystalline silicon, doped with an impurity such as phosphorus or boron. In this embodiment, aluminum is used for the conductive film.
The conductive film is patterned into a desired shape by photolithographic etching to form metal electrodes (wiring) 11a and 11b, which are electrically connected to the P+ layer 9p+ and the N+ layer 9n+, respectively.
A transparent insulating layer 12 is then formed over the entire surface so as to cover the gate insulating film 10 and the conductive film. In this embodiment, an organic interlayer insulating film made of an acrylic resin is used as the transparent insulating layer 12.
Thereafter, as shown in FIGS. 1(a) and 1(b), a shield electrode layer 13 made of a transparent conductive film such as ITO (Indium Tin Oxide) or IZO (Indium Zinc Oxide) is formed on the transparent insulating layer 12 so as to overlap the P+ layer 9p+, the I layer 9i, and the N+ layer 9n+ in plan view.
A light shielding member 14 having an opening 14a is then formed on the shield electrode layer 13.
In this embodiment, as shown in FIG. 1(a), in order to form an optical sensor 8a having a directivity characteristic toward the upper left in the figure, the opening 14a of the light shielding member 14 is formed shifted by a predetermined amount to the left in the figure so as to partially overlap, in plan view, the I layer 9i of the polycrystalline semiconductor film 9, which is the light receiving portion of the optical sensor 8a. With this arrangement, the light component from the upper left in the figure can be received, while the light component from the upper right in the figure, which is the direction opposite to that of the reflected light produced when light from the upper left is specularly reflected by the surface on which the optical sensor 8a is formed, can be blocked.
FIG. 2(a) is a plan view of the optical sensor 8a shown in FIG. 1(a).
The predetermined amount of the leftward shift (the width of the portion of the opening 14a of the light shielding member 14 formed over the N+ layer 9n+ in FIG. 2(a)) is preferably 5% or more and 50% or less of the lateral width of the opening 14a, that is, its width in the left-right direction in FIG. 2(a).
On the other hand, as shown in FIG. 1(b), in order to form an optical sensor 8b having a directivity characteristic toward the upper right in the figure, the opening 14a of the light shielding member 14 is formed shifted by a predetermined amount to the right in the figure so as to partially overlap, in plan view, the I layer 9i of the polycrystalline semiconductor film 9, which is the light receiving portion of the optical sensor 8b. With this arrangement, the light component from the upper right in the figure can be received, while the light component from the upper left in the figure, which is the direction opposite to that of the reflected light produced when light from the upper right is specularly reflected by the surface on which the optical sensor 8b is formed, can be blocked.
FIG. 2(b) is a plan view of the optical sensor 8b shown in FIG. 1(b).
The predetermined amount of the rightward shift (the width of the portion of the opening 14a of the light shielding member 14 formed over the P+ layer 9p+ in FIG. 2(b)) is preferably 5% or more and 50% or less of the lateral width of the opening 14a, that is, its width in the left-right direction in FIG. 2(b).
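The 5%-50% preference stated for both the leftward and rightward shifts can be expressed as a simple range check. The helper below is purely illustrative, and the 10 unit opening width used in the example is a hypothetical figure, not one given in this disclosure:

```python
def shift_range(opening_width):
    """Preferred lateral shift of the light-shield opening 14a:
    5% to 50% of the opening's lateral width, per the embodiment."""
    return 0.05 * opening_width, 0.5 * opening_width

def shift_is_preferred(shift, opening_width):
    """True if the shift falls within the preferred 5%-50% range."""
    lo, hi = shift_range(opening_width)
    return lo <= shift <= hi

# For a hypothetical opening 10 units wide, shifts between 0.5 and 5
# units fall in the preferred range.
print(shift_range(10.0))              # (0.5, 5.0)
print(shift_is_preferred(2.0, 10.0))  # True
```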
In this embodiment, the case has been described in which the opening 14a of the light shielding member 14 is formed shifted by a predetermined amount with respect to the I layer 9i of the polycrystalline semiconductor film 9; however, the invention is not limited to this, and the opening may instead be shifted by a predetermined amount with respect to the portion of the I layer 9i on which no non-transmissive film such as a conductive film is formed, this portion being the effective light receiving portion of the optical sensor.
As already described above, in this embodiment the infrared light emitted from the backlight 3 is reflected by the object 7 and the amount of infrared light incident on the optical sensors 8a and 8b is detected; therefore, the black matrix made of carbon black or the like provided on the counter substrate 5 is positioned over the regions where the optical sensors 8a and 8b are formed.
Since a black matrix made of carbon black or the like blocks visible light and transmits infrared light, only infrared light passes through the black matrix and reaches the optical sensors 8a and 8b. In this embodiment, therefore, any material having the function of blocking infrared light can be used for the light shielding member 14 without particular limitation; materials capable of blocking infrared light include, for example, metal materials.
In this embodiment, aluminum is used for the light shielding member 14; however, the material is not limited to this, and any substance having a high reflectance in the wavelength region of the infrared light can be used.
On the other hand, when the influence of oblique light entering the optical sensors 8a and 8b directly, without passing through a black matrix made of carbon black or the like, must be considered, when an area sensor including the optical sensors 8a and 8b is used by itself, or when, as described above, the backlight 3 emits only visible light for display without infrared light, a material must be selected for the light shielding member 14 that cuts the wavelength region of light to which the optical sensors 8a and 8b are sensitive.
More specifically, when polycrystalline silicon is used for the light receiving portions of the optical sensors 8a and 8b as in this embodiment, a material that absorbs or reflects light in the wavelength region of 1100 nm or less should be selected for the light shielding member 14.
As shown in FIG. 2, in this embodiment, in order to give the optical sensor 8a directivity, the opening 14a of the light shielding member 14 is formed shifted by a predetermined amount to the left in the figure so as to partially overlap, in plan view, the I layer 9i of the polycrystalline semiconductor film 9, which is the light receiving portion of the optical sensor 8a; likewise, in order to give the optical sensor 8b directivity, the opening 14a is formed shifted by a predetermined amount to the right in the figure so as to partially overlap, in plan view, the I layer 9i that is the light receiving portion of the optical sensor 8b.
Because of this configuration, the position of the optical sensor 8a does not affect the directivity of the optical sensor 8b, and the position of the optical sensor 8b does not affect the directivity of the optical sensor 8a.
That is, since the directivity characteristic of each of the optical sensors 8a and 8b can be controlled independently in the above configuration, no particular restriction arises on the positions at which the optical sensors 8a and 8b are placed.
Therefore, according to the above configuration, the optical sensors 8a and 8b having different directivity characteristics need not necessarily be provided at adjacent positions and can be arranged relatively freely.
In this embodiment, the first direction is the upper left direction in FIG. 1 and the second direction is the upper right direction in FIG. 1; however, the first and second directions are not limited to these, as long as optical sensors having different directivity characteristics can be formed and a parallax can be obtained from the images of the object captured with them.
FIG. 4 shows a formation example of the optical sensors 8a and 8b provided in the liquid crystal display device 1 of this embodiment.
As described above, since the directivity characteristic of each of the optical sensors 8a and 8b can be controlled independently, no particular restriction arises on their positions; unlike the conventional case, the optical sensors 8a and 8b need not be provided at adjacent positions, and sensors having different directivity characteristics can therefore be arranged relatively freely.
Therefore, as shown in FIG. 4, when the pixel P(1, 1) is provided with an optical sensor 8a having a directivity characteristic toward the upper left in the figure, P(1, 2), the pixel immediately to its right, and P(2, 1), the pixel immediately below it, are provided with optical sensors 8b having a directivity characteristic toward the upper right in the figure. The optical sensors 8a and 8b can thus be arranged in a staggered pattern in which the nearest pixels above, below, to the left of, and to the right of a pixel provided with an optical sensor 8a are pixels provided with an optical sensor 8b, and vice versa.
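The staggered arrangement described above assigns the sensor type purely by pixel parity. As a minimal sketch of that rule (using 1-based indices matching the P(row, column) notation of FIG. 4):

```python
def sensor_type(row, col):
    """Staggered assignment of FIG. 4 with 1-based pixel indices:
    P(1, 1) gets sensor 8a, and every nearest neighbour in the up,
    down, left, and right directions gets the other sensor type."""
    return '8a' if (row + col) % 2 == 0 else '8b'

# Print a 4 x 4 corner of the layout.
layout = [[sensor_type(r, c) for c in range(1, 5)] for r in range(1, 5)]
for line in layout:
    print(' '.join(line))
# 8a 8b 8a 8b
# 8b 8a 8b 8a
# 8a 8b 8a 8b
# 8b 8a 8b 8a
```

The checkerboard guarantees that each pixel always has both directivity types among its nearest neighbours, so a left image and a right image of roughly equal resolution can be read out from interleaved positions.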
The arrangement is not limited to this. For example, although not shown, both optical sensors 8a and 8b may be provided within a single pixel, or, instead of providing the optical sensors 8a and 8b in every pixel, they may be provided only in pixels of a specific color.
Furthermore, within one pixel, the optical sensors 8a and 8b may be placed apart at the upper and lower or left and right ends with the display region between them, or the optical sensors 8a and 8b may be provided once every several pixels.
That is, although this embodiment uses a configuration in which either the optical sensor 8a or the optical sensor 8b is provided in each pixel, the invention is not limited to this; the optical sensors 8a and 8b can be arranged in consideration of the resolution required for detecting the object and of the accuracy of the parallax (described in detail later) obtained from the images of the object captured by the optical sensors 8a and 8b having different directivity characteristics.
 FIG. 5 is a diagram showing a schematic circuit configuration of each pixel of the liquid crystal display device 1 of the present embodiment.
 As illustrated, the lower part of each pixel contains a photosensor circuit composed of photosensor 8a or photosensor 8b, a boosting capacitor 16, and a transistor 17 that serves as the sensor output (output amplifier). Above the region of each pixel where the photosensor circuit is provided, a scanning signal line GLn, supplied with a scanning signal from a scanning signal line drive circuit (not shown; see FIG. 6), and a data signal line SLn, supplied with a data signal from a data signal line drive circuit (not shown; see FIG. 6), are formed so as to intersect, and a pixel TFT 15 is formed near the position where scanning signal line GLn and data signal line SLn intersect.
 Note that FIG. 5 exemplarily shows the circuit configuration of four adjacent pixels P(n-1, n-1), P(n-1, n), P(n, n-1), and P(n, n) in the liquid crystal display device 1 having n×n pixels shown in FIG. 4.
 In the present embodiment, in order to lengthen the decay time of the charge stored in the liquid crystal capacitance Clc formed by the pixel electrode, the liquid crystal layer 6, and the counter electrode (Vcom), an auxiliary capacitance Cs is provided in parallel with the liquid crystal capacitance Clc. One end of the auxiliary capacitance Cs is electrically connected to the drain electrode of the pixel TFT 15 and to the pixel electrode of the liquid crystal capacitance Clc, and the other end is electrically connected to an auxiliary capacitance bus line CSn formed within the pixel in parallel with scanning signal line GLn.
 The photosensor circuit will now be described in more detail.
 The photosensor circuit is configured as a 1T (one-transistor) circuit that uses only a single transistor 17 for the sensor output; transistor 17 functions as a source follower transistor (voltage follower transistor).
 The drain electrode of transistor 17 is connected to an AMP power supply bus line Vsn (n is a natural number indicating the pixel column number), and its source electrode is connected to a photosensor output bus line Von. The AMP power supply bus line Vsn and the photosensor output bus line Von are connected to a sensor readout circuit (not shown; see FIG. 6), which applies the power supply voltage VDD to the AMP power supply bus line Vsn.
 The gate electrode of transistor 17 is connected to photosensor 8a or photosensor 8b, which consists of a PIN diode, and to one end of the boosting capacitor 16.
 Further, the anode electrode of the PIN diode in photosensors 8a and 8b (the metal electrode (wiring) 11a in FIGS. 1(a) and 1(b)) is connected to a wiring Vrstn (n is a natural number indicating the pixel row number), to which a reset signal RST is sent from a sensor scanning signal line drive circuit (not shown; see FIG. 6), and the other end of the boosting capacitor 16 is connected to a wiring Vrwn, to which a photosensor row selection signal RWS is sent from the sensor scanning signal line drive circuit (not shown; see FIG. 6). The photosensor row selection signal RWS selects a specific row of the photosensor circuits arranged in a matrix and causes the photosensor circuits in that row to output their detection signals.
 Next, the operation of the photosensor circuit will be described with reference to FIG. 5.
 First, in order to reset the potential of the gate electrode of transistor 17, a high-level reset signal RST is sent to wiring Vrstn from the sensor scanning signal line drive circuit (not shown; see FIG. 6). While the high-level reset signal RST is applied to wiring Vrstn, the PIN diode in photosensor 8a or 8b is forward biased, so the boosting capacitor 16 is charged and the potential of the gate electrode of transistor 17 gradually rises, finally reaching the initialization potential.
 When the reset signal RST is then dropped to low level after the gate electrode of transistor 17 has reached the initialization potential, the potential of the cathode electrode of the PIN diode in photosensors 8a and 8b (the metal electrode (wiring) 11b in FIGS. 1(a) and 1(b)) becomes higher than that of the anode electrode, so the PIN diode is reverse biased.
 In this state, when light strikes photosensor 8a or 8b, a photocurrent under this reverse bias flows through the PIN diode in accordance with the light intensity. As a result, the charge held in the boosting capacitor 16 is discharged through wiring Vrstn, so the potential of the gate electrode of transistor 17 gradually falls, finally reaching a detection potential that depends on the light intensity.
 Next, the detection result is read out. At this point, a high-level row selection signal RWS is applied to the other end of the boosting capacitor 16 from the sensor scanning signal line drive circuit (not shown; see FIG. 6) via wiring Vrwn. The potential of the gate electrode of transistor 17 is thereby pushed up through the boosting capacitor 16, so that it becomes the detection potential plus the high-level potential of the row selection signal RWS.
 When the gate electrode potential of transistor 17 is pushed up in this way, it exceeds the threshold voltage at which transistor 17 turns on, so transistor 17 enters the on state. As a result, a voltage controlled with a gain corresponding to the potential level of the gate electrode of transistor 17, that is, to the light intensity, is output as a detection signal from the source electrode of transistor 17 and sent via the photosensor output bus line Von to the sensor readout circuit (not shown; see FIG. 6).
 In this way, a detection signal whose level corresponds to the intensity of the light received by photosensor 8a or 8b is generated, and such a signal is generated for every pixel provided with a photosensor. A detection operation can therefore be performed on an object placed close to the liquid crystal display device 1.
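As a rough illustration of the sequence just described (reset, photocurrent integration, boost, and source-follower readout), the gate potential can be modeled numerically. This is a minimal sketch under an idealized linear model; all voltages and coefficients are illustrative assumptions, not values from the patent.

```python
# Idealized model of one sensing cycle of the 1T photosensor circuit.
# All component values are illustrative assumptions.
def sense_cycle(light_intensity, v_init=3.0, v_rws=5.0, v_th=1.0,
                discharge_per_unit_light=0.5, integration_time=1.0):
    """Return (detection potential, readout level) for one cycle."""
    # 1) RST high: the PIN diode is forward biased and the boosting
    #    capacitor charges the gate up to the initialization potential.
    v_gate = v_init
    # 2) RST low: the reverse-bias photocurrent discharges the capacitor
    #    in proportion to the incident light intensity.
    v_gate -= discharge_per_unit_light * light_intensity * integration_time
    # 3) RWS high: the row selection signal is added onto the gate through
    #    the boosting capacitor; the source follower outputs a level that
    #    tracks the boosted gate potential (minus a threshold drop).
    v_boosted = v_gate + v_rws
    readout = max(v_boosted - v_th, 0.0)
    return v_gate, readout

# Brighter light -> lower detection potential -> lower readout level.
dark_gate, dark_out = sense_cycle(0.0)
bright_gate, bright_out = sense_cycle(4.0)
```

The monotonic mapping from light intensity to output level is what allows the readout circuit to reconstruct a gray-level image of the object.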
 In the present embodiment, the AMP power supply bus line Vsn and the photosensor output bus line Von are provided for each pixel separately from the data signal line SLn; however, the data signal line SLn and the AMP power supply bus line Vsn can also be shared to raise the aperture ratio. Further, when the sensor circuit is provided once per group of pixels, the AMP power supply bus line Vsn can be shared with data signal line SLn-1 and the photosensor output bus line Von can be shared with data signal line SLn.
 Also, in the present embodiment, the photosensors 8a and 8b, the pixel TFT 15, the auxiliary capacitance Cs, the boosting capacitor 16, the pixel electrode, and the transistor 17 are formed by the same process, but the invention is not limited to this.
 A schematic circuit configuration of the liquid crystal display device 1 will now be described with reference to FIG. 6.
 FIG. 6 is a schematic block diagram showing the configuration of the liquid crystal display device 1.
 As illustrated, the liquid crystal display device 1 includes a liquid crystal display panel 2, a scanning signal line drive circuit 18, a data signal line drive circuit / sensor readout circuit 19, a sensor scanning signal line drive circuit 20, a control circuit 21, a sensing image processing unit 22, and an interface circuit 28.
 The scanning signal line drive circuit 18, the data signal line drive circuit / sensor readout circuit 19, and the sensor scanning signal line drive circuit 20 may be separately fabricated LSIs attached externally to the liquid crystal display panel 2, or may be formed monolithically on the liquid crystal display panel 2.
 The scanning signal line drive circuit 18 generates a scanning signal that selectively scans the pixels P(n, n) one row at a time via scanning signal lines (not shown). The data signal line drive circuit / sensor readout circuit 19 supplies a data signal to each pixel P(n, n) via data signal lines (not shown).
 The sensor scanning signal line drive circuit 20 selects and drives the photosensor circuits one row at a time, and the data signal line drive circuit / sensor readout circuit 19 supplies the constant power supply voltage VDD to the photosensor circuits via AMP power supply bus lines (not shown) and reads the detection signals of the object out of the photosensor circuits via photosensor output bus lines (not shown).
 The control circuit 21 supplies the necessary power supply voltages and control signals to the scanning signal line drive circuit 18, the data signal line drive circuit / sensor readout circuit 19, the sensor scanning signal line drive circuit 20, and the sensing image processing unit 22.
 The sensing image processing unit 22 includes a left-image frame memory 23, which stores the image data of the object obtained from photosensor 8a having directivity toward the upper left, and a right-image frame memory 24, which stores the image data of the object obtained from photosensor 8b having directivity toward the upper right.
 It also includes a parallax and height detection circuit 25 (detection circuit) that performs a correlation operation between the left image obtained from the left-image frame memory 23 and the right image obtained from the right-image frame memory 24 and determines the parallax between the two images and the height (Z coordinate) of the object.
 It further includes a reference image generation circuit 26, which generates a reference image based on the left image obtained from the left-image frame memory 23, the right image obtained from the right-image frame memory 24, and the parallax obtained from the parallax and height detection circuit 25, and a coordinate extraction circuit 27, which extracts XY coordinates from the reference image.
 Each piece of information obtained by the sensing image processing unit 22 can be passed via the interface circuit 28 to a CPU or the like (not shown).
 FIG. 7 is a diagram for explaining the case where a correlation operation is performed using binarized left and right images to calculate the parallax.
 FIG. 7(a) shows the case where a finger is present as the object on the liquid crystal display panel 2.
 FIG. 7(b) shows the left image obtained by binarizing the image data of the object from photosensor 8a, which has directivity toward the upper left, so that values below a fixed threshold become black and values at or above the threshold become white (dot pattern); FIG. 7(c) shows the right image obtained by binarizing the image data of the object from photosensor 8b, which has directivity toward the upper right, in the same manner.
 As shown in FIGS. 7(b) and 7(c), the left image is shifted to the left of the actual finger position, and the right image is shifted to the right of it. A parallax therefore exists between the left image and the right image.
 As shown in FIG. 7(d), to determine this parallax, the normalized correlation method is used: either the left image or the right image is taken as the reference, and the reference image is shifted one pixel at a time while the left and right images are matched line by line. The correlation value R reaches its maximum at a shift of 7 pixels, so the parallax corresponds to 7 pixels, and the parallax distance can be calculated from the pixel pitch of the liquid crystal display panel 2.
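The line-by-line matching of binarized images can be sketched as follows. The scoring rule (counting agreeing pixels at each shift) and the threshold are illustrative assumptions standing in for the normalized correlation used above.

```python
# Binarize one scan line, then find the shift that best aligns the right
# line with the left line; that shift is the parallax in pixels.
def binarize(row, threshold=128):
    return [1 if v >= threshold else 0 for v in row]

def best_shift(left_row, right_row, max_shift=10):
    """Return the shift (in pixels) with the highest agreement score."""
    n = len(left_row)
    scores = []
    for s in range(max_shift + 1):
        # The object appears s pixels further right in the right image,
        # so compare left_row[i] against right_row[i + s].
        scores.append(sum(1 for i in range(n - s)
                          if left_row[i] == right_row[i + s]))
    return scores.index(max(scores))

left = binarize([0] * 5 + [255] * 5 + [0] * 20)    # object at columns 5-9
right = binarize([0] * 12 + [255] * 5 + [0] * 13)  # object at columns 12-16
parallax = best_shift(left, right)                  # 7-pixel parallax
```

Multiplying the best shift by the pixel pitch of the panel converts it into a parallax distance.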
 FIG. 8, on the other hand, is a diagram for explaining the case where 256-gray-level left and right images are matched line by line using the normalized correlation method to calculate the parallax.
 As illustrated, when the correlation operation is performed using 256-gray-level left and right images to calculate the parallax, the peak of the correlation value R is broad, so it is difficult to calculate the parallax with high accuracy.
 Moreover, when 256-gray-level images are used for the parallax calculation, they carry far more information than binarized images, and the computational load of the correlation operation grows correspondingly, so hardware and software capable of processing this enormous amount of computation at high speed become necessary.
 FIG. 9 is a diagram for explaining the case where a correlation operation is performed line by line using edge-detected left and right images to calculate the parallax.
 FIG. 9(a) shows the case where a finger is present as the object on the liquid crystal display panel 2.
 FIG. 9(b) is an image obtained by filtering the left image of the object from photosensor 8a, which has directivity toward the upper left, with a known filter such as a Sobel filter or a Roberts filter to detect the contour of the finger (the object), that is, its edges.
 FIG. 9(c), likewise, is an image (edge image) obtained by filtering the right image of the object from photosensor 8b, which has directivity toward the upper right, with a known filter such as a Sobel filter or a Roberts filter to detect the contour of the finger, that is, its edges.
 As shown in FIGS. 9(b) and 9(c), the left image is shifted to the left of the actual finger position, and the right image is shifted to the right of it. A parallax therefore exists between the left image and the right image.
 FIG. 10 is a diagram for explaining the case where the edge-detected left and right images shown in FIGS. 9(b) and 9(c) are matched using the normalized correlation method to determine the parallax.
 FIG. 10(a) shows the edge-detected left image, and FIG. 10(b) shows the edge-detected right image. FIG. 10(c) shows that, taking the right image as the reference and shifting it one pixel at a time while matching the left and right images line by line, the contours of the finger in the two images coincide exactly at a shift of 7 pixels.
 As shown in FIG. 10(d), when the correlation value R is computed while matching the edge-detected left and right images, no ghost occurs, in contrast to the case of the binarized images shown in FIG. 7, so the parallax can be calculated with comparatively high accuracy.
 FIG. 11 is a diagram for explaining the case where edge-direction images are obtained for the left and right images and the parallax is determined by a correlation operation that includes a term relating to the edge direction.
 FIG. 11(a) shows the case where a finger is present as the object on the liquid crystal display panel 2.
 FIG. 11(b) is an image obtained by filtering the left image of the object from photosensor 8a, which has directivity toward the upper left, with a known filter such as a Sobel filter or a Roberts filter to detect the contour of the finger (the object), that is, its edges, and also to detect the directions of those edges. To indicate the edge directions, FIG. 11(b) labels them with the numbers 1 to 8.
 FIG. 11(c), likewise, is an image obtained by filtering the right image of the object from photosensor 8b, which has directivity toward the upper right, with a known filter such as a Sobel filter or a Roberts filter to detect the contour of the finger, that is, its edges, and also to detect the directions of those edges. To indicate the edge directions, FIG. 11(c) labels them with the numbers 1 to 8.
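An edge image with 8-level direction codes, as in FIGS. 11(b) and 11(c), can be sketched with a Sobel filter. The particular quantization used here (45-degree bins mapped to codes 1 to 8) is an assumption; the description above only states that eight direction labels are used.

```python
# Sobel edge detection producing an edge intensity and an 8-level
# direction code per pixel (0 marks pixels with no edge response).
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_edge(img):
    """Return (magnitude, direction code 1..8 or 0) for each pixel."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    code = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            m = math.hypot(gx, gy)
            mag[y][x] = m
            if m > 0:
                ang = math.atan2(gy, gx) % (2 * math.pi)
                code[y][x] = int(ang // (math.pi / 4)) % 8 + 1  # 45-degree bins
    return mag, code
```

Running this on the left and right sensor images yields the intensity and direction inputs used by the correlation operation described next.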
 To determine the parallax by a correlation operation including a term relating to the edge direction, the following (Equation 1) was used as the correlation function.
 (Equation 1)   R(j) = Σi ( wdiri · ILi · IRi+j ) / ( √(Σi ILi²) · √(Σi IRi+j²) )
 In (Equation 1), ILi is the i-th edge intensity of the left image, and IRi+j is the i-th edge intensity of the right image shifted to the left by j pixels.
 Also, wdiri is the COS value (cosine) of the angle between the direction of the i-th edge of the left image and the direction of the i-th edge of the right image shifted to the left by j pixels. Thus, for example, wdiri is 1 when the two edge directions are the same, 0 when they differ by 90 degrees, and -1 when they differ by 180 degrees.
 As described above, weighting the correlation operation with wdiri, the term relating to the edge direction, yields a high correlation value R when the left edge of the left image coincides with the left edge of the right image and the right edge of the left image coincides with the right edge of the right image.
 Therefore, when the parallax is determined by a correlation operation using (Equation 1), it can be determined with high accuracy.
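One possible implementation of a correlation of this form, for a single image line, is sketched below. Since the equation image is not reproduced in this text, the exact normalization is an assumption (a cosine-weighted normalized cross-correlation); `int_l`/`dir_l` and `int_r`/`dir_r` stand for the edge intensities and 8-level direction codes of the left and right lines.

```python
# Edge-direction-weighted correlation for one image line.
import math

def weighted_correlation(int_l, dir_l, int_r, dir_r, j):
    """Correlate the left line with the right line shifted left by j pixels."""
    n = len(int_l)
    num = 0.0
    sum_l2 = sum_r2 = 0.0
    for i in range(n):
        ir = int_r[i + j] if 0 <= i + j < n else 0.0
        dr = dir_r[i + j] if 0 <= i + j < n else 0
        # wdir: cosine of the angle between the two 8-level edge directions
        # (codes assumed to be spaced 45 degrees apart).
        wdir = math.cos((dir_l[i] - dr) * math.pi / 4) if int_l[i] and ir else 0.0
        num += wdir * int_l[i] * ir
        sum_l2 += int_l[i] ** 2
        sum_r2 += ir ** 2
    denom = math.sqrt(sum_l2) * math.sqrt(sum_r2)
    return num / denom if denom else 0.0
```

With matching left/right edge pairs (for example, a left edge coded 3 and a right edge coded 7 on both lines), R approaches 1 at the correct shift, while a left edge landing on a right edge contributes a negative term, which is what sharpens the peak.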
 FIG. 12 is a diagram for explaining a method of determining the height h from the I layer 9i to the object (finger) 7, using the parallax d determined by a method such as those described above and the angle of incidence of the light reflected from the object (finger) 7 onto photosensors 8a and 8b, that is, the angle θ between the normal of the I layer 9i of photosensors 8a and 8b and the reflected light.
 The parallax d, the incident angle θ, and the height h shown in FIG. 12 satisfy Tan θ = (d/2)/h; rearranging this expression for the height h gives h = (d/2) × (1/Tan θ).
 Therefore, the height h from the I layer 9i (not shown) to the object (finger) 7 can be determined from the parallax d obtained for each line and the incident angle θ, which is a set value.
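As a worked example of h = (d/2) × (1/Tan θ): with a 7-pixel parallax, an assumed pixel pitch of 0.2 mm, and an assumed incident angle θ of 45 degrees, the height works out to 0.7 mm.

```python
# Height of the object above the sensor plane from parallax and incident angle.
import math

def object_height(parallax_pixels, pixel_pitch_mm, theta_deg):
    """h = (d / 2) / tan(theta), with d converted from pixels to mm."""
    d = parallax_pixels * pixel_pitch_mm  # parallax distance in mm
    return (d / 2.0) / math.tan(math.radians(theta_deg))

h = object_height(7, 0.2, 45.0)  # 0.7 mm for these assumed values
```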
 In the present embodiment, the incident angle θ is taken to be the angle between the perpendicular from the center of the I layer 9i of photosensors 8a and 8b and the straight line connecting the center of the I layer 9i to the center of the opening 14a of the light shielding member 14; however, the invention is not limited to this.
 The reliability information (degree of reliability) obtained from the parallax and height detection circuit 25 shown in FIG. 6 can be determined by the following (Equation 2); it is the square root of the maximum value Rjmax of the correlation value R, normalized by the edge intensity.
 (Equation 2)   reliability = √(Rjmax) / √(Σi ILi²)
 When the reliability is lower than a certain value, for example, the information on the height (Z coordinate) can be left undisplayed.
 The flow in which the parallax and height detection circuit 25 shown in FIG. 6 performs the correlation operation between the left image obtained from the left-image frame memory 23 and the right image obtained from the right-image frame memory 24, and the flow for determining the parallax between the two images, will now be described in detail with reference to FIGS. 13 to 17.
 Here, as shown in FIG. 11, the case will be described in which edge-direction images are obtained for the left and right images and the parallax is determined by a correlation operation including a term relating to the edge direction.
 FIG. 13 is a diagram for explaining the flow for determining the correlation function for each pixel, and FIG. 14 is a diagram showing R(j, k) in FIG. 13, where HSZ denotes the number of pixels in the horizontal direction and VSZ the number of pixels in the vertical direction. FIG. 15 is a lookup table for determining the value of Wdir, the term relating to the edge direction.
 As shown in FIG. 13, at the start of the flow, R(j, k) = 0, k = 0, j = 0, and i = 0 are set as initial values.
 In S101, the lookup table shown in FIG. 15 is consulted to determine Wdir, the COS value determined by the angle between the edge direction at a pixel of the left image and the edge direction at a pixel of the right image.
 As shown in FIG. 15, the edge directions are denoted by the numbers 1 to 8; when the edge direction at the left-image pixel and the edge direction at the right-image pixel are the same, the angle is 0 (COS 0), so Wdir is 1.
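The FIG. 15 lookup table itself is not reproduced in this text, but under the assumption that the eight direction codes are spaced 45 degrees apart, an equivalent table can be generated directly:

```python
# Generate the Wdir lookup table: the cosine of the angle between two
# 8-level edge-direction codes, assuming codes 1..8 are 45 degrees apart.
import math

WDIR = [[math.cos((a - b) * math.pi / 4) for b in range(1, 9)]
        for a in range(1, 9)]

def wdir(dir_left, dir_right):
    """dir_left, dir_right: edge-direction codes 1..8."""
    return WDIR[dir_left - 1][dir_right - 1]
```

Identical directions give 1, directions 90 degrees apart give 0, and opposite directions give -1, matching the behavior of wdiri described for (Equation 1).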
 Then, in S102, it is determined whether i + j is at least 0 and at most HSZ - 1; if so, Prij is set to PR(i + j, k), and otherwise Prij is set to 0.
 In S103, the correlation function between the top-left pixel (0, 0) of the left image and the top-left pixel (0, 0) of the right image is calculated first; then, until i is judged in S104 to be HSZ or more, the right image is shifted one column at a time within the same row while the correlation functions between the top-left pixel (0, 0) of the left image and all the pixels (0, 0) to (0, HSZ - 1) in the top row of the right image are calculated.
 When i is judged in S104 to be HSZ or more, the correlation function between the pixel (0, 1), obtained by shifting the left image one column within the same row, and all the pixels (0, 0) to (0, HSZ - 1) in the top row of the right image is calculated.
 Then, until j is judged in S105 to be HSZ or more, the left image is shifted one column at a time within the same row while the correlation functions between the left-image pixels (0, 2) to (0, HSZ - 1) and all the pixels (0, 0) to (0, HSZ - 1) in the top row of the right image are calculated.
 When j is judged in S105 to be HSZ or more, the correlation functions between the pixels (1, 0) to (1, HSZ - 1), obtained by shifting the left image down one row, and all the pixels (1, 0) to (1, HSZ - 1) in the second row of the right image are calculated.
 Then, until k is judged in S106 to be VSZ or more, the left image is shifted one row at a time while the correlation functions between all the left-image pixels and all the right-image pixels are calculated in the same manner.
 The flow ends when k is judged in S106 to be VSZ or more.
 In this way, in the present embodiment, the correlation function can be determined for each line of the left and right images.
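The S101 to S106 flow can be sketched as three nested loops that fill the table R(j, k). Here PL/DL and PR/DR hold the edge intensities and direction codes of the left and right images, the out-of-range check mirrors S102, and the accumulation form (a Wdir-weighted product of edge intensities) is an assumption based on the description of (Equation 1).

```python
# Fill the per-row, per-shift correlation table R(j, k) following the
# FIG. 13 flow. PL/PR: edge intensities; DL/DR: 8-level direction codes.
import math

def correlation_table(PL, DL, PR, DR):
    VSZ, HSZ = len(PL), len(PL[0])
    R = [[0.0] * VSZ for _ in range(HSZ)]             # R[j][k]
    for k in range(VSZ):                              # S106: next row
        for j in range(HSZ):                          # S105: next shift
            for i in range(HSZ):                      # S104: next column
                if 0 <= i + j <= HSZ - 1:             # S102 boundary check
                    prij, drij = PR[k][i + j], DR[k][i + j]
                else:                                 # out of range: Prij = 0
                    prij, drij = 0.0, 0
                wdir = math.cos((DL[k][i] - drij) * math.pi / 4)  # S101
                R[j][k] += wdir * PL[k][i] * prij     # S103: accumulate
    return R
```

Each row k of the table can then be scanned for its maximum, which is what the next flow (FIG. 16) does.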
 Next, the flow for obtaining the parallax by finding the maximum value of the correlation functions obtained above will be described with reference to FIGS. 16 and 17.
 FIG. 16 is a diagram illustrating the flow for obtaining the maximum value of the correlation function for each line, and FIG. 17 is a diagram showing R(j, k) and the parallax z(k) in FIG. 16.
 As shown in FIG. 16, at the start of the flow, z(k) = 0, k = 0, and i = 0 are set as initial values.
 Then, if it is determined in S201 that k(z) < R(j, k), the maximum value of the correlation function between the top-left pixel (0, 0) of the left image and all pixels (0, 0) to (0, HSZ-1) in the top row of the right image is obtained.
 Then, until j is determined in S202 to be HSZ or greater, the left image is shifted one column at a time within the same row, and the maximum value of the correlation function between each left-image pixel (0, 1) to (0, HSZ-1) and all pixels (0, 0) to (0, HSZ-1) in the top row of the right image is obtained.
 If it is determined in S201 that k(z) ≥ R(j, k), the left image is shifted by one column within the same row without obtaining the maximum value of the correlation function.
 If j is determined in S202 to be HSZ or greater, the maximum value of the correlation function between each pixel (1, 0) to (1, HSZ-1) of the left image shifted down by one row and all pixels (1, 0) to (1, HSZ-1) in the second row of the right image is obtained.
 Then, until k is determined in S203 to be VSZ or greater, the left image is shifted one row at a time, and the maximum values of the correlation functions between all left-image pixels and all right-image pixels are obtained in the same manner as above.
 By obtaining the maximum value of the correlation function for each line in this way, the parallax z(k) can be obtained.
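 The per-line maximum search can likewise be sketched. This is an assumed reading of the flowchart in which z(k) records the column shift j at which R(j, k) peaks for row k, that shift being the per-line parallax; R is indexed as produced by the preceding scan.

```python
def parallax_per_line(R, HSZ, VSZ):
    """For each row k, find the shift j that maximizes R[j][k] (S201-S203)."""
    z = [0] * VSZ
    for k in range(VSZ):
        best = R[0][k]
        for j in range(1, HSZ):
            if best < R[j][k]:  # S201: update only when a larger correlation is found
                best = R[j][k]
                z[k] = j
    return z
```

 For example, if row 0 has correlations 1.0, 5.0, 3.0 at shifts 0, 1, 2, the parallax for that row is 1.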
 [Embodiment 2]
 Next, a second embodiment of the present invention will be described with reference to FIG. 18. This embodiment differs from Embodiment 1 in that optical elements are provided that limit the angle of incidence on the optical sensors 8a and 8b to within a certain range; the other configurations are as described in Embodiment 1. For convenience of explanation, members having the same functions as those shown in the drawings of Embodiment 1 are given the same reference numerals, and their descriptions are omitted.
 FIG. 18 is a diagram showing the schematic configuration of an optical sensor provided with an optical element that limits the angle of incidence to within a certain range.
 As shown in FIG. 18(a), in the optical sensor 8c, which has a directivity toward the upper left in the figure, an optical element 29a that limits the angle of incidence on the light receiving I layer 9i to within a certain range is provided above the I layer 9i, at a position overlapping the opening 14a of the light shielding member 14 in plan view.
 On the other hand, as shown in FIG. 18(b), in the optical sensor 8d, which has a directivity toward the upper right in the figure, an optical element 29b that limits the angle of incidence on the light receiving I layer 9i to within a certain range is provided above the I layer 9i, at a position overlapping the opening 14a of the light shielding member 14 in plan view.
 This configuration reduces the blur of the light incident on the light receiving I layer 9i, and thus realizes a liquid crystal display device 1 that can estimate the distance of the object to be detected with higher accuracy.
 In the area sensor of the present invention, it is preferable that the first image of the object to be detected, detected by the plurality of first light receiving elements, and the second image of the object, detected by the plurality of second light receiving elements, are detected at the same timing, and that the area sensor includes a detection circuit that performs a correlation operation between the first image and the second image and detects the height of the object from the surface based on the calculated parallax between the first image and the second image.
 According to this configuration, the height of the object above the surface on which the first and second light receiving elements are formed can be detected based on the parallax between the images of the object detected by the first light receiving element and the second light receiving element, which have different directivities.
 Therefore, compared with a method that estimates the distance of the object using only its reflection intensity, an area sensor that can estimate the distance with higher accuracy can be realized.
 Furthermore, according to this configuration, the distance can be estimated even when the external light fluctuates momentarily.
 In the area sensor of the present invention, it is preferable that the first image and the second image are each binarized images and that the parallax is calculated by a correlation operation on the binarized images.
 If, for example, 256-gradation images were used to calculate the parallax, the amount of information would be larger than with binarized images, the correlation operation would require correspondingly more computation, and hardware or software capable of processing that enormous amount of computation at high speed would be needed.
 According to this configuration, on the other hand, binarized images are used to calculate the parallax, so the amount of computation in the correlation operation can be reduced.
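 As a hypothetical illustration of the saving, correlating binarized rows reduces to counting coinciding bits, with no per-pixel multiplications of gray levels. The threshold value below is an assumption, not a value taken from the text.

```python
def binarize(row, threshold=128):
    # Map each 256-level pixel to a single bit (threshold is illustrative).
    return [1 if p >= threshold else 0 for p in row]

def binary_correlation(a, b):
    # With 1-bit pixels, the sum of products degenerates to counting
    # positions where both bits are 1.
    return sum(x & y for x, y in zip(a, b))
```

 A 256-gradation correlation needs a multiply and an add per pixel; the binarized form needs only a bitwise AND and a count, which in hardware can be a population count over packed words.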
 In the area sensor of the present invention, it is preferable that the first image and the second image are edge images obtained by extracting the contour of the object and that the parallax is calculated by a correlation operation on the edge images.
 If, for example, 256-gradation images were used to calculate the parallax, the region where the patterns of the two images match best, that is, where the correlation value (similarity) is maximal, would be broad, so an accurate parallax could not always be calculated and the distance of the object could not always be estimated with high accuracy.
 If binarized images were used to calculate the parallax, ghost peaks would appear on both sides of the point where the patterns match best and the correlation value (similarity) is maximal, degrading the accuracy of the parallax calculation.
 According to this configuration, on the other hand, edge images extracting the contour of the object are used to calculate the parallax, so the point where the patterns match best and the correlation value (similarity) is maximal can be obtained relatively sharply; an accurate parallax can therefore be calculated, realizing an area sensor that can estimate the distance of the object with high accuracy.
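 A contour (edge) image of the kind described can be illustrated, for a single binarized row, by marking where adjacent pixels differ. This is a simplified sketch under that assumption, not the patent's own extraction method.

```python
def edge_row(row):
    # 1 where adjacent binary pixels differ (a contour crossing), else 0.
    return [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
```

 A solid object such as a finger then contributes only two narrow marks per row, one at each contour, so the correlation peak between left and right edge rows is correspondingly sharp.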
 In the area sensor of the present invention, it is preferable that the first image and the second image are edge images obtained by extracting the contour of the object and that the parallax is calculated by a correlation operation including a term relating to the direction of the edges.
 According to this configuration, the parallax is calculated by a correlation operation that includes an edge-direction term set so that, for example, the correlation value (similarity) becomes higher when the direction of the object's edge in the first image matches the direction of its edge in the second image.
 The point where the patterns of the two images match best and the correlation value (similarity) is maximal can therefore be obtained relatively sharply, so an accurate parallax can be calculated, realizing an area sensor that can estimate the distance of the object with high accuracy.
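 One hypothetical form of such a direction term multiplies the magnitude correlation of two edge pixels by an orientation-agreement factor. The text does not specify the weighting, so the clamped cosine of the angle between the two edge directions is assumed here purely for illustration.

```python
import math

def edge_correlation(mag1, dir1, mag2, dir2):
    """Correlate two edge pixels: magnitude product weighted by how well
    the edge directions (in radians) agree; disagreeing directions
    contribute nothing rather than a negative value."""
    return mag1 * mag2 * max(0.0, math.cos(dir1 - dir2))
```

 With this weighting, edges of equal strength but perpendicular or opposite directions score zero, so only contour segments running the same way in both images reinforce the correlation peak.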
 In the area sensor of the present invention, it is preferable to provide a reference image generation circuit that generates a reference image based on the first image, the second image, and the parallax between the first image and the second image, and a coordinate extraction circuit that extracts coordinates from the reference image.
 According to this configuration, not only the distance of the object, that is, its height above the surface on which the first and second light receiving elements are formed, but also its coordinates on that surface can be extracted from the reference image.
 Therefore, the area sensor can be used, for example, as an input means for three-dimensional coordinate (XYZ) information.
 In the area sensor of the present invention, it is preferable that an optical element that limits the angle of incidence on the first light receiving portion and the second light receiving portion to within a certain range is provided above the first and second light receiving portions, at a position overlapping the opening of the first light shielding member and the opening of the second light shielding member in plan view.
 According to this configuration, because such an optical element is provided at a position overlapping the openings of the first and second light shielding members in plan view, the blur of the light incident on the first and second light receiving portions can be reduced, realizing an area sensor that can estimate the distance of the object with high accuracy.
 It is preferable that the display device of the present invention includes an active matrix substrate having the surface on which the first light receiving element and the second light receiving element are formed, a counter substrate disposed so as to face that surface, and a liquid crystal layer sandwiched between the active matrix substrate and the counter substrate.
 According to this configuration, a liquid crystal display device with an integrated area sensor can be realized, so information can be input using the area sensor while viewing the image on the liquid crystal display device.
 The present invention is not limited to the embodiments described above; various modifications are possible within the scope of the claims, and embodiments obtained by appropriately combining technical means disclosed in different embodiments are also included in the technical scope of the present invention.
 The present invention is applicable to area sensors including light receiving elements (optical sensors) and to display devices including such area sensors.
  1        Area-sensor-integrated liquid crystal display device (display device)
  2        Liquid crystal display panel
  3        Backlight
  7        Finger (object to be detected)
  8a, 8c   Optical sensor (first light receiving element)
  8b, 8d   Optical sensor (second light receiving element)
  9i       Light receiving portion (first light receiving portion, second light receiving portion)
  14       Light shielding member
  14a      Opening
  25       Parallax and height detection circuit
  26       Reference image generation circuit
  27       Coordinate extraction circuit

Claims (9)

  1.  An area sensor having a surface on which a plurality of pairs of a first light receiving element and a second light receiving element that detect an amount of incident light are formed, wherein
     the first light receiving element includes a first light receiving portion and a first light shielding member that is formed above the first light receiving portion and blocks the light,
     the second light receiving element includes a second light receiving portion and a second light shielding member that is formed above the second light receiving portion and blocks the light,
     in the first light receiving element, an opening of the first light shielding member is shifted by a predetermined amount toward a first direction so as to partially overlap the first light receiving portion in plan view, so that a light component from the first direction can be received and a light component from a second direction, which is opposite to the direction of the reflected light produced when light from the first direction is specularly reflected by the surface, can be blocked, and
     in the second light receiving element, an opening of the second light shielding member is shifted by a predetermined amount toward the second direction so as to partially overlap the second light receiving portion in plan view, so that the light component from the second direction can be received and the light component from the first direction can be blocked.
  2.  The area sensor according to claim 1, wherein
     a first image of an object to be detected, detected by the plurality of first light receiving elements, and a second image of the object, detected by the plurality of second light receiving elements, are detected at the same timing, and
     the area sensor comprises a detection circuit that performs a correlation operation between the first image and the second image and detects the height of the object from the surface based on the calculated parallax between the first image and the second image.
  3.  The area sensor according to claim 2, wherein
     the first image and the second image are each binarized images, and
     the parallax is calculated by a correlation operation on the binarized images.
  4.  The area sensor according to claim 2, wherein
     the first image and the second image are edge images obtained by extracting a contour of the object, and
     the parallax is calculated by a correlation operation on the edge images.
  5.  The area sensor according to claim 2, wherein
     the first image and the second image are edge images obtained by extracting a contour of the object, and
     the parallax is calculated by a correlation operation including a term relating to the direction of the edges.
  6.  The area sensor according to any one of claims 2 to 5, comprising:
     a reference image generation circuit that generates a reference image based on the first image, the second image, and the parallax between the first image and the second image; and
     a coordinate extraction circuit that extracts coordinates from the reference image.
  7.  The area sensor according to any one of claims 1 to 6, wherein
     an optical element that limits the angle of incidence on the first light receiving portion and the second light receiving portion to within a certain range is provided above the first light receiving portion and the second light receiving portion,
     at a position overlapping the opening of the first light shielding member and the opening of the second light shielding member in plan view.
  8.  A display device comprising the area sensor according to any one of claims 1 to 7, wherein
     a plurality of pixels that display an image are formed in a matrix on the surface on which the first light receiving element and the second light receiving element are formed, and
     the first light receiving element and the second light receiving element are provided in accordance with the positions of the pixels.
  9.  The display device according to claim 8, comprising:
     an active matrix substrate having the surface on which the first light receiving element and the second light receiving element are formed;
     a counter substrate disposed so as to face the surface on which the first light receiving element and the second light receiving element are formed; and
     a liquid crystal layer sandwiched between the active matrix substrate and the counter substrate.
PCT/JP2011/067892 (filed 2011-08-04, priority 2010-08-05): Area sensor and display device, WO2012018090A1 (en)

Applications Claiming Priority (2)

Application Number  Priority Date
JP2010176359        2010-08-05
JP2010-176359       2010-08-05

Publications (1)

Publication Number: WO2012018090A1

Family ID: 45559586

Family Applications (1)

Application Number: PCT/JP2011/067892, filed 2011-08-04, priority 2010-08-05: Area sensor and display device

Country Status (1): WO

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007219486A (en) * 2006-01-20 2007-08-30 Denso Corp Display device
JP2009009098A (en) * 2007-05-25 2009-01-15 Seiko Epson Corp Display device and detecting method
JP2010061639A (en) * 2008-08-04 2010-03-18 Sony Corp Biometrics authentication system


Legal Events

Date Code Title Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 11814715; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122   EP: PCT application non-entry in European phase (Ref document number: 11814715; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: JP)