WO2019098265A1 - Color vision support device - Google Patents

Color vision support device Download PDF

Info

Publication number
WO2019098265A1
Authority
WO
WIPO (PCT)
Application number
PCT/JP2018/042261
Other languages
French (fr)
Japanese (ja)
Inventor
勇二 関口
木村 浩
洋一 藤岡
悟 須藤
貴仁 日高
岩井 順一
Original Assignee
株式会社テレパシージャパン
Application filed by 株式会社テレパシージャパン
Publication of WO2019098265A1

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/36Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source using liquid crystals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Definitions

  • the present invention relates to an eyepiece type color vision assistance device mounted on a head mounted display (HMD).
  • The color vision support device of the present invention is an optical device installed in front of the observer's eyes; it converts image data captured by the camera into hues that the observer can easily distinguish and projects them onto the pupil, thereby assisting the observer's color vision.
  • The human retina contains three types of cones (photoreceptor cells): L cones (red cones), M cones (green cones), and S cones (blue cones). Because of differences in their spectral sensitivities, L cones respond mainly to yellow-green through red light, M cones mainly to green through yellow-green light, and S cones mainly to violet through blue light. When light enters the pupil these cones react, the information is transmitted from the retina along the optic nerve to the visual center of the cerebral cortex, and color vision corresponding to each wavelength of visible light (400 to 800 nm) is said to arise.
  • A person with color weakness therefore has a slightly different color sensation from a person with typical color vision, for example because the response of the L, M, or S cones is weak or absent.
  • Color weakness is broadly divided into "type 1 color vision," in which the L-cone response is weak or absent, "type 2 color vision," in which the M-cone response is weak or absent, and "type 3 color vision," in which the S-cone response is weak or absent. People with these characteristics may find it difficult to distinguish two adjacent colors, such as red and green.
  • Patent Document 1 discloses a head-mounted display type visual aid device.
  • The visual aid device of Patent Document 1 stores in advance the hues at which the wearer's color weakness occurs; when such a hue is detected in the image data acquired by the camera, the device converts the image data into hues the observer can distinguish more easily, for example by outputting that hue with a larger light amount than the other hues.
  • The visual aid device of Patent Document 1 uses a general LED as its light source, and the light from that source is introduced into a transmissive liquid crystal display and converted into image light.
  • A general LED emits white light: for example, red, green, and blue light emitting diodes are driven with equal light amounts to generate white light (multichip), or a blue light emitting diode is coated with a yellow phosphor to generate white light (single chip).
  • With such a light source, increasing the red light amount above green and blue for an observer with weak red sensitivity requires decreasing the transmission amount (light quantity) of green and blue in the liquid crystal display so that red becomes relatively strong, which degrades the gradation of the other colors.
  • An object of the present invention is to provide a color vision support device that can convert an image captured by a camera into hues the observer can easily discriminate, without degrading the image quality of the captured image.
  • The inventors studied means for solving the problems of the prior art and found that, by constructing the light source from a plurality of light emitting elements with different emission colors and controlling the light quantity of each element independently, the light quantity of a hue the observer has difficulty distinguishing can be increased at the light source itself; the light from that source is then introduced into an image element such as a liquid crystal panel, so that image light can be projected onto the observer's pupil without degrading image quality.
  • Based on this finding, the inventors concluded that the problems of the prior art could be solved and completed the present invention.
  • the present invention has the following configuration.
  • A color vision support apparatus according to the present invention includes a camera 10 that photographs the outside world to generate image data, a display optical system 20 that emits image light corresponding to the image data, and an eyepiece optical system 30 that guides the image light to the observer's pupil.
  • the display optical system 20 includes a light source unit 22, an image device 25, and a control circuit 26.
  • the light source unit 22 includes a plurality of light emitting elements 21 having different emission wavelengths.
  • An example of the plurality of light emitting elements 21 is a set of light emitting diodes that emit the three primary colors red, green, and blue; lasers may also be used.
  • the light emission color of the light emitting element 21 is not limited to these, and for example, adjustment such as changing red to orange may be performed according to the observer's color vision.
  • the video element 25 converts the irradiation light from the light source unit 22 into video light.
  • the control circuit 26 controls the light amounts of the plurality of light emitting elements 21 independently.
  • the display optical system 20 can output image light obtained by converting the image data into a hue that can be easily discriminated by the observer.
  • Because the light quantity of each light emitting element 21 in the light source unit 22 can be adjusted independently, increasing the red light amount relative to green and blue for an observer with weak red sensitivity only requires increasing the red light amount in the light source unit 22.
  • In the prior art it was necessary to reduce the transmission amount (light quantity) of green and blue in the image element 25, such as a liquid crystal panel, to make red relatively stronger; in the present invention the red light amount can be intensified without reducing the green and blue light amounts. According to the present invention, image light can therefore be projected onto the observer's pupil without degrading image quality. A minimal sketch of this per-element control follows this item.
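The following is a minimal sketch of the idea that the control circuit drives each emitter's light quantity independently, so boosting one primary never requires attenuating the others in the panel. The class and method names are illustrative assumptions, not taken from the patent.

```python
class LightSourceUnit:
    """Hypothetical model of light source unit 22 with independently driven emitters."""

    def __init__(self):
        # Relative maximum luminance per emitter; 1.0 is the default white-balance level.
        self.max_luminance = {"red": 1.0, "green": 1.0, "blue": 1.0}

    def boost(self, color, factor):
        """Raise one emitter's maximum luminance without touching the others."""
        self.max_luminance[color] *= factor


source = LightSourceUnit()
source.boost("red", 4.0)      # e.g. for an observer with weak red sensitivity
print(source.max_luminance)   # {'red': 4.0, 'green': 1.0, 'blue': 1.0}
```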
  • In the present invention, the control circuit 26 preferably performs time-division driving in which the plurality of light emitting elements 21 emit light sequentially. In time-division driving, the other light emitting elements 21 are turned off while one light emitting element 21 is lit, and this is repeated for each light emitting element 21 at high speed; the observer perceives the afterimages of the time-divided image light, and the individual image lights are combined in the observer's brain.
  • The light source of a conventional color vision support device typically keeps all light emitting elements (light emitting diodes) lit to emit white light and adds color by passing that white light through a liquid crystal panel provided with color filters.
  • By contrast, when each light emitting element 21 is driven in a time-division manner as in the present invention, the emission color of each element is in effect projected separately onto the observer's eye (specifically, onto the L, M, and S cones of the retina). Because the hues of the image light are not mixed at the stage of projection onto the eye, the image light is easier for a person with color weakness to perceive.
  • The light source unit 22 may include four or more light emitting elements 21 with different emission wavelengths.
  • In that case, the control circuit 26 preferably selects and drives at least three of the light emitting elements 21.
  • For example, in addition to the general three-primary-color (red, green, blue) light emitting elements 21 for observers with typical color vision, light emitting elements 21 for observers with color weakness can be provided and switched in as needed, so that the device can be used by a wide range of observers.
  • The light source unit 22 may be configured so that a first light source unit 41 and a second light source unit 42 can be physically exchanged.
  • The first light source unit 41 includes a plurality of light emitting elements 21 with different emission wavelengths.
  • The second light source unit 42 includes at least one light emitting element 21 whose emission wavelength differs from that of at least one of the light emitting elements 21 included in the first light source unit 41.
  • Preferably, the eyepiece optical system 30 has a prism 32 that reflects the image light and guides it to the observer's pupil, and the back side of the reflecting surface 32b of the prism 32 is light-shielded. If the prism 32 were translucent, the observer would see the background behind the prism 32, so even if image light with an adjusted hue were provided, the image light and the background would appear to overlap and the color vision assistance effect would be diminished. By putting the back side of the reflecting surface 32b of the prism 32 into a substantially light-shielding state, the observer can effectively view the hue-adjusted image light.
  • the camera 10 preferably includes an autofocus sensor 11 for detecting the distance between the photographing lens and the subject.
  • The display optical system 20 preferably has a plurality of modes for converting the hue of the image data and determines the mode according to the distance detected by the autofocus sensor 11.
  • The hue conversion modes may, for example, differ in how strongly the light quantity of a specific light emitting element 21 is increased, or may include a mode that changes only the RGB intensity ratio (see FIG. 7), a mode that changes the positions of the RGB primary color chromaticity points (see FIG. 8), and a mode that changes the primary color chromaticity points and then readjusts the intensity ratio to restore the white balance (see FIG. 10).
  • The camera 10 also preferably has an autofocus sensor 11 that detects the distance between the lens and the subject. The control circuit 26 may generate display image data by cutting out a portion of the image data generated by the camera 10 based on the distance detected by the autofocus sensor 11, and may send signals for displaying that display image data to the light source unit 22 and the image element 25.
  • According to the color vision support apparatus of the present invention, a photographed image can be converted into hues the observer can easily discriminate without deteriorating the image quality of the image photographed by the camera.
  • FIG. 1 shows an example of the appearance of a color vision assistance device (head mounted display type) according to the present invention.
  • FIG. 2 is a block diagram showing a functional configuration of the color vision support apparatus.
  • FIG. 3 is a view for explaining the effect of the color vision support apparatus according to the present invention, in which (a) shows a normal display state using white light and (b) shows a state in which a specific primary color is relatively strengthened within white light (conventional example).
  • FIG. 4 is a diagram for explaining the effect of the color vision aiding apparatus according to the present invention, and (c) shows a state (in the present invention) in which the light quantity of a specific primary color is increased in the light emitting portion.
  • FIG. 5 shows the wavelength sensitivity characteristics of the cone.
  • FIG. 6 shows the confusion (mixed color) lines for type 1, type 2, and type 3 color weakness.
  • FIG. 7 shows the color reproduction range and the white chromaticity point when the primary color point intensity ratio is changed.
  • FIG. 8 shows the color reproduction range and the white chromaticity point when the primary color chromaticity point is changed.
  • FIG. 9 shows the effect when the primary color chromaticity point is changed, using a mixed color line of type 1 weak.
  • FIG. 10 shows the color reproduction range and the white chromaticity point when W/B adjustment is performed after changing the primary color chromaticity points.
  • FIG. 11 shows the change of the spectrum for changing the primary color chromaticity point in the light source.
  • FIG. 12 shows an example of time-division driving of light emitting elements.
  • FIG. 13 shows the configuration of a display optical system provided with a plurality of light sources.
  • FIG. 14 shows the configuration of a display optical system in which the light source unit can be replaced.
  • FIG. 15 shows an example of a method of generating display image data using autofocus.
  • FIG. 16 shows an example of a method of generating display image data using autofocus.
  • FIG. 17 schematically shows an example in which a display image is shown superimposed on a landscape.
  • FIG. 1 is an external perspective view showing an example of a head mounted display (HMD) type color vision assistance apparatus 100.
  • the HMD type color vision assistance apparatus 100 is mounted on the head of the observer and projects an image on its pupil.
  • The color vision support apparatus 100 is disposed in front of only one of the observer's eyes.
  • the color vision support apparatus 100 basically includes a camera 10, a display optical system 20, and an eyepiece optical system 30.
  • the camera 10 captures an external world in the same direction as the line of sight of the observer and generates the image data.
  • The display optical system 20 subjects the image data captured by the camera 10 to predetermined color tone correction for color vision support, generates image light of the corrected image data, and emits the light toward the eyepiece optical system 30.
  • the eyepiece optical system 30 reflects or refracts the image light emitted from the display optical system 20 and guides it to the pupil of the observer.
  • The image light emitted from the display optical system 20 in the lateral direction (X-axis direction) is reflected approximately at a right angle by the eyepiece optical system 30, travels in the depth direction (Z-axis direction), and enters the pupil of the observer.
  • The gap between the display optical system 20 and the eyepiece optical system 30 is open space, and the image light travels through the air.
  • the eyepiece optical system 30 has a configuration in which a prism 32 is disposed on a transparent substrate 31.
  • the transparent substrate 31 is a plate-like member facing the observer's pupil, and at least a part thereof is made of a transparent member or a mesh member which transmits light.
  • the transparent substrate 31 is a transparent substrate formed entirely of a transparent member such as plastic or glass.
  • a prism 32 is provided on the outer surface of the transparent substrate 31.
  • the transparent substrate 31 is fixed to the housing of the HMD that accommodates the display optical system 20.
  • the transparent substrate 31 functions as a support for positioning the prism 32 on the optical axis of the image light emitted from the display optical system 20.
  • Because the transparent substrate 31 is transparent, a wide field of view can be ensured for the observer viewing the image.
  • the observer can visually recognize the background on the back side of the transparent substrate 31 together with the image.
  • The reflective surface 32b of the prism 32 is coated with a light-shielding paint or the like so that the viewer cannot see the background behind the prism 32. For the observer, therefore, the background overlapping the prism 32 is replaced with the image generated in the display optical system 20.
  • The HMD type color vision assistance apparatus 100 may be equipped with an optical sensor, a touch panel, and the like in addition to the above-mentioned structure.
  • the housing of the HMD contains a CPU, memory, various communication devices, an acceleration sensor, a gyro sensor, and a battery.
  • A known structure can be employed for the remaining parts of the HMD.
  • FIG. 2 shows an optical system provided with a transmission type image device (liquid crystal panel) 25.
  • FIG. 2 shows the structure of the color vision assistance device 100 viewed from above (the XZ plane).
  • The color vision support apparatus 100 includes the camera 10, the display optical system 20 that generates and emits image light (one-dot chain line), and the eyepiece optical system 30 that guides the image light to the observer's pupil. Image data acquired by the camera 10 is corrected by the display optical system 20 and emitted as image light, which propagates through the air, enters the eyepiece optical system 30, and is finally projected onto the pupil E of the observer.
  • the camera 10 is an imaging device for acquiring image data of a still image or a moving image.
  • The camera 10 includes, for example, a photographing lens, a mechanical shutter, a shutter driver, a photoelectric conversion element such as a CCD image sensor, a digital signal processor (DSP) that reads the charge amounts from the photoelectric conversion element and generates image data, and an IC memory.
  • The camera 10 preferably also includes an autofocus sensor (AF sensor) for measuring the distance from the photographing lens to the subject, and a mechanism for adjusting the focal length of the photographing lens according to the distance detected by the AF sensor.
  • the type of AF sensor is not particularly limited, but a known passive method such as a phase difference sensor or a contrast sensor may be used.
  • As the AF sensor, it is also possible to use an active type sensor that directs infrared rays or ultrasonic waves at the subject and receives the reflected light or waves.
  • the image data acquired by the camera 10 is supplied to the control circuit 26, and after predetermined image processing is performed, image light corresponding to the image data is generated and output by the display optical system 20.
  • The display optical system 20 includes a light source unit 22 including a plurality of light emitting elements 21, an equalizing element (integrator) 23, a condenser lens 24, an image element 25, and a control circuit 26.
  • the light source unit 22 is configured to include a plurality of light emitting elements 21 having different emission wavelengths (that is, emission colors).
  • An example of the light emitting element 21 is an LED (light emitting diode).
  • Light emitting elements 21 capable of emitting, for example, the three primary colors red (R), green (G), and blue (B) are provided.
  • The emission colors of the light emitting elements 21 are not limited to the above three primaries and can be replaced with emission colors that the observer can more easily distinguish given that observer's color weakness. For example, an orange element may be used in place of the red light emitting element 21, a blue-green element in place of the green light emitting element 21, or a differently tinted blue element in place of the blue light emitting element 21.
  • In the light source unit 22 of the present invention, the light quantity of each light emitting element 21 can be adjusted independently. In a normal LED light source, white light is generated by keeping the light quantities of the three primary-color light emitting elements 21 uniform, but the light source unit 22 of the present invention is not limited to generating white light.
  • For example, the light quantity of the red light emitting element 21 may be increased to generate reddish irradiation light, or only the light quantity of the blue light emitting element 21 or the green light emitting element 21 may be increased.
  • The light quantity of each light emitting element 21 in the light source unit 22 is adjusted independently by the control circuit 26.
  • the light source unit 22 needs power to emit light, so the light source unit 22 is connected to a battery (not shown) via a power supply line.
  • the light emitted from each light emitting element 21 of the light source unit 22 is input to the equalizing element 23.
  • the equalizing element 23 adjusts the deviation of the optical path of the irradiation light output from each light emitting element 21 and makes the irradiation surface to the video element 25 uniform via the condenser lens 24 in the latter stage.
  • the condenser lens 24 is formed of an aspheric lens or the like, and converts the irradiation light from each light emitting element 21 into substantially parallel light and emits the light toward the image element 25 so as to pass through the same optical path.
  • the video element 25 converts incident light into video light by modulating it according to display image data.
  • a transmissive liquid crystal panel is used as the video element 25.
  • Each pixel of the liquid crystal panel has, for example, a structure in which a liquid crystal layer is interposed between two glass substrates with transparent electrodes, and a polarizing plate is attached to each of the glass substrates.
  • When no voltage is applied, the liquid crystal molecules of the liquid crystal layer are aligned parallel to the glass surface, but when a voltage is applied between the transparent electrodes, their orientation changes toward the direction perpendicular to the glass surface.
  • the combination of the movement of liquid crystal molecules and the polarization directions of the two polarizing plates adjusts the amount of transmission of light transmitted through each pixel of the liquid crystal panel.
  • the liquid crystal panel has a structure in which such pixels are arranged in a matrix.
  • Because the irradiation light input from the light emitting elements 21 to the image element 25 already contains a color component, it is not necessary to provide a color filter or the like on the image element 25.
  • Control of a voltage or the like applied to the video element 25 is performed by the control circuit 26.
  • the video element 25 is connected to the battery via a power supply line.
  • a reflective liquid crystal panel can also be employed as the video element 25.
  • When a reflective liquid crystal panel is used as the image element 25, the configuration of the "linear arrangement type eyepiece image display device" disclosed in Japanese Patent No. 6081508 may be referred to.
  • the control circuit 26 subjects the image data generated by the camera 10 to image processing such as correction processing of hue for people with color weakness and generates image data for display (display image data). Then, the control circuit 26 controls each of the light emitting elements 21 and the video element 25 constituting the light source unit 22 based on the display image data to generate and output video light.
  • The control circuit 26 can control the light quantities of the plurality of light emitting elements 21 constituting the light source unit 22 independently.
  • The maximum luminance of a certain light emitting element is preferably set to twice or more the maximum luminance of another light emitting element, and may be set to three times or more or four times or more.
  • FIGS. 3 and 4 show an example in which the light amounts of the respective primary colors red, green, and blue are linearly controlled at 256 gradations by 8-bit signals.
  • the pattern of (a) shown in FIG. 3 shows a general control method of the light source.
  • the maximum luminance of each primary color is set to be equal, and the light quantity of each primary color is controlled to 256 gradations between 0 and 100% with an 8-bit signal.
  • FIG. 3 (b) shows an example in the case where the amount of red light is made four times that of green and blue.
  • In that general control method, the maximum of the red light amount is kept at 100%, so the only way to relatively increase the red light amount is to reduce the maximum light amounts of the other colors, green and blue, to 25%.
  • the pattern of (c) shown in FIG. 4 shows a control method of independently adjusting the light quantity (maximum luminance) of each primary color in the light source.
  • FIG. 4C shows an example in which the maximum luminance of the red light amount is four times the green and blue light amounts while maintaining the green and blue light amounts at a maximum of 100%.
  • Unlike the pattern of FIG. 3(b), the pattern of FIG. 4(c) can avoid deterioration in image quality.
  • In this way, because the light quantities of the plurality of light emitting elements 21 constituting the light source unit 22 can be controlled independently, the light amount of a specific primary color can be increased while avoiding deterioration of the image quality. A small numeric illustration of the difference follows this item.
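The following is a hedged numeric sketch of the contrast between FIG. 3(b) and FIG. 4(c): with an 8-bit signal each primary has 256 gradations, and the two approaches differ in how much of that gradation range the non-boosted primaries keep. The variable names and the 25 % cap are illustrative, based only on the example above.

```python
BITS = 8
STEPS = 2 ** BITS                  # 256 gradation steps per primary

# FIG. 3(b) style (prior art): white source, so green/blue transmission is capped at 25 %
panel_cap = 0.25
steps_green_blue_prior = int(STEPS * panel_cap)   # ~64 usable steps, i.e. coarser gradation

# FIG. 4(c) style (present invention): the red emitter's maximum luminance is raised to 4x
# at the source, so green and blue keep their full range
steps_green_blue_invention = STEPS                # all 256 steps preserved

print(steps_green_blue_prior, steps_green_blue_invention)   # 64 256
```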
  • the eyepiece optical system 30 is an optical element which reflects or refracts the image light emitted from the image element 25 and guides it to the pupil E of the observer.
  • the eyepiece optical system 30 has a prism 32.
  • the eyepiece optical system 30 may also have an eyepiece lens 33.
  • the prism 32 is a member for guiding the image light from the display optical system 20 inside.
  • the prism 32 has, for example, a shape having an incident surface 32a of image light, a reflection surface 32b, and an emission surface 32c.
  • the incident surface 32a of the prism 32 is provided to intersect the optical axis of the image light traveling in the lateral direction (X-axis direction) substantially perpendicularly.
  • the exit surface 32c is provided to face the pupil of the observer.
  • The reflecting surface 32b has, for example, a rectangular shape and functions as means for bending the optical path of the image light at a right angle. Specifically, the reflecting surface 32b reflects the image light that has entered the prism through the incident surface 32a toward the Z direction.
  • the prism 32 may be configured by a single prism, or may be configured by combining a plurality of prisms.
  • the eyepiece 33 is attached to, for example, the incident surface 32 a of the prism 32.
  • the eyepiece lens 33 has positive power and condenses the image light incident on the prism 32 on the pupil.
  • the eyepiece 33 may be joined to the incident surface 32 a of the prism 32 or may be integrated with the prism 32.
  • It is preferable that the rear surface of the reflecting surface 32b of the prism 32 be subjected to light-shielding processing that absorbs or totally reflects light.
  • For example, black paint or a mirror finish may be applied to the back surface of the reflecting surface 32b. This prevents external light from entering the prism 32 through the back surface of the reflecting surface 32b and prevents the viewer from seeing a scene overlapping the reflecting surface 32b.
  • the display optical system 20 generates image light. That is, the irradiation light from the light source unit 22 is homogenized by the homogenizing element 23 and then condensed by the condenser lens 24 and enters the video element 25. The irradiation light is modulated by the video element 25 to become video light.
  • the image light emitted from the display optical system 20 enters the eyepiece optical system 30.
  • the image light enters the inside of the prism 32 through the eyepiece lens 33. Thereafter, the image light travels inside the prism 32 along the lateral direction (X-axis direction), the light path is bent at the reflecting surface 32b, and travels while changing the direction in the depth direction (Z-axis direction). Thereby, the image light is emitted through the emission surface 32 c of the prism 32 and guided to the pupil of the observer.
  • the observer can observe the enlarged virtual image of the image generated by the display optical system 20 at the position of the pupil E.
  • FIG. 5 shows the sensitivity characteristics of the S-cone, M-cone, and L-cone, and the respective maximum values are normalized to one.
  • The primary color blue is perceived by the S cone, but because the wavelength ranges sensed by the M cone and the L cone overlap, red and green are perceived from the relative amounts of light received by the M cone and the L cone.
  • a color is usually defined by a number called "two-dimensional chromaticity" using x, y coordinates, but people with color blindness can only perceive one-dimensional change in color.
  • FIG. 6 is a diagram showing the colors perceived by a person with color weakness: colors lying on the same oblique line in the chromaticity diagram all appear to be the same color to that person. The figure shows the case where the sensitivity of the L cone is low (type 1 weakness), the case where the sensitivity of the M cone is low (type 2 weakness), and the case where the sensitivity of the S cone is low (type 3 weakness).
  • FIG. 7 shows the primary color points of the display, the color reproduction range, and the white (highest luminance) chromaticity point in the xy chromaticity diagram.
  • the white chromaticity points in the case where the intensity ratio of the light quantity of each primary color (red, blue, green) is made equal in a general display are indicated by x marks.
  • FIG. 7 shows an example in which the control circuit 26 changes the intensity ratio of the primary colors of the light emitting elements 21; specifically, the light quantity of red is set to four times that of green and blue.
  • In that case, the white chromaticity point moves to the position indicated in FIG. 7.
  • In the present invention, the control circuit 26 can also control the light emitting elements 21 so as to change the position of the primary color point itself.
  • Changing the position of the primary color itself is effective when the drop in the observer's cone sensitivity is large and merely increasing the intensity (light quantity) of a specific primary color is insufficient.
  • For example, as a support measure for observers with weak red sensitivity (mainly type 1 weakness), colors normally displayed in red are converted to orange and displayed.
  • Even an observer who cannot recognize the red itself can then recognize that a color is present there if it is shown in orange.
  • In this example, the green primary color is also converted to light blue.
  • When the primary color chromaticity points are changed in this way, the white (maximum luminance) chromaticity point also shifts, but it can be returned to its original position by adjusting the intensity ratio of the light emitting elements 21.
  • That is, the control circuit 26 can restore the white balance by adjusting the intensity ratio of each light emitting element 21 after changing the primary color chromaticity points. A small numerical sketch of this white-point calculation follows this item.
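The following is a rough sketch, under assumed sRGB-like chromaticity coordinates, of how the mixed white chromaticity point can be computed from the primaries' chromaticities and their relative luminances; the same routine can be used to search for an intensity ratio that pulls the white point back after the primaries are moved. None of the numbers come from the patent.

```python
def xyY_to_XYZ(x, y, Y):
    # Standard conversion from chromaticity (x, y) plus luminance Y to tristimulus XYZ.
    return (x * Y / y, Y, (1.0 - x - y) * Y / y)

def white_point(primaries, luminances):
    """primaries: {name: (x, y)}, luminances: {name: relative Y}; returns the mixed (x, y)."""
    X = Y = Z = 0.0
    for name, (px, py) in primaries.items():
        Xi, Yi, Zi = xyY_to_XYZ(px, py, luminances[name])
        X, Y, Z = X + Xi, Y + Yi, Z + Zi
    s = X + Y + Z
    return (X / s, Y / s)

# Assumed sRGB-like primaries, purely for illustration.
primaries = {"R": (0.64, 0.33), "G": (0.30, 0.60), "B": (0.15, 0.06)}
print(white_point(primaries, {"R": 1.0, "G": 1.0, "B": 1.0}))  # baseline white point
print(white_point(primaries, {"R": 4.0, "G": 1.0, "B": 1.0}))  # red boosted: white shifts toward red
```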
  • FIG. 11 shows, as an example, the spectrum of a normal light source and the spectrum of the light source when red and green primary color points are moved.
  • Although the primary color points can also be moved by color conversion of the image signal, it is preferable to use a light source whose dominant wavelength lies at the desired primary color point.
  • For example, when the sensitivity of the L cone is low, using an orange-wavelength light source is more efficient than producing the shifted red primary (e.g., orange) by mixing red and green wavelengths, and with such a light source the color conversion processing of the signal becomes unnecessary.
  • Preferably, the control circuit 26 drives each of the light emitting elements 21 constituting the light source unit 22 in a time-division manner.
  • In time-division driving, the other light emitting elements 21 are turned off at the timing when one light emitting element 21 is lit, and this is repeated for each light emitting element 21 at high speed; the image lights are combined in the observer's brain through perception of the afterimages.
  • FIG. 12 conceptually shows an example of time-division driving of three types of light emitting elements emitting red, green and blue light.
  • the red light emitting element, the green light emitting element, and the blue light emitting element emit light in this order, and another light emitting element is turned off at the timing when a certain light emitting element is lit.
  • the primary color light emitted from each light emitting element is input to a video element (liquid crystal panel) and converted into video light there.
  • As a result, the red image light, the green image light, and the blue image light are sequentially projected onto the observer's pupil, and these three types of image light are combined in the observer's brain to form one image.
  • Because each primary can be controlled in 256 gradations, fine hues (specifically, 256 cubed, i.e., 16,777,216 colors) can be expressed.
  • When each light emitting element 21 is driven in a time-division manner, the emission color of each element is in effect projected separately onto the observer's eye (specifically, onto the L, M, and S cones of the retina). In time-division driving, therefore, the hues of the image light are not mixed at the stage of projection onto the eye, which has the advantage that the image light is easier for a person with color weakness to perceive. A sketch of such a drive loop follows this item.
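The following is a minimal sketch of the time-division drive loop described above: the red, green, and blue emitters are lit one at a time while the image element shows the corresponding single-color sub-image. The emitter/panel interfaces and the sub-frame period are illustrative assumptions, not details from the patent.

```python
import time

SUBFRAME_S = 1.0 / 180.0   # e.g. 60 frames per second x 3 color sub-frames (assumed)

def drive_one_frame(emitters, panel, subimages):
    """emitters: {color: set_level(float)}; panel.show(image) loads the modulation pattern."""
    for color in ("red", "green", "blue"):
        for set_level in emitters.values():
            set_level(0.0)               # all other emitters off at this timing
        panel.show(subimages[color])     # modulation pattern for this primary only
        emitters[color](1.0)             # light only the current emitter
        time.sleep(SUBFRAME_S)           # hold one sub-frame; afterimages fuse in the brain
```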
  • FIG. 13 shows an example in which the light emitting elements 21 to be driven are switched by a switch, and FIG. 14 shows an example in which the light source unit incorporated in the display optical system is physically replaced.
  • In the configuration of FIG. 13, the light source unit 22 includes a first light source 22a including three types of light emitting elements 21 and a second light source 22b including three types of light emitting elements 21.
  • At least one light emitting element 21 of the second light source 22b differs in emission color from the light emitting elements 21 of the first light source 22a, and the emission colors of all the light emitting elements 21 may differ.
  • For example, the first light source 22a is composed of light emitting elements 21 of the three primary colors red, green, and blue for displaying an ordinary image, while the second light source 22b is composed of light emitting elements 21 of colors different from those three primaries for observers with color weakness (for example, a combination of orange, light blue, and blue, or a combination of red, yellow, and blue-green).
  • the first light source 22a and the second light source 22b are connected to the control circuit 26, respectively.
  • the control circuit 26 determines which of the first light source 22 a and the second light source 22 b is to be driven according to the mode selected by the observer. As described above, which of the first light source 22a and the second light source 22b is to be driven can be switched by the switch.
  • In the configuration of FIG. 14, a plurality of physically separate light source units 41, 42, and 43 are prepared in advance, and the display optical system 20 is configured so that an arbitrarily selected light source unit can be mounted. Specifically, the display optical system 20 and each of the light source units 41, 42, and 43 are provided with connection terminals; by connecting the two terminals, the display optical system 20 and the light source unit 41, 42, or 43 are electrically connected.
  • the first light source unit 41 includes three light emitting elements 21. Each light emitting element 21 emits light in three primary colors of red, green and blue for displaying an ordinary image.
  • The second light source unit 42 includes a first light source 42a and a second light source 42b.
  • For example, the first light source 42a is composed of light emitting elements 21 of the three primary colors red, green, and blue for displaying an ordinary image, while the second light source 42b is composed of light emitting elements 21 for observers with color weakness (for example, a combination of orange, light blue, and blue, or a combination of red, yellow, and blue-green).
  • When the second light source unit 42 is attached to the display optical system 20, the control circuit 26 can select which of the first light source 42a and the second light source 42b is to be driven.
  • the third light source unit 43 also includes the first light source 43a and the second light source 43b.
  • the first light source 43a is composed of light emitting elements 21 of three primary colors of red, green and blue for displaying a normal image.
  • The second light source 43b is intended to be driven at the same time as the first light source 43a and is composed, for observers with color weakness, of light emitting elements 21 of the same emission color as one of the three primary colors of the first light source 43a.
  • For example, the three light emitting elements 21 constituting the second light source 43b all emit the same color, selected from red, green, or blue. When the third light source unit 43 is attached to the display optical system 20, the control circuit 26 selects whether to drive only the first light source 43a or to drive both the first light source 43a and the second light source 43b.
  • For example, when the second light source 43b is composed of three light emitting elements 21 emitting red light, driving both the first light source 43a and the second light source 43b makes it possible to set the red light amount to four times that obtained when only the first light source 43a is driven, as illustrated in the sketch below.
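The following toy sketch only restates that arithmetic: one red emitter in the first light source 43a plus three red emitters in the second light source 43b gives four times the red light amount of 43a alone. The dictionary-based interface is an assumption for illustration.

```python
# Emitter counts per color in each light source of the (assumed) third light source unit 43.
FIRST_43A = {"red": 1, "green": 1, "blue": 1}
SECOND_43B = {"red": 3, "green": 0, "blue": 0}

def relative_light_amount(drive_second: bool):
    """Relative light amount per color when 43a alone or 43a plus 43b is driven."""
    return {color: FIRST_43A[color] + (SECOND_43B[color] if drive_second else 0)
            for color in FIRST_43A}

print(relative_light_amount(False))  # {'red': 1, 'green': 1, 'blue': 1}
print(relative_light_amount(True))   # {'red': 4, 'green': 1, 'blue': 1}, i.e. red is 4x
```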
  • Next, processing for extracting, from the image data acquired by the camera 10, display image data corresponding to the observer's visual angle in consideration of the parallax between the observer's pupil and the camera 10 will be explained.
  • As shown in FIG. 15, consider for example the case where the camera 10 is disposed near the observer's temple and the image light is projected onto one of the observer's eyes.
  • As in the configuration shown in FIG. 1 and the like, the case where the image light is projected onto the pupil by the prism 32 and the back surface of the prism 32 is light-shielded will be described as an example.
  • In this case, a gap exists between the observer's eye and the camera 10; as a result, a parallax P arises between the line of sight of that eye and the line of sight of the camera 10.
  • If the image data captured by the camera 10 were projected as-is onto the observer's eye, the background actually seen by the observer and the image generated by the device could not be overlapped seamlessly, owing to the parallax P. Therefore, it is effective to generate display image data by extracting (cutting out) from the image data captured by the camera 10, in consideration of the parallax P, the range in which the observer's gaze range overlaps the object plane being gazed at.
  • the range extracted as the display image fluctuates depending on the distance from the photographing lens of the camera 10 to the object plane at which the observer is gazing.
  • The distance from the photographing lens of the camera 10 to the object plane at which the observer is gazing can be measured by the autofocus sensor provided in the camera 10. Therefore, the control circuit 26 preferably determines the range to be extracted as the display image from the image data based on the parallax P and the distance to the object plane measured by the autofocus sensor.
  • the parallax P can be arbitrarily set in consideration of the entire configuration of the color vision assistance device 100, and may be finely adjusted by the operation of the observer.
  • The distance from the photographing lens of the camera 10 to the object plane at which the observer is gazing may be input to the control circuit 26 as the measurement value of the autofocus sensor provided in the camera 10, as described above.
  • The control circuit 26 extracts, from the image data acquired by the camera 10, the display image data to be projected onto the observer's pupil based on the parallax P and the measurement value of the autofocus sensor, and then controls the light source unit 22 and the image element 25 based on that display image data to generate image light. A rough geometric sketch of this extraction follows this item.
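The following is a hedged geometric sketch of that cut-out step. It assumes a simple pinhole-style relation in which the shift between the camera's and the eye's lines of sight, projected onto the sensor, shrinks as the gazed-at object plane gets farther away; the focal length, pixel pitch, and output size are invented for illustration.

```python
def crop_offset_px(parallax_mm, distance_mm, focal_length_mm, pixel_pitch_mm):
    """Horizontal shift of the crop window, in pixels, caused by the parallax P.
    The farther the object plane, the smaller the shift."""
    shift_on_sensor_mm = focal_length_mm * parallax_mm / distance_mm
    return int(round(shift_on_sensor_mm / pixel_pitch_mm))

def extract_display_image(frame, parallax_mm, distance_mm,
                          focal_length_mm=4.0, pixel_pitch_mm=0.0014,
                          out_w=1280, out_h=720):
    """frame: rows of pixels (nested lists or an array); returns the cropped display region."""
    dx = crop_offset_px(parallax_mm, distance_mm, focal_length_mm, pixel_pitch_mm)
    h, w = len(frame), len(frame[0])
    x0 = max(0, min(w - out_w, (w - out_w) // 2 + dx))  # shift the window toward the eye's line of sight
    y0 = max(0, (h - out_h) // 2)
    return [row[x0:x0 + out_w] for row in frame[y0:y0 + out_h]]
```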
  • FIG. 17 shows an example in which an image generated by the color vision support apparatus 100 is superimposed and displayed on the actual landscape. If the display range of the image is smaller than the actual viewing angle (apparent size), a gap appears between the landscape and the image and may make the observer feel uncomfortable.
  • On the other hand, when the display range of the image is made equal to the actual viewing angle, the landscape and the image are seamlessly connected, and the observer can naturally view the image superimposed on the landscape.
  • Such a display makes it possible to provide natural visual assistance to the observer.
  • The control circuit 26 of the display optical system 20 may select the mode for converting the hue of the image data in accordance with the distance detected by the autofocus sensor 11.
  • the control circuit 26 stores a plurality of modes for the method of converting the hue of the image data in a memory or the like, and determines the mode in accordance with the distance detected by the autofocus sensor 11.
  • the mode of the hue conversion method for example, the degree to which the light amount of the specific light emitting element 21 is increased may be divided into a plurality of modes.
  • For example, a mode that changes only the RGB ratio, a mode that, in addition to changing the RGB ratio, moves the white chromaticity point, and a mode that changes the RGB ratio while maintaining the W/B (white balance) may be prepared. A hedged sketch of distance-based mode selection follows this item.
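The following sketch only illustrates the idea of picking a hue-conversion mode from the distance reported by the autofocus sensor 11; the distance thresholds and mode names are assumptions, since the patent states only that the mode is determined according to the detected distance.

```python
MODES = (
    "rgb_ratio_only",       # change only the RGB intensity ratio (cf. FIG. 7)
    "shift_primaries",      # move the primary color chromaticity points (cf. FIG. 8)
    "shift_primaries_wb",   # move the primaries, then readjust the ratio to keep W/B (cf. FIG. 10)
)

def select_mode(distance_m):
    """Pick a conversion mode from the distance to the gazed-at object plane (thresholds assumed)."""
    if distance_m < 0.5:
        return "rgb_ratio_only"
    elif distance_m < 3.0:
        return "shift_primaries"
    return "shift_primaries_wb"

print(select_mode(1.2))   # -> "shift_primaries"
```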

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Chemical & Material Sciences (AREA)
  • Crystallography & Structural Chemistry (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

[Problem] To convert an image captured by a camera into a hue that is easy for an observer to discriminate, without degrading the image quality of the captured image. [Solution] A color vision support device 100 includes a camera 10 that captures an image of the outside world to generate image data, a display optical system 20 that emits image light corresponding to the image data, and an optical system 30 that guides the image light to a pupil of the observer. The display optical system 20 includes a light source unit 22 including a plurality of light emitting elements 21 having different emission wavelengths, an image element 25 converting irradiation light from the light source unit 22 into image light, and a control circuit 26 that independently controls light quantities of the plurality of light emitting elements 21. The display optical system outputs the image light in which the image data are converted into a hue that is easy for an observer to discriminate. Since the gradation of each primary color can be maintained by increasing or decreasing the light quantity of the primary color in the light source unit 22, it is possible to avoid degradation of image quality of the captured image.

Description

Color vision support device
The present invention relates to an eyepiece type color vision support device mounted on a head mounted display (HMD). Specifically, the color vision support device of the present invention is an optical device installed in front of the observer's eyes; it converts image data captured by a camera into hues the observer can easily distinguish and projects them onto the pupil, thereby assisting the observer's color vision.
A state in which colors are seen or perceived differently from a person with typical color vision is called "color weakness." The human retina contains cones with high sensitivity to light near the long (565 nm), middle (545 nm), and short (440 nm) wavelengths, called L cones, M cones, and S cones, respectively. L cones (red cones) respond mainly to yellow-green through red light, M cones (green cones) mainly to green through yellow-green light, and S cones (blue cones) mainly to violet through blue light. Owing to these differences in the spectral sensitivities of the three types of cones (photoreceptor cells), when light enters the pupil the cones react, the information is transmitted from the retina along the optic nerve to the visual center of the cerebral cortex, and color vision corresponding to each wavelength of visible light (400 to 800 nm) is said to arise. A person with color weakness therefore has a slightly different color sensation from a person with typical color vision, for example because the response of one of the L, M, or S cones is weak or absent.
Color weakness is broadly divided into "type 1 color vision," in which the L-cone response is weak or absent, "type 2 color vision," in which the M-cone response is weak or absent, and "type 3 color vision," in which the S-cone response is weak or absent. People with these characteristics may find it difficult to distinguish two adjacent colors, such as red and green (red-green color weakness).
As a device for assisting the color vision of such persons, Patent Document 1, for example, discloses a head-mounted display type visual aid device. The visual aid device of Patent Document 1 stores in advance the hues at which the wearer's color weakness occurs; when such a hue is detected in the image data acquired by the camera, the device converts the image data into hues the observer can distinguish more easily, for example by outputting that hue with a larger light amount than the other hues.
JP 2016-95577 A
The visual aid device of Patent Document 1 uses a general LED as its light source and introduces the light from that source into a transmissive liquid crystal display to convert it into image light. A general LED emits white light: for example, red, green, and blue light emitting diodes are driven with equal light amounts to generate white light (multichip), or a blue light emitting diode is coated with a yellow phosphor to generate white light (single chip). When light from such a white LED is introduced into a transmissive liquid crystal display to generate image light and, for example, the red light amount must be made larger than green and blue for an observer with weak red sensitivity, the transmission amount (light quantity) of green and blue in the liquid crystal display has to be decreased so that red becomes relatively strong. In other words, when a general LED is used as the light source, only white light is provided by the emitter, so the only way to adjust the hue of the image light for an observer with color weakness is to adjust the transmission of each hue in the liquid crystal display. However, if the light amounts of other hues are decreased in order to relatively increase the light amount of a specific hue, the gradation of those other hues is degraded, false contours appear in the image light, and the image quality deteriorates.
Therefore, an object of the present invention is to provide a color vision support device that can convert an image captured by a camera into hues the observer can easily discriminate, without degrading the image quality of the captured image.
The inventors intensively studied means for solving the problems of the prior art and found that, by constructing the light source from a plurality of light emitting elements with different emission colors, controlling the light quantity of each element independently, increasing at the light source the light quantity of the hue that the observer has difficulty distinguishing, and then introducing the light from that source into an image element such as a liquid crystal panel, image light can be projected onto the observer's pupil without degrading image quality. Based on this finding, the inventors concluded that the problems of the prior art could be solved and completed the present invention. Specifically, the present invention has the following configuration.
The present invention relates to a color vision support apparatus. A color vision support apparatus according to the present invention includes a camera 10 that photographs the outside world to generate image data, a display optical system 20 that emits image light corresponding to the image data, and an eyepiece optical system 30 that guides the image light to the observer's pupil. The display optical system 20 includes a light source unit 22, an image element 25, and a control circuit 26. The light source unit 22 includes a plurality of light emitting elements 21 with different emission wavelengths. An example of the plurality of light emitting elements 21 is a set of light emitting diodes that emit the three primary colors red, green, and blue; lasers may also be used. The emission colors of the light emitting elements 21 are not limited to these, and adjustments such as changing red to orange may be made according to the observer's color vision. The image element 25 converts the irradiation light from the light source unit 22 into image light. The control circuit 26 controls the light quantities of the plurality of light emitting elements 21 independently. The display optical system 20 can thus output image light in which the image data has been converted into hues the observer can easily discriminate.
By making the light quantity of each light emitting element 21 in the light source unit 22 independently adjustable as in the above configuration, increasing the red light amount relative to green and blue for an observer with weak red sensitivity, for example, only requires increasing the red light amount in the light source unit 22. As described above, the prior art had to reduce the transmission amount (light quantity) of green and blue in the image element 25, such as a liquid crystal panel, to make red relatively stronger; in the present invention, the red light amount can be intensified without reducing the green and blue light amounts. According to the present invention, image light can therefore be projected onto the observer's pupil without degrading image quality.
 In the present invention, the control circuit 26 preferably performs time-division driving in which the plurality of light emitting elements 21 are made to emit light sequentially. In time-division driving, while one light emitting element 21 is lit the other light emitting elements 21 are turned off, and this is repeated at high speed for each light emitting element 21, so that the observer perceives the afterimages of the time-divided image light and the individual images are combined in the observer's brain. In the light source of a conventional color vision support device, it was common to keep all light emitting elements (light emitting diodes) lit at all times to emit white light and to pass that white light through a liquid crystal panel provided with color filters, thereby giving color to the image light. By contrast, by time-division driving each light emitting element 21 as in the present invention, the emission color of each light emitting element 21 is in effect projected separately onto the observer's eye (specifically, onto the L cones, M cones, and S cones of the retina). Because the hues of the image light are not mixed at the stage of being projected onto the observer's eye, the image light becomes easier for a color-weak observer to perceive.
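 The following is a minimal sketch of such a field-sequential drive loop, assuming a hypothetical 60 Hz frame rate split into three color sub-frames; the function names, timing, and hardware callbacks are illustrative only.

```python
# Sketch of the time-division (field-sequential) drive described above:
# only one LED color is lit at a time, and the panel shows the matching
# color sub-frame.  Timing and callback names are assumptions.
import time

SUBFRAME_S = 1.0 / 180.0   # e.g. a 60 Hz frame split into 3 color sub-frames

def show_frame(rgb_subframes, set_led, set_panel):
    """rgb_subframes: dict color -> 2-D gray-level array for that color."""
    for color in ("R", "G", "B"):
        set_led(color, on=True)             # light only this channel
        set_panel(rgb_subframes[color])     # panel modulates this color only
        time.sleep(SUBFRAME_S)
        set_led(color, on=False)            # off before the next color

# Example with stand-in hardware callbacks:
frame = {"R": [[255]], "G": [[128]], "B": [[0]]}
show_frame(frame, set_led=lambda c, on: None, set_panel=lambda img: None)
```

The observer's visual system fuses the three sub-frames into a single color image, as described above.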
 In the present invention, the light source unit 22 may include four or more light emitting elements 21, each having a different emission wavelength. In that case, the control circuit 26 preferably selects and drives at least three of the light emitting elements 21. By providing light emitting elements 21 for color-weak observers in addition to, for example, the general three-primary-color (red, green, blue) light emitting elements 21 for observers with normal color vision, and allowing the light emitting elements 21 in use to be switched as needed, a general-purpose image display device usable by a wide range of people can be realized.
 In the present invention, the light source unit 22 may be configured so that a first light source unit 41 and a second light source unit 42 can be physically exchanged. The first light source unit 41 includes a plurality of light emitting elements 21, each with a different emission wavelength. The second light source unit 42 includes at least one light emitting element 21 whose emission wavelength differs from that of at least one of the light emitting elements 21 included in the first light source unit 41. By preparing such light source units 41 and 42 and attaching them to the color vision support device or exchanging the units themselves as needed, a wide range of people, including those with normal color vision and those with color weakness, can use the present invention.
 In the present invention, the eyepiece optical system 30 preferably has a prism 32 that reflects the image light and guides it to the observer's pupil, and the back side of the reflecting surface 32b of the prism 32 is preferably light-shielded. If the prism 32 were transparent, the observer would see the background behind the prism 32, so even if image light with adjusted hues were provided to the observer, that image light would appear superimposed on the background and the effect of the color vision support would be greatly reduced. Therefore, by keeping the back side of the reflecting surface 32b of the prism 32 in a substantially light-shielded state as described above, the observer can effectively view the image light whose hues have been adjusted.
 In the present invention, the camera 10 preferably includes an autofocus sensor 11 that detects the distance between the photographing lens and the subject. In that case, the display optical system 20 preferably has a plurality of modes for converting the hue of the image data and determines the mode according to the distance detected by the autofocus sensor 11. The modes of hue conversion may, for example, divide the degree to which the light quantity of a specific light emitting element 21 is increased into several levels. Modes may also be prepared such as a mode that changes only the RGB ratio (see Fig. 7), a mode that includes shifting the positions of the RGB primary chromaticity points (see Fig. 8), or a mode that shifts the RGB primary chromaticity points while maintaining the white balance (W/B) through ratio adjustment (see Fig. 10). For example, by changing the hue conversion mode depending on whether the observer is looking at distant scenery or performing work that requires judging colors close at hand, such as harvesting vegetables or connecting electrical wiring, appropriate color vision support can be provided for each situation.
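 As one possible illustration, the sketch below chooses a hue-conversion mode from the autofocus distance; the distance thresholds and mode names are assumptions for illustration and are not specified in this disclosure.

```python
# Sketch of choosing a hue-conversion mode from the autofocus distance.
# Thresholds and mode names are illustrative assumptions.

def select_mode(distance_m: float) -> str:
    if distance_m < 1.0:
        # close-range work (e.g. sorting vegetables, wiring): strongest support
        return "PRIMARY_POINT_SHIFT_WITH_WB"   # cf. Fig. 10
    elif distance_m < 5.0:
        return "PRIMARY_POINT_SHIFT"           # cf. Fig. 8
    else:
        # distant scenery: only adjust the RGB intensity ratio
        return "RGB_RATIO_ONLY"                # cf. Fig. 7

print(select_mode(0.4), select_mode(3.0), select_mode(20.0))
```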
 In the present invention, the camera 10 preferably has an autofocus sensor 11 that detects the distance between the lens and the subject. In that case, the control circuit 26 may generate display image data by cutting out part of the image data generated by the camera 10 based on the distance detected by the autofocus sensor 11, and may send an image signal for displaying that display image data to the light source unit 22 and the image element 25. By appropriately adjusting the image range that is cut out of the image data and shown to the observer according to the distance to the object the observer is gazing at, the image provided to the observer connects seamlessly with the surrounding scenery.
 According to the color vision support device of the present invention, an image captured by the camera can be converted into hues that the observer can easily discriminate, without degrading the image quality of the captured image.
Fig. 1 shows an example of the external appearance of a color vision support device (head mounted display type) according to the present invention.
Fig. 2 is a block diagram showing the functional configuration of the color vision support device.
Fig. 3 is a diagram for explaining the effect of the color vision support device according to the present invention: (a) shows a normal display state using white light, and (b) shows a state in which a specific primary color is extracted from white light (conventional example).
Fig. 4 is a diagram for explaining the effect of the color vision support device according to the present invention: (c) shows a state in which the light quantity of a specific primary color is increased at the light source (present invention).
Fig. 5 shows the wavelength sensitivity characteristics of the cones.
Fig. 6 shows the confusion lines of type 1, type 2, and type 3 color weakness.
Fig. 7 shows the color reproduction range and the white chromaticity point when the intensity ratio of the primary color points is changed.
Fig. 8 shows the color reproduction range and the white chromaticity point when the primary color chromaticity points are changed.
Fig. 9 shows the effect of changing the primary color chromaticity points, using the confusion lines of type 1 color weakness.
Fig. 10 shows the color reproduction range and the white chromaticity point when W/B adjustment is performed after changing the primary color chromaticity points.
Fig. 11 shows the change in spectrum for shifting the primary color chromaticity points at the light source.
Fig. 12 shows an example of time-division driving of the light emitting elements.
Fig. 13 shows the configuration of a display optical system provided with a plurality of light sources.
Fig. 14 shows the configuration of a display optical system in which the light source unit can be exchanged.
Fig. 15 shows an example of a method of generating display image data using autofocus.
Fig. 16 shows another example of a method of generating display image data using autofocus.
Fig. 17 schematically shows an example in which the display image is shown superimposed on the scenery.
 Hereinafter, embodiments for carrying out the present invention will be described with reference to the drawings. The present invention is not limited to the embodiments described below, and includes modifications made as appropriate by those skilled in the art within the scope that is obvious from the following embodiments.
 Fig. 1 is an external perspective view showing an example of a head mounted display (HMD) type color vision support device 100. The HMD type color vision support device 100 is worn on the observer's head and projects an image onto the observer's pupil. In this embodiment, the color vision support device 100 is arranged only in front of one of the observer's eyes. The color vision support device 100 basically comprises a camera 10, a display optical system 20, and an eyepiece optical system 30. The camera 10 photographs the outside world in the same direction as the observer's line of sight and generates image data. The display optical system 20 applies predetermined color tone correction for color vision support to the image data captured by the camera 10, generates image light from the corrected image data, and emits it toward the eyepiece optical system 30. The eyepiece optical system 30 reflects or refracts the image light emitted from the display optical system 20 and guides it to the observer's pupil. In the example shown in Fig. 1, the image light emitted laterally (in the X-axis direction) from the display optical system 20 is reflected at approximately a right angle by the eyepiece optical system 30, travels in the depth direction (Z-axis direction), and enters the observer's pupil. There is a gap between the display optical system 20 and the eyepiece optical system 30, and the image light travels through the air.
 In the embodiment shown in Fig. 1, the eyepiece optical system 30 has a configuration in which a prism 32 is arranged on a transparent substrate 31. The transparent substrate 31 is a plate-like member facing the observer's pupil, at least part of which is formed of a transparent member or a mesh member that transmits light. In the example shown in Fig. 1, the transparent substrate 31 is formed entirely of a transparent member such as plastic or glass, and the prism 32 is provided on its outer surface. The transparent substrate 31 is fixed to the housing of the HMD that accommodates the display optical system 20, and thus functions as a support that positions the prism 32 on the optical axis of the image light emitted from the display optical system 20. Because the transparent substrate 31 is transparent, a wide field of view can be secured for the observer viewing the image, and the observer can see the background behind the transparent substrate 31 together with the image. On the other hand, the reflecting surface 32b of the prism 32 is coated with a light-shielding paint or the like, so the observer cannot see the background behind the prism 32. For the observer, therefore, the background overlapping the prism 32 is replaced with the image generated by the display optical system 20.
 In the example shown in Fig. 1, the HMD type color vision support device 100 may also include an optical sensor, a touch panel, and the like in addition to the configuration described above. The HMD housing contains a CPU, memory, various communication devices, an acceleration sensor, a gyro sensor, a battery, and so on. A known configuration can be adopted as appropriate for the HMD, and the present invention places no particular limitation on it.
 Next, an example of the optical system provided in the color vision support device 100 will be described with reference to Fig. 2. In particular, Fig. 2 shows an optical system provided with a transmissive image element (liquid crystal panel) 25. Fig. 2 depicts the structure of the color vision support device 100 in plan view (the XZ plane). As shown in Fig. 2, the color vision support device 100 comprises a camera 10, a display optical system 20 that generates and emits image light (dash-dotted line), and an eyepiece optical system 30 that guides the image light to the observer's pupil. The image data acquired by the camera 10 is corrected in the display optical system 20 and emitted as image light, propagates through the air, enters the eyepiece optical system 30, and is finally projected onto the observer's pupil E.
 The camera 10 is an imaging device for acquiring still image or moving image data. The camera 10 is composed of, for example, a photographing lens, a mechanical shutter, a shutter driver, a photoelectric conversion element such as a CCD image sensor unit, a digital signal processor (DSP) that reads the charge from the photoelectric conversion element and generates image data, and an IC memory. The camera 10 preferably also includes an autofocus sensor (AF sensor) that measures the distance from the photographing lens to the subject, and a mechanism for adjusting the focal distance of the photographing lens according to the distance detected by the AF sensor. The type of AF sensor is not particularly limited; a known passive type such as a phase difference sensor or a contrast sensor may be used. An active sensor that directs infrared light or ultrasound toward the subject and receives the reflected light or waves can also be used as the AF sensor. The image data acquired by the camera 10 is supplied to the control circuit 26 and, after predetermined image processing, image light corresponding to the image data is generated and output by the display optical system 20.
 In the embodiment shown in Fig. 2, the display optical system 20 has a light source unit 22 including a plurality of light emitting elements 21, a homogenizing element (integrator) 23, a condenser lens 24, an image element 25, and a control circuit 26.
 The light source unit 22 includes a plurality of light emitting elements 21, each with a different emission wavelength (that is, a different emission color). An example of the light emitting element 21 is an LED (light emitting diode). Light emitting elements 21 capable of emitting the three primary colors R (red), G (green), and B (blue), for example, are provided. However, the emission colors of the light emitting elements 21 are not limited to these three primary colors and may be replaced with emission colors that the observer can discriminate more easily given the observer's particular color weakness. For example, an orange element may be used in place of the red light emitting element 21, a blue-green element in place of the green light emitting element 21, or a light blue element in place of the blue light emitting element 21. In the present invention, the light quantity of each light emitting element 21 can be adjusted independently. That is, whereas an ordinary LED light source generates white light by keeping the light quantities of the three primary-color light emitting elements 21 uniform at all times, the light source unit 22 of the present invention is not limited to generating white light: the light quantity of the red light emitting element 21 may be increased to produce reddish illumination light, or only the light quantity of the blue or green light emitting element 21 may be increased. The light quantity of each light emitting element 21 in the light source unit 22 is adjusted independently by the control circuit 26. Since electric power is needed for the light source unit 22 to emit light, the light source unit 22 is connected to a battery (not shown) via a power supply line.
 The light emitted from each light emitting element 21 of the light source unit 22 enters the homogenizing element 23. The homogenizing element 23 adjusts the deviation of the optical paths of the illumination light output from each light emitting element 21 and, via the downstream condenser lens 24, makes the illumination onto the image element 25 uniform. The condenser lens 24 is formed of an aspherical lens or the like and converts the illumination light from each light emitting element 21 into substantially parallel light, emitting it toward the image element 25 so that it passes along the same optical path.
 The image element 25 converts the incident light into image light by modulating it according to the display image data. In this embodiment, a transmissive liquid crystal panel is used as the image element 25. Each pixel of the liquid crystal panel has, for example, a structure in which a liquid crystal layer is interposed between two glass substrates with transparent electrodes, and a polarizing plate is attached to each glass substrate. When no voltage is applied between the transparent electrodes, the liquid crystal molecules of the liquid crystal layer are aligned parallel to the glass surfaces; when a voltage is applied, the liquid crystal molecules turn toward the direction perpendicular to the glass surfaces. The combination of the movement of the liquid crystal molecules and the polarization directions of the two polarizing plates adjusts the amount of light transmitted through each pixel of the liquid crystal panel. The liquid crystal panel has a structure in which such pixels are arranged in a matrix. In the present invention, when the light emitting elements 21 are driven in a time-division manner as described later, the illumination light input from the light emitting elements 21 to the image element 25 already contains color components, so there is no need to provide a color filter or the like on the image element 25. Control of the voltage and so on applied to the image element 25 is performed by the control circuit 26. The image element 25 is connected to the battery via a power supply line. Besides the transmissive liquid crystal panel described above, a reflective liquid crystal panel, for example, may also be adopted as the image element 25. When a reflective liquid crystal panel is adopted as the image element 25, the configuration of the "linearly arranged eyepiece image display device" disclosed in Japanese Patent No. 6081508 may be referred to.
 The control circuit 26 applies image processing, such as hue correction for color-weak observers, to the image data generated by the camera 10 to generate image data for display (display image data). Based on the display image data, the control circuit 26 then controls each light emitting element 21 of the light source unit 22 and the image element 25 to generate and output image light.
 In the present invention, the control circuit 26 can control the light quantities of the plurality of light emitting elements 21 constituting the light source unit 22 independently of one another. Specifically, the maximum luminance of a given light emitting element can preferably be set to two or more times the maximum luminance of another light emitting element, and may be set to three or more times or four or more times.
 Here, the advantage of independently adjusting the light quantities of the primary colors at the light source unit 22 will be described with reference to Figs. 3 and 4. Figs. 3 and 4 show an example in which the light quantity of each of the primary colors red, green, and blue is controlled linearly in 256 gradations by an 8-bit signal. Pattern (a) in Fig. 3 shows a typical way of controlling a light source: the maximum luminance of each primary color is set to be equal, and the light quantity of each primary color is controlled between 0 and 100% in 256 gradations by the 8-bit signal. An ordinary light source needs only white light, so the light quantities of the primary colors are controlled to remain equal at all times. Pattern (b) in Fig. 3, on the other hand, shows the control method in which the light source emits white light with the primary-color light quantities equal, and a specific primary color is emphasized relative to the others on the image element (liquid crystal panel) side by reducing the transmitted light quantities of the other primary colors. For example, Fig. 3(b) shows the case where the red light quantity is made four times that of green and blue. In this case, since the maximum luminance of each primary color at the light source is the same, the only way to make red four times the others at the image element is to keep the red light quantity at a maximum of 100% while lowering the green and blue light quantities to 25%. All 256 gradations can then still be used to display the red image, but only 64 gradations, one quarter of the full range, can be used to display the green and blue images. When the number of gradations of a specific primary color decreases in this way, there is the problem that the image quality deteriorates, for example through the appearance of false contours. In contrast, pattern (c) in Fig. 4 shows a control method in which the light quantity (maximum luminance) of each primary color is adjusted independently at the light source. For example, Fig. 4(c) shows the case where the maximum luminance of red is made four times that of green and blue while the green and blue light quantities are kept at a maximum of 100%. In this case, all 256 gradations can be used for each of the red, green, and blue primary colors. Unlike pattern (b) of Fig. 3, the pattern of Fig. 4(c) therefore avoids degrading the image quality.
 In this way, by configuring the light quantities of the plurality of light emitting elements 21 constituting the light source unit 22 to be independently controllable, the light quantity of a specific primary color can be increased while avoiding degradation of the image quality.
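 The difference between patterns (b) and (c) can be illustrated with a short sketch that counts the usable gradations per channel when red is to be made four times stronger; the function names are illustrative assumptions.

```python
# Sketch contrasting the two ways of making red 4x stronger (Figs. 3-4).
# With equal-source LEDs and panel-side attenuation (pattern (b)), green and
# blue can only use a quarter of their 256 drive levels; with source-side
# boosting (pattern (c)) every channel keeps all 256 levels.

LEVELS = 256

def usable_gradations_panel_attenuation(ratio: float) -> dict:
    # red kept at 100 %, the other channels limited to 1/ratio of full scale
    return {"R": LEVELS, "G": int(LEVELS / ratio), "B": int(LEVELS / ratio)}

def usable_gradations_source_boost(ratio: float) -> dict:
    # the red LED's full scale is raised to `ratio` times; the panel is untouched
    return {"R": LEVELS, "G": LEVELS, "B": LEVELS}

print(usable_gradations_panel_attenuation(4))  # {'R': 256, 'G': 64, 'B': 64}
print(usable_gradations_source_boost(4))       # {'R': 256, 'G': 256, 'B': 256}
```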
 As shown in Fig. 2, the eyepiece optical system 30 is an optical element that reflects or refracts the image light emitted from the image element 25 and guides it to the observer's pupil E. In this embodiment, the eyepiece optical system 30 has a prism 32 and may also have an eyepiece lens 33. The prism 32 is a member that internally guides the image light from the display optical system 20. The prism 32 has, for example, a shape with an incident surface 32a for the image light, a reflecting surface 32b, and an exit surface 32c. The incident surface 32a of the prism 32 is provided so as to intersect the optical axis of the image light traveling laterally (in the X-axis direction) approximately perpendicularly. The exit surface 32c is provided so as to face the observer's pupil. The reflecting surface 32b is, for example, rectangular and functions as means for bending the optical path of the image light at a right angle; specifically, it reflects the image light that has entered the prism through the incident surface 32a in the Z direction. The prism 32 may be formed of a single prism or a combination of several prisms. The eyepiece lens 33 is attached, for example, to the incident surface 32a of the prism 32, has positive power, and focuses the image light entering the prism 32 onto the pupil. The eyepiece lens 33 may be cemented to the incident surface 32a of the prism 32 or integrated with the prism 32. The back side of the reflecting surface 32b of the prism 32 is preferably given a light-shielding treatment for absorbing or totally reflecting light, for example by applying black paint to the back side of the reflecting surface 32b or by giving it a mirror finish. This prevents outside light from entering the prism 32 through the back of the reflecting surface 32b and prevents the observer from seeing the scenery overlapping the reflecting surface 32b.
 In the above configuration, the image light is generated by the display optical system 20. That is, the illumination light from the light source unit 22 has its illuminated area made uniform by the homogenizing element 23, is then condensed by the condenser lens 24, and enters the image element 25. The illumination light is modulated by the image element 25 and becomes image light. The image light emitted from the display optical system 20 then enters the eyepiece optical system 30. In the eyepiece optical system 30, the image light enters the interior of the prism 32 through the eyepiece lens 33. The image light travels through the prism 32 in the lateral direction (X-axis direction), its optical path is bent at the reflecting surface 32b, and it travels onward in the depth direction (Z-axis direction). The image light is thereby emitted through the exit surface 32c of the prism 32 and guided to the observer's pupil. In this way, the observer can observe, at the position of the pupil E, an enlarged virtual image of the picture generated by the display optical system 20.
 Next, examples of how the control circuit 26 controls the light emitting elements 21 of the light source unit 22 will be described with reference to Figs. 5 to 11. Here, the case where three light emitting elements 21 emit light in the three primary colors red, green, and blue is taken as an example.
 Fig. 5 shows the sensitivity characteristics of the S cones, M cones, and L cones, each normalized so that its maximum value is 1. The primary color blue is sensed by the S cones, but because the wavelengths sensed by the M cones and L cones overlap, red and green are perceived through a computation on the amounts of light received by the M cones and L cones. When the sensitivity of one or two of these three cone types is reduced, the condition commonly called color weakness arises, and colors are perceived differently than by a person with normal color vision. Color is normally defined by two-dimensional chromaticity numbers using x, y coordinates, but a color-weak person can perceive only a one-dimensional variation in color. Because chromaticity is perceived only one-dimensionally in this way, colors that a person with normal color vision can distinguish may look the same to a color-weak person. Fig. 6 shows the colors as perceived by color-weak observers: all colors lying on the same diagonal line in the chromaticity diagram appear the same to the color-weak observer. "Type 1" denotes weak sensitivity of the L cones (type 1 color weakness), "type 2" weak sensitivity of the M cones (type 2 color weakness), and "type 3" weak sensitivity of the S cones (type 3 color weakness).
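 As a rough illustration of how a color stimulus maps onto cone responses, the sketch below uses the Hunt-Pointer-Estevez XYZ-to-LMS matrix, one commonly cited linear approximation; this disclosure itself relies only on the sensitivity curves of Fig. 5, so the matrix values are an outside assumption used purely for illustration.

```python
# Sketch of mapping a CIE XYZ stimulus onto L/M/S cone responses using the
# Hunt-Pointer-Estevez matrix (one common approximation; illustrative only).
import numpy as np

XYZ_TO_LMS = np.array([
    [ 0.38971,  0.68898, -0.07868],
    [-0.22981,  1.18340,  0.04641],
    [ 0.00000,  0.00000,  1.00000],
])

def cone_response(xyz):
    """Return (L, M, S) for a CIE XYZ tristimulus vector."""
    return XYZ_TO_LMS @ np.asarray(xyz, dtype=float)

# Reduced L-cone sensitivity (type 1 color weakness) can be pictured as a
# scaling of the L row, which is why reds become hard to tell from greens.
print(cone_response([41.24, 21.26, 1.93]))   # XYZ of the sRGB red primary (Y in %)
```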
 Fig. 7 shows the primary color points, the color reproduction range, and the white (maximum luminance) chromaticity point of the display in an xy chromaticity diagram. In Fig. 7, the white chromaticity point obtained when the intensity ratios of the light quantities of the primary colors (red, green, blue) in an ordinary display are made equal is indicated by the × mark. Fig. 7 also shows the case in which the control circuit 26 changes the intensity ratio of the primary colors of the light emitting elements 21, specifically when the red light quantity is set to four times that of green and blue. In that case the white chromaticity point moves to the position indicated by the ○ mark. Strengthening the intensity (light quantity) of the red primary in this way is effective as a support measure for color-weak observers with weak red sensitivity (mainly type 1). In such a case, however, although the red, green, and blue primary color points themselves do not change, increasing the red intensity moves the white chromaticity point (maximum luminance) toward the red primary color point. Consequently, if only the red light quantity is simply increased, the effect of making red perceptible to the color-weak observer becomes larger, but white will be perceived as a reddish color.
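 The shift of the white point can be reproduced numerically with the center-of-gravity rule for color mixture; the sketch below assumes sRGB primary chromaticities and full-drive luminances purely for illustration.

```python
# Sketch of how the white (maximum-luminance) chromaticity point moves when
# the drive of one primary is boosted (Fig. 7).  The primary chromaticities
# and full-drive luminances below are the sRGB values, used only as an example.
PRIMARY_XY = {"R": (0.640, 0.330), "G": (0.300, 0.600), "B": (0.150, 0.060)}
FULL_DRIVE_Y = {"R": 0.2126, "G": 0.7152, "B": 0.0722}   # relative luminance at 100 % drive

def white_point(gain):
    """Center-of-gravity mixture: weight each primary by (X+Y+Z) = Y / y."""
    w = {c: gain[c] * FULL_DRIVE_Y[c] / PRIMARY_XY[c][1] for c in PRIMARY_XY}
    total = sum(w.values())
    x = sum(w[c] * PRIMARY_XY[c][0] for c in w) / total
    y = sum(w[c] * PRIMARY_XY[c][1] for c in w) / total
    return round(x, 4), round(y, 4)

print(white_point({"R": 1, "G": 1, "B": 1}))   # ~(0.3127, 0.3290): normal white (the x mark)
print(white_point({"R": 4, "G": 1, "B": 1}))   # white shifts toward the red primary (the o mark)
```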
 Next, as shown in Fig. 8, the control circuit 26 can also control the light quantity of each light emitting element 21 so as to change the positions of the primary colors themselves. For example, when the drop in sensitivity of the observer's cones is large and increasing the intensity (light quantity) of a specific primary color is insufficient, changing the positions of the primaries themselves is effective. In the example shown in Fig. 8, as a support measure for color-weak observers with weak red sensitivity (mainly type 1), colors that would normally be displayed as red are converted to orange for display. As a result, the color-weak observer, who cannot recognize red itself, can recognize that a color is present there when it is orange. In addition, to make the difference between orange and green clear, the green primary is also converted to light blue. By converting the positions of the primary color points appropriately in this way, it becomes easier for the color-weak observer to recognize color distinctions. However, as shown in Fig. 8, the white (maximum luminance) chromaticity point also shifts from its original position in this case, so the observer perceives white as a color different from usual.
 Fig. 9 shows that, by performing control in the control circuit 26 to change the primary color points as shown in Fig. 8, even a color-weak observer (for example type 1) can distinguish colors that lie on the same confusion line. Colors that lay on the same confusion line before the primary color points were changed are shifted off that confusion line by changing the primary color points, so even a color-weak observer can distinguish them as different colors. On the other hand, changing the primary color points may cause colors that could originally be distinguished without any problem to line up on the same line, so it is also effective to switch repeatedly in time between the state with the changed primary color points and the original, unchanged state. In other words, by displaying two kinds of the one-dimensionally perceived chromaticity alternately in time, a pseudo two-dimensional chromaticity can be reproduced.
 As shown in Fig. 10, even when the positions of the primary color chromaticity points themselves are changed, the white (maximum luminance) chromaticity point can be returned to its original position by adjusting the intensity ratios of the light emitting elements 21. In this way, the control circuit 26 can change the primary color chromaticity points and then restore the white balance by adjusting the intensity ratio of each light emitting element 21.
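 One way to compute the intensity ratios that restore the white point is to solve a small linear system in XYZ space; in the sketch below, the shifted primary chromaticities and the D65 target white are assumptions chosen only to illustrate the calculation.

```python
# Sketch of restoring the white point after the primary chromaticity points
# have been moved (Fig. 10): solve for the relative luminance of each new
# primary so that their mixture lands back on the original white.
import numpy as np

def primary_xyz_dir(x, y):
    # XYZ per unit luminance (Y = 1) of a primary with chromaticity (x, y)
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def luminances_for_white(primaries_xy, white_xy, white_Y=1.0):
    wx, wy = white_xy
    white_xyz = primary_xyz_dir(wx, wy) * white_Y
    M = np.column_stack([primary_xyz_dir(x, y) for (x, y) in primaries_xy])
    return np.linalg.solve(M, white_xyz)   # per-primary luminance Y_i

# Assumed shifted primaries (orange-ish / cyan-ish / blue) and a D65 target white:
shifted = [(0.56, 0.43), (0.22, 0.42), (0.15, 0.06)]
print(luminances_for_white(shifted, white_xy=(0.3127, 0.3290)))
```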
 Fig. 11 shows, as an example, the spectrum of an ordinary light source and the spectrum of the light source when the red and green primary color points are moved. The primary color points can also be moved by color conversion of the image signal, but it is more desirable to use a light source whose dominant wavelengths are the desired primary color points. For example, when the sensitivity of the L cones is low, using a light source with an orange wavelength is more efficient than displaying a modified red primary (for example orange) by mixing red and green wavelengths. If such a light source is used, color conversion processing in the signal becomes unnecessary.
 Next, an example in which the control circuit 26 drives the light emitting elements 21 of the light source unit 22 in a time-division manner will be described with reference to Fig. 12. In time-division driving, while one light emitting element 21 is lit the other light emitting elements 21 are turned off, and this is repeated at high speed for each light emitting element 21, so that the observer perceives the afterimages of the time-divided image light and the individual images are combined in the observer's brain.
 Fig. 12 conceptually shows an example of time-division driving of three light emitting elements that emit red, green, and blue light, respectively. As shown in Fig. 12, the red, green, and blue light emitting elements emit light in this order, and while one light emitting element is lit the others are turned off. The primary-color light emitted from each light emitting element is input to the image element (liquid crystal panel), where it is converted into image light. The red, green, and blue image light is projected onto the observer's pupil in sequence, and these three kinds of image light are combined in the observer's brain into a single image. Also, as shown in Fig. 12, by adjusting the transmitted amount of each image light at the image element, fine hues (specifically 256 gradations cubed = 16,777,216 colors) can be expressed in the image obtained by combining the three kinds of image light.
 When each light emitting element 21 is driven in a time-division manner, the emission color of each light emitting element 21 is in effect projected separately (in a time-divided manner) onto the observer's eye (specifically, onto the L cones, M cones, and S cones of the retina). With time-division driving, therefore, the hues of the image light are not mixed at the stage of being projected onto the observer's eye, which has the advantage of making the image light easier for a color-weak observer to perceive.
 Next, configurations in which the light emitting elements 21 constituting the light source unit 22 can be switched will be described with reference to Figs. 13 and 14. Fig. 13 shows an example in which the light emitting elements 21 to be driven are switched by a switch, and Fig. 14 shows an example in which the light emitting elements 21 incorporated into the display optical system are physically replaced.
 In the example shown in Fig. 13, the display optical system 20 is configured so that four or more kinds of light emitting elements 21 are provided in the light source unit 22 in advance and the control circuit 26 can select which light emitting elements 21 to drive. Specifically, the light source unit 22 comprises a first light source 22a including three kinds of light emitting elements 21 and a second light source 22b including three kinds of light emitting elements 21. The light emitting elements 21 of the first light source 22a and those of the second light source 22b differ in emission color for at least one light emitting element 21, and all of the light emitting elements 21 may differ in emission color. For example, the first light source 22a is composed of light emitting elements 21 of the three primary colors red, green, and blue for displaying ordinary images, while the second light source 22b is composed of light emitting elements 21 of colors different from those three primary colors (for example a combination of orange, light blue, and blue, or a combination of red, yellow, and blue-green) for color-weak observers. The first light source 22a and the second light source 22b are each connected to the control circuit 26. The control circuit 26 determines whether to drive the first light source 22a or the second light source 22b according to the mode selected by the observer. In this way, which of the first light source 22a and the second light source 22b is driven can be switched with a switch.
 In the example shown in Fig. 14, a plurality of physically separate light source units 41, 42, 43 are prepared in advance, and the display optical system 20 is configured so that an arbitrarily selected light source unit can be attached. Specifically, the display optical system 20 and each of the light source units 41, 42, 43 are provided with connection terminals, and connecting these terminals electrically connects the display optical system 20 with the light source unit 41, 42, or 43. In the example shown in Fig. 14, the first light source unit 41 has three kinds of light emitting elements 21, which emit light in the three primary colors red, green, and blue for displaying ordinary images. The second light source unit 42 has a first light source 42a and a second light source 42b. In the second light source unit 42, for example, the first light source 42a is composed of light emitting elements 21 of the three primary colors red, green, and blue for displaying ordinary images, while the second light source 42b is composed of light emitting elements 21 of colors different from those three primary colors (for example a combination of orange, light blue, and blue, or a combination of red, yellow, and blue-green) for color-weak observers. When the second light source unit 42 is attached to the display optical system 20, the control circuit 26 can select whether to drive the first light source 42a or the second light source 42b. Like the second light source unit 42, the third light source unit 43 also has a first light source 43a and a second light source 43b. The first light source 43a is composed of light emitting elements 21 of the three primary colors red, green, and blue for displaying ordinary images. The second light source 43b, on the other hand, is intended to be driven simultaneously with the first light source 43a and, for color-weak observers, includes, for example, three light emitting elements 21 with the same emission color as one of the three primary colors of the first light source 43a. For example, the three light emitting elements 21 constituting the second light source 43b all emit light of the same color, chosen from red, green, or blue. When the third light source unit 43 is attached to the display optical system 20, the control circuit 26 therefore selects whether to drive only the first light source 43a or both the first light source 43a and the second light source 43b.
 For example, when the second light source 43b is composed of three light emitting elements 21 that emit red light, driving both the first light source 43a and the second light source 43b makes it possible to set the red light quantity (maximum luminance) to four times that obtained when only the first light source unit 41 is used.
 Next, with reference to Figs. 15 to 17, a process will be described for extracting, from the image data acquired by the camera 10, display image data corresponding to the observer's viewing angle, taking into account the parallax between the observer's pupil and the camera 10. As shown in Fig. 15, consider, for example, the case where the camera 10 is arranged near the observer's temple and image light is projected onto one of the observer's eyes. In particular, the case is described here in which, as in the configuration shown in Fig. 1 and elsewhere, the image light is projected onto the observer's pupil by the prism 32 and the back of the prism 32 is light-shielded.
 As shown in Fig. 15, there is a gap between the observer's eye and the camera 10, and as a result a parallax P arises between the line of sight of that eye and the line of sight of the camera 10. If the image data captured by the camera 10 were projected onto the observer's eye as it is, the influence of the parallax P would make it impossible to superimpose the image generated by the device seamlessly on the background the observer is actually viewing. Taking the parallax P into account, it is therefore effective to extract (cut out) from the image data captured by the camera 10, as the display image, the range in which the observer's gaze range overlaps the object plane the observer is gazing at, and to generate display image data from it. However, as a comparison of Figs. 15 and 16 shows, the range to be extracted as the display image varies with the distance from the photographing lens of the camera 10 to the object plane the observer is gazing at. This distance can be measured by the autofocus sensor 11 provided in the camera 10. The control circuit 26 therefore preferably determines the range to extract from the image data as the display image based on the parallax P and the distance to the object plane measured by the autofocus sensor 11. The parallax P can be set arbitrarily in consideration of the overall configuration of the color vision support device 100 and may be made finely adjustable by the observer's operation. The distance from the photographing lens of the camera 10 to the object plane the observer is gazing at may be input to the control circuit 26 as the measured value of the autofocus sensor 11 of the camera 10, as described above.
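 A simple pinhole-camera approximation suffices to illustrate how the crop window shifts with the autofocus distance; the focal length, pixel pitch, and parallax value in the sketch below are assumed figures, not values from this disclosure.

```python
# Sketch of choosing the crop (cut-out) window inside the camera image so
# that the projected image lines up with what the eye sees at the fixated
# object plane.  Pinhole-camera approximation; all numeric values are
# illustrative assumptions.

def crop_offset_px(parallax_m: float, distance_m: float,
                   focal_len_mm: float = 4.0, pixel_pitch_um: float = 1.4) -> int:
    # A lateral offset P between eye and camera shifts the image of a point
    # at distance d by roughly f * P / d on the sensor.
    shift_mm = focal_len_mm * parallax_m / distance_m
    return round(shift_mm * 1000.0 / pixel_pitch_um)

# The AF distance from sensor 11 decides how far to slide the crop window:
print(crop_offset_px(parallax_m=0.04, distance_m=0.5))   # close-range work: larger shift
print(crop_offset_px(parallax_m=0.04, distance_m=10.0))  # distant scenery: smaller shift
```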
 In this manner, the control circuit 26 extracts, from the image data acquired by the camera 10, the display image data to be projected onto the observer's pupil, based on the parallax P and the distance measured by autofocus. It then controls the light source unit 22 and the image element 25 based on this display image data to generate image light. Fig. 17 shows an example in which the image generated by the color vision support device 100 is displayed superimposed on the actual scenery. If the display range of the image is smaller than the actual viewing angle (apparent size), a mismatch arises between the scenery and the image, which may feel unnatural to the observer. If, on the other hand, the display range of the image matches the actual viewing angle, the scenery and the image connect seamlessly, and the observer can naturally see the image superimposed within the scenery. By performing such display, natural visual support can be provided to the observer.
 In the present invention, the control circuit 26 of the display optical system 20 may also select the mode for converting the hue of the image data according to the distance detected by the autofocus sensor 11. Specifically, the control circuit 26 stores a plurality of modes for converting the hue of the image data in a memory or the like and determines the mode according to the distance detected by the autofocus sensor 11. The modes of hue conversion may, for example, divide the degree to which the light quantity of a specific light emitting element 21 is increased into several levels. Modes may also be prepared such as a mode that changes only the RGB ratio, a mode that includes moving the white chromaticity point in addition to changing the RGB ratio, or a mode that changes the RGB ratio while maintaining the W/B balance. For example, by changing the hue conversion mode depending on whether the observer is looking at distant scenery or doing work that requires judging colors close at hand, appropriate color vision support can be provided for each situation. Specifically, the color conversion of the image data is performed in different modes in the case shown in Fig. 15 and the case shown in Fig. 16.
 In the present specification, embodiments of the present invention have been described above with reference to the drawings in order to explain the content of the present invention. The present invention is, however, not limited to the above embodiments, and encompasses modifications and improvements that are obvious to a person skilled in the art based on the matters described in the present specification.
DESCRIPTION OF REFERENCE SYMBOLS
10 ... Camera                 11 ... Autofocus sensor
20 ... Display optical system     21 ... Light emitting element
22 ... Light source unit          22a ... First light source
22b ... Second light source       23 ... Equalization element
24 ... Condensing lens            25 ... Video element
26 ... Control circuit            30 ... Eyepiece optical system
31 ... Transparent substrate      32 ... Prism
32a ... Incident surface          32b ... Reflection surface
32c ... Emission surface          33 ... Eyepiece lens
41 ... First light source unit    42 ... Second light source unit
43 ... Third light source unit    100 ... Color vision support device

Claims (7)

  1.  A color vision support device comprising:
      a camera (10) that captures the outside world and generates image data;
      a display optical system (20) that emits image light corresponding to the image data; and
      an eyepiece optical system (30) that guides the image light to an observer's pupil,
      wherein the display optical system (20) comprises:
      a light source unit (22) including a plurality of light emitting elements (21) each having a different emission wavelength,
      a video element (25) that converts light emitted from the light source unit (22) into the image light, and
      a control circuit (26) that independently controls the light amounts of the plurality of light emitting elements (21),
      the color vision support device being capable of outputting image light in which the image data has been converted into a hue that the observer can easily discriminate.
  2.  The color vision support device according to claim 1, wherein the control circuit (26) performs time-division driving in which the plurality of light emitting elements (21) are caused to emit light sequentially.
  3.  The color vision support device according to claim 1, wherein the light source unit (22) includes four or more light emitting elements (21) each having a different emission wavelength, and
      the control circuit (26) selects and drives at least three of the light emitting elements (21).
  4.  The color vision support device according to claim 1, wherein the light source unit (22) is configured such that a first light source unit (41), which includes a plurality of light emitting elements (21) each having a different emission wavelength, and a second light source unit (42), which includes at least one light emitting element (21) whose emission wavelength differs from that of at least one of the light emitting elements (21) included in the first light source unit (41), are physically exchangeable.
  5.  The color vision support device according to claim 1, wherein the eyepiece optical system (30) has a prism (32) that reflects the image light and guides it to the observer's pupil, and
      the back side of the reflection surface (32a) of the prism (32) is shielded from light.
  6.  The color vision support device according to claim 1, wherein the camera (10) has an autofocus sensor (11) that detects the distance between the photographing lens and a subject, and
      the display optical system (20) has a plurality of modes for the method of converting the hue of the image data and determines the mode in accordance with the distance detected by the autofocus sensor (11).
  7.  The color vision support device according to claim 1, wherein the camera (10) has an autofocus sensor (11) that detects the distance between the lens and a subject, and
      the control circuit (26) generates display image data by cutting out part of the image data generated by the camera (10) based on the distance detected by the autofocus sensor (11), and sends a video signal for displaying the display image data to the light source unit (22) and the video element (25).
PCT/JP2018/042261 2017-11-17 2018-11-15 Color vision support device WO2019098265A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-222265 2017-11-17
JP2017222265A JP7011295B2 (en) 2017-11-17 2017-11-17 Color vision support device

Publications (1)

Publication Number Publication Date
WO2019098265A1 (en) 2019-05-23

Family

ID=66539078

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/042261 WO2019098265A1 (en) 2017-11-17 2018-11-15 Color vision support device

Country Status (2)

Country Link
JP (1) JP7011295B2 (en)
WO (1) WO2019098265A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004333758A (en) * 2003-05-06 2004-11-25 Seiko Epson Corp Display device, display method, and projector
JP2006208687A (en) * 2005-01-27 2006-08-10 Konica Minolta Photo Imaging Inc Video control system for head-mounted display
JP2007214964A (en) * 2006-02-10 2007-08-23 Konica Minolta Photo Imaging Inc Video display device
JP2008042517A (en) * 2006-08-07 2008-02-21 Seiko Epson Corp Projector
JP2008310130A (en) * 2007-06-15 2008-12-25 Konica Minolta Holdings Inc Control unit and head-mounted display device
JP2009111489A (en) * 2007-10-26 2009-05-21 Nikon Corp Imaging apparatus
JP2010128072A * 2008-11-26 2010-06-10 NEC Display Solutions, Ltd. Backlight driving device and backlight driving control method
JP2014228595A * 2013-05-20 2014-12-08 Konica Minolta, Inc. Augmented reality space display device

Also Published As

Publication number Publication date
JP7011295B2 (en) 2022-01-26
JP2019096939A (en) 2019-06-20

Similar Documents

Publication Publication Date Title
US9269193B2 (en) Head-mount type display device
US7436568B1 (en) Head mountable video display
US9158113B2 (en) Integrated display and photosensor
US8182084B2 (en) Display unit
JP3338837B2 (en) Composite display
CN103323950B (en) Head-mount type display unit
EP3729182B1 (en) Eye tracking for head-worn display
EP3514606A1 (en) Eye tracking for head-worn display
JPH07255063A (en) Video display device
US20220107503A1 (en) Head-mounted display apparatus
JP5012640B2 (en) Display device
JP6561606B2 (en) Display device and control method of display device
JP2012002889A (en) See-through type image display device
JPWO2018043254A1 (en) Video display and optical see-through display
WO2019098265A1 (en) Color vision support device
JP2008287049A (en) Image display apparatus and head-mounted display
JP2010128414A (en) Image display device
JP3785768B2 (en) Image forming system and projection display device
JP2010128246A (en) Display unit
JP2021022851A (en) Head-up display apparatus
US20180120497A1 (en) Thick backlight for rgb led of a liquid crystal display used in a virtual reality head mounted display
TWI832559B (en) Stereoscopic image display system, stereoscopic display device, and filter glasses
JPH06335007A (en) Automatic dimmer mechanism in eyeglass type video image display device
KR20220149787A (en) Projecting device for augmented reality glasses, image information display method and control device using the projection device
CN115202047A (en) Optical unit and head-mounted display device using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18879361

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/08/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18879361

Country of ref document: EP

Kind code of ref document: A1