WO2017081915A1 - Image processing device, image processing method and program - Google Patents

Image processing device, image processing method and program

Info

Publication number
WO2017081915A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
eyeball
target object
image processing
information
Prior art date
Application number
PCT/JP2016/075878
Other languages
French (fr)
Japanese (ja)
Inventor
浩尚 後藤
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2017081915A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B25/00Eyepieces; Magnifying glasses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • This technology relates to an image processing apparatus. Specifically, the present invention relates to an image processing apparatus and an image processing method that handle an image to be displayed, and a program that causes a computer to execute the method.
  • an image processing apparatus that provides an image to a user while being worn on a part of the user's body.
  • a glasses-type wearable device that can display a display target object on an external object.
  • a technique for displaying a display target object on coordinates fixed with respect to an external object has been proposed.
  • For example, a virtual image display apparatus has been proposed that detects a change in the posture of the wearer's head and shifts the screen position of the image based on the virtual image in the direction opposite to the direction in which the wearer's posture changes (see, for example, Patent Document 1).
  • According to this conventional technique, the image based on the virtual image can be adjusted according to a change in the posture of the wearer of the virtual image display apparatus by shifting it in the direction opposite to the change in posture.
  • This technology was created in view of such a situation, and aims to improve visibility.
  • A first aspect of the present technology is an image processing apparatus including: a view information acquisition unit that acquires view information related to an image included in the field of view of the person wearing the image processing apparatus; an eyeball information acquisition unit that acquires eyeball information related to the eyeball of the person; and a display control unit that superimposes a display target object on the image included in the field of view and performs control to change the display mode of the display target object in the image included in the field of view based on the view information and the eyeball information; as well as an image processing method thereof and a program for causing a computer to execute the method. This brings about the effect that the display mode of the display target object is changed based on the view information and the eyeball information.
  • In the first aspect, the eyeball information acquisition unit may acquire the eyeball information by detecting the position of the eyeball of the person and the line-of-sight direction of the eyeball based on images generated by a plurality of imaging units provided so as to face the person's eyeballs. This brings about the effect that the eyeball information is acquired based on the images generated by the plurality of imaging units.
  • In the first aspect, the image processing apparatus may further include a display unit that provides an image included in the field of view of the person to the eyes of the person, and the plurality of imaging units may be provided at end portions of the display unit on which the display target object is displayed.
  • In the first aspect, the view information acquisition unit may acquire information regarding the position of an object included in the field of view as the view information by performing feature point extraction processing for extracting feature points of the object included in the field of view based on images generated by a plurality of imaging units provided in the line-of-sight direction of the person, and depth detection processing for detecting the depth of the object included in the field of view. This brings about the effect that information regarding the position of the object included in the field of view is acquired as the view information.
  • In the first aspect, the image processing apparatus may further include a display unit that provides an image included in the field of view of the person to the eyes of the person, and the display control unit may perform control to change the display mode based on position information regarding the position of the display unit, the view information, and the eyeball information. This brings about the effect that the display mode is changed based on the position information of the display unit, the view information, and the eyeball information.
  • In the first aspect, the display unit may be a refraction display unit that displays the display target object on a display surface that transmits an image included in the field of view of the person, based on a refraction image output from an image output unit arranged at an edge of the display surface, and the display control unit may perform control to change the display mode based on position information regarding the display position where the display target object is virtually displayed, the view information, and the eyeball information. This brings about the effect that the display mode is changed based on the position information of the virtual display position, the view information, and the eyeball information.
  • In the first aspect, the display control unit may change the display mode by changing at least one of the display position, the display angle, and the display size of the display target object on the display surface of the display unit. Accordingly, there is an effect that the display mode is changed by changing at least one of the display position, the display angle, and the display size of the display target object on the display surface.
  • In the first aspect, the display control unit may control the sharpness of the display target object based on the view information and the eyeball information. This brings about the effect that the sharpness of the display target object is controlled based on the view information and the eyeball information.
  • In the first aspect, the display control unit may control the sharpness of the display target object based on a three-dimensional position in the field of view where the display target object is to be displayed. This brings about the effect that the sharpness of the display target object is controlled based on the three-dimensional position in the field of view where the display target object is to be displayed.
  • the display control unit may control the sharpness by performing a blurring process on the display target object. This brings about the effect that the sharpness is controlled by performing the blur process on the display target object.
  • In the first aspect, the image processing apparatus may further include a posture information acquisition unit that acquires posture information related to a change in the posture of the image processing apparatus, and the display control unit may perform control to change the display mode based on the view information, the eyeball information, and the posture information. This brings about the effect that the display mode is changed based on the view information, the eyeball information, and the posture information.
  • A flowchart illustrating an example of the processing procedure of the stabilizer process performed by the image processing apparatus 100 according to the first embodiment of the present technology. A diagram schematically illustrating an example of the blur processing by the image processing apparatus 100 according to the second embodiment of the present technology. A diagram illustrating a display example when the blur processing is performed by the image processing apparatus 100 according to the second embodiment of the present technology. A diagram illustrating a display example (comparative example) before the blur processing is performed by the image processing apparatus 100 according to the second embodiment of the present technology.
  • A flowchart illustrating an example of the processing procedure of the blur processing by the image processing apparatus 100 according to the second embodiment of the present technology.
  • First Embodiment: example in which the display mode of the display target object is corrected based on view information and eyeball information
  • Second Embodiment: example in which the sharpness of the display target object is controlled based on visual field information and eyeball information
  • FIG. 1 is a top view illustrating an example of an external configuration of the image processing apparatus 100 according to the first embodiment of the present technology.
  • the relationship between the eyeball (right eye) 11 and the eyeball (left eye) 12 when the image processing apparatus 100 is mounted on a human face is shown in a simplified manner.
  • the image processing apparatus 100 includes an outward imaging unit (R) 101, an outward imaging unit (L) 102, an inward imaging unit (R) 103, an inward imaging unit (L) 104, a display unit (R) 181, a display unit (L) 182, infrared light emitting devices 183 to 186, and a bridge 190.
  • the image processing apparatus 100 can be, for example, a transmissive glasses-type electronic device (for example, a glasses-type wearable device) using a hologram light guide plate type optical technology.
  • the image processing apparatus 100 can include various sensing functions such as an image sensor, an acceleration sensor, a gyroscope, an electronic compass, an illuminance sensor, and a microphone.
  • the image sensor is, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor or a CCD (Charge Coupled Device) image sensor.
  • the image processing apparatus 100 can provide information according to the user's situation by utilizing position information or the like by GPS (Global Positioning System).
  • the infrared light emitting devices 183 to 186 are infrared light emitting devices that emit infrared light.
  • the infrared light emitting devices 183 and 184 can be provided at both ends of the display surface of the display portion (R) 181
  • the infrared light emitting devices 185 and 186 can be provided at both ends of the display surface of the display portion (L) 182.
  • the infrared light emitting devices may be arranged at other positions where the imaging operation can be appropriately performed.
  • the outward imaging unit (R) 101 and the outward imaging unit (L) 102 are imaging units that image subjects included in the field of view of the person wearing the image processing apparatus 100. That is, the outward image capturing unit (R) 101 and the outward image capturing unit (L) 102 are a plurality of image capturing units provided in the line-of-sight direction of a person.
  • The inward imaging unit (R) 103, the inward imaging unit (L) 104, and the infrared light emitting devices 183 to 186 constitute imaging units that image the eyes of the person wearing the image processing apparatus 100 (for example, the eyeballs and their surroundings). That is, the inward imaging unit (R) 103 and the inward imaging unit (L) 104 are a plurality of imaging units provided so as to face the person's eyeballs. Further, the inward imaging unit (R) 103 receives the infrared light that is emitted from the infrared light emitting devices 183 and 184 and reflected by the eye, so that the position of the eyeball (right eye) 11 and the direction of its line of sight can be calculated. Similarly, the inward imaging unit (L) 104 receives the infrared light that is emitted from the infrared light emitting devices 185 and 186 and reflected by the eye, so that the position of the eyeball (left eye) 12 and the direction of its line of sight can be calculated.
  • The inward imaging unit (R) 103 can be provided at an end of the display surface of the display unit (R) 181, and the inward imaging unit (L) 104 can be provided at an end of the display surface of the display unit (L) 182.
  • the display unit (R) 181 and the display unit (L) 182 are glass display units for providing various images to a person wearing the image processing apparatus 100.
  • a hologram optical element having high transparency can be used as the display unit (R) 181 and the display unit (L) 182 without using a half mirror that blocks the visual field.
  • As the display unit (R) 181 and the display unit (L) 182, in addition to a hologram element type display unit, a half mirror type display unit, a video transmission type display unit (for example, one using an external camera instead of a see-through display), or a retina projection type display unit may be used.
  • the hologram optical elements are optical elements for displaying the image light emitted from the optical engine by the hologram light guide plate technology.
  • The hologram light guide plate technology is a technology that, for example, incorporates hologram optical elements at both ends of a glass plate and propagates the image light emitted from the optical engine to the eyes through a very thin glass plate.
  • the display unit (R) 181 and the display unit (L) 182 are provided with a display surface that transmits an image included in the person's field of view, and provide the image included in the person's field of view to the eyes of the person.
  • A refraction image output from an optical engine arranged at the edge of the glass portion (display surface) of the display unit (R) 181 and the display unit (L) 182 forms an image on the glass portion. That is, when the display unit (R) 181 and the display unit (L) 182 are refraction display units, the display target object is displayed on the display surface based on the refraction image output from the image output unit arranged at the edge of the display surface.
  • the bridge 190 is a member that connects the display unit (R) 181 and the display unit (L) 182.
  • the bridge 190 corresponds to, for example, a bridge for glasses.
  • For ease of explanation, an example is shown in which the outward imaging unit (R) 101, the outward imaging unit (L) 102, the inward imaging unit (R) 103, and the inward imaging unit (L) 104 are each arranged outside the display unit (R) 181 and the display unit (L) 182.
  • these imaging units may be arranged at other positions where the imaging range (including the imaging direction (optical axis direction)) can be appropriately set.
  • FIG. 2 is a block diagram illustrating a functional configuration example of the image processing apparatus 100 according to the first embodiment of the present technology.
  • The image processing apparatus 100 includes an outward imaging unit (R) 101, an outward imaging unit (L) 102, an inward imaging unit (R) 103, an inward imaging unit (L) 104, image processing units 111 to 114, a view information acquisition unit 120, a space model creation unit 130, a DB (database) 140, an eyeball information acquisition unit 150, a display control unit 160, a display processing unit 170, a display unit (R) 181, a display unit (L) 182, and a posture information acquisition unit 195.
  • the outward imaging unit (R) 101 and the outward imaging unit (L) 102 are a plurality of cameras facing the field-of-view side that are used for SLAM (Simultaneous Localization and Mapping).
  • FIG. 2 shows an example in which two outward imaging units are provided, three or more outward imaging units may be provided.
  • The inward imaging unit (R) 103 and the inward imaging unit (L) 104 are inward cameras that generate a plurality of images used to detect the position, movement, line of sight, and the like of the eyeballs of the person wearing the image processing apparatus 100. Although FIG. 2 shows an example in which two inward imaging units are provided, three or more inward imaging units may be provided. As devices for photographing the movement of the eyeballs, for example, infrared light emitting devices (for example, four for each imaging unit) are used.
  • The image processing unit 111 performs various types of image processing (for example, POST processing) on the image data generated by the outward imaging unit (R) 101, and outputs the processed image data to the view information acquisition unit 120.
  • The image processing unit 112 performs various types of image processing (for example, POST processing) on the image data generated by the outward imaging unit (L) 102, and outputs the processed image data to the view information acquisition unit 120.
  • The image processing unit 113 performs various types of image processing (for example, POST processing) on the image data generated by the inward imaging unit (R) 103, and outputs the processed image data to the eyeball information acquisition unit 150.
  • The image processing unit 114 performs various types of image processing (for example, POST processing) on the image data generated by the inward imaging unit (L) 104, and outputs the processed image data to the eyeball information acquisition unit 150.
  • The view information acquisition unit 120 extracts view information (information regarding objects included in the imaging range of the outward imaging unit (R) 101 and the outward imaging unit (L) 102) using the image data output from the image processing units 111 and 112. Then, the view information acquisition unit 120 outputs the extracted view information (for example, the feature points of objects and the depths of objects) to the space model creation unit 130.
  • For example, the view information acquisition unit 120 extracts feature points of an object by feature point extraction processing.
  • the feature point of the object can be, for example, the vertex of the object.
  • the view information acquisition unit 120 can extract feature points from the entire image (frame) corresponding to the image data output from the image processing units 111 and 112. For example, for the first image among the images (frames) corresponding to the image data output from the image processing units 111 and 112, the view information acquisition unit 120 extracts feature points from the entire image. In addition, for an image (frame) other than the first image, feature points are extracted from a newly imaged region compared with an image corresponding to the immediately preceding image (frame). Note that, for example, a point having a strong edge gradient in the vertical direction and the horizontal direction can be extracted as the feature point. This feature point is a strong feature point for optical flow calculation, and can be obtained using edge detection.
  • feature points are extracted from the entire image for the first image, and feature points are extracted from a newly captured region portion compared to the previous image for images other than the first image.
  • feature points may be extracted from the entire image according to the processing capability and the like.
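  • As an illustration of the feature point extraction described above, the following sketch (in Python; not part of the patent) keeps only points whose edge gradient is strong in both the vertical and horizontal directions. The gradient operator, the threshold, and the synthetic test frame are assumptions; the text only states that such points are strong feature points for optical flow calculation and can be obtained using edge detection.

```python
import numpy as np

def extract_feature_points(gray, grad_thresh=50.0):
    """Pick points whose edge gradient is strong in BOTH the vertical and
    horizontal directions, as described above (the threshold is hypothetical;
    the patent does not specify one)."""
    gray = gray.astype(np.float32)
    # Simple central-difference gradients (stand-ins for a full edge detector).
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = (gray[:, 2:] - gray[:, :-2]) / 2.0
    gy[1:-1, :] = (gray[2:, :] - gray[:-2, :]) / 2.0
    # Keep points where both gradient components are strong, i.e.
    # corner-like points that are stable for optical flow tracking.
    mask = (np.abs(gx) > grad_thresh) & (np.abs(gy) > grad_thresh)
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))

# Example: a synthetic frame with one bright square yields its four corners.
frame = np.zeros((120, 160), dtype=np.float32)
frame[40:80, 60:100] = 255.0
print(len(extract_feature_points(frame)))  # 4
```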
  • the visual field information acquisition unit 120 detects the depth of the object by depth detection processing.
  • the depth of the object is, for example, a distance in the depth direction (gaze direction).
  • For example, the view information acquisition unit 120 generates a depth map (Depth Map) based on the images generated by the outward imaging unit (R) 101 and the outward imaging unit (L) 102 and the information obtained when each image was generated (for example, the lens position and the focus position), and can obtain the subject distance of each region based on the depth map.
  • The depth map is a map composed of data representing the subject distance.
  • a TOF (Time of Flight) method, a blur amount analysis (Depth from Defocus), or the like can be used.
  • the TOF method is a method of calculating the distance to the subject based on the light delay time until the light emitted from the light source is reflected by the object and reaches the sensor, and the speed of the light.
  • Also, as each imaging unit, for example, an image sensor that receives light that has passed through different portions of the exit pupil and performs focus detection by a phase difference detection method (phase difference AF (Auto Focus)) can be used. An image sensor that performs this phase difference AF outputs a phase difference detection signal together with an image signal (analog signal). Therefore, the view information acquisition unit 120 can calculate the subject distance of each region in the image corresponding to the image signal output from each image sensor based on the phase difference detection signal output from each image sensor.
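  • The TOF relationship mentioned above (distance derived from the delay of the emitted light and the speed of light) can be sketched as follows. The per-pixel delay input format is an assumption, not something the patent specifies.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(delay_seconds):
    """One-way distance implied by the round-trip delay of the emitted light:
    the light travels to the object and back, so distance = c * t / 2."""
    return SPEED_OF_LIGHT * delay_seconds / 2.0

def depth_map_from_delays(delay_image_seconds):
    """A depth map is per-pixel subject distance; here it is derived from a
    per-pixel delay measurement (hypothetical sensor output format)."""
    return SPEED_OF_LIGHT * np.asarray(delay_image_seconds, dtype=float) / 2.0

# Example: a 10 ns round-trip delay corresponds to roughly 1.5 m.
print(tof_distance(10e-9))
```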
  • the visibility information acquisition unit 120 can simultaneously estimate the position of the image processing apparatus 100 and create an environment map using SLAM technology.
  • the visual field information acquisition unit 120 acquires visual field information (external information) related to an image included in the visual field of the person wearing the image processing apparatus 100. For example, the visual field information acquisition unit 120 extracts feature points of objects included in the visual field based on images generated by the outward imaging unit (R) 101 and the outward imaging unit (L) 102. And depth detection processing for detecting the depth of an object included in the field of view. The visual field information acquisition unit 120 acquires information regarding the position of the object included in the visual field as the visual field information.
  • Note that the view information acquisition unit 120 is an example of the view information acquisition unit described in the claims.
  • the DB (database) 140 is a database that stores information (spatial model information) for displaying virtual objects (space models) on the display unit (R) 181 and the display unit (L) 182.
  • the spatial model information is supplied to the spatial model creation unit 130.
  • Note that a spatial model created in the past (for example, an object specified by feature points and depth) may be stored in the DB 140 and compared with a spatial model created later.
  • the space model creation unit 130 displays objects on the display unit (R) 181 and the display unit (L) 182 based on the view information output from the view information acquisition unit 120 and the space model information supplied from the DB 140. Information (spatial model display information) is created. Then, the space model creation unit 130 supplies the created space model display information to the display control unit 160.
  • the eyeball information acquisition unit 150 detects eyeball information using the image data output from the image processing units 113 and 114, and outputs the detected eyeball information to the display control unit 160.
  • The eyeball information is, for example, the position of the eyeball, the line of sight, the position (for example, the iris center) and size of the iris in the eyeball, the position and size of the pupil in the eyeball, and the eye axis angle.
  • The iris is the colored portion of the eyeball.
  • The pupil is the portion located at the center of the iris (the portion commonly called the "black of the eye").
  • As the position of the eyeball, for example, a position on the eyeball, a three-dimensional position, or a distance from the display surfaces of the display unit (R) 181 and the display unit (L) 182 can be obtained.
  • the eyeball information acquisition unit 150 can detect eyeball information (for example, the position of the eyeball, the position and size of the iris on the eyeball, and the position and size of the pupil on the eyeball) using an image recognition technique.
  • the eyeball information acquisition unit 150 can detect the line of sight using the detected eyeball information (for example, each movement of the position of the eyeball, the position of the iris on the eyeball, and the position of the pupil on the eyeball).
  • As the eyeball detection method, for example, a detection method that matches a template in which luminance distribution information of the eyeball is recorded against an actual image (see, for example, Japanese Patent Application Laid-Open No. 2004-133737), or a detection method based on a feature amount of the colored part of the eyeball included in the image data, can be used.
  • For example, the image data output from the image processing units 113 and 114 may be binarized, and the black-eye region may be detected based on the binarized data of one screen. Specifically, the image data output from the image processing units 113 and 114 is binarized, and the black pixels and white pixels of each line in the horizontal direction (left-right direction) are determined. A line in which a run of black pixels whose length is within a predetermined range follows a run of white pixels whose length is within a predetermined range, and is in turn followed by another run of white pixels whose length is within a predetermined range, is extracted as a candidate line for the black-eye region. Then, a region in which a number of such candidate lines within a predetermined range are consecutive can be extracted, and this region can be detected as the black-eye region.
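  • The line-scanning idea just described can be sketched as follows (an illustrative reconstruction, not the patent's implementation); the run-length ranges and the synthetic test image are assumptions.

```python
import numpy as np

def candidate_lines(binary, black_range=(5, 40), white_range=(5, 200)):
    """Return row indices whose pattern is white-run, black-run, white-run,
    with each run length inside a predetermined range (ranges hypothetical)."""
    rows = []
    for y, line in enumerate(binary):
        runs = []  # run-length encoding of the row: [value, length]
        for v in line:
            if runs and runs[-1][0] == v:
                runs[-1][1] += 1
            else:
                runs.append([v, 1])
        for i in range(len(runs) - 2):
            (v0, n0), (v1, n1), (v2, n2) = runs[i], runs[i + 1], runs[i + 2]
            if (v0 == 1 and v1 == 0 and v2 == 1
                    and white_range[0] <= n0 <= white_range[1]
                    and black_range[0] <= n1 <= black_range[1]
                    and white_range[0] <= n2 <= white_range[1]):
                rows.append(y)
                break
    return rows

def black_eye_region(binary, min_lines=5, max_lines=60):
    """Detect a band of consecutive candidate lines as the black-eye region."""
    rows = candidate_lines(binary)
    if not rows:
        return None
    start = prev = rows[0]
    for y in rows[1:] + [None]:
        if y is not None and y == prev + 1:
            prev = y
            continue
        if min_lines <= prev - start + 1 <= max_lines:
            return (start, prev)  # top and bottom rows of the region
        if y is not None:
            start = prev = y
    return None

# Example: a synthetic eye image, 1 = white (sclera/skin), 0 = black (pupil/iris).
img = np.ones((30, 60), dtype=int)
img[10:22, 25:38] = 0
print(black_eye_region(img))  # roughly (10, 21)
```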
  • the eye may be detected using an eye movement measurement device.
  • For example, the position of the black-eye portion can be specified by applying infrared light from an infrared LED (Light Emitting Diode) to the eyeball and detecting the reflected light with a light receiver. The eyeball can also be detected using another infrared light emitting device.
  • infrared rays can be used in order not to be affected by external light and to prevent specular reflection from the cornea.
  • Note that these eyeball information detection methods (for example, gaze detection methods) are merely examples, and high-precision gaze detection can be realized by combining a plurality of techniques other than these.
  • the eyeball information acquisition unit 150 acquires eyeball information related to the human eyeball. For example, the eyeball information acquisition unit 150 detects the position of the human eyeball and the eye gaze direction based on the images generated by the inward imaging unit (R) 103 and the inward imaging unit (L) 104. Get eyeball information.
  • The display control unit 160 performs control for displaying each image (for example, the display target object) on the display unit (R) 181 and the display unit (L) 182. For example, the display control unit 160 performs display position calculation processing, image effect processing, and image superimposition processing based on the spatial model display information output from the space model creation unit 130 and the eyeball information output from the eyeball information acquisition unit 150.
  • the display control unit 160 is realized by, for example, a host CPU (Central Processing Unit). Note that some of the above-described units can be realized by the host CPU.
  • the display processing unit 170 performs various image processing on the images displayed on the display unit (R) 181 and the display unit (L) 182 based on the control of the display control unit 160.
  • the display control unit 160 performs control to change the display mode of the display target object based on the field-of-view information, the eyeball information, and the posture information regarding the posture change.
  • The posture information acquisition unit 195 detects a change in the posture of the image processing apparatus 100 by detecting the acceleration, movement, inclination, and the like of the image processing apparatus 100, and outputs posture information regarding the detected change in posture to the display control unit 160.
  • various sensors such as a gyro sensor and an acceleration sensor can be used as the posture information acquisition unit 195, for example.
  • the image processing apparatus 100 also has two functions (stabilizer and depth correction) for the display object.
  • The stabilizer is a function for making the display target object appear at the desired position in external coordinates.
  • a display example by this stabilizer is shown in FIG.
  • FIG. 3 is a diagram schematically illustrating a display example of the display target object by the image processing apparatus 100 according to the first embodiment of the present technology.
  • FIG. 3 shows an example in which the relationship among the image processing apparatus 100, the external objects A (201) and B (202), and the display target object X (203) is viewed from above.
  • The top view of the image processing apparatus 100 is the same as the example shown in FIG. 1. However, FIG. 3 shows an example in which the eyeball (right eye) 11 and the eyeball (left eye) 12 are turned in the direction of viewing the display target object X (203) displayed on the display unit (R) 181 and the display unit (L) 182.
  • External objects A (201) and B (202) are actually existing objects.
  • the external objects A (201) and B (202) are shown as rectangular objects 201 and 202 for ease of explanation.
  • the display target object X (203) virtually indicates an object displayed on the display unit (R) 181 and the display unit (L) 182 by the stabilizer.
  • FIG. 3 shows an example in which the display target object X (203) is displayed between the external objects A (201) and B (202).
  • The virtual display screen positions 211 and 212 are virtual display positions for displaying the display target object X (203) when the display target object X (203) is displayed on the display unit (R) 181 and the display unit (L) 182 by the stabilizer.
  • the display position of the display target object X (203) at the virtual display screen position 211 is indicated by a display position 213.
  • the display position 214 indicates the display position of the display target object X (203) at the virtual display screen position 212.
  • FIG. 4 is a diagram schematically illustrating a display example of a display target object by the image processing apparatus 100 according to the first embodiment of the present technology.
  • FIG. 4 shows an example (a display example before correction) in which the positions of the eyeballs and the image processing apparatus 100 have moved for some reason from the state in which the display target object X is displayed as shown in FIG. 3. This example is shown as a comparative example.
  • the image processing apparatus 100 before movement, the eyeball (right eye) 11, and the eyeball (left eye) 12 are indicated by dotted lines.
  • the positions of the image processing apparatus 100, the eyeball (right eye) 11, and the eyeball (left eye) 12 indicated by the dotted lines correspond to the positions shown in FIG. 3.
  • the image processing apparatus 100 after movement, the eyeball (right eye) 11, and the eyeball (left eye) 12 are indicated by solid lines.
  • the virtual display screen positions 211 and 212 before the movement are indicated by normal dotted lines.
  • the positions of the virtual display screen positions 211 and 212 indicated by the normal dotted lines correspond to the positions shown in FIG. 3. Further, the virtual display screen positions 211 and 212 after the movement are indicated by thick dotted lines.
  • the virtual display screen positions 211 and 212 change according to the change of the position of the image processing apparatus 100.
  • the positions of the eyeball (right eye) 11 and the eyeball (left eye) 12 also change.
  • the display target object X (205) may be displayed at an unintended position (the same position as the object 202).
  • the first embodiment of the present technology shows an example in which the position of the display target object X is appropriately displayed even when the position of the image processing apparatus 100 or the position of the eyeball is changed.
  • FIG. 5 is a diagram schematically illustrating a display example of a display target object by the image processing apparatus 100 according to the first embodiment of the present technology.
  • FIG. 6 is a diagram illustrating an example of a procedure when the local coordinate system of the display target object is converted into the display coordinate system of the glass by the image processing apparatus 100 according to the first embodiment of the present technology.
  • FIG. 5 shows an example in which the position of the display target object X is appropriately displayed by moving the display position of the display target object X in the example shown in FIG.
  • A line segment extending from the position where the display target object X (204) is to be arranged is indicated as the line segment 30.
  • The distance in the horizontal direction between the line segment 30 and the eyeball (right eye) 11 (iris center) is W1, and the distance in the horizontal direction between the display surface of the display unit (R) 181 and the eyeball (right eye) 11 (the distance along the line segment 30) is L1.
  • the distance (distance on the line segment 30) between the display surface of the display unit (R) 181 and the display target object X (204) in the horizontal direction is L2.
  • the distance (distance on the line segment 30) between the display surface of the display unit (R) 181 in the horizontal direction and the virtual display screen position 211 is L3.
  • the distances L1 and W1 can be obtained based on the eyeball information output from the eyeball information acquisition unit 150.
  • the distances L2 and L3 can be acquired based on the display target object X.
  • the display position 221 in the horizontal direction of the display target object X (204) at the virtual display screen position 211 can be obtained by the following equation.
  • W2 ≈ ((L2 − L3) / (L2 + L1)) × W1
  • W2 is the distance from the intersection of the virtual display screen position 211 and the line segment 30 to the display position 221.
  • the display position 221 (W2) in the horizontal direction of the display target object X (204) at the virtual display screen position 211 can be obtained by the trigonometric function formula. Further, the display position 221 in the vertical direction of the display target object X (204) at the virtual display screen position 211 can be similarly obtained. Further, the display position 222 in the horizontal direction and the vertical direction of the display target object X (204) at the virtual display screen position 212 can be similarly obtained.
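  • The relationship W2 ≈ ((L2 − L3) / (L2 + L1)) × W1 follows from similar triangles and can be evaluated directly, as in the sketch below; the numerical values are hypothetical examples, not values taken from the patent.

```python
def corrected_display_offset(w1, l1, l2, l3):
    """Offset W2 from the intersection of line segment 30 with the virtual
    display screen, by similar triangles between the eyeball (L1 behind the
    display surface, offset W1 from line segment 30) and the display target
    object X (L2 in front of the display surface), evaluated at the virtual
    display screen (L3 in front of the display surface)."""
    return (l2 - l3) / (l2 + l1) * w1

# Example values (hypothetical, in metres): eye 2 cm behind the glass,
# object 1.0 m ahead, virtual screen 0.8 m ahead, eye offset 3 cm.
w2 = corrected_display_offset(w1=0.03, l1=0.02, l2=1.0, l3=0.8)
print(round(w2, 4))  # ~0.0059 m; the same relation is applied vertically.
```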
  • the display mode of the display target object X (204) at the virtual display screen position is changed according to the rotation.
  • the display target object X is rotated and displayed in the direction opposite to the rotation direction.
  • the display size of the display target object X (204) at the virtual display screen position is changed according to the change in the distance L1.
  • Each of these processes is calculated using matrix operations.
  • An example of this procedure (steps S701 to S705) is shown in FIG.
  • the view coordinate system changes as the eyeball moves.
  • A matrix corresponding to the movement vector of the eyeball is multiplied by the view transformation matrix V, so that the view transformation matrix changes from V to V′.
  • The parameters for calculating the projection transformation matrix P and the screen transformation matrix S also change due to the change of the viewpoint. Therefore, due to the movement of the eyeball, the projection transformation matrix changes from P to P′, and the screen transformation matrix changes from S to S′. The display coordinates on the glass can finally be calculated based on these values.
  • the position of the display target object X can remain at the position before the movement.
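  • As a minimal sketch of this matrix pipeline (world → view → projection → screen), the following Python fragment rebuilds the view matrix from the new eye position and shows that the same world point then maps to a different screen position, which is what the display controller compensates for. The translation-only view matrix and the pinhole projection are simplifying assumptions, not the patent's exact matrices.

```python
import numpy as np

def view_matrix(eye):
    """Very simplified view transform: translate the world so that the eye
    position becomes the origin (no rotation, purely illustrative)."""
    v = np.eye(4)
    v[:3, 3] = -np.asarray(eye, dtype=float)
    return v

def screen_position(world_point, eye, projection, screen):
    """world -> view -> projection -> screen, as a chain of 4x4 matrices."""
    p = np.append(np.asarray(world_point, dtype=float), 1.0)
    clip = screen @ projection @ view_matrix(eye) @ p
    return clip[:2] / clip[3]  # perspective divide to 2D screen coordinates

# Hypothetical fixed projection/screen transforms for a unit-scale display.
P = np.eye(4); P[3, 2] = 1.0; P[3, 3] = 0.0   # w = z: simple pinhole projection
S = np.eye(4)                                  # identity "screen" transform

target = [0.0, 0.0, 1.0]                       # display target X in world coords

before = screen_position(target, eye=[0.0, 0.0, 0.0], projection=P, screen=S)
# The eye moves: the view matrix is rebuilt from the new eye position (V -> V'),
# so the same world point maps to a different screen position, and the drawn
# position of the display target object is corrected accordingly.
after = screen_position(target, eye=[0.05, 0.0, 0.0], projection=P, screen=S)
print(before, after)
```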
  • FIG. 7 is a diagram schematically illustrating a display example of a display target object by the image processing apparatus 100 according to the first embodiment of the present technology.
  • FIG. 7 shows an example of a stabilizer for the movement of the line of sight of a person wearing the image processing apparatus 100.
  • FIG. 7 shows an example in which the eyeball 20 is looking at the display target object 303 displayed at the display position 301 in the virtual display screen position 300 (line of sight 305).
  • When the eyeball 20 moves to the left and the line of sight moves accordingly, without correction the display target object would appear as the display target object 304 (line of sight 306).
  • In the first embodiment of the present technology, the display position 301 in the virtual display screen position 300 is moved to the display position 302 by performing the above-described correction processing. Therefore, even when the eyeball 20 moves, the eyeball 20 can see the display target object 303 displayed at the display position 302 in the virtual display screen position 300 (line of sight 307).
  • the display target object can be appropriately displayed by performing the above-described correction processing.
  • Since the information of the display target object is in local coordinates, it is necessary to perform coordinate conversion processing in order to display it on the screen.
  • The local coordinate → world coordinate conversion process determines where in the outside world each object is arranged.
  • The world coordinate → view coordinate conversion process determines where in the outside world the camera (image processing apparatus 100) is located.
  • The view coordinate → projection coordinate conversion process determines where the screen (the display surfaces of the display unit (R) 181 and the display unit (L) 182) is arranged.
  • The projection coordinate → screen coordinate conversion process determines where on the screen the display target object is displayed.
  • In a general method, the eye position (camera position) of the person wearing the image processing apparatus 100 is assumed to be fixed with respect to the glass (the display unit (R) 181 and the display unit (L) 182).
  • In contrast, in the first embodiment of the present technology, the eyeball position is calculated based on the images generated by the inward cameras (the inward imaging unit (R) 103 and the inward imaging unit (L) 104), so that the movement of the eye position relative to the glass can be grasped.
  • Thereby, the relationship between the eye position (camera position), which would otherwise be assumed fixed, and the position of the glass (the display unit (R) 181 and the display unit (L) 182) is corrected.
  • That is, coordinate conversion is performed in consideration of the eye position and the screen position of the person wearing the image processing apparatus 100.
  • For example, a conversion matrix can be obtained based on the world coordinates of the display target object, the world coordinates of the eyes, and the upward direction of the glass.
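  • A conventional way to build such a conversion matrix from the world coordinates of the eye (measured by the inward cameras), the world coordinates of the display target object, and the upward direction of the glass is the standard "look-at" construction sketched below; this is an illustrative stand-in, not the patent's exact formulation.

```python
import numpy as np

def look_at(eye, target, up):
    """Build a view matrix from the eye position, the target position, and
    the up direction, in the spirit of the conversion matrix described above."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = view[:3, :3] @ (-eye)   # rotate, then bring the eye to the origin
    return view

# Example: the inward cameras report that the eye has shifted 1 cm to the right
# relative to the glass; the view matrix (and hence the drawn position of the
# display target object) is recomputed from the measured eye position.
v = look_at(eye=[0.01, 0.0, 0.0], target=[0.0, 0.0, -1.0], up=[0.0, 1.0, 0.0])
print(np.round(v, 3))
```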
  • Display example of the display target object: FIGS. 8 to 10 are diagrams illustrating display examples of the display target object by the image processing apparatus 100 according to the first embodiment of the present technology. In FIGS. 8 to 10, only the image processing apparatus 100 and a part of its periphery are shown in a rectangle.
  • FIG. 8 shows a relationship between a frame 401 that holds the display unit (L) 182 of the image processing apparatus 100 and an arrow (display target object) 402 displayed on the display unit (L) 182.
  • an arrow (display target object) 402 can be displayed on the display unit (L) 182 of the image processing apparatus 100.
  • FIG. 9 shows a display example when the image processing apparatus 100 is tilted.
  • FIG. 9 a shows a display example when the arrow (display target object) 402 is corrected according to the inclination of the image processing apparatus 100.
  • FIG. 9B shows a display example when correction according to the inclination of the image processing apparatus 100 is not performed.
  • In the case of b in FIG. 9 (when the correction is not performed), the arrow (display target object) 402 is tilted in accordance with the tilt of the image processing apparatus 100.
  • In the case of a in FIG. 9, the arrow (display target object) 402 is not tilted because the arrow (display target object) 402 is corrected in accordance with the tilt of the image processing apparatus 100.
  • FIG. 10 shows a display example when the image processing apparatus 100 moves horizontally.
  • FIG. 10 a shows a display example when the arrow (display target object) 402 is corrected in accordance with the horizontal movement of the image processing apparatus 100.
  • FIG. 10B shows a display example when the correction according to the horizontal movement of the image processing apparatus 100 is not performed.
  • In the case of b in FIG. 10 (when the correction is not performed), the arrow (display target object) 402 moves in accordance with the horizontal movement of the image processing apparatus 100.
  • In the case of a in FIG. 10, the arrow (display target object) 402 does not move because the arrow (display target object) 402 is corrected in accordance with the horizontal movement of the image processing apparatus 100.
  • FIG. 11 is a flowchart illustrating an example of a processing procedure of a stabilizer process performed by the image processing apparatus 100 according to the first embodiment of the present technology.
  • the visual field information acquisition unit 120 extracts visual field information using images generated by the outward imaging unit (R) 101 and the outward imaging unit (L) 102 (step S801). For example, the visual field information acquisition unit 120 acquires an external singular point and an acceleration sensor value as visual field information (step S801).
  • In this example, the view information is acquired using the images generated by the outward imaging unit (R) 101 and the outward imaging unit (L) 102, but the view information may be acquired by other methods.
  • an acceleration sensor value may be acquired from the posture information acquisition unit 195 (for example, an acceleration sensor). Note that step S801 is an example of a view information acquisition procedure described in the claims.
  • the space model creation unit 130 calculates the absolute coordinates of the external object based on the view information extracted by the view information acquisition unit 120 (step S802). Further, the space model creation unit 130 determines the display coordinates of the display target object X on the absolute coordinates based on the view information extracted by the view information acquisition unit 120 and the display target object information stored in the DB 140. Confirm (step S802).
  • the absolute coordinates of the external object are calculated by multi-SLAM using a plurality of outward cameras, and the display coordinates of the display target object X on the absolute coordinates are determined.
  • the eyeball information acquisition unit 150 extracts eyeball information using images generated by the inward imaging unit (R) 103 and the inward imaging unit (L) 104 (step S803).
  • the eyeball information acquisition unit 150 obtains the absolute coordinates of the iris center point in the eyeball and the line-of-sight vector of the eyeball as eyeball information (step S803).
  • Step S803 is an example of an eyeball information acquisition procedure described in the claims.
  • the display control unit 160 determines the display position, display size, and display direction of the display target object X at the virtual display screen positions 211 and 212 based on each acquired information (step S804). For example, the display control unit 160 displays the display target at the virtual display screen positions 211 and 212 based on the absolute coordinates of the iris center point, the display coordinates of the display target object X, and the absolute coordinates of the virtual display screen positions 211 and 212. The display position, display size, and display direction of the object X are determined.
  • Subsequently, the display control unit 160 performs correction processing on the display target object X based on the determined contents (the display position, display size, and display direction of the display target object X at the virtual display screen positions 211 and 212) (step S805).
  • Steps S804 and S805 are an example of a control procedure described in the claims.
  • Subsequently, it is determined whether or not to end the display of the display target object (step S806).
  • When the display is to be ended (step S806), the stabilizer processing operation ends.
  • When the display is not to be ended (step S806), the process returns to step S801.
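  • The loop of steps S801 to S806 can be summarized as the following skeleton; the callable names, their return values, and the frame period are assumptions, since the patent only names the steps and the information they use.

```python
import time

def stabilizer_loop(acquire_view_info, acquire_eye_info, compute_layout,
                    apply_correction, display_is_finished, period_s=1 / 60):
    """Skeleton of the stabilizer procedure (steps S801-S806); the actual work
    of each step is injected as a callable."""
    while True:
        view_info = acquire_view_info()               # S801: external feature points, sensor values
        eye_info = acquire_eye_info()                 # S803: iris centre coordinates, gaze vector
        layout = compute_layout(view_info, eye_info)  # S802/S804: absolute coords, position, size, direction
        apply_correction(layout)                      # S805: redraw the display target object X
        if display_is_finished():                     # S806: end, or loop back to S801
            break
        time.sleep(period_s)

# Minimal dry run with stub callables (stops after a few frames).
frames = iter(range(3))
stabilizer_loop(
    acquire_view_info=lambda: {"feature_points": []},
    acquire_eye_info=lambda: {"iris_centre": (0.0, 0.0), "gaze": (0.0, 0.0, 1.0)},
    compute_layout=lambda v, e: {"position": (0, 0), "size": 1.0, "angle": 0.0},
    apply_correction=lambda layout: None,
    display_is_finished=lambda: next(frames, None) is None,
    period_s=0,
)
```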
  • the display control unit 160 can display the display target object so as to overlap the image included in the field of view of the person.
  • the display control unit 160 can perform control to change the display mode of the display target object in the image included in the person's view based on the view information and the eyeball information.
  • For example, the display control unit 160 can change the display mode by changing at least one of the display position, the display angle, and the display size of the display target object on the display surfaces of the display unit (R) 181 and the display unit (L) 182.
  • the display control unit 160 performs control to change the display mode of the display target object based on position information regarding the positions of the display unit (R) 181 and the display unit (L) 182, view information, and eyeball information. It can be carried out. Further, for example, the display control unit 160 displays the display target object based on the position information regarding the display position (for example, virtual display screen position) where the display target object is virtually displayed, the visual field information, and the eyeball information. Control to change the aspect can be performed. For example, the display mode of the display target object can be changed based on the relative relationship between the positions specified by these pieces of information.
  • <Second Embodiment> In the first embodiment of the present technology, the example in which the display mode of the display target object is corrected based on the view information and the eyeball information has been described. In the second embodiment of the present technology, an example in which the sharpness of the display target object is controlled based on the view information and the eyeball information will be described.
  • The configuration of the image processing apparatus according to the second embodiment of the present technology is substantially the same as that of the image processing apparatus 100 shown in FIGS. 1 and 2. For this reason, parts common to the first embodiment of the present technology are denoted by the same reference numerals, and part of the description thereof is omitted.
  • FIG. 12 is a diagram schematically illustrating an example of a blur process performed by the image processing apparatus 100 according to the second embodiment of the present technology.
  • FIG. 12 shows an example in which the display target object X is blurred when the eyes of the person wearing the image processing apparatus 100 are not focused on the display target object X (an example of blur processing based on line-of-sight detection). Specifically, whether the person wearing the image processing apparatus 100 is looking far away or looking nearby is detected from the person's line of sight, and when the display target object X is not in focus, blur processing corresponding to the depth is applied to the display target object X (an example of blur processing based on line-of-sight detection).
  • Gaussian Blur (Gaussian blurring) is an example of image processing for performing blur processing.
  • a person with a visual acuity of 1.0 can recognize a 15 cm Landolt ring ("C" mark) 10 m ahead.
  • the diameter of the Gaussian blur required for visual acuity 1.0 is x / 333.
  • In the following, a comparatively simple calculation example is shown, assuming a focal length of 50 mm and an F value of 1.0.
  • When the distance X1 to the display target object X is A1 or less, non-adjustable processing is performed.
  • In this non-adjustable processing, for example, tele blur processing is performed regardless of the line-of-sight detection result. In this case, the blur processing is made stronger as the distance approaches 0.
  • When A1 ≤ X1 ≤ A2 (for example, A2 is 20 to 50 cm), macro processing is performed.
  • In this macro processing, the following blur processing based on line-of-sight detection is performed as an additional process to the processing described above.
  • (1) When the focal length matches the display target object X, the blur process is not performed. However, the blur process corresponding to the above-described visual acuity is performed.
  • (2) When the focal length is shorter than the display target object X, blur processing corresponding to an effective diameter of 50 to 85 is performed. In other words, the blur process is roughly performed.
  • (3) When the focal length is farther than the display target object X, blur processing corresponding to an effective diameter of 28 to 50 is performed. That is, the blur processing is applied only slightly.
  • When A2 < X1, standard processing is performed. In this standard processing, for example, blur processing corresponding to an effective diameter of 50 is performed as an additional process to the processing described above.
  • In one-eye blur processing (when the display is viewed with only one eye), the macro processing and the standard processing described above are performed with blur processing corresponding to an effective diameter of 35.
  • the distance to the display target object can be correctly recognized. For this reason, it is possible to correctly recognize the size of the display target object.
  • FIG. 13 is a diagram illustrating a display example when the blur processing is performed by the image processing apparatus 100 according to the second embodiment of the present technology.
  • FIG. 14 is a diagram illustrating a display example (comparative example) before the blur processing is performed by the image processing apparatus 100 according to the second embodiment of the present technology.
  • FIGS. 13 and 14 show examples (images 600 and 610) of images (images entering the human eye) displayed on either the display unit (R) 181 or the display unit (L) 182. The imaging range corresponding to the images 600 and 610 includes an object (cone) 601 arranged at a position relatively close to the person (front side position) and an object (cylinder) 602 arranged at a position relatively far from the person (back side position). Further, it is assumed that the line of sight of the person is focused on the object (cone) 601, while the object (cylinder) 602 appears out of focus.
  • the display target object (sphere) is arranged and displayed at substantially the same position (back position) as the object (cylinder) 602.
  • In FIG. 14, the display target object (sphere) 611 is displayed without performing the blur processing.
  • In this case, although the object (cylinder) 602 appears out of focus, the display target object (sphere) 611 appears to be in focus.
  • In this way, when one object (the object (cylinder) 602) arranged at substantially the same position (back side position) appears blurred while the other object (the display target object (sphere) 611) appears in focus, the person viewing them feels a sense of incongruity.
  • On the other hand, in FIG. 13, it is assumed that the above-described blur processing is performed to display the display target object (sphere) 603.
  • In this case, both the object (cylinder) 602 and the display target object (sphere) 603 appear out of focus.
  • In this way, since the two objects (the object (cylinder) 602 and the display target object (sphere) 603) arranged at substantially the same position (back side position) appear blurred in the same way, no sense of incongruity is given to the person viewing them.
  • FIG. 15 is a flowchart illustrating an example of a processing procedure of blur processing by the image processing apparatus 100 according to the second embodiment of the present technology.
  • First, the display control unit 160 determines whether or not the person wearing the image processing apparatus 100 is viewing the display unit (R) 181 and the display unit (L) 182 with both eyes (step S811).
  • When the person is not viewing with both eyes (step S811), the display control unit 160 determines whether or not the person is viewing the display unit (R) 181 or the display unit (L) 182 with one eye (step S812).
  • When the person is viewing with neither both eyes nor one eye (step S812), blur processing or the like is unnecessary, and the blur processing operation is therefore ended.
  • When the person is viewing with one eye (step S812), the display control unit 160 performs the one-eye blur process (step S813).
  • In this one-eye blur process, for example, the above-described macro processing and standard processing are performed with blur processing corresponding to an effective diameter of 35.
  • Next, the display control unit 160 determines whether or not the distance X1 to the display target object X is equal to or less than A1 (step S814).
  • When the distance X1 is equal to or less than A1 (step S814), the display control unit 160 performs adjustment-impossible processing (step S815). In this adjustment-impossible processing, for example, tele blur processing is performed regardless of the line-of-sight detection result.
  • When the distance X1 is greater than A1 (step S814), the display control unit 160 determines whether or not the distance X1 to the display target object X is equal to or less than A2 (step S816).
  • When the distance X1 is equal to or less than A2 (step S816), the display control unit 160 performs standard processing (step S817). In this standard processing, for example, a blur process corresponding to an effective diameter of 50 is performed as an additional process to the adjustment-impossible processing described above.
  • When the distance X1 is greater than A2 (step S816), the display control unit 160 determines whether or not the focal distance of the eyes matches the display target object X (step S818). If the focal distance matches the display target object X (step S818), the display control unit 160 performs a blur process according to the visual acuity (step S819).
  • If the focal distance does not match (step S818), the display control unit 160 determines whether the focal distance is closer than the display target object X (step S820). When the focal distance is closer than the display target object X (step S820), the display control unit 160 performs a blur process corresponding to an effective diameter of 50 to 85 (step S822); that is, a relatively strong blur is applied.
  • When the focal distance is not closer than the display target object X (that is, when it is farther than the display target object X) (step S820), the display control unit 160 performs a blur process corresponding to an effective diameter of 28 to 50 (step S821); that is, a relatively weak blur is applied. A sketch of this decision flow is given below.
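The branching just described can be summarized as follows. This is a minimal sketch in Python that assumes the one-eye case, the thresholds A1 and A2, the focal distance, and the effective-diameter figures are made available to the display control logic as parameters; the function name and parameters are illustrative rather than part of the patent text, and the exact continuation after the one-eye branch is a reading of the flowchart.

```python
from typing import Optional

def select_blur(both_eyes: bool, one_eye: bool,
                x1: float, focal_distance: float,
                a1: float, a2: float) -> Optional[str]:
    """Return a label describing the blur to apply to the display target object X."""
    if not both_eyes and not one_eye:
        return None                                        # S812 "no": displays not viewed
    if not both_eyes and one_eye:
        # S813: one-eye viewing; the excerpt states that the macro and standard
        # processing are then carried out with an effective diameter of 35.
        return "one-eye blur (effective diameter 35)"

    if x1 <= a1:                                            # S814
        return "adjustment impossible: tele blur regardless of gaze"   # S815
    if x1 <= a2:                                            # S816
        return "standard blur (effective diameter 50)"      # S817
    if focal_distance == x1:                                # S818: eyes focused on the object
        return "blur according to visual acuity"            # S819
    if focal_distance < x1:                                 # S820: focused nearer than X
        return "strong blur (effective diameter 50-85)"     # S822
    return "weak blur (effective diameter 28-50)"           # S821
```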
  • the display control unit 160 can control the sharpness of the display target object based on the visual field information and the eyeball information. For example, the display control unit 160 can control the sharpness of the display target object based on a three-dimensional position (for example, a distance to the subject) in the field of view where the display target object is to be displayed. For example, the display control unit 160 can control the sharpness of the display target object by performing a blur process.
  • Next, displaying the display target object on coordinates fixed with respect to an external object will be described.
  • For example, a method (first method) is conceivable in which an acceleration sensor is used to move the display position of the display target object in the direction opposite to the direction of motion of the image processing apparatus.
  • A method (second method) is also conceivable in which the absolute position of the display unit is calculated using SLAM and the coordinate axes of the display target object are calculated for display.
  • In the first method, however, the display target object may appear to vibrate even when the face is moved only slightly.
  • In addition, because the accuracy of the acceleration sensor is low,
  • the display target object cannot always be placed at an appropriate display position.
  • Furthermore, the accuracy with respect to the external object extends only to the display unit,
  • so the display target object may still appear to vibrate when the face is moved only slightly.
  • In addition, when the displayed image is always in focus and there is no external object to serve as a reference (for example, in darkness), there is a risk of the illusion that the display target object is displayed directly in front of the eyes (for example, at the position of the virtual display unit).
  • SLAM, which can accurately determine the position of the external object, is an effective means, but its accuracy also extends only to the display unit. In this case as well, the display target object may appear to vibrate when the face is moved only slightly.
  • For example, the display unit may vibrate due to vibration while walking.
  • In that case, the position of the display unit changes with respect to the position of the visual recognition unit (for example, the left and right eyeball positions).
  • However, the first method and the second method do not take into consideration that the relationship between the position of the visual recognition unit and the display unit changes. It is therefore important to display the display target object accurately on coordinate axes fixed with respect to the external object while taking into account the change in the relationship between the position of the visual recognition unit and the display unit, as sketched below.
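As a rough illustration of why the eye-to-display relationship matters, the sketch below computes where a world-anchored object has to be drawn: the drawing position is the intersection of the eye-to-object ray with the virtual screen plane, so it moves whenever the eyeball position changes relative to the display. This is only a simplified geometric model (display pose from an external estimator such as SLAM, screen treated as a plane at a fixed z in display coordinates, NumPy used for the arithmetic), not the conversion procedure of FIG. 6.

```python
import numpy as np

def world_to_display(p_world, r_wd, t_wd):
    """Express a world-coordinate point in display (glasses) coordinates.
    r_wd, t_wd: rotation and translation of the display in the world frame."""
    return r_wd.T @ (np.asarray(p_world, float) - np.asarray(t_wd, float))

def on_screen_position(p_obj_d, p_eye_d, screen_z):
    """Intersect the eye-to-object ray with the virtual screen plane z = screen_z.
    All points are in display coordinates; returns the 2-D drawing position."""
    p_obj_d = np.asarray(p_obj_d, float)
    p_eye_d = np.asarray(p_eye_d, float)
    direction = p_obj_d - p_eye_d
    s = (screen_z - p_eye_d[2]) / direction[2]   # assumes the object lies in front of the eye
    return (p_eye_d + s * direction)[:2]

# The same world-anchored object projects to different screen positions
# when only the eye position relative to the display changes:
obj_d = np.array([0.00, 0.0, 2.0])               # object 2 m ahead, in display coordinates
print(on_screen_position(obj_d, [0.03, 0.0, -0.02], screen_z=0.05))
print(on_screen_position(obj_d, [0.00, 0.0, -0.02], screen_z=0.05))
```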
  • As described above, in the embodiments of the present technology, the eyeball is detected using a plurality of outward cameras and a plurality of inward cameras, and the detection result is used to correct the display image appropriately. Thereby, blurring of the display image can be reduced. In addition, the position of the virtual screen can be appropriately controlled.
  • the display target object can be displayed in a state of being blended into the outside world without a sense of incongruity.
  • visibility can be improved, 3D sickness can be prevented, and fatigue during long-time viewing can be reduced.
  • the sharpness of the display target object can be appropriately controlled based on the target depth of the display target object.
  • the visibility can be improved.
  • The processing procedures described in the above embodiments may be regarded as a method having this series of procedures, or may be regarded as a program for causing a computer to execute this series of procedures or as a recording medium storing the program.
  • As the recording medium, for example, a CD (Compact Disc), an MD (MiniDisc), a DVD (Digital Versatile Disc), a memory card, a Blu-ray (registered trademark) Disc, or the like can be used.
  • a field-of-view information acquisition unit that acquires field-of-view information about an image included in the field of view of a person wearing the image processing apparatus;
  • An eyeball information acquisition unit for acquiring eyeball information related to the eyeball of the person;
  • and a display control unit that performs control for displaying a display target object superimposed on the image included in the field of view and for changing the display mode of the display target object in the image included in the field of view based on the field-of-view information and the eyeball information.
  • An image processing apparatus comprising the above units.
  • The eyeball information acquisition unit acquires the eyeball information by detecting the position of the eyeball of the person and the line-of-sight direction of the eyeball based on images generated by a plurality of imaging units provided in the eyeball direction of the person.
  • (3) The image processing device according to (2), further including a display unit for providing the image included in the field of view of the person to the eyes of the person, wherein the plurality of imaging units are provided at an end of a display surface of the display unit that displays the display target object.
  • The visual field information acquisition unit acquires, as the visual field information, information about the position of an object included in the visual field by extracting feature points of the object included in the visual field based on images generated by a plurality of imaging units provided in the line-of-sight direction of the person and by detecting the depth of the object included in the visual field.
  • The display unit may be a refractive display unit that displays the display target object on the display surface based on a refraction image output from an image output unit disposed at an edge of the display surface, the display surface transmitting the image included in the field of view of the person.
  • In this case, the display control unit performs control to change the display mode based on position information regarding the display position at which the display target object is virtually displayed, the visual field information, and the eyeball information.
  • The display control unit changes the display mode by changing at least one of the display position, the display angle, and the display size of the display target object on the display surface of the display unit.
  • The image processing apparatus according to any one of (1) to (6).
  • The display control unit controls the sharpness of the display target object based on the visual field information and the eyeball information.
  • the display control unit controls the sharpness of the display target object based on a three-dimensional position in the field of view where the display target object is to be displayed.
  • The image processing apparatus according to any one of (1) to (10), further including a posture information acquisition unit that acquires posture information related to a change in the posture of the image processing apparatus, wherein the display control unit performs control to change the display mode based on the visual field information, the eyeball information, and the posture information regarding the change in posture.
  • (12) An image processing method including: a view information acquisition procedure for acquiring view information regarding an image included in the view of a person wearing the image processing apparatus; an eyeball information acquisition procedure for acquiring eyeball information relating to the eyeball of the person; and a control procedure for displaying a display target object superimposed on the image included in the field of view and for changing the display mode of the display target object in the image included in the field of view based on the view information and the eyeball information.
  • A program for causing a computer to execute: a view information acquisition procedure for acquiring view information regarding an image included in the view of a person wearing the image processing apparatus; an eyeball information acquisition procedure for acquiring eyeball information relating to the eyeball of the person; and a control procedure for displaying a display target object superimposed on the image included in the field of view and for changing the display mode of the display target object in the image included in the field of view based on the view information and the eyeball information.
  • 100 Image processing apparatus, 101 Outward imaging unit (R), 102 Outward imaging unit (L), 103 Inward imaging unit (R), 104 Inward imaging unit (L), 111 to 114 Image processing units, 120 View information acquisition unit, 130 Spatial model creation unit, 140 DB (database), 150 Eyeball information acquisition unit, 160 Display control unit, 170 Display processing unit, 181 Display unit (R), 182 Display unit (L), 183 to 186 Infrared light emitting devices, 190 Bridge, 195 Posture information acquisition unit

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Lenses (AREA)
  • Processing Or Creating Images (AREA)
  • Position Input By Displaying (AREA)

Abstract

The present invention improves visibility. An image processing device is provided with a visual field information acquisition unit, an eyeball information acquisition unit, and a display control unit. The visual field information acquisition unit acquires visual field information pertaining to an image included in the visual field of a person on whom the image processing device is worn. The eyeball information acquisition unit acquires eyeball information pertaining to the eyeball of the person. The display control unit performs control for displaying an object to be displayed superimposed on the image included in the visual field of the person, and also performs control for changing the display mode of the object to be displayed in that image on the basis of the visual field information and the eyeball information.

Description

Image processing apparatus, image processing method, and program

The present technology relates to an image processing apparatus. More specifically, it relates to an image processing apparatus and an image processing method that handle an image to be displayed, and to a program that causes a computer to execute the method.

Conventionally, there are image processing apparatuses that provide an image to a user while being worn on a part of the user's body. For example, there are glasses-type wearable devices that can display a display target object superimposed on an external object.

Techniques have also been proposed for displaying a display target object on coordinates fixed with respect to an external object. For example, a virtual image display device has been proposed that detects a change in the posture of the wearer's head and, based on the detection result, adjusts the screen position of the virtual image so that it moves in the direction opposite to the direction in which the wearer's posture changes (see, for example, Patent Document 1).

Patent Document 1: Japanese Patent Application Laid-Open No. 2013-225042

With the above-described conventional technology, the virtual image can be adjusted according to a change in the posture of the wearer of the virtual image display device by shifting it in the direction opposite to the direction in which the wearer's posture changes.

However, the relationship between the eyeballs of the wearer of the virtual image display device and the display unit of the virtual image display device may also change. In such a case, it is preferable to improve visibility while also taking into account the relationship between the wearer's eyeballs and the display unit of the virtual image display device.

The present technology was created in view of such a situation, and aims to improve visibility.
The present technology has been made to solve the above-described problems. Its first aspect is an image processing apparatus including: a view information acquisition unit that acquires view information related to an image included in the field of view of a person wearing the image processing apparatus; an eyeball information acquisition unit that acquires eyeball information related to the eyeball of the person; and a display control unit that performs control to display a display target object superimposed on the image included in the field of view and to change the display mode of the display target object in the image included in the field of view based on the view information and the eyeball information. The first aspect also covers an image processing method for the apparatus and a program that causes a computer to execute the method. This brings about the effect of changing the display mode of the display target object in the image included in the field of view based on the view information and the eyeball information.

In the first aspect, the eyeball information acquisition unit may acquire the eyeball information by detecting the position of the eyeball of the person and the line-of-sight direction of the eyeball based on images generated by a plurality of imaging units provided in the eyeball direction of the person. This brings about the effect of acquiring the eyeball information from images generated by the plurality of imaging units provided in the eyeball direction of the person.

In the first aspect, the image processing apparatus may further include a display unit that provides the image included in the field of view of the person to the eyes of the person, and the plurality of imaging units may be provided at the ends of the display surface of the display unit that displays the display target object. This brings about the effect of using imaging units provided at the ends of that display surface.

In the first aspect, the view information acquisition unit may acquire, as the view information, information about the position of an object included in the field of view by performing feature point extraction processing for extracting feature points of the object and depth detection processing for detecting the depth of the object, based on images generated by a plurality of imaging units provided in the line-of-sight direction of the person. This brings about the effect of acquiring, as the view information, positional information about objects in the field of view from images generated by the imaging units provided in the line-of-sight direction of the person.

In the first aspect, the image processing apparatus may further include a display unit that provides the image included in the field of view of the person to the eyes of the person, and the display control unit may perform control to change the display mode based on position information regarding the position of the display unit, the view information, and the eyeball information. This brings about the effect of changing the display mode based on the position information regarding the position of the display unit, the view information, and the eyeball information.

In the first aspect, the display unit may be a refractive display unit that displays the display target object on a display surface that transmits the image included in the field of view of the person, based on a refraction image output from an image output unit arranged at the edge of the display surface, and the display control unit may perform control to change the display mode based on position information regarding the display position at which the display target object is virtually displayed, the view information, and the eyeball information. This brings about the effect of changing the display mode based on the position information regarding the virtual display position of the display target object, the view information, and the eyeball information.

In the first aspect, the display control unit may change the display mode by changing at least one of the display position, the display angle, and the display size of the display target object on the display surface of the display unit. This brings about the effect of changing the display mode by changing at least one of these quantities on the display surface.

In the first aspect, the display control unit may control the sharpness of the display target object based on the view information and the eyeball information. This brings about the effect of controlling the sharpness of the display target object based on the view information and the eyeball information.

In the first aspect, the display control unit may control the sharpness of the display target object based on a three-dimensional position in the field of view at which the display target object is to be displayed. This brings about the effect of controlling the sharpness based on that three-dimensional position.

In the first aspect, the display control unit may control the sharpness by performing blur processing on the display target object. This brings about the effect of controlling the sharpness by blur processing.

In the first aspect, the image processing apparatus may further include a posture information acquisition unit that acquires posture information related to a change in the posture of the image processing apparatus, and the display control unit may perform control to change the display mode based on the view information, the eyeball information, and the posture information related to the change in posture. This brings about the effect of changing the display mode based on the view information, the eyeball information, and the posture information.

According to the present technology, an excellent effect of improving visibility can be achieved. Note that the effects described here are not necessarily limited, and may be any of the effects described in the present disclosure.
FIG. 1 is a top view showing an example of the external configuration of the image processing apparatus 100 according to the first embodiment of the present technology. FIG. 2 is a block diagram showing an example of the functional configuration of the image processing apparatus 100 according to the first embodiment. FIGS. 3, 4, and 5 are diagrams schematically showing display examples of a display target object by the image processing apparatus 100 according to the first embodiment. FIG. 6 is a diagram showing an example of the procedure for converting the local coordinate system of a display target object into the display coordinate system of the glasses in the image processing apparatus 100 according to the first embodiment. FIG. 7 is a diagram schematically showing a display example of a display target object by the image processing apparatus 100 according to the first embodiment. FIGS. 8, 9, and 10 are diagrams showing display examples of a display target object by the image processing apparatus 100 according to the first embodiment. FIG. 11 is a flowchart showing an example of the processing procedure of stabilizer processing by the image processing apparatus 100 according to the first embodiment. FIG. 12 is a diagram schematically showing an example of blur processing by the image processing apparatus 100 according to the second embodiment of the present technology. FIG. 13 is a diagram showing a display example when blur processing is performed by the image processing apparatus 100 according to the second embodiment. FIG. 14 is a diagram showing a display example (comparative example) before blur processing is performed by the image processing apparatus 100 according to the second embodiment. FIG. 15 is a flowchart showing an example of the processing procedure of blur processing by the image processing apparatus 100 according to the second embodiment.
Hereinafter, modes for carrying out the present technology (hereinafter referred to as embodiments) will be described. The description will be given in the following order.
1. First embodiment (example of correcting the display mode of a display target object based on view information and eyeball information)
2. Second embodiment (example of controlling the sharpness of a display target object based on view information and eyeball information)
<1. First Embodiment>
[External configuration example of the image processing apparatus]
FIG. 1 is a top view showing an example of the external configuration of the image processing apparatus 100 according to the first embodiment of the present technology. FIG. 1 shows, in simplified form, the relationship with the eyeball (right eye) 11 and the eyeball (left eye) 12 when the image processing apparatus 100 is worn on a person's face.
The image processing apparatus 100 includes an outward imaging unit (R) 101, an outward imaging unit (L) 102, an inward imaging unit (R) 103, an inward imaging unit (L) 104, a display unit (R) 181, a display unit (L) 182, infrared light emitting devices 183 to 186, and a bridge 190.

The image processing apparatus 100 can be, for example, a transmissive glasses-type electronic device (for example, a glasses-type wearable device) using hologram light guide plate optical technology. The image processing apparatus 100 can also include various sensing functions such as an image sensor, an acceleration sensor, a gyroscope, an electronic compass, an illuminance sensor, and a microphone. The image sensor is, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor or a CCD (Charge Coupled Device) image sensor. Furthermore, the image processing apparatus 100 can provide information according to the user's situation by making use of position information obtained by GPS (Global Positioning System) or the like.

The infrared light emitting devices 183 to 186 emit infrared light. For example, the infrared light emitting devices 183 and 184 can be provided at both ends of the display surface of the display unit (R) 181, and the infrared light emitting devices 185 and 186 at both ends of the display surface of the display unit (L) 182. However, each infrared light emitting device may be arranged at another position where the imaging operation can be performed appropriately, and three or more infrared light emitting devices may be provided on each of the left and right sides.

The outward imaging unit (R) 101 and the outward imaging unit (L) 102 are imaging units that image subjects included in the field of view of the person wearing the image processing apparatus 100. That is, they are a plurality of imaging units provided in the line-of-sight direction of the person.

The inward imaging unit (R) 103, the inward imaging unit (L) 104, and the infrared light emitting devices 183 to 186 constitute imaging units that image the eyes of the person wearing the image processing apparatus 100 (for example, the eyeballs and their surroundings). That is, the inward imaging unit (R) 103 and the inward imaging unit (L) 104 are a plurality of imaging units provided in the direction of the person's eyeballs. The inward imaging unit (R) 103 receives the infrared light emitted by the infrared light emitting devices 183 and 184 and reflected by the eyeball, so that the position of the eyeball (right eye) 11, the direction of the line of sight, and the like can be calculated. Similarly, the inward imaging unit (L) 104 receives the infrared light emitted by the infrared light emitting devices 185 and 186 and reflected by the eyeball, so that the position of the eyeball (left eye) 12, the direction of the line of sight, and the like can be calculated. For example, the inward imaging unit (R) 103 can be provided at an end of the display surface of the display unit (R) 181, and the inward imaging unit (L) 104 at an end of the display surface of the display unit (L) 182.

The display unit (R) 181 and the display unit (L) 182 are glass display units for providing various images to the person wearing the image processing apparatus 100. For example, when hologram optical technology is adopted, highly transparent hologram optical elements can be used as the display unit (R) 181 and the display unit (L) 182 without using a half mirror that blocks the field of view.

Besides a hologram element type display unit, a half-mirror type display unit, a video see-through type display unit (for example, a display unit that displays the outside world using an external camera rather than optical see-through), or a retinal projection type display unit may also be used as the display unit (R) 181 and the display unit (L) 182.

Here, the hologram optical elements (the display unit (R) 181 and the display unit (L) 182) are optical elements for displaying the image light emitted from an optical engine by means of hologram light guide plate technology. Hologram light guide plate technology is, for example, a technology in which hologram optical elements are incorporated at both ends of a glass plate so that the image light emitted from the optical engine propagates through a very thin glass plate and reaches the eyes.

In this way, the display unit (R) 181 and the display unit (L) 182 have display surfaces that transmit the image included in the person's field of view, and provide that image to the person's eyes. When the image processing apparatus 100 is a pair of transmissive glasses, an image is formed on the glass portion (display surface) of the display unit (R) 181 and the display unit (L) 182 by the refracted video output from the optical engine arranged at the edge of the glass portion. That is, when the display unit (R) 181 and the display unit (L) 182 are refractive display units, the display target object is displayed on the display surface based on the refraction image output from the image output unit arranged at the edge of the display surface.

The bridge 190 is a member that connects the display unit (R) 181 and the display unit (L) 182, and corresponds to the bridge of a pair of glasses, for example.

In FIG. 1, for ease of explanation, the outward imaging unit (R) 101, the outward imaging unit (L) 102, the inward imaging unit (R) 103, and the inward imaging unit (L) 104 are shown arranged outside the display unit (R) 181 and the display unit (L) 182. However, each of these imaging units may be arranged at another position where the imaging range (including the imaging direction (optical axis direction)) can be set appropriately.
[Configuration example of the image processing apparatus]
FIG. 2 is a block diagram showing an example of the functional configuration of the image processing apparatus 100 according to the first embodiment of the present technology.
The image processing apparatus 100 includes an outward imaging unit (R) 101, an outward imaging unit (L) 102, an inward imaging unit (R) 103, an inward imaging unit (L) 104, image processing units 111 to 114, a view information acquisition unit 120, a space model creation unit 130, a DB (database) 140, an eyeball information acquisition unit 150, a display control unit 160, a display processing unit 170, a display unit (R) 181, a display unit (L) 182, and a posture information acquisition unit 195.

The outward imaging unit (R) 101 and the outward imaging unit (L) 102 are a plurality of cameras facing the field-of-view side that are used for SLAM (Simultaneous Localization and Mapping). Although FIG. 2 shows an example with two outward imaging units, three or more outward imaging units may be provided.

The inward imaging unit (R) 103 and the inward imaging unit (L) 104 are a plurality of inward cameras for generating images used to detect the position, movement, line of sight, and so on of the eyeballs of the person wearing the image processing apparatus 100. Although FIG. 2 shows an example with two inward imaging units, three or more inward imaging units may be provided. Although not shown in FIG. 2, devices for capturing the movement of the eyeballs (for example, infrared light emitting devices, e.g. four per imaging unit) are assumed to be included in the system of the inward imaging unit (R) 103 and the inward imaging unit (L) 104.

The image processing units 111 and 112 perform various kinds of image processing (for example, post-processing) on the image data generated by the outward imaging unit (R) 101 and the outward imaging unit (L) 102, respectively, and output the processed image data to the view information acquisition unit 120. Similarly, the image processing units 113 and 114 perform various kinds of image processing (for example, post-processing) on the image data generated by the inward imaging unit (R) 103 and the inward imaging unit (L) 104, respectively, and output the processed image data to the eyeball information acquisition unit 150.
The view information acquisition unit 120 uses the image data output from the image processing units 111 and 112 to extract view information (information about objects included in the imaging range of the outward imaging unit (R) 101 and the outward imaging unit (L) 102). The view information acquisition unit 120 then outputs the extracted view information (for example, object feature points and object depth) to the space model creation unit 130.

For example, the view information acquisition unit 120 extracts feature points of objects by surrounding feature point extraction processing. A feature point of an object can be, for example, a vertex of the object.

For example, the view information acquisition unit 120 can extract feature points from the entire image (frame) corresponding to the image data output from the image processing units 111 and 112. For example, for the first image among these images (frames), feature points are extracted from the entire image, and for each subsequent image (frame), feature points are extracted from the newly imaged region compared with the immediately preceding image (frame). As feature points, for example, points with strong edge gradients in the vertical and horizontal directions can be extracted. Such feature points are robust for optical flow calculation and can be obtained using edge detection. Here, an example is described in which feature points are extracted from the entire image only for the first image; however, depending on the processing capability and the like, feature points may be extracted from the entire image for every image.
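As one concrete, hypothetical way to obtain such feature points, the Shi-Tomasi corner detector in OpenCV selects points with strong gradients that track well under optical flow, and an optional mask can restrict extraction to the newly imaged region, mirroring the frame-by-frame strategy described above. This is offered as an illustration, not as the specific implementation of the view information acquisition unit 120.

```python
import cv2
import numpy as np

def extract_feature_points(gray_frame, new_region_mask=None, max_corners=500):
    """Shi-Tomasi corners (points with strong vertical/horizontal gradients),
    optionally restricted by a mask covering only the newly imaged region."""
    pts = cv2.goodFeaturesToTrack(gray_frame,
                                  maxCorners=max_corners,
                                  qualityLevel=0.01,
                                  minDistance=7,
                                  mask=new_region_mask)
    # goodFeaturesToTrack returns None when no corner passes the quality test.
    return np.empty((0, 1, 2), np.float32) if pts is None else pts
```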
Also, for example, the view information acquisition unit 120 detects the depth of objects by depth detection processing. The depth of an object is, for example, the distance in the depth direction (line-of-sight direction).

For example, the view information acquisition unit 120 calculates the subject distance for an image based on the images generated by the outward imaging unit (R) 101 and the outward imaging unit (L) 102 and on information available when the images were generated (for example, the lens position and the in-focus position). For example, a depth map can be generated, and the subject distance of each region can be obtained based on the depth map. Here, the depth map is a map composed of data representing subject distances. As a method of generating the depth map, for example, a TOF (Time of Flight) method or blur amount analysis (Depth from Defocus) can be used. The TOF method, for example, calculates the distance to the subject based on the delay time until light emitted from a light source is reflected by the object and reaches a sensor, and on the speed of light.
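For reference, the TOF relation mentioned above reduces to halving the product of the speed of light and the measured round-trip delay; the small helper below only illustrates that arithmetic.

```python
SPEED_OF_LIGHT = 299_792_458.0   # [m/s]

def tof_distance(round_trip_delay_s: float) -> float:
    """The emitted light travels to the object and back, so the subject
    distance is half of (speed of light x measured delay)."""
    return SPEED_OF_LIGHT * round_trip_delay_s / 2.0
```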
Also, for example, an image sensor that receives transmitted light passing through different parts of the exit pupil and performs phase-difference focus detection (phase difference AF (Auto Focus)) can be used as the image sensor of each imaging unit. An image sensor that performs phase difference AF outputs a phase difference detection signal together with the image signal (analog signal). Therefore, the view information acquisition unit 120 can calculate the subject distance of each region in the image corresponding to the image signal output from each image sensor, based on the phase difference detection signal output from that image sensor.

Also, for example, the view information acquisition unit 120 can use SLAM technology to estimate the position of the image processing apparatus 100 and create an environment map at the same time.

In this way, the view information acquisition unit 120 acquires view information (external information) related to the image included in the field of view of the person wearing the image processing apparatus 100. For example, the view information acquisition unit 120 performs feature point extraction processing for extracting feature points of objects included in the field of view and depth detection processing for detecting the depth of objects included in the field of view, based on the images generated by the outward imaging unit (R) 101 and the outward imaging unit (L) 102, and acquires information about the positions of the objects included in the field of view as the view information. Note that the view information acquisition unit 120 is an example of the line-of-sight information acquisition unit described in the claims.
The DB (database) 140 is a database that stores information (space model information) for displaying virtual objects (space models) on the display unit (R) 181 and the display unit (L) 182, and supplies the stored space model information to the space model creation unit 130. A space model created in the past (for example, an object specified by feature points and depth) may also be stored in the DB 140 and compared with space models created thereafter.

The space model creation unit 130 creates information about the objects to be displayed on the display unit (R) 181 and the display unit (L) 182 (space model display information) based on the view information output from the view information acquisition unit 120 and the space model information supplied from the DB 140, and supplies the created space model display information to the display control unit 160.
The eyeball information acquisition unit 150 detects eyeball information using the image data output from the image processing units 113 and 114, and outputs the detected eyeball information to the display control unit 160. Here, the eyeball information is, for example, the position of the eyeball, the line of sight, the position (for example, the iris center) and size of the iris in the eyeball, the position and size of the pupil in the eyeball, and the eye axis angle. The iris is the colored part of the eyeball, and the pupil is the part in the middle of the iris (the part commonly called the "black of the eye"). As the position of the eyeball, for example, the position of the eyeball, its three-dimensional position, and its distance from the display surfaces of the display unit (R) 181 and the display unit (L) 182 can be obtained.

For example, the eyeball information acquisition unit 150 can detect eyeball information (for example, the position of the eyeball, the position and size of the iris in the eyeball, and the position and size of the pupil in the eyeball) using image recognition techniques. Also, for example, the eyeball information acquisition unit 150 can detect the line of sight using the detected eyeball information (for example, the movements of the eyeball position, the iris position, and the pupil position).

As an eyeball detection method, for example, a detection method based on matching between a template in which the luminance distribution information of the eyeball is recorded and the actual image (see, for example, Japanese Patent Application Laid-Open No. 2004-133637), or a detection method based on the colored part of the eye or a feature amount of the eyeball included in the image data, can be used.

Other eyeball detection methods may also be used. For example, the image data output from the image processing units 113 and 114 may be binarized, and the black of the eye may be detected based on the binarized data of one screen. For example, the image data output from the image processing units 113 and 114 is binarized, and the black pixels and white pixels of each line in the horizontal direction (left-right direction) are determined. Then, a line in which, in the horizontal direction, a run of white pixels whose length falls within a predetermined range is followed by a run of black pixels within a predetermined range, which is in turn followed by a run of white pixels within a predetermined range, is extracted as a candidate line for the black-eye region. Then, in the vertical direction (up-down direction), a region in which a number of such candidate lines within a predetermined range are consecutive is extracted, and that region can be detected as the black-eye region.
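The line-scanning procedure just described can be sketched directly. The code below assumes a binarized image in which white pixels are 1 and black pixels are 0; the run-length ranges and row counts are placeholder values chosen for illustration, not values given in the text.

```python
import numpy as np

def run_lengths(row):
    """Run-length encode a 1-D array of 0/1 values as (value, length) pairs."""
    runs, current, length = [], int(row[0]), 1
    for v in row[1:]:
        if int(v) == current:
            length += 1
        else:
            runs.append((current, length))
            current, length = int(v), 1
    runs.append((current, length))
    return runs

def candidate_rows(binary, white_run, black_run):
    """Rows containing a white-black-white pattern whose run lengths fall
    inside the given (min, max) ranges."""
    rows = []
    for y in range(binary.shape[0]):
        runs = run_lengths(binary[y])
        for i in range(len(runs) - 2):
            (v1, l1), (v2, l2), (v3, l3) = runs[i], runs[i + 1], runs[i + 2]
            if (v1, v2, v3) == (1, 0, 1) \
                    and white_run[0] <= l1 <= white_run[1] \
                    and black_run[0] <= l2 <= black_run[1] \
                    and white_run[0] <= l3 <= white_run[1]:
                rows.append(y)
                break
    return rows

def black_eye_region(binary, white_run=(5, 200), black_run=(5, 80),
                     min_rows=5, max_rows=120):
    """Return (first_row, last_row) of a block of consecutive candidate rows
    whose height falls in [min_rows, max_rows], or None if no block is found."""
    rows = candidate_rows(binary, white_run, black_run)
    if not rows:
        return None
    start = prev = rows[0]
    for y in rows[1:] + [None]:                 # None acts as an end sentinel
        if y is not None and y == prev + 1:
            prev = y
            continue
        if min_rows <= (prev - start + 1) <= max_rows:
            return start, prev
        if y is not None:
            start = prev = y
    return None
```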
The black of the eye may also be detected using an eye movement measurement device. For example, the position of the black-eye portion can be specified by directing infrared light from an infrared LED (Light Emitting Diode) onto the eyeball and detecting the reflected light with a light receiver.

Also, for example, the eyeball can be detected using another infrared light emitting device. For example, when line-of-sight recognition is realized by pupil detection processing and bright spot detection processing, infrared light can be used both to avoid the influence of external light and to prevent specular reflection from the cornea.

Also, as a means of increasing the accuracy of line-of-sight detection, when obtaining the center of corneal curvature, the reflections of up to four infrared light sources (bright spots, red circles) can be detected.

Note that these eyeball information detection methods (for example, line-of-sight detection) are merely examples, and highly accurate line-of-sight detection can be realized by applying a plurality of techniques other than these.

In this way, the eyeball information acquisition unit 150 acquires eyeball information related to the person's eyeballs. For example, the eyeball information acquisition unit 150 acquires the eyeball information by detecting the position of the person's eyeballs and the line-of-sight direction of the eyeballs based on the images generated by the inward imaging unit (R) 103 and the inward imaging unit (L) 104.
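One common way to turn pupil and bright-spot (glint) detections such as those described above into a gaze estimate is a calibrated polynomial mapping from the pupil-minus-glint offset to screen coordinates. The sketch below fits that mapping by least squares; it is a generic pupil-corneal-reflection approach offered for illustration, not the method used by the eyeball information acquisition unit 150.

```python
import numpy as np

def fit_gaze_mapping(offsets, screen_points):
    """Least-squares fit of a quadratic mapping from pupil-minus-glint offset
    vectors (N x 2) to known calibration screen points (N x 2)."""
    x, y = offsets[:, 0], offsets[:, 1]
    a = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    coeffs, *_ = np.linalg.lstsq(a, screen_points, rcond=None)
    return coeffs                          # shape (6, 2)

def map_gaze(offset, coeffs):
    """Map one pupil-minus-glint offset to an estimated on-screen gaze point."""
    x, y = offset
    return np.array([1.0, x, y, x * y, x ** 2, y ** 2]) @ coeffs
```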
The display control unit 160 performs control for displaying each image (for example, the display target object) on the display unit (R) 181 and the display unit (L) 182. For example, the display control unit 160 performs display position calculation processing, image effect processing, image superimposition processing, and the like based on the space model display information output from the space model creation unit 130 and the eyeball information output from the eyeball information acquisition unit 150.

The display control unit 160 is realized by, for example, a host CPU (Central Processing Unit). Some of the units described above may also be realized by the host CPU.

The display processing unit 170 performs various kinds of image processing on the images to be displayed on the display unit (R) 181 and the display unit (L) 182 under the control of the display control unit 160. For example, the display control unit 160 performs control to change the display mode of the display target object based on the view information, the eyeball information, and posture information regarding changes in posture.

The posture information acquisition unit 195 detects changes in the posture of the image processing apparatus 100 by detecting its acceleration, movement, tilt, and so on, and outputs posture information regarding the detected changes in posture to the display control unit 160. Various sensors such as a gyro sensor and an acceleration sensor can be used as the posture information acquisition unit 195.
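As a generic illustration of how gyro and acceleration readings can be fused into posture information, a complementary filter integrates the gyro rate and corrects its drift with the accelerometer-derived tilt angle. This is a textbook technique shown only for context, not necessarily what the posture information acquisition unit 195 does; the parameter values are illustrative.

```python
def complementary_tilt(angle_prev: float, gyro_rate: float,
                       accel_angle: float, dt: float, alpha: float = 0.98) -> float:
    """One filter update: integrate the gyro rate (rad/s) over dt and blend in
    the accelerometer-derived tilt angle (rad) to suppress gyro drift."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```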
 また、画像処理装置100は、表示オブジェクトに対する2つの機能(スタビライザ、深度補正)を備える。 The image processing apparatus 100 also has two functions (stabilizer and depth correction) for the display object.
 ここで、スタビライザは、表示オブジェクトを、外部座標における所望の位置に表示する機能である。言い換えると、スタビライザは、表示オブジェクトを、外部座標の期待した位置に見えるようにする機能である。このスタビライザによる表示例を図3に示す。 Here, the stabilizer is a function for displaying the display object at a desired position in the external coordinates. In other words, the stabilizer is a function that makes a display object appear at an expected position in external coordinates. A display example by this stabilizer is shown in FIG.
 [スタビライザによる表示例]
 図3は、本技術の第1の実施の形態における画像処理装置100による表示対象オブジェクトの表示例を模式的に示す図である。
[Display example with stabilizer]
FIG. 3 is a diagram schematically illustrating a display example of the display target object by the image processing apparatus 100 according to the first embodiment of the present technology.
 図3では、画像処理装置100と、外部オブジェクトA(201)、B(202)と、表示対象オブジェクトX(203)との関係を上から見た場合の例を示す。なお、画像処理装置100の上面図については、図1に示す例と同様である。ただし、図3では、眼球(右目)11および眼球(左目)12が、表示部(R)181、表示部(L)182に表示される表示対象オブジェクトX(203)を見る方向に移動している場合の例を示す。 FIG. 3 shows an example in which the relationship among the image processing apparatus 100, the external objects A (201) and B (202), and the display target object X (203) is viewed from above. The top view of the image processing apparatus 100 is the same as the example shown in FIG. 1. However, FIG. 3 shows an example in which the eyeball (right eye) 11 and the eyeball (left eye) 12 have moved so as to look in the direction of the display target object X (203) displayed on the display unit (R) 181 and the display unit (L) 182.
 外部オブジェクトA(201)、B(202)は、実際に存在する物体である。なお、図3では、説明の容易のため、外部オブジェクトA(201)、B(202)を矩形状の物体201、202として示す。 External objects A (201) and B (202) are actually existing objects. In FIG. 3, the external objects A (201) and B (202) are shown as rectangular objects 201 and 202 for ease of explanation.
 表示対象オブジェクトX(203)は、スタビライザにより表示部(R)181、表示部(L)182に表示される物体を仮想的に示すものである。図3では、外部オブジェクトA(201)およびB(202)の間に、表示対象オブジェクトX(203)を表示する場合の例を示す。 The display target object X (203) virtually indicates an object displayed on the display unit (R) 181 and the display unit (L) 182 by the stabilizer. FIG. 3 shows an example in which the display target object X (203) is displayed between the external objects A (201) and B (202).
 ここで、上述したように、透過型メガネは、グラス部の縁に配置されている光学エンジンの屈折映像出力により、表示部(R)181、表示部(L)182のグラス部に像が結ばれる。このため、図3では、表示部(R)181、表示部(L)182の実際の表示位置に相当するものを仮想表示画面位置211、212として示す。 Here, as described above, in the transmissive glasses, an image is formed on the glass portions of the display unit (R) 181 and the display unit (L) 182 by the refracted image output of the optical engine arranged at the edge of the glass portion. For this reason, in FIG. 3, the positions corresponding to the actual display positions of the display unit (R) 181 and the display unit (L) 182 are shown as virtual display screen positions 211 and 212.
 仮想表示画面位置211、212は、スタビライザにより表示対象オブジェクトX(203)を表示部(R)181、表示部(L)182に表示する場合に、表示対象オブジェクトX(203)を表示する仮想的な表示位置である。 The virtual display screen positions 211 and 212 are the virtual display positions at which the display target object X (203) is displayed when the stabilizer displays the display target object X (203) on the display unit (R) 181 and the display unit (L) 182.
 また、仮想表示画面位置211における表示対象オブジェクトX(203)の表示位置を表示位置213で示す。また、仮想表示画面位置212における表示対象オブジェクトX(203)の表示位置を表示位置214で示す。 Also, the display position of the display target object X (203) at the virtual display screen position 211 is indicated by a display position 213. The display position 214 indicates the display position of the display target object X (203) at the virtual display screen position 212.
 [画像処理装置が移動した場合の補正前の表示例(比較例)]
 図4は、本技術の第1の実施の形態における画像処理装置100による表示対象オブジェクトの表示例を模式的に示す図である。
[Display example before correction when image processing apparatus moves (comparative example)]
FIG. 4 is a diagram schematically illustrating a display example of a display target object by the image processing apparatus 100 according to the first embodiment of the present technology.
 図4では、図3に示すように、表示対象オブジェクトXが表示されている場合に、何らかの要因で、眼球および画像処理装置100の位置が移動した場合の例(補正前の表示例)を示す。この例は、図5の比較例として示す。 FIG. 4 shows an example (a display example before correction) in which the positions of the eyeballs and the image processing apparatus 100 have moved for some reason while the display target object X is displayed as shown in FIG. 3. This example serves as a comparative example for FIG. 5.
 図4では、移動前の画像処理装置100、眼球(右目)11、眼球(左目)12を点線で示す。この点線で示す画像処理装置100、眼球(右目)11、眼球(左目)12の各位置は、図3に示す位置に対応する。また、移動後の画像処理装置100、眼球(右目)11、眼球(左目)12を実線で示す。 In FIG. 4, the image processing apparatus 100 before movement, the eyeball (right eye) 11, and the eyeball (left eye) 12 are indicated by dotted lines. The positions of the image processing apparatus 100, the eyeball (right eye) 11, and the eyeball (left eye) 12 indicated by the dotted lines correspond to the positions shown in FIG. In addition, the image processing apparatus 100 after movement, the eyeball (right eye) 11, and the eyeball (left eye) 12 are indicated by solid lines.
 また、図4では、移動前の仮想表示画面位置211、212を通常の点線で示す。この通常の点線で示す仮想表示画面位置211、212の各位置は、図3に示す位置に対応する。また、移動後の仮想表示画面位置211、212を太い点線で示す。 In FIG. 4, the virtual display screen positions 211 and 212 before the movement are indicated by normal dotted lines. The positions of the virtual display screen positions 211 and 212 indicated by the normal dotted lines correspond to the positions shown in FIG. Further, the virtual display screen positions 211 and 212 after the movement are indicated by thick dotted lines.
 図4に示すように、画像処理装置100の位置の変化に応じて、仮想表示画面位置211、212が変化する。また、眼球(右目)11、眼球(左目)12の位置も変化する。このため、表示対象オブジェクトX(205)は、意図しない位置(物体202と同じ位置)に表示されることがある。 As shown in FIG. 4, the virtual display screen positions 211 and 212 change according to the change of the position of the image processing apparatus 100. The positions of the eyeball (right eye) 11 and the eyeball (left eye) 12 also change. For this reason, the display target object X (205) may be displayed at an unintended position (the same position as the object 202).
 そこで、本技術の第1の実施の形態では、画像処理装置100の位置や眼球の位置が変化したような場合であっても、表示対象オブジェクトXの位置を適切に表示させる例を示す。 Therefore, the first embodiment of the present technology shows an example in which the position of the display target object X is appropriately displayed even when the position of the image processing apparatus 100 or the position of the eyeball is changed.
 [画像処理装置が移動した場合の補正後の表示例]
 図5は、本技術の第1の実施の形態における画像処理装置100による表示対象オブジェクトの表示例を模式的に示す図である。
[Example of display after correction when image processing device moves]
FIG. 5 is a diagram schematically illustrating a display example of a display target object by the image processing apparatus 100 according to the first embodiment of the present technology.
 図6は、本技術の第1の実施の形態における画像処理装置100による表示対象オブジェクトのローカル座標系をグラスの表示座標系に変換する場合の手順の一例を示す図である。 FIG. 6 is a diagram illustrating an example of a procedure when the local coordinate system of the display target object is converted into the display coordinate system of the glass by the image processing apparatus 100 according to the first embodiment of the present technology.
 図5では、図4に示す例において、表示対象オブジェクトXの表示位置を移動させることにより、表示対象オブジェクトXの位置を適切に表示させる例を示す。 FIG. 5 shows an example in which the position of the display target object X is appropriately displayed by moving the display position of the display target object X in the example shown in FIG.
 例えば、表示部(R)181および表示部(L)182の表示面に対して垂直(または、略垂直)となる方向(視線方向)に延びる線分であって、表示対象オブジェクトX(204)を配置すべき位置から延びる線分を線分30で示す。 For example, a line segment 30 denotes a line segment that extends in the direction (line-of-sight direction) perpendicular (or substantially perpendicular) to the display surfaces of the display unit (R) 181 and the display unit (L) 182 and that extends from the position at which the display target object X (204) is to be placed.
 また、水平方向における線分30と眼球(右目)11(虹彩中心)との距離をW1とし、水平方向における表示部(R)181の表示面と眼球(右目)11との距離(線分30上の距離)をL1とする。また、水平方向における表示部(R)181の表示面と表示対象オブジェクトX(204)との距離(線分30上の距離)をL2とする。また、水平方向における表示部(R)181の表示面と仮想表示画面位置211との距離(線分30上の距離)をL3とする。 The horizontal distance between the line segment 30 and the eyeball (right eye) 11 (iris center) is denoted by W1, and the horizontal distance (measured along the line segment 30) between the display surface of the display unit (R) 181 and the eyeball (right eye) 11 is denoted by L1. Similarly, the horizontal distance (along the line segment 30) between the display surface of the display unit (R) 181 and the display target object X (204) is denoted by L2, and the horizontal distance (along the line segment 30) between the display surface of the display unit (R) 181 and the virtual display screen position 211 is denoted by L3.
 なお、距離L1およびW1は、眼球情報取得部150から出力される眼球情報に基づいて求めることができる。また、距離L2およびL3は、表示対象オブジェクトXに基づいて取得することができる。 The distances L1 and W1 can be obtained based on the eyeball information output from the eyeball information acquisition unit 150. The distances L2 and L3 can be acquired based on the display target object X.
 この場合には、仮想表示画面位置211における表示対象オブジェクトX(204)の水平方向における表示位置221は、次の式により求めることができる。
  W2={(L2-L3)/(L2+L1)}W1
In this case, the display position 221 in the horizontal direction of the display target object X (204) at the virtual display screen position 211 can be obtained by the following equation.
W2 = {(L2-L3) / (L2 + L1)} W1
 ここで、W2は、仮想表示画面位置211と線分30との交点から表示位置221までの距離である。 Here, W2 is the distance from the intersection of the virtual display screen position 211 and the line segment 30 to the display position 221.
 このように、三角関数の公式により、仮想表示画面位置211における表示対象オブジェクトX(204)の水平方向における表示位置221(W2)を求めることができる。また、仮想表示画面位置211における表示対象オブジェクトX(204)の垂直方向における表示位置221についても、同様に求めることができる。また、仮想表示画面位置212における表示対象オブジェクトX(204)の水平方向および垂直方向における表示位置222についても、同様に求めることができる。 Thus, the display position 221 (W2) in the horizontal direction of the display target object X (204) at the virtual display screen position 211 can be obtained by the trigonometric function formula. Further, the display position 221 in the vertical direction of the display target object X (204) at the virtual display screen position 211 can be similarly obtained. Further, the display position 222 in the horizontal direction and the vertical direction of the display target object X (204) at the virtual display screen position 212 can be similarly obtained.
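 As a point of reference (not part of the original description), the following minimal Python sketch computes the corrected horizontal display position W2 from the distances defined above; the function name and the example values are illustrative assumptions.

def corrected_display_offset(w1, l1, l2, l3):
    # W1: horizontal distance between line segment 30 and the eyeball (iris center)
    # L1: distance (along line segment 30) from the display surface to the eyeball
    # L2: distance (along line segment 30) from the display surface to the display target object X
    # L3: distance (along line segment 30) from the display surface to the virtual display screen
    # Similar triangles along the line from the eyeball to the object give:
    # W2 = {(L2 - L3) / (L2 + L1)} * W1
    return (l2 - l3) / (l2 + l1) * w1

# Illustrative values (cm): eye 2 cm behind the display surface, object X 100 cm in front of it,
# virtual display screen 5 cm in front of it, eye offset 3 cm from line segment 30.
w2 = corrected_display_offset(w1=3.0, l1=2.0, l2=100.0, l3=5.0)
print(w2)  # about 2.79 cm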
 また、視線方向を回転軸として眼球と画像処理装置100との関係が回転した場合には、その回転に応じて、仮想表示画面位置における表示対象オブジェクトX(204)の表示態様を変更するようにする。例えば、その回転方向の反対方向に表示対象オブジェクトXを回転させて表示するようにする。 Further, when the relationship between the eyeball and the image processing apparatus 100 rotates about the line-of-sight direction as the rotation axis, the display mode of the display target object X (204) at the virtual display screen position is changed according to that rotation. For example, the display target object X is rotated in the direction opposite to the rotation direction and displayed.
 また、距離L1の変化に応じて、仮想表示画面位置における表示対象オブジェクトX(204)の表示サイズを変更するようにする。 Also, the display size of the display target object X (204) at the virtual display screen position is changed according to the change in the distance L1.
 これらの各処理は行列式によって算出される。この手順の一例(ステップS701乃至S705)を、図6に示す。例えば、眼球の移動により、ビュー座標系が変化する。この場合に、その移動に対応する眼球の移動ベクトルを移動行列としてビュー変換行列Vに乗算して算出された行列をV'とすると、ビュー変換行列がV→V'に変化する。また、ビューポイントの変化により、プロジェクション変換行列P、スクリーン変換行列Sを算出するためのパラメータにも変化がある。このため、眼球の移動により、プロジェクション変換行列がP→P'となり、スクリーン変換行列がS→S'となる。また、これらの各値に基づいて最終的にグラスの表示座標を算出することができる。 Each of these processes is computed using matrix expressions. An example of this procedure (steps S701 to S705) is shown in FIG. 6. For example, when the eyeball moves, the view coordinate system changes. In this case, if V' denotes the matrix obtained by multiplying the view transformation matrix V by the translation matrix corresponding to the movement vector of the eyeball, the view transformation matrix changes from V to V'. The change of the viewpoint also changes the parameters used to calculate the projection transformation matrix P and the screen transformation matrix S. Therefore, due to the movement of the eyeball, the projection transformation matrix changes from P to P' and the screen transformation matrix changes from S to S'. Finally, the display coordinates on the glass can be calculated based on these values.
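 A minimal sketch (not part of the original description) of this matrix pipeline is shown below; it uses homogeneous coordinates with NumPy, and the way the eyeball movement is folded into the view matrix is an illustrative assumption.

import numpy as np

def glass_display_coords(p_local, M, V, P, S):
    # local -> world (M) -> view (V) -> projection (P) -> screen (S)
    p = np.append(np.asarray(p_local, dtype=float), 1.0)  # homogeneous point
    p = S @ P @ V @ M @ p
    return p[:2] / p[3]                                    # perspective divide

def updated_view_matrix(V, eye_movement):
    # Multiply the view transformation matrix V by the translation matrix that
    # corresponds to the eyeball movement vector, giving V' (V -> V').
    T = np.eye(4)
    T[:3, 3] = -np.asarray(eye_movement, dtype=float)      # the scene moves opposite to the eye
    return T @ V

# When eyeball movement is detected, V, P and S are recomputed (V -> V', P -> P', S -> S')
# and glass_display_coords() is evaluated with the new matrices.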
 このように、補正処理を実施することにより、表示対象オブジェクトXの位置を、移動前の位置に留まるようにすることができる。 Thus, by performing the correction process, the position of the display target object X can remain at the position before the movement.
 [人物の視線が移動した場合の補正例]
 図7は、本技術の第1の実施の形態における画像処理装置100による表示対象オブジェクトの表示例を模式的に示す図である。図7では、画像処理装置100を装着した人物の視線の動きに対するスタビライザ例を示す。
[Example of correction when the line of sight of a person moves]
FIG. 7 is a diagram schematically illustrating a display example of a display target object by the image processing apparatus 100 according to the first embodiment of the present technology. FIG. 7 shows an example of a stabilizer for the movement of the line of sight of a person wearing the image processing apparatus 100.
 図7では、最初に、仮想表示画面位置300における表示位置301に表示されている表示対象オブジェクト303を、眼球20が見ている場合(視線305)の例を示す。この場合に、眼球20が左側に移動して視線が移動した場合を想定する。このように、眼球20が移動した場合には、仮想表示画面位置300における表示位置301に表示されている表示対象オブジェクトを眼球20が見ると、表示対象オブジェクト304が見えるようになる(線分306)。 FIG. 7 first shows an example in which the eyeball 20 is looking at the display target object 303 displayed at the display position 301 in the virtual display screen position 300 (line of sight 305). In this case, it is assumed that the eyeball 20 moves to the left and the line of sight moves accordingly. When the eyeball 20 has moved in this way and looks at the display target object displayed at the display position 301 in the virtual display screen position 300, the object appears as the display target object 304 (line segment 306).
 そこで、眼球20が左側に移動して視線が移動した場合には、上述した補正処理を行うことにより、仮想表示画面位置300における表示位置301を表示位置302に移動するようにする。これにより、眼球20が移動した場合でも、仮想表示画面位置300における表示位置302に表示されている表示対象オブジェクト303を眼球20が見ることができる(線分307)。 Therefore, when the eyeball 20 moves to the left side and the line of sight moves, the display position 301 in the virtual display screen position 300 is moved to the display position 302 by performing the above-described correction processing. Thereby, even when the eyeball 20 moves, the eyeball 20 can see the display target object 303 displayed at the display position 302 in the virtual display screen position 300 (line segment 307).
 このように、画像処理装置100を装着した人物が、左右に視線を移動したような場合には、上述した補正処理を行うことにより表示対象オブジェクトを適切に表示することができる。 As described above, when the person wearing the image processing apparatus 100 moves his / her line of sight to the left and right, the display target object can be appropriately displayed by performing the above-described correction processing.
 [座標変換例]
 ここで、表示対象オブジェクトをスクリーン上(表示部(R)181、表示部(L)182の表示画面上)に表示するための座標変換処理について説明する。
[Coordinate conversion example]
Here, a coordinate conversion process for displaying the display target object on the screen (on the display screen of the display unit (R) 181 and the display unit (L) 182) will be described.
 表示対象オブジェクトの情報は、ローカル座標であるため、スクリーン上に表示するためには、座標変換処理をする必要がある。 Since the information of the display target object is a local coordinate, it is necessary to perform coordinate conversion processing in order to display it on the screen.
 例えば、3Dのオブジェクトデータを2Dのスクリーン(例えば、透過型メガネ)に表示するために必要な一般的な処理について説明する。例えば、ローカル座標→ワールド座標変換処理、ワールド座標→ビュー座標変換処理、ビュー座標→プロジェクション座標変換処理、プロジェクション座標→スクリーン座標変換処理が一般的な処理となる。 For example, a general process necessary for displaying 3D object data on a 2D screen (for example, transmissive glasses) will be described. For example, local coordinates → world coordinate conversion processing, world coordinates → view coordinate conversion processing, view coordinates → projection coordinate conversion processing, and projection coordinates → screen coordinate conversion processing are general processes.
 ここで、ローカル座標→ワールド座標変換処理は、各オブジェクトが外界の何処に配置されているかを変換する処理である。 Here, the local coordinate → world coordinate conversion process is a process of converting where each object is arranged in the outside world.
 また、ワールド座標→ビュー座標変換処理は、カメラ(画像処理装置100)が外界の何処に配置されているかを変換するための処理である。 Also, the world coordinate → view coordinate conversion process is a process for converting where the camera (image processing apparatus 100) is located in the outside world.
 また、ビュー座標→プロジェクション座標変換処理は、スクリーン(表示部(R)181、表示部(L)182の表示画面上)が何処に配置されているかを変換するための処理である。 Further, the view coordinate → projection coordinate conversion process is a process for converting where the screen (on the display screen of the display unit (R) 181 and the display unit (L) 182) is arranged.
 また、プロジェクション座標→スクリーン座標変換処理は、表示対象オブジェクトをスクリーン上の何処に表示するかを変換するための処理である。 Also, the projection coordinate → screen coordinate conversion process is a process for converting where the display target object is displayed on the screen.
 ただし、実際には、射影変換(前後関係により、他のオブジェクトの影に入れる処理)、テクスチャマッピング、ライティング等の処理も必要となる。ただし、ここでは、説明の容易のため、座標系に限定した処理についてのみ示す。 In practice, however, processes such as projective transformation (a process of hiding an object behind other objects according to the depth order), texture mapping, and lighting are also required. Here, for ease of explanation, only the processing related to the coordinate systems is described.
 例えば、画像処理装置100を装着した人物の目の位置(カメラ位置)が、グラス(表示部(R)181、表示部(L)182)に対して固定されている場合において、目の位置がグラスに対して移動する場合を想定する。この場合には、内向きカメラ(内向き撮像部(R)103、内向き撮像部(L)104)により生成された画像に基づいて眼球位置を算出し、グラスに対する目の位置の移動を把握することができる。このため、上述したワールド座標→ビュー座標変換処理において、グラスに対する目の位置の移動に基づいて、固定されていた目の位置(カメラ位置)と、グラス(表示部(R)181、表示部(L)182)の位置との関係を補正する。 For example, consider the case where the eye position (camera position) of the person wearing the image processing apparatus 100, which is assumed to be fixed with respect to the glasses (display unit (R) 181 and display unit (L) 182), moves with respect to the glasses. In this case, the eyeball position can be calculated based on the images generated by the inward cameras (inward imaging unit (R) 103 and inward imaging unit (L) 104), and the movement of the eye position relative to the glasses can be grasped. Therefore, in the world-coordinate to view-coordinate conversion described above, the relationship between the previously fixed eye position (camera position) and the positions of the glasses (display unit (R) 181 and display unit (L) 182) is corrected based on the movement of the eye position relative to the glasses.
 また、上述したビュー座標→プロジェクション座標変換処理と、プロジェクション座標→スクリーン座標変換処理とにおいて、画像処理装置100を装着した人物の目の位置およびスクリーン位置を考慮した座標変換を行う。 Also, in the above-described view coordinates → projection coordinate conversion processing and projection coordinates → screen coordinate conversion processing, coordinate conversion is performed in consideration of the eye position and screen position of the person wearing the image processing apparatus 100.
 ここで、ビュー座標変換では、表示対象オブジェクトのワールド座標、目のワールド座標、グラスの上方向座標の各座標情報に基づいて変換行列を求めることができる。 Here, in the view coordinate conversion, a conversion matrix can be obtained based on the coordinate information of the world coordinates of the display target object, the world coordinates of the eyes, and the upward coordinates of the glass.
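 As an illustration (not part of the original description), a standard look-at construction of such a view transformation matrix might look like the following; the inputs assumed here are the world coordinates of the eye, the world coordinates of the display target object (the gaze target), and the up-direction vector of the glasses.

import numpy as np

def view_matrix(eye_world, target_world, up_glass):
    # Forward axis: from the eye toward the display target object (gaze direction).
    f = np.asarray(target_world, float) - np.asarray(eye_world, float)
    f = f / np.linalg.norm(f)
    r = np.cross(f, up_glass)                  # right axis
    r = r / np.linalg.norm(r)
    u = np.cross(r, f)                         # recomputed up axis (orthogonal to f and r)
    V = np.eye(4)
    V[0, :3], V[1, :3], V[2, :3] = r, u, -f    # rotate world axes into view axes
    V[:3, 3] = -V[:3, :3] @ np.asarray(eye_world, float)  # then translate by -eye
    return V

# When the inward cameras detect that the eye has moved relative to the glasses,
# eye_world is updated and the view matrix is rebuilt with the new eye position.
V = view_matrix(eye_world=[0.0, 0.0, 0.0],
                target_world=[0.0, 0.0, -2.0],
                up_glass=[0.0, 1.0, 0.0])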
 [表示対象オブジェクトの表示例]
 図8乃至図10は、本技術の第1の実施の形態における画像処理装置100による表示対象オブジェクトの表示例を示す図である。なお、図8乃至図10では、画像処理装置100およびその周辺の一部のみを矩形内に示す。
[Display example of display target object]
8 to 10 are diagrams illustrating display examples of display target objects by the image processing apparatus 100 according to the first embodiment of the present technology. 8 to 10, only the image processing apparatus 100 and a part of the periphery thereof are shown in a rectangle.
 図8には、画像処理装置100の表示部(L)182を保持するフレーム401と、表示部(L)182に表示される矢印(表示対象オブジェクト)402との関係を示す。図8に示すように、画像処理装置100の表示部(L)182のレンズを通して、実際に存在するものを見ることができる。なお、図8では、説明の容易のため、実際に存在するものの一例としてマス目を示す。 FIG. 8 shows a relationship between a frame 401 that holds the display unit (L) 182 of the image processing apparatus 100 and an arrow (display target object) 402 displayed on the display unit (L) 182. As shown in FIG. 8, what actually exists can be seen through the lens of the display unit (L) 182 of the image processing apparatus 100. In FIG. 8, squares are shown as an example of what actually exists for ease of explanation.
 また、画像処理装置100の表示部(L)182には、矢印(表示対象オブジェクト)402を表示させることができる。 Further, an arrow (display target object) 402 can be displayed on the display unit (L) 182 of the image processing apparatus 100.
 図9には、画像処理装置100が傾いた場合の表示例を示す。図9のaには、画像処理装置100の傾きに応じて矢印(表示対象オブジェクト)402を補正した場合の表示例を示す。図9のbには、画像処理装置100の傾きに応じた補正を行わない場合の表示例を示す。 FIG. 9 shows a display example when the image processing apparatus 100 is tilted. FIG. 9 a shows a display example when the arrow (display target object) 402 is corrected according to the inclination of the image processing apparatus 100. FIG. 9B shows a display example when correction according to the inclination of the image processing apparatus 100 is not performed.
 図9のbに示すように、補正処理を行わない場合には、画像処理装置100の傾きに応じて矢印(表示対象オブジェクト)402が傾く。 As shown in FIG. 9b, when the correction process is not performed, the arrow (display target object) 402 is tilted according to the tilt of the image processing apparatus 100.
 一方、図9のaに示すように、補正処理を行う場合には、画像処理装置100の傾きに応じて矢印(表示対象オブジェクト)402を補正するため、矢印(表示対象オブジェクト)402が傾かない。 On the other hand, as shown in a of FIG. 9, when the correction process is performed, the arrow (display target object) 402 is corrected according to the tilt of the image processing apparatus 100 and therefore does not appear tilted.
 図10には、画像処理装置100が水平に移動した場合の表示例を示す。図10のaには、画像処理装置100の水平への移動に応じて矢印(表示対象オブジェクト)402を補正した場合の表示例を示す。図10のbには、画像処理装置100の水平への移動に応じた補正を行わない場合の表示例を示す。 FIG. 10 shows a display example when the image processing apparatus 100 moves horizontally. FIG. 10 a shows a display example when the arrow (display target object) 402 is corrected in accordance with the horizontal movement of the image processing apparatus 100. FIG. 10B shows a display example when the correction according to the horizontal movement of the image processing apparatus 100 is not performed.
 図10のbに示すように、補正処理を行わない場合には、画像処理装置100の水平への移動に応じて矢印(表示対象オブジェクト)402が移動する。 As shown in FIG. 10b, when the correction process is not performed, the arrow (display target object) 402 moves in accordance with the horizontal movement of the image processing apparatus 100.
 一方、図10のaに示すように、補正処理を行う場合には、画像処理装置100の水平への移動に応じて矢印(表示対象オブジェクト)402を補正するため、矢印(表示対象オブジェクト)402が移動しない。 On the other hand, as shown in a of FIG. 10, when the correction process is performed, the arrow (display target object) 402 is corrected according to the horizontal movement of the image processing apparatus 100 and therefore does not move.
 このように、補正処理を行うことにより、外部オブジェクトに対して、矢印(表示対象オブジェクト)402が固定しているように見せることができる。 Thus, by performing the correction process, it is possible to make the external object appear as if the arrow (display target object) 402 is fixed.
 [画像処理装置の動作例(スタビライザ処理例)]
 図11は、本技術の第1の実施の形態における画像処理装置100によるスタビライザ処理の処理手順の一例を示すフローチャートである。
[Operation example of image processing apparatus (stabilizer processing example)]
FIG. 11 is a flowchart illustrating an example of a processing procedure of a stabilizer process performed by the image processing apparatus 100 according to the first embodiment of the present technology.
 最初に、視界情報取得部120は、外向き撮像部(R)101および外向き撮像部(L)102のそれぞれにより生成された画像を用いて、視界情報を抽出する(ステップS801)。例えば、視界情報取得部120は、外部特異点および加速度センサ値を視界情報として取得する(ステップS801)。なお、この例では、外向き撮像部(R)101および外向き撮像部(L)102のそれぞれにより生成された画像を用いて視界情報を取得する例を示すが、他の方法により視界情報を取得するようにしてもよい。例えば、姿勢情報取得部195(例えば、加速度センサ)から加速度センサ値を取得するようにしてもよい。なお、ステップS801は、請求の範囲に記載の視界情報取得手順の一例である。 First, the view information acquisition unit 120 extracts view information using the images generated by the outward imaging unit (R) 101 and the outward imaging unit (L) 102 (step S801). For example, the view information acquisition unit 120 acquires external singular points and acceleration sensor values as the view information (step S801). In this example, the view information is acquired using the images generated by the outward imaging unit (R) 101 and the outward imaging unit (L) 102, but the view information may be acquired by other methods. For example, an acceleration sensor value may be acquired from the posture information acquisition unit 195 (for example, an acceleration sensor). Note that step S801 is an example of the view information acquisition procedure described in the claims.
 続いて、空間モデル作成部130は、視界情報取得部120により抽出された視界情報に基づいて、外部オブジェクトの絶対座標を算出する(ステップS802)。また、空間モデル作成部130は、視界情報取得部120により抽出された視界情報と、DB140に記憶されている表示対象オブジェクト情報とに基づいて、絶対座標上での表示対象オブジェクトXの表示座標を確定する(ステップS802)。 Subsequently, the space model creation unit 130 calculates the absolute coordinates of the external objects based on the view information extracted by the view information acquisition unit 120 (step S802). The space model creation unit 130 also determines the display coordinates of the display target object X in the absolute coordinate system based on the view information extracted by the view information acquisition unit 120 and the display target object information stored in the DB 140 (step S802).
 このように、複数の外向きカメラによるマルチSLAMにより、外部オブジェクトの絶対座標を算出し、絶対座標上での表示対象オブジェクトXの表示座標を確定する。 In this way, the absolute coordinates of the external object are calculated by multi-SLAM using a plurality of outward cameras, and the display coordinates of the display target object X on the absolute coordinates are determined.
 続いて、眼球情報取得部150は、内向き撮像部(R)103および内向き撮像部(L)104のそれぞれにより生成された画像を用いて、眼球情報を抽出する(ステップS803)。例えば、眼球情報取得部150は、眼球における虹彩中心点の絶対座標と、眼球の視線ベクトルとを眼球情報として求める(ステップS803)。なお、ステップS803は、請求の範囲に記載の眼球情報取得手順の一例である。 Subsequently, the eyeball information acquisition unit 150 extracts eyeball information using images generated by the inward imaging unit (R) 103 and the inward imaging unit (L) 104 (step S803). For example, the eyeball information acquisition unit 150 obtains the absolute coordinates of the iris center point in the eyeball and the line-of-sight vector of the eyeball as eyeball information (step S803). Step S803 is an example of an eyeball information acquisition procedure described in the claims.
 続いて、表示制御部160は、取得された各情報に基づいて、仮想表示画面位置211、212における表示対象オブジェクトXの表示位置、表示サイズ、表示方向を決定する(ステップS804)。例えば、表示制御部160は、虹彩中心点の絶対座標と、表示対象オブジェクトXの表示座標と、仮想表示画面位置211、212の絶対座標とに基づいて、仮想表示画面位置211、212における表示対象オブジェクトXの表示位置、表示サイズ、表示方向を決定する。 Subsequently, the display control unit 160 determines the display position, display size, and display direction of the display target object X at the virtual display screen positions 211 and 212 based on each piece of acquired information (step S804). For example, the display control unit 160 determines the display position, display size, and display direction of the display target object X at the virtual display screen positions 211 and 212 based on the absolute coordinates of the iris center points, the display coordinates of the display target object X, and the absolute coordinates of the virtual display screen positions 211 and 212.
 続いて、表示制御部160は、決定された内容(仮想表示画面位置211、212における表示対象オブジェクトXの表示位置、表示サイズ、表示方向)に基づいて、表示対象オブジェクトXの補正処理を行う(ステップS805)。なお、ステップS804、S805は、請求の範囲に記載の制御手順の一例である。 Subsequently, the display control unit 160 performs correction processing on the display target object X based on the determined contents (display position, display size, display direction of the display target object X at the virtual display screen positions 211 and 212) ( Step S805). Steps S804 and S805 are an example of a control procedure described in the claims.
 続いて、表示対象オブジェクトの表示を終了するか否かが判断される(ステップS806)。表示対象オブジェクトの表示を終了する場合には(ステップS806)、スタビライザ処理の動作を終了する。一方、表示対象オブジェクトの表示を終了しない場合には(ステップS806)、ステップS801に戻る。 Subsequently, it is determined whether or not to end the display of the display target object (step S806). When the display of the display target object is ended (step S806), the stabilizer processing operation is ended. On the other hand, when the display of the display target object is not terminated (step S806), the process returns to step S801.
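 A minimal sketch (not part of the original description) of this stabilizer loop, corresponding to steps S801 to S806, is shown below; every callable passed in is a hypothetical interface introduced only to make the order of the steps explicit.

def stabilizer_loop(get_view_info, build_space_model, get_eyeball_info,
                    decide_display_params, apply_correction, display_finished):
    # One pass per frame until the display of the display target object ends (S806).
    while not display_finished():
        view_info = get_view_info()                    # S801: view info from outward cameras / sensors
        space_model = build_space_model(view_info)     # S802: absolute coords of external objects and
                                                       #       display coords of the display target object X
        eyeball_info = get_eyeball_info()              # S803: iris center coordinates and gaze vectors
        params = decide_display_params(space_model,    # S804: display position, size and direction at the
                                       eyeball_info)   #       virtual display screen positions
        apply_correction(params)                       # S805: corrected rendering of the display target object X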
 このように、表示制御部160は、人物の視界に含まれる像に表示対象オブジェクトを重ねて表示させることができる。また、表示制御部160は、視界情報と眼球情報とに基づいて、人物の視界に含まれる像における表示対象オブジェクトの表示態様を変更する制御を行うことができる。例えば、表示制御部160は、表示対象オブジェクトの表示部(R)181および表示部(L)182の表示面における表示位置と、表示角度と、表示サイズとのうちの少なくとも1つを変更することにより、表示態様を変更することができる。 As described above, the display control unit 160 can display the display target object so as to overlap the image included in the field of view of the person. In addition, the display control unit 160 can perform control to change the display mode of the display target object in the image included in the person's view based on the view information and the eyeball information. For example, the display control unit 160 changes at least one of the display position, the display angle, and the display size of the display target object on the display surface of the display unit (R) 181 and the display unit (L) 182. Thus, the display mode can be changed.
 例えば、表示制御部160は、表示部(R)181および表示部(L)182の位置に関する位置情報と、視界情報と、眼球情報とに基づいて、表示対象オブジェクトの表示態様を変更する制御を行うことができる。また、例えば、表示制御部160は、表示対象オブジェクトを仮想的に表示する表示位置(例えば、仮想表示画面位置)に関する位置情報と、視界情報と、眼球情報とに基づいて、表示対象オブジェクトの表示態様を変更する制御を行うことができる。例えば、これらの各情報により特定される各位置の相対関係に基づいて、表示対象オブジェクトの表示態様を変更することができる。 For example, the display control unit 160 can perform control to change the display mode of the display target object based on position information regarding the positions of the display unit (R) 181 and the display unit (L) 182, the view information, and the eyeball information. The display control unit 160 can also perform control to change the display mode of the display target object based on position information regarding the display position at which the display target object is virtually displayed (for example, the virtual display screen position), the view information, and the eyeball information. For example, the display mode of the display target object can be changed based on the relative relationship between the positions specified by these pieces of information.
 <2.第2の実施の形態>
 本技術の第1の実施の形態では、視界情報および眼球情報に基づいて、表示対象オブジェクトの表示態様を補正する例を示した。
<2. Second Embodiment>
 In the first embodiment of the present technology, an example of correcting the display mode of the display target object based on the view information and the eyeball information has been described.
 本技術の第2の実施の形態では、視界情報および眼球情報に基づいて、表示対象オブジェクトの鮮鋭度を制御する例を示す。すなわち、人物の視線検出に基づいて、奥行き方向のボケ処理(ピント処理、深度補正処理)を行う例を示す。なお、本技術の第2の実施の形態では、ボケ処理を行う例のみを示すが、スタビライズおよびボケ処理を同時に行うようにしてもよい。 In the second embodiment of the present technology, an example in which the sharpness of a display target object is controlled based on view information and eyeball information will be described. That is, an example of performing blur processing (focus processing, depth correction processing) in the depth direction based on detection of a person's line of sight is shown. Note that in the second embodiment of the present technology, only an example of performing blur processing is shown, but stabilization and blur processing may be performed simultaneously.
 なお、本技術の第2の実施の形態における画像処理装置の構成については、図1および図2等に示す画像処理装置100と略同一である。このため、本技術の第1の実施の形態と共通する部分については、本技術の第1の実施の形態と同一の符号を付してこれらの説明の一部を省略する。 Note that the configuration of the image processing apparatus according to the second embodiment of the present technology is substantially the same as that of the image processing apparatus 100 shown in FIG. 1, FIG. 2, and the like. Therefore, portions common to the first embodiment of the present technology are denoted by the same reference numerals as in the first embodiment, and part of their description is omitted.
 [視線検出に基づくボケ処理例]
 図12は、本技術の第2の実施の形態における画像処理装置100によるボケ処理の一例を模式的に示す図である。
[Example of blur processing based on gaze detection]
FIG. 12 is a diagram schematically illustrating an example of a blur process performed by the image processing apparatus 100 according to the second embodiment of the present technology.
 図12では、画像処理装置100を装着した人物のピントが表示対象オブジェクトXに合ってない場合には、表示対象オブジェクトXにボケ処理を施す例(視線検出に基づくボケ処理例)を示す。具体的には、画像処理装置100を装着した人物が遠くを見た場合と、近くを見た場合とを、その人物の視線によって検出する。そして、表示対象オブジェクトXにピントが合ってない場合には、表示対象オブジェクトXに深度に合わせたボケ処理を施す例(視線検出に基づくボケ処理例)を示す。 FIG. 12 shows an example (a blur processing example based on line-of-sight detection) in which blur processing is applied to the display target object X when the person wearing the image processing apparatus 100 is not focusing on the display target object X. Specifically, whether the person wearing the image processing apparatus 100 is looking at something far away or at something nearby is detected from the person's line of sight. Then, when the display target object X is not in focus, blur processing corresponding to the depth is applied to the display target object X (a blur processing example based on line-of-sight detection).
 ここで、被写体深度と視線とのボケ処理の関係について説明する。本技術の第2の実施の形態では、以下を考慮してボケ処理を行う例を示す。 Here, the relationship between the subject depth, the line of sight, and the blur processing will be described. The second embodiment of the present technology shows an example in which the blur processing is performed in consideration of the following.
 [視力に応じたボケ処理例(視線に関わらずかける処理)]
 本技術の第2の実施の形態では、視力に応じたボケ処理を、焦点距離に関わらず、常に処理する例を示す。この視力に応じたボケ処理を行う場合には、グラス(画像処理装置100)をかけている人物の視力を予め入力(または、調整)する必要がある。
[Bokeh processing example according to visual acuity (processing applied regardless of line of sight)]
In the second embodiment of the present technology, an example in which blur processing according to visual acuity is always processed regardless of the focal length is shown. When performing the blur process according to the visual acuity, it is necessary to input (or adjust) the visual acuity of the person wearing the glass (image processing apparatus 100) in advance.
 例えば、視力1.0の人物に関するボケ処理を行う場合を想定する。この場合には、視力1.0の人物に対して、10m先の対象オブジェクトで30mmのGaussian Blurのボケ処理をかけるようにする。ここで、Gaussian Blur(ガウシアンぼかし)は、ボケ処理を行う画像処理の一例である。 For example, assume that blur processing is performed for a person with a visual acuity of 1.0. In this case, for a person with a visual acuity of 1.0, a Gaussian blur of 30 mm is applied to a target object 10 m ahead. Here, Gaussian Blur (Gaussian blurring) is an example of image processing for performing blur processing.
 例えば、視力1.0の人物は、10m先の15cmのランドルト環(「C」のマーク)を認識することができると定義されている。例えば、15cmのランドルト環を認識することができる分解能を30mmのGaussian blurをかけたレベルとする場合には、視力係数と同じ計算式で表現すると、Kg=10000/30=333となる。 For example, a person with a visual acuity of 1.0 is defined as being able to recognize a 15 cm Landolt ring (a "C" mark) 10 m ahead. If the resolution at which the 15 cm Landolt ring can just be recognized is taken to be the level obtained by applying a 30 mm Gaussian blur, then, expressed in the same form as a visual acuity coefficient, Kg = 10000/30 = 333.
 そこで、表示対象オブジェクトXまでの距離をX1cmとすると、視力1.0で必要とされるGaussian blurの直径は、x/333となる。なお、この例では、説明の容易のため、比較的簡単な計算例を示したが、実際には、人間の視力により近いGaussian blurを使用することが好ましい。 Therefore, if the distance to the display target object X is X1, the diameter of the Gaussian blur required for a visual acuity of 1.0 is X1/333. In this example, a relatively simple calculation is shown for ease of explanation, but in practice it is preferable to use a Gaussian blur that more closely matches human vision.
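 As a rough illustration (not part of the original description), the acuity-dependent blur diameter from the example above could be computed as follows; the constant Kg = 10000/30 and the units follow the Landolt-ring example, and the function name is an assumption.

def acuity_blur_diameter(distance_to_object, kg=10000.0 / 30.0):
    # Gaussian blur diameter for a visual acuity of 1.0, in the same unit as the
    # distance to the display target object X (Kg = 10000 mm / 30 mm = 333.3...).
    return distance_to_object / kg

print(acuity_blur_diameter(10000.0))  # object 10 m (10000 mm) ahead -> 30.0 mm blur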
 [表示対象オブジェクトXまでの距離X1と焦点位置X1'によるボケ処理例(視線によって変化する処理)]
 例えば、人間の目の性能を、カメラ(例えば、デジタルスチルカメラ)の指標で表すと、人間の目のF値は、約1.0であると言われている。また、人間の目は、状況によって焦点距離が28mm乃至130mmぐらいの範囲で変化すると言われている。
[Example of blur processing based on the distance X1 to the display target object X and the focal position X1 ′ (processing that varies depending on the line of sight)]
For example, when the performance of the human eye is represented by an index of a camera (for example, a digital still camera), the F value of the human eye is said to be about 1.0. In addition, it is said that the human eye changes its focal length in the range of about 28 mm to 130 mm depending on the situation.
 そこで、焦点距離50mm、F値1.0に基づいて「焦点距離50mm/F値1.0=有効径50」を標準有効径とし、視線検出によって焦点距離に合ったボケ処理を行うことができる。 Therefore, based on the focal length of 50 mm and the F value of 1.0, “focal length of 50 mm / F value of 1.0 = effective diameter of 50” is set as the standard effective diameter, and blur processing suitable for the focal length can be performed by visual line detection. .
 例えば、表示対象オブジェクトXまでの距離X1によって次のように定義する。 For example, it is defined as follows according to the distance X1 to the display target object X.
 例えば、X1≦A1(例えば、A1は約10cm)の場合には、調整不可処理を行う。この調整不可処理では、例えば、視線検出結果に関わらず、teleボケ処理を実施する。この場合には、0に近づく程、ボケ処理を大きくする。 For example, when X1 ≦ A1 (for example, A1 is about 10 cm), an unadjustable process is performed. In this non-adjustable processing, for example, tele blur processing is performed regardless of the line-of-sight detection result. In this case, the blur process is increased as the value approaches 0.
 また、例えば、A1<X1≦A2(例えば、A2は20乃至50cm)の場合には、マクロ処理を行う。このマクロ処理では、上述した調整不可処理の追加処理として、次の視線検出によるボケ処理を実施する。
 (1)焦点距離が表示対象オブジェクトXに合っている場合には、ボケ処理を実施しない。ただし、上述した視力に応じたボケ処理は実施する。
 (2)焦点距離が表示対象オブジェクトXよりも近い場合には、有効径50乃至85相当のボケ処理を実施する。すなわち、ボケ処理を大目に実施する。
 (3)焦点距離が表示対象オブジェクトXよりも遠い場合には、有効径28乃至50相当のボケ処理を実施する。すなわち、ボケ処理を少な目に実施する。
 For example, when A1 < X1 ≦ A2 (for example, A2 is 20 to 50 cm), macro processing is performed. In this macro processing, in addition to the non-adjustable processing described above, the following blur processing based on line-of-sight detection is performed.
(1) When the focal length matches the display target object X, the blur process is not performed. However, the blur process corresponding to the above-described visual acuity is performed.
 (2) When the focal distance is nearer than the display target object X, blur processing corresponding to an effective diameter of 50 to 85 is performed; that is, a relatively strong blur is applied.
 (3) When the focal distance is farther than the display target object X, blur processing corresponding to an effective diameter of 28 to 50 is performed; that is, a relatively weak blur is applied.
 また、A2<X1の場合には、標準処理を行う。この標準処理では、例えば、上述した調整不可処理の追加処理として、有効径50相当のボケ処理を実施する。 If A2 <X1, standard processing is performed. In this standard process, for example, a blur process corresponding to an effective diameter of 50 is performed as an additional process of the adjustment impossible process described above.
 ここで、画像処理装置100を装着している人物は、片目を瞑る、または、手で片目を隠す等により、片目で見ていることも想定される。この場合には、片目で見ている場合のボケ処理(片目用ボケ処理)を実施する。 Here, it is also assumed that the person wearing the image processing apparatus 100 may be looking with only one eye, for example, by closing one eye or covering one eye with a hand. In this case, the blur processing for viewing with one eye (one-eye blur processing) is performed.
 片目用ボケ処理では、例えば、上述したマクロ処理および標準処理を、有効径35相当のボケ処理で実施する。 In the one-eye blur process, for example, the macro process and the standard process described above are performed with a blur process corresponding to an effective diameter of 35.
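 The distance-dependent selection described above can be summarized in the following minimal sketch (not part of the original description); the thresholds A1 and A2, the effective-diameter values, and the in-focus tolerance are taken or assumed from the example values in the text, and the mode labels are illustrative.

def select_blur(x1_cm, focus_cm, both_eyes, a1=10.0, a2=50.0, tol_cm=1.0):
    # x1_cm:    distance to the display target object X
    # focus_cm: focal (gaze) distance obtained from line-of-sight detection
    # Returns a (mode, effective_diameter) pair; None means "acuity-dependent blur only".
    if not both_eyes:
        return ("one-eye", 35)                # macro/standard performed at effective diameter 35
    if x1_cm <= a1:
        return ("tele", None)                 # non-adjustable: tele blur, stronger as X1 approaches 0
    if x1_cm <= a2:                           # macro processing (A1 < X1 <= A2)
        if abs(focus_cm - x1_cm) <= tol_cm:
            return ("in-focus", None)         # only the acuity-dependent blur is applied
        if focus_cm < x1_cm:
            return ("near-focus", (50, 85))   # focus nearer than X: relatively strong blur
        return ("far-focus", (28, 50))        # focus farther than X: relatively weak blur
    return ("standard", 50)                   # X1 > A2: blur corresponding to effective diameter 50

print(select_blur(x1_cm=30.0, focus_cm=20.0, both_eyes=True))  # ('near-focus', (50, 85))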
 このように、ボケ処理を実施することにより、例えば、表示対象オブジェクトしか視界に存在しないような場合(例えば、暗闇)でも、表示対象オブジェクトまでの距離を正しく認識することができる。このため、表示対象オブジェクトの大きさを正しく認識することができる。 As described above, by performing the blur processing, for example, even when only the display target object exists in the field of view (for example, darkness), the distance to the display target object can be correctly recognized. For this reason, it is possible to correctly recognize the size of the display target object.
 [ボケ処理を実施した場合の比較例]
 ここでは、図13および図14を参照して、上述したボケ処理による効果の概要を示す。
[Comparison example when blur processing is performed]
Here, with reference to FIG. 13 and FIG. 14, an outline of the effect of the above-described blur processing is shown.
 図13は、本技術の第2の実施の形態における画像処理装置100によりボケ処理を実施した場合の表示例を示す図である。 FIG. 13 is a diagram illustrating a display example when the blur processing is performed by the image processing apparatus 100 according to the second embodiment of the present technology.
 図14は、本技術の第2の実施の形態における画像処理装置100によりボケ処理を実施する前の表示例(比較例)を示す図である。 FIG. 14 is a diagram illustrating a display example (comparative example) before the blur processing is performed by the image processing apparatus 100 according to the second embodiment of the present technology.
 図13および図14には、表示部(R)181、表示部(L)182の何れかに表示される画像(人物の目に入射する像)の一例(画像600、610)を示す。画像600、610に対応する撮像範囲には、人物から比較的近い位置(手側前の位置)に配置されている物体(円錐)601と、人物から比較的離れている位置(奥側の位置)に配置されている物体(円柱)602とが含まれているものとする。また、人物の視線は、物体(円錐)601にピントが合っているが、物体(円柱)602はピントが合わずにボケて見えるものとする。 FIGS. 13 and 14 show examples (images 600 and 610) of images displayed on either the display unit (R) 181 or the display unit (L) 182 (images entering the person's eyes). It is assumed that the imaging range corresponding to the images 600 and 610 includes an object (cone) 601 placed at a position relatively close to the person (a near-side position) and an object (cylinder) 602 placed at a position relatively far from the person (a far-side position). It is also assumed that the person's eyes are focused on the object (cone) 601, while the object (cylinder) 602 is out of focus and appears blurred.
 このような場合に、表示対象オブジェクト(球)を物体(円柱)602とほぼ同じ位置(奥側の位置)に配置して表示させる場合を想定する。 In such a case, it is assumed that the display target object (sphere) is arranged and displayed at substantially the same position (back position) as the object (cylinder) 602.
 例えば、図14に示すように、ボケ処理を行わずに、表示対象オブジェクト(球)611を表示させる場合を想定する。この場合には、物体(円柱)602はピントが合わずにボケて見えるにも関わらず、表示対象オブジェクト(球)611はピントが合って見える。このように、ほぼ同じ位置(奥側の位置)に配置されている一方の物体(物体(円柱)602)はボケて見え、他方の物体(表示対象オブジェクト(球)611)はピントが合って見えると、これらを見ている人物に違和感を与えることになる。 For example, as shown in FIG. 14, assume that the display target object (sphere) 611 is displayed without performing the blur processing. In this case, although the object (cylinder) 602 is out of focus and appears blurred, the display target object (sphere) 611 appears in focus. When one of the objects placed at substantially the same position (the far-side position), the object (cylinder) 602, appears blurred while the other, the display target object (sphere) 611, appears in focus, the person viewing them feels a sense of incongruity.
 そこで、図13に示すように、上述したボケ処理を実施して表示対象オブジェクト(球)603を表示させる場合を想定する。この場合には、物体(円柱)602および表示対象オブジェクト(球)611はピントが合わずにボケて見える。このように、ほぼ同じ位置(奥側の位置)に配置されている2つの物体(物体(円柱)602および表示対象オブジェクト(球)611)が同じようにボケて見えるため、これらを見ている人物に違和感を与えない。 In contrast, as shown in FIG. 13, assume that the display target object (sphere) 603 is displayed after the above-described blur processing is performed. In this case, both the object (cylinder) 602 and the display target object (sphere) 611 are out of focus and appear blurred. Because the two objects placed at substantially the same position (the far-side position), the object (cylinder) 602 and the display target object (sphere) 611, appear blurred in the same way, the person viewing them does not feel a sense of incongruity.
 [画像処理装置の動作例(ボケ処理例)]
 図15は、本技術の第2の実施の形態における画像処理装置100によるボケ処理の処理手順の一例を示すフローチャートである。
[Operation Example of Image Processing Device (Bokeh Processing Example)]
FIG. 15 is a flowchart illustrating an example of a processing procedure of blur processing by the image processing apparatus 100 according to the second embodiment of the present technology.
 最初に、表示制御部160は、眼球情報取得部150からの眼球情報に基づいて、画像処理装置100を装着している人物が、表示部(R)181および表示部(L)182を両目で見ているか否かを判断する(ステップS811)。すなわち、表示制御部160は、画像処理装置100を装着している人物が、表示部(R)181および表示部(L)182の双方を見ているか否かを判断する(ステップS811)。 First, based on the eyeball information from the eyeball information acquisition unit 150, the display control unit 160 determines whether the person wearing the image processing apparatus 100 is viewing the display unit (R) 181 and the display unit (L) 182 with both eyes (step S811). That is, the display control unit 160 determines whether the person wearing the image processing apparatus 100 is looking at both the display unit (R) 181 and the display unit (L) 182 (step S811).
 画像処理装置100を装着している人物が両目で見ていない場合には(ステップS811)、表示制御部160は、画像処理装置100を装着している人物が、表示部(R)181および表示部(L)182を片目で見ているか否かを判断する(ステップS812)。すなわち、表示制御部160は、画像処理装置100を装着している人物が、表示部(R)181および表示部(L)182のうちの何れか一方を見ているか否かを判断する(ステップS812)。画像処理装置100を装着している人物が、表示部(R)181および表示部(L)182を片目で見ていない場合(すなわち、表示部(R)181および表示部(L)182の双方を見ていない場合)には(ステップS812)、ボケ処理等を行う必要がない。このため、ボケ処理の動作を終了する。 When the person wearing the image processing apparatus 100 is not viewing with both eyes (step S811), the display control unit 160 determines whether the person wearing the image processing apparatus 100 is viewing the display unit (R) 181 and the display unit (L) 182 with one eye (step S812). That is, the display control unit 160 determines whether the person wearing the image processing apparatus 100 is looking at either the display unit (R) 181 or the display unit (L) 182 (step S812). When the person wearing the image processing apparatus 100 is not viewing the display unit (R) 181 or the display unit (L) 182 even with one eye (that is, looking at neither the display unit (R) 181 nor the display unit (L) 182) (step S812), there is no need to perform blur processing or the like, and the blur processing operation therefore ends.
 画像処理装置100を装着している人物が、表示部(R)181および表示部(L)182のうちの何れか一方を見ている場合には(ステップS812)、表示制御部160は、片目用ボケ処理を実施する(ステップS813)。例えば、上述したマクロ処理および標準処理を、有効径35相当のボケ処理で実施する。 When the person wearing the image processing apparatus 100 is looking at either the display unit (R) 181 or the display unit (L) 182 (step S812), the display control unit 160 performs the one-eye blur processing (step S813). For example, the macro processing and the standard processing described above are performed with blur processing corresponding to an effective diameter of 35.
 画像処理装置100を装着している人物が両目で見ている場合には(ステップS811)、表示制御部160は、表示対象オブジェクトXまでの距離X1がA1以下であるか否かを判断する(ステップS814)。表示対象オブジェクトXまでの距離X1がA1以下である場合には(ステップS814)、表示制御部160は、調整不可処理を行う(ステップS815)。この調整不可処理では、例えば、視線検出結果に関わらず、teleボケ処理を実施する(ステップS815)。 When the person wearing the image processing apparatus 100 is looking with both eyes (step S811), the display control unit 160 determines whether or not the distance X1 to the display target object X is A1 or less ( Step S814). When the distance X1 to the display target object X is A1 or less (step S814), the display control unit 160 performs an adjustment impossible process (step S815). In this adjustment impossible processing, for example, tele blur processing is performed regardless of the line-of-sight detection result (step S815).
 表示対象オブジェクトXまでの距離X1がA1よりも長い場合には(ステップS814)、表示制御部160は、表示対象オブジェクトXまでの距離X1がA2以下であるか否かを判断する(ステップS816)。表示対象オブジェクトXまでの距離X1がA2よりも長い場合には(ステップS816)、表示制御部160は、標準処理を行う(ステップS817)。この標準処理では、例えば、上述した調整不可処理の追加処理として、有効径50相当のボケ処理を実施する。 When the distance X1 to the display target object X is longer than A1 (step S814), the display control unit 160 determines whether or not the distance X1 to the display target object X is A2 or less (step S816). . When the distance X1 to the display target object X is longer than A2 (step S816), the display control unit 160 performs standard processing (step S817). In this standard process, for example, a blur process corresponding to an effective diameter of 50 is performed as an additional process of the adjustment impossible process described above.
 表示対象オブジェクトXまでの距離X1がA2以下である場合には(ステップS816)、表示制御部160は、焦点距離が表示対象オブジェクトXに合っているか否かを判断する(ステップS818)。焦点距離が表示対象オブジェクトXに合っている場合には(ステップS818)、表示制御部160は、視力に応じたボケ処理を実施する(ステップS819)。 When the distance X1 to the display target object X is A2 or less (step S816), the display control unit 160 determines whether or not the focal length matches the display target object X (step S818). If the focal length matches the display target object X (step S818), the display control unit 160 performs a blur process according to the visual acuity (step S819).
 焦点距離が表示対象オブジェクトXに合っていない場合には(ステップS818)、表示制御部160は、焦点距離が表示対象オブジェクトXよりも近いか否かを判断する(ステップS820)。焦点距離が表示対象オブジェクトXよりも近い場合には(ステップS820)、表示制御部160は、有効径50乃至85相当のボケ処理を実施する(ステップS822)。すなわち、ボケ処理を大目に実施する。 When the focal distance does not match the display target object X (step S818), the display control unit 160 determines whether the focal distance is nearer than the display target object X (step S820). When the focal distance is nearer than the display target object X (step S820), the display control unit 160 performs blur processing corresponding to an effective diameter of 50 to 85 (step S822); that is, a relatively strong blur is applied.
 焦点距離が表示対象オブジェクトXよりも近くない場合(焦点距離が表示対象オブジェクトXよりも遠い場合)には(ステップS820)、表示制御部160は、有効径28乃至50相当のボケ処理を実施する(ステップS821)。すなわち、ボケ処理を少な目に実施する。 When the focal distance is not nearer than the display target object X (that is, when the focal distance is farther than the display target object X) (step S820), the display control unit 160 performs blur processing corresponding to an effective diameter of 28 to 50 (step S821); that is, a relatively weak blur is applied.
 このように、表示制御部160は、視界情報および眼球情報に基づいて、表示対象オブジェクトの鮮鋭度を制御することができる。例えば、表示制御部160は、表示対象オブジェクトを表示すべき視界における3次元上の位置(例えば、被写体までの距離)に基づいて、表示対象オブジェクトの鮮鋭度を制御することができる。例えば、表示制御部160は、表示対象オブジェクトに対するボケ処理を行うことにより、その鮮鋭度を制御することができる。 As described above, the display control unit 160 can control the sharpness of the display target object based on the visual field information and the eyeball information. For example, the display control unit 160 can control the sharpness of the display target object based on a three-dimensional position (for example, a distance to the subject) in the field of view where the display target object is to be displayed. For example, the display control unit 160 can control the sharpness of the display target object by performing a blur process.
 ここで、例えば、表示対象オブジェクトを外部オブジェクトに対して固定された座標上に表示する場合を想定する。例えば、加速度センサを用いて、表示対象オブジェクトの表示位置を、画像処理装置の動作方向とは逆方向に移動させることにより表示する方法(第1方法)が考えられる。また、例えば、SLAMを用いて表示部の絶対位置を算出することにより、表示対象オブジェクトの座標軸を算出して表示する方法(第2方法)が考えられる。 Suppose here that, for example, the display target object is displayed on coordinates fixed to the external object. For example, a method (first method) of displaying by using an acceleration sensor to move the display position of the display target object in the direction opposite to the operation direction of the image processing apparatus is conceivable. Further, for example, a method (second method) of calculating and displaying the coordinate axis of the display target object by calculating the absolute position of the display unit using SLAM can be considered.
 しかしながら、第1方法では、少し顔を動かすだけで表示対象オブジェクトが振動して見えるおそれがある。また、第1方法では、加速度センサの精度が低いことも想定される。この場合には、表示対象オブジェクトを適切な表示位置とすることができないおそれがある。また、仮に、加速度センサの精度として100%の精度が得られたとしても、外部オブジェクトに対する精度は表示部に対してのみ有効となる。この場合にも、少し顔を動かすだけで表示対象オブジェクトが振動して見えるおそれがある。 However, in the first method, there is a possibility that the display target object appears to vibrate only by moving the face a little. In the first method, it is also assumed that the accuracy of the acceleration sensor is low. In this case, there is a possibility that the display target object cannot be set to an appropriate display position. Even if 100% accuracy is obtained as the accuracy of the acceleration sensor, the accuracy for the external object is effective only for the display unit. Also in this case, there is a possibility that the display target object appears to vibrate only by moving the face a little.
 このため、第1方法では、表示対象オブジェクトが見難い、表示対象オブジェクトを見ていると酔う等の症状が出るおそれがある。 For this reason, in the first method, it is difficult to see the display target object, and there is a possibility that symptoms such as intoxication appear when looking at the display target object.
 また、第2方法では、表示された画像のピントが常に合っており、かつ、外部に起点となる対象物が無い場合(例えば、暗闇)には、表示対象オブジェクトが目の前(例えば、仮想表示部の位置)に表示されている錯覚に陥ってしまうおそれがある。また、第2方法では、外部オブジェクトの位置測定として、正確なカメラ位置出しを行えるSLAMは有効な手段であるが、精度は表示部に対してのみ有効となる。この場合にも、少し顔を動かすだけで表示対象オブジェクトが振動して見えるおそれがある。 In the second method, when the displayed image is always in focus and there is no external object that serves as a reference point (for example, in darkness), there is a risk of falling into the illusion that the display target object is displayed right in front of the eyes (for example, at the position of the virtual display unit). Also, in the second method, SLAM, which enables accurate camera positioning, is an effective means for measuring the positions of external objects, but its accuracy is effective only with respect to the display unit. In this case as well, the display target object may appear to vibrate when the face is moved only slightly.
 このため、第2方法では、表示対象オブジェクトが見難い、表示対象オブジェクトを見ていると酔う等の症状が出るおそれがある。 For this reason, in the second method, it is difficult to see the display target object, and there is a possibility that symptoms such as intoxication appear when looking at the display target object.
 ここで、例えば、画像処理装置を装着している人物が歩いている場合に、その歩く振動により表示部も振動してしまうことがある。この場合には、視認部の位置(例えば、左右の眼球位置)に対して表示部が変化することが想定される。また、例えば、画像処理装置を装着している人物が眉毛を動かす場合、近くを見た後に遠くを見る場合(または、その逆の場合)、左右に視線を移す場合等には、眼球位置が動く。この場合には、表示部に対して視認部の位置(例えば、左右の眼球位置)が変化することが想定される。 Here, for example, when the person wearing the image processing apparatus is walking, the display unit may also vibrate due to the walking vibration. In this case, the display unit is assumed to change relative to the position of the viewing part (for example, the positions of the left and right eyeballs). Also, for example, when the person wearing the image processing apparatus moves the eyebrows, looks far away after looking at something nearby (or vice versa), or shifts the line of sight left and right, the eyeball position moves. In this case, the position of the viewing part (for example, the positions of the left and right eyeballs) is assumed to change relative to the display unit.
 しかしながら、第1方法および第2方法では、視認部の位置と表示部との関係が変化することについて考慮されていない。そこで、視認部の位置と表示部との関係が変化することを考慮して、表示対象オブジェクトを、外部オブジェクトに対して固定された座標軸上に正確に表示することが重要となる。 However, the first method and the second method do not take into consideration that the relationship between the position of the visual recognition unit and the display unit changes. Therefore, it is important to accurately display the display target object on the coordinate axis fixed with respect to the external object in consideration of the change in the relationship between the position of the visual recognition unit and the display unit.
 そこで、本技術の第1の実施の形態では、複数の外向きカメラと、複数の内向きカメラとを用いて、外界の対象物、画像処理装置、眼球に関する各情報(例えば、位置情報)を検出して使用し、表示画像について適切な補正を行う。これにより、表示画像のブレを軽減することができる。また、仮想画面の位置を適切に制御することができる。 Therefore, in the first embodiment of the present technology, information (for example, position information) on external objects, the image processing apparatus, and the eyeballs is detected using a plurality of outward cameras and a plurality of inward cameras, and the displayed image is corrected appropriately using this information. This makes it possible to reduce blurring of the displayed image and to control the position of the virtual screen appropriately.
 これにより、表示対象オブジェクトの揺れを防止することができる。また、視認性を飛躍的に向上させることができる。また、より強い現実感を得ることができる。また、表示に対して、脳が受ける違和感を少なくすることができる。これにより、長時間視聴の疲労を緩和することができ、酔い止めを防止することができる。これらにより、透過型ウェアラブルグラスの快適なアプリケーションを実現することができる。 This can prevent the display target object from shaking. Also, the visibility can be dramatically improved. In addition, a stronger sense of reality can be obtained. In addition, it is possible to reduce the discomfort that the brain receives from the display. Thereby, it is possible to alleviate the fatigue of viewing for a long time, and to prevent sickness. As a result, a comfortable application of the transmissive wearable glass can be realized.
 言い換えると、表示対象オブジェクトを、外界に違和感なく溶け込んだ状態で表示することができる。また、視認性を向上させることができ、3D酔いを防止し、長時間視聴時の疲労を緩和することができる。 In other words, the display target object can be displayed in a state of being blended into the outside world without a sense of incongruity. In addition, visibility can be improved, 3D sickness can be prevented, and fatigue during long-time viewing can be reduced.
 言い換えると、視覚上、表示対象オブジェクトを、より安定、かつ、自然に、外部オブジェクトに対して固定された座標上に表示することができる。 In other words, it is possible to visually display the display target object on the coordinates fixed with respect to the external object more stably and naturally.
 また、本技術の第2の実施の形態によれば、表示対象オブジェクトの目的とする深度に基づいて、表示対象オブジェクトの鮮鋭度を適切に制御することができる。 Further, according to the second embodiment of the present technology, the sharpness of the display target object can be appropriately controlled based on the target depth of the display target object.
 このように、本技術の実施の形態によれば、視認性を向上させることができる。 Thus, according to the embodiment of the present technology, the visibility can be improved.
 なお、上述の実施の形態は本技術を具現化するための一例を示したものであり、実施の形態における事項と、請求の範囲における発明特定事項とはそれぞれ対応関係を有する。同様に、請求の範囲における発明特定事項と、これと同一名称を付した本技術の実施の形態における事項とはそれぞれ対応関係を有する。ただし、本技術は実施の形態に限定されるものではなく、その要旨を逸脱しない範囲において実施の形態に種々の変形を施すことにより具現化することができる。 Note that the above-described embodiment is an example for embodying the present technology, and the matters in the embodiment and the invention-specific matters in the claims have a corresponding relationship. Similarly, the invention specific matter in the claims and the matter in the embodiment of the present technology having the same name as this have a corresponding relationship. However, the present technology is not limited to the embodiment, and can be embodied by making various modifications to the embodiment without departing from the gist thereof.
 また、上述の実施の形態において説明した処理手順は、これら一連の手順を有する方法として捉えてもよく、また、これら一連の手順をコンピュータに実行させるためのプログラム乃至そのプログラムを記憶する記録媒体として捉えてもよい。この記録媒体として、例えば、CD(Compact Disc)、MD(MiniDisc)、DVD(Digital Versatile Disc)、メモリカード、ブルーレイディスク(Blu-ray(登録商標)Disc)等を用いることができる。 Further, the processing procedure described in the above embodiment may be regarded as a method having a series of these procedures, and a program for causing a computer to execute these series of procedures or a recording medium storing the program. You may catch it. As this recording medium, for example, a CD (Compact Disc), an MD (MiniDisc), a DVD (Digital Versatile Disc), a memory card, a Blu-ray disc (Blu-ray (registered trademark) Disc), or the like can be used.
 なお、本明細書に記載された効果はあくまで例示であって、限定されるものではなく、また、他の効果があってもよい。 Note that the effects described in this specification are merely examples and are not limiting; other effects may also be obtained.
 なお、本技術は以下のような構成もとることができる。
(1)
 画像処理装置が装着されている人物の視界に含まれる像に関する視界情報を取得する視界情報取得部と、
 前記人物の眼球に関する眼球情報を取得する眼球情報取得部と、
 前記視界に含まれる像に表示対象オブジェクトを重ねて表示させ、前記視界情報と前記眼球情報とに基づいて、前記視界に含まれる像における前記表示対象オブジェクトの表示態様を変更する制御を行う表示制御部と
を具備する画像処理装置。
(2)
 前記眼球情報取得部は、前記人物の眼球方向に設けられている複数の撮像部により生成された画像に基づいて、前記人物の眼球の位置および前記眼球の視線方向を検出することにより前記眼球情報を取得する前記(1)に記載の画像処理装置。
(3)
 前記人物の視界に含まれる像を前記人物の眼に提供する表示部をさらに具備し、
 前記複数の撮像部は、前記表示対象オブジェクトを表示する前記表示部の表示面における端部に設けられる
前記(2)に記載の画像処理装置。
(4)
 前記視界情報取得部は、前記人物の視線方向に設けられている複数の撮像部により生成された画像に基づいて、前記視界に含まれる物体の特徴点を抽出する特徴点抽出処理と、前記視界に含まれる物体の深度を検出する深度検出処理とを行うことにより、前記視界に含まれる物体の位置に関する情報を前記視界情報として取得する前記(1)から(3)のいずれかに記載の画像処理装置。
(5)
 前記人物の視界に含まれる像を前記人物の眼に提供する表示部をさらに具備し、
 前記表示制御部は、前記表示部の位置に関する位置情報と、前記視界情報と、前記眼球情報とに基づいて、前記表示態様を変更する制御を行う
前記(1)から(4)のいずれかに記載の画像処理装置。
(6)
 前記表示部は、前記人物の視界に含まれる像を透過する表示面の縁に配置されている画像出力部から出力される屈折画像に基づいて前記表示面に前記表示対象オブジェクトを表示する屈折式表示部であり、
 前記表示制御部は、前記表示対象オブジェクトを仮想的に表示する表示位置に関する位置情報と、前記視界情報と、前記眼球情報とに基づいて、前記表示態様を変更する制御を行う
前記(5)に記載の画像処理装置。
(7)
 前記表示制御部は、前記表示対象オブジェクトの前記表示部の表示面における表示位置と表示角度と表示サイズとのうちの少なくとも1つを変更することにより前記表示態様を変更する前記(1)から(6)のいずれかに記載の画像処理装置。
(8)
 前記表示制御部は、前記視界情報と前記眼球情報とに基づいて、前記表示対象オブジェクトの鮮鋭度を制御する前記(1)から(7)のいずれかに記載の画像処理装置。
(9)
 前記表示制御部は、前記表示対象オブジェクトを表示すべき前記視界における3次元上の位置に基づいて、前記表示対象オブジェクトの鮮鋭度を制御する前記(8)に記載の画像処理装置。
(10)
 前記表示制御部は、前記表示対象オブジェクトに対するボケ処理を行うことにより前記鮮鋭度を制御する前記(8)または(9)に記載の画像処理装置。
(11)
 前記画像処理装置の姿勢の変化に関する姿勢情報を取得する姿勢情報取得部をさらに具備し、
 前記表示制御部は、前記視界情報と、前記眼球情報と、前記姿勢の変化に関する姿勢情報とに基づいて、前記表示態様を変更する制御を行う
前記(1)から(10)のいずれかに記載の画像処理装置。
(12)
 画像処理装置が装着されている人物の視界に含まれる像に関する視界情報を取得する視界情報取得手順と、
 前記人物の眼球に関する眼球情報を取得する眼球情報取得手順と、
 前記視界に含まれる像に表示対象オブジェクトを重ねて表示させ、前記視界情報と前記眼球情報とに基づいて、前記視界に含まれる像における前記表示対象オブジェクトの表示態様を変更する制御手順と
を具備する画像処理方法。
(13)
 画像処理装置が装着されている人物の視界に含まれる像に関する視界情報を取得する視界情報取得手順と、
 前記人物の眼球に関する眼球情報を取得する眼球情報取得手順と、
 前記視界に含まれる像に表示対象オブジェクトを重ねて表示させ、前記視界情報と前記眼球情報とに基づいて、前記視界に含まれる像における前記表示対象オブジェクトの表示態様を変更する制御手順と
をコンピュータに実行させるプログラム。
Note that the present technology can also be configured as follows. (An illustrative code sketch of configuration (1) is given after this list.)
(1)
An image processing apparatus including:
a field-of-view information acquisition unit that acquires field-of-view information regarding an image included in the field of view of a person wearing the image processing apparatus;
an eyeball information acquisition unit that acquires eyeball information regarding an eyeball of the person; and
a display control unit that causes a display target object to be displayed superimposed on the image included in the field of view and performs control to change a display mode of the display target object in the image included in the field of view on the basis of the field-of-view information and the eyeball information.
(2)
The image processing apparatus according to (1), wherein the eyeball information acquisition unit acquires the eyeball information by detecting a position of the person's eyeball and a line-of-sight direction of the eyeball on the basis of images generated by a plurality of imaging units provided facing the person's eyeballs.
(3)
The image processing apparatus according to (2), further including a display unit that provides the image included in the person's field of view to the person's eyes, wherein the plurality of imaging units are provided at end portions of a display surface of the display unit on which the display target object is displayed.
(4)
The image processing apparatus according to any one of (1) to (3), wherein the field-of-view information acquisition unit acquires, as the field-of-view information, information regarding positions of objects included in the field of view by performing feature point extraction processing for extracting feature points of the objects included in the field of view and depth detection processing for detecting depths of the objects included in the field of view, on the basis of images generated by a plurality of imaging units provided facing the person's line-of-sight direction.
(5)
The image processing apparatus according to any one of (1) to (4), further including a display unit that provides the image included in the person's field of view to the person's eyes, wherein the display control unit performs control to change the display mode on the basis of position information regarding a position of the display unit, the field-of-view information, and the eyeball information.
(6)
The image processing apparatus according to (5), wherein the display unit is a refractive display unit that displays the display target object on a display surface that transmits the image included in the person's field of view, on the basis of a refraction image output from an image output unit disposed at an edge of the display surface, and the display control unit performs control to change the display mode on the basis of position information regarding a display position at which the display target object is virtually displayed, the field-of-view information, and the eyeball information.
(7)
The image processing apparatus according to any one of (1) to (6), wherein the display control unit changes the display mode by changing at least one of a display position, a display angle, and a display size of the display target object on a display surface of the display unit.
(8)
The image processing apparatus according to any one of (1) to (7), wherein the display control unit controls a sharpness of the display target object on the basis of the field-of-view information and the eyeball information.
(9)
The image processing apparatus according to (8), wherein the display control unit controls the sharpness of the display target object based on a three-dimensional position in the field of view where the display target object is to be displayed.
(10)
The image processing apparatus according to (8) or (9), wherein the display control unit controls the sharpness by performing blurring processing on the display target object.
(11)
The image processing apparatus according to any one of (1) to (10), further including a posture information acquisition unit that acquires posture information regarding a change in posture of the image processing apparatus, wherein the display control unit performs control to change the display mode on the basis of the field-of-view information, the eyeball information, and the posture information regarding the change in posture.
(12)
An image processing method including:
a field-of-view information acquisition procedure of acquiring field-of-view information regarding an image included in the field of view of a person wearing an image processing apparatus;
an eyeball information acquisition procedure of acquiring eyeball information regarding an eyeball of the person; and
a control procedure of causing a display target object to be displayed superimposed on the image included in the field of view and changing a display mode of the display target object in the image included in the field of view on the basis of the field-of-view information and the eyeball information.
(13)
A program for causing a computer to execute:
a field-of-view information acquisition procedure of acquiring field-of-view information regarding an image included in the field of view of a person wearing an image processing apparatus;
an eyeball information acquisition procedure of acquiring eyeball information regarding an eyeball of the person; and
a control procedure of causing a display target object to be displayed superimposed on the image included in the field of view and changing a display mode of the display target object in the image included in the field of view on the basis of the field-of-view information and the eyeball information.
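For illustration only, the following minimal sketch restates configuration (1) above as code; all class, attribute, and method names are hypothetical, and neither the configurations nor the claims are limited by it.

```python
from dataclasses import dataclass

@dataclass
class FieldOfViewInfo:
    object_positions: list   # 3D positions of objects in the wearer's field of view
    feature_points: list     # feature points extracted as in configuration (4)

@dataclass
class EyeballInfo:
    eye_position: tuple      # eyeball position, as in configuration (2)
    gaze_direction: tuple    # line-of-sight direction of the eyeball

class ImageProcessingApparatus:
    """Wires together the three units recited in configuration (1)."""

    def __init__(self, fov_unit, eye_unit, display_control_unit):
        self.fov_unit = fov_unit                           # field-of-view information acquisition unit
        self.eye_unit = eye_unit                           # eyeball information acquisition unit
        self.display_control_unit = display_control_unit   # display control unit

    def process_frame(self, display_object):
        fov_info: FieldOfViewInfo = self.fov_unit.acquire()   # from the outward imaging units
        eye_info: EyeballInfo = self.eye_unit.acquire()       # from the inward imaging units
        # Superimpose the object on the field of view and change its display
        # mode (position, angle, size - configuration (7)) from both inputs.
        self.display_control_unit.update(display_object, fov_info, eye_info)
```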
 100 画像処理装置
 101 外向き撮像部(R)
 102 外向き撮像部(L)
 103 内向き撮像部(R)
 104 内向き撮像部(L)
 111~114 画像処理部
 120 視界情報取得部
 130 空間モデル作成部
 140 DB(データベース)
 150 眼球情報取得部
 160 表示制御部
 170 表示処理部
 181 表示部(R)
 182 表示部(L)
 183~186 赤外線発光装置
 190 ブリッジ
 195 姿勢情報取得部
100 Image processing apparatus
101 Outward imaging unit (R)
102 Outward imaging unit (L)
103 Inward imaging unit (R)
104 Inward imaging unit (L)
111 to 114 Image processing unit
120 Field-of-view information acquisition unit
130 Spatial model creation unit
140 DB (database)
150 Eyeball information acquisition unit
160 Display control unit
170 Display processing unit
181 Display unit (R)
182 Display unit (L)
183 to 186 Infrared light emitting device
190 Bridge
195 Posture information acquisition unit

Claims (13)

  1.  画像処理装置が装着されている人物の視界に含まれる像に関する視界情報を取得する視界情報取得部と、
     前記人物の眼球に関する眼球情報を取得する眼球情報取得部と、
     前記視界に含まれる像に表示対象オブジェクトを重ねて表示させ、前記視界情報と前記眼球情報とに基づいて、前記視界に含まれる像における前記表示対象オブジェクトの表示態様を変更する制御を行う表示制御部と
    を具備する画像処理装置。
    An image processing apparatus comprising:
    a field-of-view information acquisition unit that acquires field-of-view information regarding an image included in the field of view of a person wearing the image processing apparatus;
    an eyeball information acquisition unit that acquires eyeball information regarding an eyeball of the person; and
    a display control unit that causes a display target object to be displayed superimposed on the image included in the field of view and performs control to change a display mode of the display target object in the image included in the field of view on the basis of the field-of-view information and the eyeball information.
  2.  前記眼球情報取得部は、前記人物の眼球方向に設けられている複数の撮像部により生成された画像に基づいて、前記人物の眼球の位置および前記眼球の視線方向を検出することにより前記眼球情報を取得する請求項1記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the eyeball information acquisition unit acquires the eyeball information by detecting a position of the person's eyeball and a line-of-sight direction of the eyeball on the basis of images generated by a plurality of imaging units provided facing the person's eyeballs.
  3.  前記人物の視界に含まれる像を前記人物の眼に提供する表示部をさらに具備し、
     前記複数の撮像部は、前記表示対象オブジェクトを表示する前記表示部の表示面における端部に設けられる
    請求項2記載の画像処理装置。
    The image processing apparatus according to claim 2, further comprising a display unit that provides the image included in the person's field of view to the person's eyes, wherein the plurality of imaging units are provided at end portions of a display surface of the display unit on which the display target object is displayed.
  4.  前記視界情報取得部は、前記人物の視線方向に設けられている複数の撮像部により生成された画像に基づいて、前記視界に含まれる物体の特徴点を抽出する特徴点抽出処理と、前記視界に含まれる物体の深度を検出する深度検出処理とを行うことにより、前記視界に含まれる物体の位置に関する情報を前記視界情報として取得する請求項1記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the field-of-view information acquisition unit acquires, as the field-of-view information, information regarding positions of objects included in the field of view by performing feature point extraction processing for extracting feature points of the objects included in the field of view and depth detection processing for detecting depths of the objects included in the field of view, on the basis of images generated by a plurality of imaging units provided facing the person's line-of-sight direction.
  5.  前記人物の視界に含まれる像を前記人物の眼に提供する表示部をさらに具備し、
     前記表示制御部は、前記表示部の位置に関する位置情報と、前記視界情報と、前記眼球情報とに基づいて、前記表示態様を変更する制御を行う
    請求項1記載の画像処理装置。
    The image processing apparatus according to claim 1, further comprising a display unit that provides the image included in the person's field of view to the person's eyes, wherein the display control unit performs control to change the display mode on the basis of position information regarding a position of the display unit, the field-of-view information, and the eyeball information.
  6.  前記表示部は、前記人物の視界に含まれる像を透過する表示面の縁に配置されている画像出力部から出力される屈折画像に基づいて前記表示面に前記表示対象オブジェクトを表示する屈折式表示部であり、
     前記表示制御部は、前記表示対象オブジェクトを仮想的に表示する表示位置に関する位置情報と、前記視界情報と、前記眼球情報とに基づいて、前記表示態様を変更する制御を行う
    請求項5記載の画像処理装置。
    The image processing apparatus according to claim 5, wherein the display unit is a refractive display unit that displays the display target object on a display surface that transmits the image included in the person's field of view, on the basis of a refraction image output from an image output unit disposed at an edge of the display surface, and the display control unit performs control to change the display mode on the basis of position information regarding a display position at which the display target object is virtually displayed, the field-of-view information, and the eyeball information.
  7.  前記表示制御部は、前記表示対象オブジェクトの前記表示部の表示面における表示位置と表示角度と表示サイズとのうちの少なくとも1つを変更することにより前記表示態様を変更する請求項1記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the display control unit changes the display mode by changing at least one of a display position, a display angle, and a display size of the display target object on a display surface of the display unit.
  8.  前記表示制御部は、前記視界情報と前記眼球情報とに基づいて、前記表示対象オブジェクトの鮮鋭度を制御する請求項1記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the display control unit controls a sharpness of the display target object on the basis of the field-of-view information and the eyeball information.
  9.  前記表示制御部は、前記表示対象オブジェクトを表示すべき前記視界における3次元上の位置に基づいて、前記表示対象オブジェクトの鮮鋭度を制御する請求項8記載の画像処理装置。 The image processing apparatus according to claim 8, wherein the display control unit controls the sharpness of the display target object based on a three-dimensional position in the field of view where the display target object is to be displayed.
  10.  前記表示制御部は、前記表示対象オブジェクトに対するボケ処理を行うことにより前記鮮鋭度を制御する請求項8記載の画像処理装置。 The image processing apparatus according to claim 8, wherein the display control unit controls the sharpness by performing a blurring process on the display target object.
  11.  前記画像処理装置の姿勢の変化に関する姿勢情報を取得する姿勢情報取得部をさらに具備し、
     前記表示制御部は、前記視界情報と、前記眼球情報と、前記姿勢の変化に関する姿勢情報とに基づいて、前記表示態様を変更する制御を行う
    請求項1記載の画像処理装置。
    The image processing apparatus according to claim 1, further comprising a posture information acquisition unit that acquires posture information regarding a change in posture of the image processing apparatus, wherein the display control unit performs control to change the display mode on the basis of the field-of-view information, the eyeball information, and the posture information regarding the change in posture.
  12.  画像処理装置が装着されている人物の視界に含まれる像に関する視界情報を取得する視界情報取得手順と、
     前記人物の眼球に関する眼球情報を取得する眼球情報取得手順と、
     前記視界に含まれる像に表示対象オブジェクトを重ねて表示させ、前記視界情報と前記眼球情報とに基づいて、前記視界に含まれる像における前記表示対象オブジェクトの表示態様を変更する制御手順と
    を具備する画像処理方法。
    An image processing method comprising:
    a field-of-view information acquisition procedure of acquiring field-of-view information regarding an image included in the field of view of a person wearing an image processing apparatus;
    an eyeball information acquisition procedure of acquiring eyeball information regarding an eyeball of the person; and
    a control procedure of causing a display target object to be displayed superimposed on the image included in the field of view and changing a display mode of the display target object in the image included in the field of view on the basis of the field-of-view information and the eyeball information.
  13.  画像処理装置が装着されている人物の視界に含まれる像に関する視界情報を取得する視界情報取得手順と、
     前記人物の眼球に関する眼球情報を取得する眼球情報取得手順と、
     前記視界に含まれる像に表示対象オブジェクトを重ねて表示させ、前記視界情報と前記眼球情報とに基づいて、前記視界に含まれる像における前記表示対象オブジェクトの表示態様を変更する制御手順と
    をコンピュータに実行させるプログラム。
    A program for causing a computer to execute:
    a field-of-view information acquisition procedure of acquiring field-of-view information regarding an image included in the field of view of a person wearing an image processing apparatus;
    an eyeball information acquisition procedure of acquiring eyeball information regarding an eyeball of the person; and
    a control procedure of causing a display target object to be displayed superimposed on the image included in the field of view and changing a display mode of the display target object in the image included in the field of view on the basis of the field-of-view information and the eyeball information.
PCT/JP2016/075878 2015-11-10 2016-09-02 Image processing device, image processing method and program WO2017081915A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015220082A JP2017091190A (en) 2015-11-10 2015-11-10 Image processor, image processing method, and program
JP2015-220082 2015-11-10

Publications (1)

Publication Number Publication Date
WO2017081915A1 true WO2017081915A1 (en) 2017-05-18

Family

ID=58695160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/075878 WO2017081915A1 (en) 2015-11-10 2016-09-02 Image processing device, image processing method and program

Country Status (2)

Country Link
JP (1) JP2017091190A (en)
WO (1) WO2017081915A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11386527B2 (en) 2018-01-30 2022-07-12 Sony Corporation Image processor and imaging processing method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11579688B2 (en) 2019-06-28 2023-02-14 Canon Kabushiki Kaisha Imaging display device and wearable device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010044383A1 (en) * 2008-10-17 2010-04-22 Hoya株式会社 Visual field image display device for eyeglasses and method for displaying visual field image for eyeglasses
WO2015125626A1 (en) * 2014-02-20 2015-08-27 ソニー株式会社 Display control device, display control method, and computer program
JP2015170175A (en) * 2014-03-07 2015-09-28 ソニー株式会社 Information processing apparatus, and information processing method

Also Published As

Publication number Publication date
JP2017091190A (en) 2017-05-25

Similar Documents

Publication Publication Date Title
JP7177213B2 (en) Adaptive parameters in image regions based on eye-tracking information
JP6717377B2 (en) Information processing device, information processing method, and program
US11762462B2 (en) Eye-tracking using images having different exposure times
US10048750B2 (en) Content projection system and content projection method
US11675432B2 (en) Systems and techniques for estimating eye pose
US10382699B2 (en) Imaging system and method of producing images for display apparatus
US20140152558A1 (en) Direct hologram manipulation using imu
US20190018236A1 (en) Varifocal aberration compensation for near-eye displays
KR20220120649A (en) Artificial Reality System with Varifocal Display of Artificial Reality Content
JP7388349B2 (en) Information processing device, information processing method, and program
JP6349660B2 (en) Image display device, image display method, and image display program
WO2015051605A1 (en) Image collection and locating method, and image collection and locating device
US10819898B1 (en) Imaging device with field-of-view shift control
CN111830714B (en) Image display control method, image display control device and head-mounted display device
CN111886564A (en) Information processing apparatus, information processing method, and program
KR20190048241A (en) Wearable device and image processing method thereof
JP6576639B2 (en) Electronic glasses and control method of electronic glasses
US11743447B2 (en) Gaze tracking apparatus and systems
CN110895433A (en) Method and apparatus for user interaction in augmented reality
WO2017081915A1 (en) Image processing device, image processing method and program
JP2017191546A (en) Medical use head-mounted display, program of medical use head-mounted display, and control method of medical use head-mounted display
KR101817436B1 (en) Apparatus and method for displaying contents using electrooculogram sensors
WO2013179425A1 (en) Display device, head-mounted display, calibration method, calibration program, and recording medium
US20230379594A1 (en) Image blending
JP2016133541A (en) Electronic spectacle and method for controlling the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16863879

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16863879

Country of ref document: EP

Kind code of ref document: A1