WO2013111280A1 - Display apparatus and display method - Google Patents

Display apparatus and display method

Info

Publication number
WO2013111280A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual image
distance
angle
area
Prior art date
Application number
PCT/JP2012/051514
Other languages
English (en)
Japanese (ja)
Inventor
哲也 藤榮
Original Assignee
Pioneer Corporation (パイオニア株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation (パイオニア株式会社)
Priority to PCT/JP2012/051514 priority Critical patent/WO2013111280A1/fr
Publication of WO2013111280A1 publication Critical patent/WO2013111280A1/fr

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 - Arrangement of adaptations of instruments
    • B60K35/22
    • B60K35/23
    • B60K35/28
    • B60K35/53
    • B60K35/60
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • B60K2360/166
    • B60K2360/334
    • B60K2360/771
    • B60K2360/785
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/014 - Head-up displays characterised by optical features comprising information/image processing systems
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0179 - Display position adjusting means not related to the information to be displayed
    • G02B2027/0187 - Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2380/00 - Specific applications
    • G09G2380/10 - Automotive applications

Definitions

  • the present invention relates to a technique for displaying information.
  • Patent Literature 1 proposes a technique for capturing a driver's hand and viewpoint with a camera and recognizing a gesture for an image displayed on a head-up display based on the obtained captured image.
  • Patent Document 2 describes a technique related to the present invention.
  • Patent Document 1 relates to a so-called air touch technique in which a motion (gesture) of a user's hand or finger is recognized and a virtual image selected by the user is identified and processed.
  • The appearance of a virtual image tends to change depending on the distance and angle between the user and the virtual image. For example, when the distance between the user and the virtual image is long, the virtual image appears small to the user, and when the distance is short, it appears large. Likewise, as the angle between the user and the virtual image increases, the virtual image tends to become harder to recognize.
  • Patent Document 1 does not describe taking into account the distance and angle between the user and the virtual image. The same applies to Patent Document 2.
  • the main object of the present invention is to provide a display device and a display method capable of appropriately realizing air touch in consideration of the distance between a user and a virtual image.
  • In one aspect, a display device that allows a user to recognize a virtual image includes: distance acquisition means that acquires the distance between the user and the virtual image position, that is, the position of the virtual image recognized by the user; object detection means that detects an object located between the user and the virtual image position, within a space in which a region corresponding to the virtual image is defined; and first control means that causes the display device to perform an operation corresponding to the region in which the object is detected. The region has a width in a plane direction with respect to the user observing the virtual image, and this width in the plane direction changes according to the distance between the virtual image position and the user.
  • In another aspect, a display method executed by a display device that allows a user to recognize a virtual image includes: a distance acquisition step of acquiring the distance between the user and the virtual image position, that is, the position of the virtual image recognized by the user; an object detection step of detecting an object located between the user and the virtual image position, within a space in which a region corresponding to the virtual image is defined; and a first control step of causing the display device to perform an operation corresponding to the region in which the object is detected. The region has a width in a plane direction with respect to the user observing the virtual image, and this width in the plane direction changes according to the distance between the virtual image position and the user.
  • FIG. 1 shows a schematic configuration of the head-up display according to the present embodiment.
  • A schematic configuration of the control unit according to the present embodiment is shown.
  • A Voronoi region used for determining a selectable region is shown.
  • A selectable region based on a third distance and a third angle is shown.
  • A selectable region determined based on the third distance and the third angle is shown.
  • A diagram for concretely explaining the method of calculating the third distance and the third angle is shown.
  • An example of a table for determining the size of a selectable region is shown.
  • An example of the three-dimensional reaction region is shown concretely.
  • The processing flow for determining a selectable region is shown.
  • The processing flow for determining a three-dimensional reaction region is shown.
  • The processing flow for realizing an air touch is shown.
  • the above display device is suitably used for displaying an image to allow a user to recognize a virtual image corresponding to the image.
  • the distance acquisition unit acquires the distance between the virtual image position and the user.
  • the object detection means detects an object that is located between the user and the virtual image position and exists in a space in which a region corresponding to the virtual image is defined.
  • The first control means causes the display device to perform an operation corresponding to the region in which the object is detected. For example, the object detection means detects a user gesture performed on a region corresponding to the virtual image (a region included in a three-dimensional space extending in front of the user), and the control means performs control according to the region in which the gesture is detected.
  • The control means changes the width of such a region in the plane direction according to the distance between the virtual image position and the user. This makes the virtual image easier to select; that is, the rate at which the user's air touch succeeds can be increased.
  • The display device may further include angle acquisition means configured to acquire the angle between the virtual image position and the user, and the width of the region in the plane direction changes according to the angle between the virtual image position and the user. Thus, even when the user observes the virtual image from an oblique direction, the rate at which the user's air touch succeeds can be improved.
  • The width in the plane direction increases as the angle between the virtual image position and the user increases.
  • The width in the plane direction increases as the distance between the virtual image position and the user increases.
  • The region may exist between the user and the windshield of the moving body on which the display device is mounted, with a depth in the direction from the user toward the windshield. In this case, the display device further includes second control means that changes the length of the region in the depth direction according to the user, for example so that the region does not extend beyond the windshield. In one example, the second control means estimates the length of the user's arm and changes the length of the region in the depth direction accordingly.
  • A display method executed by a display device that allows a user to recognize a virtual image includes: a distance acquisition step of acquiring the distance between the user and the virtual image position, which is the position of the virtual image recognized by the user; an object detection step of detecting an object that is located between the user and the virtual image position and that exists in a space in which a region corresponding to the virtual image is defined; and a first control step of causing the display device to perform an operation corresponding to the region in which the object is detected.
  • FIG. 1 is a schematic configuration diagram of a head-up display 2 according to the present embodiment.
  • The head-up display 2 according to the present embodiment includes a light source unit 3, a control unit 4, a first camera 6, a second camera 7, and a combiner 9, and is attached to a vehicle that includes a front window 25, a ceiling portion 27, a bonnet 28, a dashboard 29, and the like.
  • The light source unit 3 is installed on the ceiling portion 27 in the vehicle compartment via the support members 5a and 5b, and emits light constituting an image toward the combiner 9. Specifically, the light source unit 3 causes the driver to recognize the virtual image "Iv" via the combiner 9 by emitting the light constituting the image generated by the control unit 4 toward the combiner 9.
  • the light source unit 3 includes a laser light source, an LCD light source, and the like, and emits light from such a light source.
  • The combiner 9 functions as a display unit onto which the display image emitted from the light source unit 3 is projected; it reflects the display image toward the driver's viewpoint (eye point) Pe so that the display image is presented to the driver as the virtual image Iv.
  • the combiner 9 has a support shaft portion 8 installed on the ceiling portion 27, and rotates around the support shaft portion 8 as a support shaft.
  • The support shaft portion 8 is installed, for example, on the ceiling portion 27 near the upper end of the front window 25, in other words, at the position where a sun visor (not shown) for the driver is installed.
  • the support shaft portion 8 may be installed instead of the above-described sun visor.
  • the first camera 6 photographs a range in which at least the virtual image Iv formed by the head-up display 2 is included.
  • the second camera 7 captures a range that includes at least the user's eyes and hands.
  • the first camera 6 and the second camera 7 supply a captured image obtained by photographing to the control unit 4.
  • The control unit 4 includes a CPU, a RAM, a ROM, and the like (not shown), and performs overall control of the head-up display 2. Specifically, the control unit 4 generates an image to be presented to the user and controls the light source unit 3 so that the image is displayed. For example, the control unit 4 performs control to display images such as buttons that can be selected by the user. Further, when such a button image is displayed and the user, observing the virtual image Iv corresponding to the image, selects a button with a hand (finger), the control unit 4 performs a predetermined operation corresponding to the selected button. For example, the control unit 4 displays a menu screen on which a plurality of buttons are presented and, when a button is selected by the user, displays a content image corresponding to the selected button. In this way, the control unit 4 performs control for realizing a so-called air touch.
  • The control unit 4 obtains the distance and angle between the user and the virtual image Iv based on the images captured by the first camera 6 and the second camera 7, and, according to that distance and angle, expands the region corresponding to the virtual image of a button as described above to determine the region in which the button can be selected (hereinafter, the "selectable area").
  • The selectable area is a region that contains the virtual image corresponding to the button image and surrounds that virtual image.
  • In addition, based on the image captured by the second camera 7, the control unit 4 determines a region in three-dimensional space (hereinafter, the "three-dimensional reaction region") within which a motion of the user's hand (finger) is treated as an operation for selecting a button.
  • the three-dimensional reaction region is a region located between the user and the virtual image position (that is, a region in a three-dimensional space extending in front of the user), and is defined by the size of the user's body, the size of the virtual image, and the like.
  • the selectable region is located within the three-dimensional reaction region and has at least a width in the plane direction for the user.
  • Here, the plane direction refers, for example, to a plane orthogonal to the straight line connecting the user and the virtual image, that is, a virtual plane that the user faces. In addition to the width in the plane direction, the selectable area may also have a width in the depth direction from the user toward the virtual image.
  • For example, the selectable area is a two-dimensional shape similar to the virtual image, or a columnar three-dimensional space whose base is such a similar shape.
  • air touch is realized by using the selectable area and the three-dimensional reaction area determined as described above.
  • Specifically, the control unit 4 detects the position of the user's fingertip based on the image captured by the second camera 7 and determines whether the user has performed an operation of selecting a button (for example, a touch operation) within the three-dimensional reaction region. When it determines that the user has performed such an operation within the three-dimensional reaction region, the control unit 4 then determines whether the position of the user's fingertip is within the area on the three-dimensional reaction region that corresponds to the selectable area. When the position of the user's fingertip is within the selectable area, the control unit 4 determines that the button corresponding to that selectable area has been selected and executes the operation corresponding to the button.
  • control unit 4 is an example of “distance acquisition means”, “object detection means”, “first control means”, “second control means”, and “angle acquisition means” in the present invention.
  • A windshield may be used as the display unit instead of the combiner 9.
  • The light source unit 3 is not limited to being installed on the ceiling portion 27; it may instead be installed inside the dashboard 29.
  • The control unit 4 includes a first distance calculation unit 4a, a second distance/second angle calculation unit 4b, a third distance/third angle calculation unit 4c, a selectable area determination unit 4d, a fingertip position detection unit 4e, a three-dimensional reaction region determination unit 4f, and a display control unit 4g.
  • The first distance calculation unit 4a analyzes the image captured by the first camera 6 and, by obtaining the size of the virtual image Iv and the like, calculates the distance from a predetermined reference point to the virtual image Iv (hereinafter, the "first distance").
  • the reference point is a point determined in advance according to the installation positions of the first camera 6 and the second camera 7 (hereinafter the same).
  • The second distance/second angle calculation unit 4b analyzes the image captured by the second camera 7 and, by obtaining the position of the user's viewpoint, calculates the distance from the reference point to the viewpoint (hereinafter, the "second distance") and the angle defined by the reference point and the viewpoint (hereinafter, the "second angle").
  • The second angle is the angle formed, with respect to a plane that is parallel to the plane on which the virtual image Iv is formed and that passes through the user's viewpoint, between a straight line drawn perpendicularly from the reference point and the straight line connecting the reference point and the viewpoint.
  • Based on the first distance calculated by the first distance calculation unit 4a and the second distance and second angle calculated by the second distance/second angle calculation unit 4b, the third distance/third angle calculation unit 4c calculates the distance from the virtual image Iv to the viewpoint (hereinafter, the "third distance") and the angle formed by the virtual image Iv and the viewpoint (hereinafter, the "third angle").
  • the third angle is an angle formed by the line-of-sight direction when the user views the virtual image Iv and the normal direction in the plane on which the virtual image Iv is formed.
  • When the user views the virtual image Iv from the front, the third angle is 0 [°].
  • the selectable area determination unit 4d determines a selectable area based on the third distance and the third angle calculated by the third distance / third angle calculation unit 4c (details will be described later).
  • the selectable area determination unit 4d stores the selectable area thus determined in a memory or the like.
  • the fingertip position detector 4e detects the position of the user's fingertip in the captured image by analyzing the captured image captured by the second camera 7.
  • the fingertip position detection unit 4e detects the position of the user's fingertip in the captured image using a known image analysis technique.
  • the three-dimensional reaction region determination unit 4f determines a three-dimensional reaction region by analyzing a captured image captured by the second camera 7 and estimating the size of the user's body (details will be described later). .
  • When the fingertip position detected by the fingertip position detection unit 4e is within the three-dimensional reaction region determined by the three-dimensional reaction region determination unit 4f and within the selectable area determined by the selectable area determination unit 4d, the display control unit 4g executes the operation corresponding to the button associated with that selectable area. For example, the display control unit 4g performs control to display a content image corresponding to the button associated with the selectable area.
  • FIG. 3 is a diagram for explaining the Voronoi region used for determining the selectable region.
  • the region where the virtual image Iv is formed by the head-up display 2 is divided by Voronoi division, and the selectable region is determined according to the region (Voronoi region) obtained by the division.
  • FIG. 3A schematically shows an example of the virtual image Iv recognized by the user by the display by the head-up display 2.
  • reference numerals 71a to 71c schematically show virtual images corresponding to the selection buttons.
  • virtual images denoted by reference numerals 71a to 71c are referred to as “buttons 71a to 71c”.
  • a region surrounded by an outer frame line corresponds to a region where a virtual image Iv is formed by display by the head-up display 2 (hereinafter the same).
  • FIG. 3B shows the Voronoi division performed on the region where the virtual image Iv is formed. In this example, the Voronoi division uses the centroids 71ag, 71bg, and 71cg of the buttons 71a to 71c as its generating points. The perpendicular bisector 76a of the line segment 75a connecting the centroid 71ag and the centroid 71bg, and the perpendicular bisector 76b of the line segment 75b connecting the centroid 71bg and the centroid 71cg, divide the region, and the Voronoi regions 72a to 72c are thereby obtained.
  • The Voronoi regions 72a to 72c correspond to the dominance regions of the buttons 71a to 71c, respectively.
  • FIG. 3C shows an example of selectable areas 73a to 73c (shown by broken lines) of the buttons 71a to 71c defined by the Voronoi areas 72a to 72c.
  • the selectable areas 73a to 73c are larger in size than the buttons 71a to 71c, and have a shape (rectangular shape) similar to the buttons 71a to 71c.
  • The selectable area determination unit 4d changes the sizes of the selectable areas 73a to 73c based on the third distance from the virtual image Iv to the viewpoint and the third angle formed by the virtual image Iv and the viewpoint.
  • Specifically, the selectable area determination unit 4d changes the sizes of the selectable areas 73a to 73c within the Voronoi regions 72a to 72c based on the third distance and the third angle. In other words, the selectable area determination unit 4d uses, as the selectable areas 73a to 73c, areas whose sizes are changed from the areas corresponding to the buttons 71a to 71c toward the Voronoi regions 72a to 72c.
  • Hereinafter, when the buttons 71a to 71c are not distinguished from one another, they are referred to as the "button 71", and when the selectable areas 73a to 73c are not distinguished from one another, they are referred to as the "selectable area 73".
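  • To make the Voronoi division above concrete, the short Python sketch below labels any point of the plane on which the virtual image is formed with the button whose centroid is nearest; the boundaries between the labels are exactly the perpendicular bisectors 76a and 76b, so the labels reproduce the Voronoi regions 72a to 72c. The sketch is only illustrative: the coordinates and the Button/voronoi_label names are assumptions, not the patent's implementation.

```python
# Illustrative sketch (not from the patent): assign each point of the
# virtual-image plane to the Voronoi region of the nearest button centroid.
# Points equidistant from two centroids lie on the perpendicular bisector
# (76a, 76b in FIG. 3B), which is the boundary between the regions.

from dataclasses import dataclass

@dataclass
class Button:
    name: str
    centroid: tuple[float, float]  # centroid such as 71ag, 71bg, 71cg

def voronoi_label(point: tuple[float, float], buttons: list[Button]) -> str:
    """Return the name of the button whose centroid is nearest to `point`.

    Labelling every point of the display plane this way yields the Voronoi
    regions (72a to 72c) generated by the button centroids.
    """
    px, py = point

    def sq_dist(b: Button) -> float:
        cx, cy = b.centroid
        return (px - cx) ** 2 + (py - cy) ** 2

    return min(buttons, key=sq_dist).name

# Example with three buttons laid out side by side, as in FIG. 3A.
buttons = [Button("71a", (10.0, 5.0)), Button("71b", (30.0, 5.0)), Button("71c", (50.0, 5.0))]
print(voronoi_label((18.0, 6.0), buttons))  # -> "71a" (left of bisector 76a)
print(voronoi_label((22.0, 6.0), buttons))  # -> "71b" (right of bisector 76a)
```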
  • FIG. 4 schematically shows the positional relationship among the user, the three-dimensional reaction region 80, and the virtual image Iv (more precisely, the plane on which the virtual image Iv is formed).
  • FIG. 4A shows an example where the distance between the user and the virtual image Iv is large.
  • the virtual image Iv tends to appear small to the user.
  • In that case, the area corresponding to the button 71 may be small relative to the size of the finger, and the user may have difficulty selecting the button 71. Therefore, when the distance between the user and the virtual image Iv is large, it may be desirable to enlarge the selectable area 73.
  • FIG. 4B shows an example where the distance between the user and the virtual image Iv is small.
  • the virtual image Iv looks large to the user.
  • the area corresponding to the button 71 may be sufficiently large for the user to select.
  • On the other hand, if the selectable area reacts to a button selection too readily, the user may feel that the touch accuracy has decreased. Therefore, when the distance between the user and the virtual image Iv is small, it may be desirable to reduce the selectable area 73.
  • FIG. 4C shows an example in which the user observes the virtual image Iv from an oblique direction. That is, an example is shown in which the viewing direction when the user views the virtual image Iv is not orthogonal to the plane corresponding to the virtual image Iv.
  • When the selection is performed by air touch from an oblique direction, the positional relationship between the fingertip and the button 71 is difficult to grasp. Therefore, when the user observes the virtual image Iv from an oblique direction, it is considered desirable to enlarge the selectable area 73 in consideration of ease of selection.
  • Accordingly, the selectable area determination unit 4d makes the selectable area 73 larger when the third distance from the virtual image Iv to the viewpoint is large than when the third distance is small; in other words, when the third distance is small, the selectable area determination unit 4d makes the selectable area 73 smaller than when the third distance is large. Furthermore, in the present embodiment, the selectable area determination unit 4d enlarges the selectable area 73 when the third angle formed by the virtual image Iv and the viewpoint is large compared with when the third angle is small.
  • the selectable area determination unit 4d prevents the selectable area 73 from exceeding the Voronoi area 72 when the selectable area 73 is enlarged as described above. For example, the selectable area determination unit 4d sets the size when the selectable area 73 is in contact with the Voronoi boundary 76 of the Voronoi area 72 as the maximum size of the selectable area 73, and the selectable area 73 so as not to exceed this maximum size. Change the size. Incidentally, as the minimum size of the selectable area 73, for example, the size of the button 71 or a size slightly smaller than the button 71 is used.
  • FIG. 5 shows an example of the selectable area 73 determined based on the third distance and the third angle.
  • FIG. 5A shows an example of the selectable areas 73aa to 73ca determined when the third distance and/or the third angle is large, and FIG. 5B shows an example of the selectable areas 73ab to 73cb determined when the third distance and/or the third angle is small.
  • The selectable areas 73aa to 73ca shown in FIG. 5A are larger than the selectable areas 73ab to 73cb shown in FIG. 5B; conversely, the selectable areas 73ab to 73cb shown in FIG. 5B are smaller than the selectable areas 73aa to 73ca shown in FIG. 5A.
  • By using the selectable area 73 whose size is changed according to the third distance and the third angle in this way, the air touch can be realized appropriately. Specifically, enlarging the selectable area 73 when the third distance or the third angle is large makes the button 71 easier to select, that is, increases the rate at which the user's air touch succeeds. Further, reducing the selectable area 73 when the third distance is small appropriately suppresses the user's impression that the touch accuracy has decreased.
  • the calculation method of the first distance, the second distance, the second angle, the third distance, and the third angle will be specifically described with reference to FIG.
  • As described above, the first distance is calculated by the first distance calculation unit 4a, the second distance and the second angle are calculated by the second distance/second angle calculation unit 4b, and the third distance and the third angle are calculated by the third distance/third angle calculation unit 4c.
  • FIG. 6A shows a view when the user is observing the virtual image Iv from the front, as indicated by reference numeral A.
  • the angle (third angle) formed by the user's viewpoint Pea and the virtual image Iv is “0”.
  • The first distance calculation unit 4a calculates the first distance Xa1 from the reference point P1 to the virtual image Iv by analyzing the image captured by the first camera 6. For example, the first distance calculation unit 4a calculates the first distance Xa1 based on the size of the virtual image Iv in the captured image. FIG. 6 illustrates a case where the reference point P1 is set at a predetermined position (in one example, an intermediate position) between the first camera 6 and the second camera 7.
  • the second distance / second angle calculation unit 4b calculates a second distance Xa2 from the reference point P1 to the viewpoint Pea by analyzing a photographed image photographed by the second camera 7.
  • the second distance / second angle calculation unit 4b calculates the second distance Xa2 based on the size and position of the user's face and eyes in the captured image.
  • Alternatively, a marker may be attached in advance to the seat on which the user sits, and the second distance/second angle calculation unit 4b may calculate the second distance Xa2 based on the size and position of the marker.
  • Based on the first distance Xa1 calculated by the first distance calculation unit 4a and the second distance Xa2 calculated by the second distance/second angle calculation unit 4b, the third distance/third angle calculation unit 4c calculates the third distance Ya from the virtual image Iv to the viewpoint Pea.
  • Specifically, the third distance/third angle calculation unit 4c uses the angles at which the first camera 6 and the second camera 7 are attached, that is, the interior angles shown in FIG. 6A (hereinafter, "θ1" and "θ2"). The third distance/third angle calculation unit 4c then calculates the third distance Ya from equation (1), which is based on the law of cosines.
  • For example, a potentiometer capable of detecting the rotation angle is mounted on each of the first camera 6 and the second camera 7 (the resistance value of the potentiometer changes according to the rotation angle), and θ1 and θ2 can be determined based on the outputs of the potentiometers. FIG. 6A shows an example in which θ1 and θ2 are defined with respect to the vertical direction from the reference point P1, but the horizontal direction from the reference point P1 may be used as the reference instead.
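  • Equation (1) itself is not reproduced in this text, so the sketch below assumes the standard law of cosines applied to the triangle formed by the virtual image Iv, the reference point P1, and the viewpoint Pea, with known sides Xa1 and Xa2 and included angle θ1 + θ2. The function name and the numerical values are illustrative assumptions, not the patent's code.

```python
# Hedged sketch: equation (1) is described only as being "based on the cosine
# theorem", so its exact form is assumed here. For the triangle
# (virtual image Iv, reference point P1, viewpoint Pea) with known sides
# Xa1 (P1 to Iv) and Xa2 (P1 to Pea) and included angle (theta1 + theta2),
# the law of cosines gives the third side Ya (Iv to Pea).

import math

def third_distance_front(xa1: float, xa2: float, theta1: float, theta2: float) -> float:
    """Distance Ya from the virtual image to the viewpoint, front-viewing case.

    xa1, xa2 in metres; theta1, theta2 (camera mounting angles) in radians.
    """
    included = theta1 + theta2
    return math.sqrt(xa1 ** 2 + xa2 ** 2 - 2.0 * xa1 * xa2 * math.cos(included))

# Example: cameras mounted so that the included angle at P1 is 120 degrees.
ya = third_distance_front(xa1=2.0, xa2=0.8, theta1=math.radians(70), theta2=math.radians(50))
print(f"Ya = {ya:.2f} m")
```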
  • FIG. 6B shows a diagram when the user is observing the virtual image Iv from an oblique direction, as indicated by reference numeral B.
  • In FIG. 6B, the case where the user observes the virtual image Iv from the front is also shown, overlaid with broken lines (see reference numeral A).
  • the first distance calculation unit 4a calculates a first distance Xa1 from the reference point P1 to the virtual image Iv by analyzing a photographed image photographed by the first camera 6.
  • the calculation method of the first distance Xa1 is the same as that described above.
  • the second distance / second angle calculation unit 4b calculates a second distance Xb2 from the reference point P1 to the viewpoint Peb by analyzing a photographed image photographed by the second camera 7.
  • the calculation method of the second distance Xb2 is the same as that described above.
  • the second distance / second angle calculation unit 4b analyzes the captured image captured by the second camera 7 to calculate the second angle ⁇ defined by the reference point P1 and the viewpoint Peb.
  • The second angle θ is defined as follows. A line segment Ya′ extending from the center of the virtual image is defined along the line through the first camera 6 and the second camera 7, and the intersection point P2 between the line segment Ya′ and a line segment Z, which is a horizontal line through the viewpoint Peb, is defined (the interior angle between the line segment Ya′ and the line segment Z is π/2). The second angle θ is then defined as the angle formed between the straight line connecting the reference point P1 and the point P2 and the straight line connecting the reference point P1 and the viewpoint Peb.
  • the second angle ⁇ can also be obtained based on the size and position of the user's face and eyes in the image taken by the second camera 7, for example.
  • The second distance/second angle calculation unit 4b also calculates the distance Xa2′ from the reference point P1 to the point P2 based on the image captured by the second camera 7 (the interior angle between the line segment corresponding to the distance Xa2′ and the line segment Z is π/2).
  • Based on the first distance Xa1 calculated by the first distance calculation unit 4a, the second distance Xb2 calculated by the second distance/second angle calculation unit 4b, and the second angle θ, the third distance/third angle calculation unit 4c calculates the third distance Yb from the virtual image Iv to the viewpoint Peb and the third angle φ formed between the line-of-sight direction in which the user views the virtual image Iv and the normal direction of the plane on which the virtual image Iv is formed. Specifically, the third distance/third angle calculation unit 4c obtains the lengths of the line segment Z and the line segment Ya′ based on the first distance Xa1, the second distance Xb2, the second angle θ, and the distance Xa2′, and calculates the third distance Yb and the third angle φ from them. The length of the line segment Z is calculated by equation (2) using a trigonometric function, and the length of the line segment Ya′ is calculated by equation (3) from the law of cosines.
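  • Equations (2) and (3) are likewise not reproduced here, so the sketch below is only one plausible reconstruction of the oblique-viewing calculation from the geometry described above: Z is assumed to follow from Xb2 and θ by trigonometry, Ya′ is assumed to follow from the law of cosines in the same way as Ya in the front-viewing case (with Xa2′ in place of Xa2), and, because Ya′ and Z meet at P2 at a right angle, Yb and the third angle φ follow from the resulting right triangle. All function names and numerical values are assumptions.

```python
# Heavily hedged sketch: the exact forms of equations (2) and (3) are assumed
# from the geometry described in the text:
#   - Z (horizontal offset of the viewpoint Peb) is assumed to be Xb2 * sin(theta),
#   - Ya' is assumed to follow the law of cosines with Xa2' in place of Xa2,
#   - Ya' and Z meet at P2 at a right angle, so Yb and phi come from that
#     right triangle.

import math

def third_distance_and_angle_oblique(
    xa1: float, xb2: float, theta: float, xa2_dash: float,
    theta1: float, theta2: float,
) -> tuple[float, float]:
    """Return (Yb, phi) for the oblique-viewing case. Angles in radians."""
    z = xb2 * math.sin(theta)                                    # assumed equation (2)
    ya_dash = math.sqrt(xa1 ** 2 + xa2_dash ** 2                 # assumed equation (3)
                        - 2.0 * xa1 * xa2_dash * math.cos(theta1 + theta2))
    yb = math.hypot(ya_dash, z)    # right triangle with legs Ya' and Z
    phi = math.atan2(z, ya_dash)   # third angle between line of sight and plane normal
    return yb, phi

yb, phi = third_distance_and_angle_oblique(
    xa1=2.0, xb2=0.9, theta=math.radians(20), xa2_dash=0.85,
    theta1=math.radians(70), theta2=math.radians(50))
print(f"Yb = {yb:.2f} m, phi = {math.degrees(phi):.1f} deg")
```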
  • The method of obtaining the third distances Ya and Yb and the third angle φ is not limited to the above procedure, nor is it limited to using the first distance Xa1, the second distances Xa2 and Xb2, the second angle θ, and the like; the third distances Ya and Yb and the third angle φ may be obtained using other parameters.
  • FIG. 7 shows an example of a table (map) for determining the size of the selectable area 73.
  • In this table, the horizontal axis indicates the size of the selectable area 73, and the vertical axis indicates the third distance Ya or Yb (hereinafter simply written as "Y") and the third angle φ.
  • According to the table, when the third distance Y and/or the third angle φ is large, the size of the selectable area 73 is determined to be larger than when the third distance Y and/or the third angle φ is small.
  • In practice, a table defined for the third distance Y and a table defined for the third angle φ may each be prepared separately.
  • The table is not limited to one in which the size of the selectable area 73 changes continuously according to the third distance Y and/or the third angle φ; a table in which the size of the selectable area 73 changes stepwise according to the third distance Y and/or the third angle φ may be used instead.
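  • The actual values of the table in FIG. 7 are not given in this text, so the sketch below only illustrates the kind of lookup described: the third distance Y and the third angle φ are mapped to a scale factor by piecewise-linear interpolation over assumed breakpoints, and the resulting size is clamped between the size of the button 71 (minimum) and the largest size that stays inside the Voronoi region (maximum). The breakpoint values and function names are assumptions.

```python
# Illustrative sketch only: maps the third distance Y and third angle phi to
# a size for the selectable area 73, clamped between the button size and the
# size at which the area would touch its Voronoi boundary, as described above.

import bisect

# Assumed breakpoints: (third distance Y in metres, scale factor).
DISTANCE_TABLE = [(0.5, 1.0), (1.0, 1.2), (2.0, 1.5), (3.0, 1.8)]
# Assumed breakpoints: (third angle phi in degrees, additional scale factor).
ANGLE_TABLE = [(0.0, 1.0), (15.0, 1.1), (30.0, 1.3), (45.0, 1.5)]

def interpolate(table: list[tuple[float, float]], x: float) -> float:
    """Piecewise-linear lookup, clamped at both ends of the table."""
    xs = [p[0] for p in table]
    if x <= xs[0]:
        return table[0][1]
    if x >= xs[-1]:
        return table[-1][1]
    i = bisect.bisect_right(xs, x)
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def selectable_area_size(button_size: float, max_size_in_voronoi: float,
                         y: float, phi_deg: float) -> float:
    """Size of the selectable area 73 for third distance y and third angle phi."""
    size = button_size * interpolate(DISTANCE_TABLE, y) * interpolate(ANGLE_TABLE, phi_deg)
    return max(button_size, min(size, max_size_in_voronoi))

print(selectable_area_size(button_size=1.0, max_size_in_voronoi=1.6, y=2.5, phi_deg=10.0))
```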
  • FIG. 8 shows an example of the three-dimensional reaction region 80.
  • The three-dimensional reaction region determination unit 4f determines the three-dimensional reaction region 80 by analyzing the image captured by the second camera 7 and estimating the size of the user's body (for example, the sitting height). Specifically, the three-dimensional reaction region determination unit 4f estimates the arm length from the size of the user's body; the relationship between body size and arm length may be compiled into a table in advance from a large number of samples. The reach when the user extends the arm forward is then estimated from the arm length.
  • The three-dimensional reaction region determination unit 4f sets this forward reach of the arm as the length of the three-dimensional reaction region 80 in the depth direction (corresponding to the user's forward direction). That is, the length of the three-dimensional reaction region 80 in the depth direction increases as the length of the user's arm increases and decreases as the length of the user's arm decreases. As shown in FIG. 8, the three-dimensional reaction region determination unit 4f also sets a plane parallel to the virtual image Iv as the bottom surface of the three-dimensional reaction region 80 (the surface of the three-dimensional reaction region 80 facing the user).
  • It is preferable to set the three-dimensional reaction region 80 so that it is not positioned beyond the windshield of the vehicle, that is, so that the three-dimensional reaction region 80 does not straddle the windshield.
  • For this purpose, the three-dimensional reaction region determination unit 4f can set the distance between the user and the windshield as the length of the three-dimensional reaction region 80 in the depth direction. This makes it possible to avoid reacting to a touch operation performed on the far side of the windshield.
  • The distance between the user and the windshield can be estimated based on the image captured by the second camera 7, the seat position, and the like.
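  • A minimal sketch of the depth determination described above is shown below, assuming a placeholder body-size-to-arm-length table: the depth of the three-dimensional reaction region 80 is set to the estimated forward reach of the arm, capped at the distance from the user to the windshield so that the region does not straddle the glass. The table values and function names are assumptions.

```python
# Minimal sketch (assumed values, not the patent's implementation): the depth
# of the three-dimensional reaction region 80 is the estimated forward reach
# of the user's arm, never beyond the distance from the user to the windshield.

# Placeholder lookup compiled "from a large number of samples" per the text:
# estimated sitting height (cm) -> estimated arm length (cm).
SITTING_HEIGHT_TO_ARM_CM = {80: 60, 85: 64, 90: 68, 95: 72, 100: 76}

def estimate_arm_length_cm(sitting_height_cm: float) -> float:
    """Nearest-entry lookup in the (assumed) body-size-to-arm-length table."""
    key = min(SITTING_HEIGHT_TO_ARM_CM, key=lambda h: abs(h - sitting_height_cm))
    return SITTING_HEIGHT_TO_ARM_CM[key]

def reaction_region_depth_cm(sitting_height_cm: float, user_to_windshield_cm: float) -> float:
    """Depth of the three-dimensional reaction region 80 in the user's forward direction."""
    forward_reach = estimate_arm_length_cm(sitting_height_cm)
    return min(forward_reach, user_to_windshield_cm)  # do not cross the windshield

print(reaction_region_depth_cm(sitting_height_cm=92, user_to_windshield_cm=65))  # -> 65
```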
  • the three-dimensional reaction region 80 has a rectangular parallelepiped shape.
  • the three-dimensional reaction region 80 is not limited to the rectangular parallelepiped shape.
  • the three-dimensional reaction region 80 may be configured in a shape that narrows toward the virtual image Iv.
  • In the above example, a plane parallel to the virtual image Iv is set as the bottom surface of the three-dimensional reaction region 80; instead of such a plane, a plane defined by the lengths reached when the user spreads the arms up, down, left, and right may be set as the bottom surface of the three-dimensional reaction region 80.
  • When determining whether the position of the user's fingertip is within the selectable area 73, the display control unit 4g described above actually uses the area on the three-dimensional reaction region 80 that corresponds to the selectable area 73. Since the selectable area 73 determined by the selectable area determination unit 4d is defined on the virtual image, it needs to be converted into an area on the three-dimensional reaction region 80. For example, by translating the selectable area 73 on the plane where the virtual image Iv is formed horizontally toward the three-dimensional reaction region 80, the three-dimensional region thus formed within the three-dimensional reaction region 80 is used as the area corresponding to the selectable area 73 on the three-dimensional reaction region 80.
  • FIG. 9 shows a processing flow for determining the selectable area 73.
  • First, in step S101, the first distance calculation unit 4a analyzes the image captured by the first camera 6 and calculates the first distance from the reference point to the virtual image Iv by obtaining the size of the virtual image Iv and the like. Then, the process proceeds to step S102.
  • In step S102, the second distance/second angle calculation unit 4b analyzes the image captured by the second camera 7 and, by obtaining the position of the user's viewpoint and the like, calculates the second distance from the reference point to the viewpoint and the second angle defined by the reference point and the viewpoint. Then, the process proceeds to step S103.
  • In step S103, based on the first distance calculated in step S101 and the second distance and second angle calculated in step S102, the third distance/third angle calculation unit 4c calculates the third distance from the virtual image Iv to the viewpoint and the third angle formed by the virtual image Iv and the viewpoint. Then, the process proceeds to step S104.
  • In step S104, the selectable area determination unit 4d determines the selectable area 73 based on the third distance and the third angle calculated in step S103. Specifically, the selectable area determination unit 4d first acquires the image of the button 71 to be displayed and obtains the Voronoi region 72 based on the virtual image corresponding to that image (see FIG. 3). The selectable area determination unit 4d then limits the size of the selectable area 73 so that it does not exceed the Voronoi region 72, and determines the size of the selectable area 73 based on the third distance and/or the third angle, for example according to the table shown in FIG. 7.
  • the selectable area determination unit 4d stores information on the determined selectable area 73 (specifically, information on a coordinate range corresponding to the selectable area 73) in a memory or the like. After step S104 described above, the process proceeds to step S105.
  • In step S105, the selectable area determination unit 4d transmits the information on the selectable area 73 stored in step S104 to the display control unit 4g. Then, the process ends.
  • FIG. 10 shows a processing flow for determining the three-dimensional reaction region 80.
  • First, in step S201, the three-dimensional reaction region determination unit 4f analyzes the image captured by the second camera 7 to obtain the size of the user's body and the like, thereby determining the three-dimensional reaction region 80. Specifically, the three-dimensional reaction region determination unit 4f estimates from the captured image the reach when the user extends the arm forward and sets it as the length of the three-dimensional reaction region 80 in the depth direction, and also sets a plane parallel to the virtual image Iv as the bottom surface of the three-dimensional reaction region 80. Then, the process proceeds to step S202.
  • In step S202, the three-dimensional reaction region determination unit 4f transmits the information on the three-dimensional reaction region 80 determined in step S201 (the information on the three-dimensional space corresponding to the three-dimensional reaction region 80) to the display control unit 4g. Then, the process ends.
  • FIG. 11 shows a processing flow for realizing the air touch.
  • First, in step S301, the display control unit 4g acquires the information on the selectable area 73 determined by the selectable area determination unit 4d (specifically, the information on the coordinate range corresponding to the selectable area 73) and the information on the three-dimensional reaction region 80 determined by the three-dimensional reaction region determination unit 4f. Then, the process proceeds to step S302.
  • In step S302, the display control unit 4g acquires the information on the fingertip position detected by the fingertip position detection unit 4e and analyzes the movement of the user's finger based on this information. Then, the process proceeds to step S303.
  • In step S303, based on the finger movement analyzed in step S302, the display control unit 4g determines whether an operation for selecting the button 71 (for example, a touch operation) has been performed within the three-dimensional reaction region 80. If such an operation has been performed (step S303: Yes), the process proceeds to step S304; if the operation for selecting the button 71 has not been performed (step S303: No), the process returns to step S302.
  • In step S304, the display control unit 4g determines whether the fingertip position acquired in step S302 is within the selectable area 73. More precisely, the display control unit 4g determines whether the fingertip position is within the area on the three-dimensional reaction region 80 that corresponds to the selectable area 73; it performs this determination by converting the selectable area 73 defined on the virtual image into an area on the three-dimensional reaction region 80. If the fingertip position is within the selectable area 73 (step S304: Yes), the process proceeds to step S305; if not (step S304: No), the process returns to step S302.
  • In step S305, the display control unit 4g determines that the button 71 corresponding to the selectable area 73 has been selected and executes the operation corresponding to the button 71. For example, the display control unit 4g performs control to read the content image corresponding to the selected button 71 from the memory and display it. Then, the process ends.
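  • The sketch below illustrates the decision logic of steps S303 to S305, together with the conversion of a selectable area 73 into an area on the three-dimensional reaction region 80 described earlier. The coordinate convention (x and y spanning the virtual-image plane, z running from the user toward the virtual image), the axis-aligned box shapes, and all names and values are assumptions used only for illustration.

```python
# Hedged sketch of the FIG. 11 decision logic (S303 to S305). A selectable
# area defined on the virtual-image plane is swept through the depth of the
# three-dimensional reaction region 80, and a fingertip press counts only if
# the touch gesture occurs inside both the reaction region and that volume.

from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned box used for the reaction region 80 and extruded selectable areas."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

    def contains(self, point: tuple[float, float, float]) -> bool:
        x, y, z = point
        return (self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max
                and self.z_min <= z <= self.z_max)

def extrude_selectable_area(area_2d: tuple[float, float, float, float], region: Box) -> Box:
    """Sweep a selectable area (x_min, x_max, y_min, y_max), defined on the
    virtual-image plane, through the depth of the reaction region."""
    x0, x1, y0, y1 = area_2d
    return Box(x0, x1, y0, y1, region.z_min, region.z_max)

def selected_button(fingertip, touch_gesture, reaction_region, selectable_areas):
    """Steps S303 to S305: return the name of the selected button 71, or None."""
    if not touch_gesture or not reaction_region.contains(fingertip):       # S303
        return None
    for name, area in selectable_areas.items():                            # S304
        if extrude_selectable_area(area, reaction_region).contains(fingertip):
            return name                                                    # S305
    return None

# Example: a reaction region 0.2 m to 0.7 m in front of the user, two selectable areas.
region = Box(-0.4, 0.4, -0.2, 0.2, 0.2, 0.7)
areas = {"71a": (-0.35, -0.15, -0.1, 0.1), "71b": (-0.1, 0.1, -0.1, 0.1)}
print(selected_button((-0.2, 0.0, 0.5), True, region, areas))  # -> "71a"
```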
  • FIG. 12 shows selectable areas 73ax to 73cx according to the modification.
  • selectable areas 73ax to 73cx according to the modification have shapes similar to Voronoi areas 72a to 72c, respectively. That is, the selectable areas 73ax to 73cx have a shape obtained by reducing the size of the Voronoi areas 72a to 72c.
  • Also in this modification, the sizes of the selectable areas 73ax to 73cx are limited so as not to exceed the Voronoi regions 72a to 72c, and are then determined based on the third distance and the third angle.
  • According to this modification, the selectable areas 73ax to 73cx can make maximum use of the extent of the Voronoi regions 72a to 72c, so they can be made larger than in the above-described embodiment, and the button 71 can therefore be selected more easily.
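  • The sketch below illustrates the idea of the modification: each selectable area keeps the shape of its Voronoi region and is obtained by shrinking the Voronoi polygon toward a center point (here the button centroid) by a factor chosen from the third distance and the third angle. Both the choice of center and the factor rule are assumptions for illustration only.

```python
# Illustrative sketch of the modification (not the patent's code): a
# selectable area 73ax to 73cx is a Voronoi region 72a to 72c scaled down
# toward the button centroid by a factor in (0, 1] chosen from the third
# distance Y and the third angle phi (the factor rule is an assumption).

def scale_polygon_toward(vertices, center, factor):
    """Shrink a polygon toward `center` by `factor` in (0, 1]."""
    cx, cy = center
    return [(cx + factor * (x - cx), cy + factor * (y - cy)) for x, y in vertices]

def modified_selectable_area(voronoi_vertices, button_centroid, y, phi_deg):
    # Assumed rule: a larger third distance or angle keeps more of the Voronoi
    # region (factor closer to 1), matching the sizing behaviour described above.
    factor = min(1.0, 0.6 + 0.1 * y + 0.005 * phi_deg)
    return scale_polygon_toward(voronoi_vertices, button_centroid, factor)

cell = [(0.0, 0.0), (20.0, 0.0), (20.0, 10.0), (0.0, 10.0)]   # example Voronoi region 72a
print(modified_selectable_area(cell, button_centroid=(10.0, 5.0), y=2.0, phi_deg=20.0))
```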
  • the region where the virtual image Iv is formed may be divided using a method other than Voronoi division.
  • the region where the virtual image Iv is formed is not limited to being divided. That is, if a restriction is imposed so that the plurality of selectable areas 73 do not overlap each other, the area where the virtual image Iv is formed need not be divided.
  • In the embodiment and modification described above, the selectable area 73 is applied to the button 71, but the application is not limited to buttons.
  • A selectable area 73 determined by the same method as in the above-described embodiment can also be applied to icons, scroll bars, and the like.
  • the present invention is not limited to application to the head-up display 2.
  • the present invention can also be applied to a mobile terminal such as a smartphone that realizes AR (Augmented Reality) that displays information superimposed on a scene in front of the user.
  • the present invention can also be applied to, for example, a head-mounted display, a general display device (on-vehicle monitor or home television), a digital camera with a monitor, and the like.
  • the present invention can be applied to a display device such as a head-up display.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A display apparatus that allows a user to recognize a virtual image comprises: distance acquisition means that acquires a distance between the user and a virtual image position, that is, a position of the virtual image to be recognized by the user; object detection means that detects an object which is positioned between the user and the virtual image position, in a space in which a region corresponding to the virtual image is specified; and first control means that causes the display apparatus to perform an operation corresponding to the region in which the object is detected. The region has a width in a plane direction with respect to the user observing the virtual image, and the width in the plane direction changes according to the distance between the virtual image position and the user.
PCT/JP2012/051514 2012-01-25 2012-01-25 Display apparatus and display method WO2013111280A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/051514 WO2013111280A1 (fr) 2012-01-25 2012-01-25 Display apparatus and display method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/051514 WO2013111280A1 (fr) 2012-01-25 2012-01-25 Display apparatus and display method

Publications (1)

Publication Number Publication Date
WO2013111280A1 (fr) 2013-08-01

Family

ID=48873051

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/051514 WO2013111280A1 (fr) 2012-01-25 2012-01-25 Display apparatus and display method

Country Status (1)

Country Link
WO (1) WO2013111280A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015166229A (ja) * 2014-03-04 2015-09-24 アイシン・エィ・ダブリュ株式会社 ヘッドアップディスプレイ装置
GB2525653A (en) * 2014-05-01 2015-11-04 Jaguar Land Rover Ltd Apparatus and method for providing information within a vehicle
JP2015223913A (ja) * 2014-05-27 2015-12-14 アルパイン株式会社 車載システム、視線入力受付方法及びコンピュータプログラム
CN113302661A (zh) * 2019-01-10 2021-08-24 三菱电机株式会社 信息显示控制装置及方法、以及程序及记录介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0895708A (ja) * 1994-09-22 1996-04-12 Aisin Aw Co Ltd 画面タッチ式入力装置
JPH08197981A (ja) * 1995-01-23 1996-08-06 Aqueous Res:Kk 車輌用表示装置
JPH1115420A (ja) * 1997-06-26 1999-01-22 Yazaki Corp 車両用表示装置
JP2005138755A (ja) * 2003-11-07 2005-06-02 Denso Corp 虚像表示装置およびプログラム
JP2007302223A (ja) * 2006-04-12 2007-11-22 Hitachi Ltd 車載装置の非接触入力操作装置
WO2009084084A1 (fr) * 2007-12-27 2009-07-09 Pioneer Corporation Dispositif de reproduction de support d'enregistrement, procédé de reproduction de support d'enregistrement, programme de reproduction de support d'enregistrement et support d'enregistrement sur lequel un programme de reproduction de support d'enregistrement est mémorisé
JP2010083206A (ja) * 2008-09-29 2010-04-15 Denso Corp 車載用電子機器操作装置
JP2010249862A (ja) * 2009-04-10 2010-11-04 Sanyo Electric Co Ltd 表示装置

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015166229A (ja) * 2014-03-04 2015-09-24 アイシン・エィ・ダブリュ株式会社 ヘッドアップディスプレイ装置
GB2525653A (en) * 2014-05-01 2015-11-04 Jaguar Land Rover Ltd Apparatus and method for providing information within a vehicle
GB2542237A (en) * 2014-05-01 2017-03-15 Jaguar Land Rover Ltd Apparatus and method for providing information within a vehicle
GB2542237B (en) * 2014-05-01 2017-10-25 Jaguar Land Rover Ltd Apparatus and method for providing information within a vehicle
GB2526938B (en) * 2014-05-01 2018-05-30 Jaguar Land Rover Ltd Apparatus and method for providing information within a vehicle
JP2015223913A (ja) * 2014-05-27 2015-12-14 アルパイン株式会社 車載システム、視線入力受付方法及びコンピュータプログラム
CN113302661A (zh) * 2019-01-10 2021-08-24 三菱电机株式会社 信息显示控制装置及方法、以及程序及记录介质

Similar Documents

Publication Publication Date Title
JP6702489B2 (ja) ヘッドマウントディスプレイ、情報処理方法、及びプログラム
US9961259B2 (en) Image generation device, image display system, image generation method and image display method
US8730164B2 (en) Gesture recognition apparatus and method of gesture recognition
KR101416378B1 (ko) 영상 이동이 가능한 디스플레이 장치 및 방법
JP5839220B2 (ja) 情報処理装置、情報処理方法、及びプログラム
US20150116206A1 (en) Screen operation apparatus and screen operation method
JP5167439B1 (ja) 立体画像表示装置及び立体画像表示方法
JP5047650B2 (ja) 車載カメラシステム
JP2010184600A (ja) 車載用ジェスチャースイッチ装置
JP6339887B2 (ja) 画像表示装置
US9727229B2 (en) Stereoscopic display device, method for accepting instruction, and non-transitory computer-readable medium for recording program
WO2006013783A1 (fr) Appareil de saisie
JP6257978B2 (ja) 画像生成装置、画像表示システム及び画像生成方法
US10146058B2 (en) Visual line direction sensing device
US9162621B2 (en) Parking support apparatus
JP2014056462A (ja) 操作装置
US20120271102A1 (en) Stereoscopic endoscope apparatus
WO2013111280A1 (fr) Appareil d'affichage et procédé d'affichage
JP2014069629A (ja) 画像処理装置、及び画像処理システム
JP5977130B2 (ja) 画像生成装置、画像表示システム、および、画像生成方法
JP5810874B2 (ja) 表示制御システム
JP2015005823A (ja) 画像処理装置およびデジタルミラーシステム
JP2019002976A (ja) 空中映像表示装置
KR101542671B1 (ko) 공간 터치 방법 및 공간 터치 장치
JP2012103980A5 (fr)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 12866424; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 12866424; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)