WO2014096896A1 - A method of selecting display data in a display system of a vehicle - Google Patents

A method of selecting display data in a display system of a vehicle

Info

Publication number
WO2014096896A1
WO2014096896A1 · PCT/IB2012/003057
Authority
WO
WIPO (PCT)
Prior art keywords
user
hand
area
shape
contactless control
Prior art date
Application number
PCT/IB2012/003057
Other languages
French (fr)
Inventor
Rémi BARRELLON
Ryme SEBKY
Original Assignee
Renault Trucks
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Renault Trucks filed Critical Renault Trucks
Priority to PCT/IB2012/003057 priority Critical patent/WO2014096896A1/en
Publication of WO2014096896A1 publication Critical patent/WO2014096896A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/10Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3664Details of the user input interface, e.g. buttons, knobs or sliders, including those provided on a touch screen; remote controllers; input using gestures
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/141Activation of instrument input devices by approaching fingers or pens
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/146Instrument input by gesture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/20Optical features of instruments
    • B60K2360/33Illumination features
    • B60K2360/334Projection means
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • the invention relates to a display system for a vehicle, and more particularly to a method for selecting display data in this display system.
  • Such a display system comprises means for displaying the image in a display area, for instance a screen.
  • An example of display system is a Head-up display system (also called HUD system).
  • Such a display system is intended to display an image in the field of view of a driver of a motor vehicle, such as a truck or a car.
  • the displaying means use a translucent screen.
  • This translucent screen can be formed by an additional translucent screen arranged in front of the windshield of the vehicle, or can be formed by this windshield.
  • the displayed image provides useful information for the driver, such as the speed of the vehicle, a display of a representation of the environment of the vehicle such as a map for a GPS (Global Positioning System), or any type of visual alarm in general.
  • this information is alphanumeric, or it is presented as pictograms and/or graphics.
  • This information can be static or dynamic.
  • this information is displayed in the field of view of the driver, so as to avoid the driver having to take his eyes off the road to see it.
  • the display system is usually associated with a control device for controlling said display system.
  • This control device is intended to allow the driver to interact with the displayed image, for instance by choosing what information has to be displayed, or by navigating in a menu of a GPS, or by choosing an option in a list of options, or any other feature.
  • a control device is already known from prior art, in particular from FR 2 952 012.
  • This control device comprises at least a sensor for detecting a motion of the driver's hand in a contactless control area.
  • Control means are connected to the sensor so as to correlate motion of the driver's hand with instructions given to the head-up display system.
  • the contactless control area is arranged between the driver and the display area of the head-up display system, so that the head-up display system is controlled by having a driver's finger pointing towards the display area. Consequently, when the driver controls the head-up display system, his hand is moved in his field of view, thus his hand may disturb his vision.
  • One of the objects of the invention is to overcome this disadvantage by proposing a method for selecting display data in a display system, which allows the driver to control the display system without having his vision disturbed by his hand, while keeping a hand motion detection way to control the system.
  • the invention relates to a method for selecting display data in a display system of a vehicle, characterized in that the display system comprises:
  • a contactless control device including at least a sensor set that is arranged in a passenger compartment of the vehicle, so as to detect motions of at least one user's hand in a detection area reachable by the user's hand at least when the user is in a driving position, and
  • the method comprising the following configuration steps, for determining a contactless control area of the display system in the detection area:
  • a step of identifying at least a first local point in the detection area, corresponding to said at least first motion or position of the user's hand
  • the method also comprising the following operating steps, for selecting displayed data:
  • Such a method allows the user to define a contactless control area located at a distance from his field of view.
  • the user can control the system without having his hand disturbing his vision.
  • this method allows each user to define a contactless control area that is the most comfortable for him, and not an imposed contactless control area as in prior art.
  • a method according to the invention can comprise one or several of the following features, alone or in combination:
  • the configuration steps comprise a step of correlating the contactless control area with the display area, comprising:
  • the operating steps comprise a step of assigning a position in the display area to the detected further motion or position of the user's hand, comprising:
  • the step of determining which displayed data is selected being performed according to said corresponding position determined in the display area.
  • the configuration steps comprise a step of correlating the contactless control area with the display area, comprising:
  • determining the corresponding position in the display area that corresponds to said sub-area where the user's hand is detected, the step of determining which displayed data is selected being performed according to the corresponding position determined in the display area.
  • the operating steps comprise a step of displaying a pointer on the display area, corresponding to the user's hand, as long as this user's hand is in the contactless control area.
  • this contactless control area is delimited by a predetermined 2D or 3D shape that is positioned according to the position of said first local point in the detection area and is oriented by orienting said predetermined 2D or 3D shape according to a determined direction.
  • a determined direction is preferably determined by the display system so that the contactless control area is oriented along a direction that extends from the theoretical position of the user's shoulder on the side of the first local point, the user's head or the user's torso, and that passes through the first local point.
  • the step of detection of the first motion and/or position of the user's hand then comprises a detection of at least one second motion and/or position of the user's hand
  • the step of identifying a first local point then comprises an identification of at least a second local point in the detection area, corresponding to said at least one second motion and/or position of the user's hand, and
  • the step of defining the contactless control area comprises delimiting the contactless control area by positioning a predetermined 2D or 3D shape according to the position of said first local point in the detection area (30), and orienting the contactless control area by orienting said predetermined 2D or 3D shape depending on the position of the second local point in the detection area.
  • the predetermined 2D or 3D shape is oriented according to at least one direction that extends from the first local point to the second local point.
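As a non-limiting illustration, positioning a predetermined rectangle at the first local point and orienting it along the direction from the first to the second local point could be sketched as follows; the dimensions and all names are illustrative and not part of the disclosure:

```python
import math

def oriented_rectangle(p1, p2, width=0.20, height=0.12):
    """Position a predetermined rectangle at the first local point p1 and
    orient it along the direction from p1 to the second local point p2.
    width/height (metres) are illustrative defaults, not from the patent.
    Returns the four corners of the rectangle in the detection-area plane."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    n = math.hypot(dx, dy)
    ux, uy = dx / n, dy / n        # unit vector along the orientation axis
    vx, vy = -uy, ux               # perpendicular unit vector
    hw, hh = width / 2, height / 2
    return [(p1[0] + a * ux + b * vx, p1[1] + a * uy + b * vy)
            for a, b in ((hw, hh), (hw, -hh), (-hw, -hh), (-hw, hh))]
```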
  • the predetermined 2D shape has the shape of a rectangle or of a disc, and the predetermined 3D shape has the shape of a parallelepiped, a truncated pyramid or a truncated cone centred on said first local point.
  • the step of detecting the first motion and/or position of the user's hand also comprises a detection of at least one second motion and/or position of the user's hand
  • the step of identifying a first local point also comprises an identification of at least a second local point in the detection area, corresponding to said at least one second motion and/or position of the user's hand
  • the step of defining the contactless control area comprises delimiting and orienting the contactless control area by determining a 2D or a 3D shape according to the position of said first and second local points in the detection area.
  • the step of defining a 3D shape of the contactless control area comprises determining an intermediate 2D shape according to the position of said at least first and second local points in the detection area, orienting said intermediate 2D shape according to a determined direction, and then projecting said intermediate 2D shape according to a direction that is perpendicular to the intermediate 2D shape, in order to form a 3D shape of the contactless control area.
  • intermediate 2D shape refers to an intermediate shape that is determined before obtaining the 3D shape of the contactless control area.
  • the step of defining a 3D shape of the contactless control area comprises determining an intermediate 2D shape according to the position of said at least first and second local points in the detection area, orienting the intermediate 2D shape depending on the position of at least a third local point corresponding to a third motion and/or position of the user's hand in the detection area, and then projecting the intermediate 2D shape according to a direction that is perpendicular to this intermediate 2D shape in order to form a 3D shape of the contactless control area.
  • said intermediate 2D shape is centred on said first local point, and the periphery of the intermediate 2D shape is defined so that the second local point is part of the periphery of this intermediate 2D shape.
  • said intermediate 2D shape is a rectangle centred on said first local point, and the periphery of the rectangle is defined so that the second local point corresponds to a corner of the rectangle.
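A non-limiting sketch of such a rectangle, centred on the first local point with the second local point falling on one corner (names are illustrative):

```python
def rectangle_from_centre_and_corner(c, p):
    """Rectangle centred on the first local point c, whose periphery is
    defined so that the second local point p corresponds to a corner."""
    hw, hh = abs(p[0] - c[0]), abs(p[1] - c[1])   # half-extents
    return [(c[0] - hw, c[1] - hh), (c[0] + hw, c[1] - hh),
            (c[0] + hw, c[1] + hh), (c[0] - hw, c[1] + hh)]
```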
  • said 2D shape or intermediate 2D shape is expanded from said first local point and around said first local point until the periphery of the 2D shape or intermediate 2D shape reaches said second local point or reaches at least one limit of the detection area.
  • the 2D shape or intermediate 2D shape continues to be expanded in the opposite direction to said limit until reaching the second local point.
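The expansion behaviour described above can be sketched per axis: grow symmetrically around the first local point until the periphery reaches the second local point, and if a detection-area limit is hit first, keep growing only in the opposite direction. All names and the interval model are illustrative assumptions:

```python
def expand_interval(centre, target, lo, hi):
    """Expand one axis of the 2D shape symmetrically around `centre`
    (a first-local-point coordinate) until its periphery reaches
    `target` (the second-local-point coordinate); if a detection-area
    limit [lo, hi] is hit first, continue expanding only in the
    opposite direction until the target is reached."""
    r = abs(target - centre)           # half-extent of symmetric growth
    a, b = centre - r, centre + r
    if a < lo:                         # lower limit reached first
        a, b = lo, max(b, target)
    if b > hi:                         # upper limit reached first
        a, b = min(a, target), hi
    return a, b
```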
  • the 2D shape or the intermediate 2D shape has the shape of a rectangle or of a disc
  • the 3D shape of the contactless control area has the shape of a parallelepiped, a truncated pyramid or a truncated cone that corresponds to the intermediate 2D shape.
  • the contactless control area, the 3D and the 2D shapes are intangible areas and shapes.
  • the method comprises a preliminary step of determining the detection area, comprising:
  • the invention also relates to a display system of a vehicle, characterized in that it comprises:
  • a contactless control device including at least a sensor set that is arranged in a passenger compartment of the vehicle, so as to detect motions of at least one user's hand in a detection area reachable by the user's hand when the user is in a driving position, and
  • the display system also comprising means for defining a contactless control area in the detection area, said determining means comprising:
  • the display system also comprising means for selecting displayed data, comprising:
  • Such a display system is adapted to perform the method disclosed above.
  • a display system according to the invention can comprise one or several of the following features, alone or in combination:
  • the display area is projected in the field of view of the driver.
  • the sensor set is able to detect 3D motions of user's hand in the detection area.
  • the sensor set comprises at least one camera, preferably two cameras.
  • the sensor set comprises at least an Infra-Red or Ultra-Violet projector able to capture a motion of the user's hand.
  • the detection area is determined by the arrangement of the sensor set in the cabin and by the characteristics of the sensor set.
  • FIG. 1 is a schematic representation of a head up display system for a motor vehicle, comprising a control device according to an example of embodiment of the invention
  • FIG. 2 is a schematic representation of a matrix delimiting a contactless control area, projected by the control device of figure 1, in a non-limiting example of embodiment;
  • Figure 3 is a schematic representation of a contactless control area and a display area of the display system of figure 1;
  • Figure 4a is a schematic representation of configuration steps of a method for selecting display data in the display system of figure 1 ;
  • FIG. 4b is a schematic representation of operating steps of a method for selecting display data in the display system of figure 1 ;
  • FIGS. 5 and 6 are schematic representations of contactless control areas as defined during the configuration steps of the method represented in figure 4a;
  • FIG. 7 is a schematic representation of a contactless control area such as defined according to a first example of a first variant of the configuration method of figure 4a;
  • FIG. 8a, 8b, 8c represent steps to define the contactless control area according to a second example of the first variant.
  • Figures 9a, 9b, 9c are schematic representations of contactless control areas such as defined according to a fourth example of the first variant.
  • the display system 10 is a HUD system, but it can be any other display system in a variant.
  • This display system 10 is intended to display data 50, such as for instance an image, in the field of view 11 of a driver 12 of the motor vehicle.
  • the display system 10 comprises means 14 for displaying data in a predetermined display area 16 (visible on figure 3), by displaying this data on a screen of the dashboard or, in case of a HUD system, by projecting this image or data on the windscreen 18 of the vehicle or on translucent screen located between the driver and the windscreen 18.
  • These means 14 for displaying data 50 typically comprise a video or picture projector.
  • the display means 14 may also comprise at least a mirror to convey light beams emitted by the projector towards the windscreen 18.
  • the means 14 for displaying data comprise means for manually defining a location of the display area 16 on the windshield 18.
  • selectable display data 50 that can be, via the display system 10, selected or modified by the user.
  • Any selectable display data 50 may be associated with a unique code in the system 10.
  • the system 10 may know which part of the display area 16 must be considered as an active part that is usable as a selectable display data 50.
  • Other parts of the display area 16 are considered as inactive by the system 10.
  • the display area 16 is set on the windshield 18 in the field of view of the driver, so as to help the driver 12 keep his eyes on the road.
  • the display system 10 comprises a control device 20 for controlling said display system 10.
  • This control device 20 is intended to allow the driver or any other user 12 to interact with the displayed data, for instance by choosing what information has to be displayed, or by navigating in a menu of a GPS, or by choosing an option in a list of options, or any other feature.
  • the control device 20 allows a contactless control, so that the driver or any other passenger or user 12 can activate features just by moving his hand 13 or by moving his fingers.
  • the wording "hand” refers to any part of the user's hand 13 that can be used to control the control device 20, generally the user's fingers, or to any tool held in the user's hand 13.
  • the term "user” refers to the driver or to any other passenger or user .
  • the control device 20 comprises a sensor set 26, that may comprise at least one camera 27, preferably two cameras, intended to analyse a motion of the user's hand 13.
  • the sensor set 26 is able to detect 3D motions of the user's hand 13 in a detection area 30. Therefore, if only one camera is used, this camera is preferably a 3D camera, and if two cameras are used, these cameras can be 2D cameras arranged and oriented so as to allow the control device 20 to detect 3D motions of the user's hand 13.
  • the sensor set 26 can also comprise an Infrared and/or Ultraviolet projector 28, projecting Infrared and/or Ultraviolet towards the user's hand 13.
  • the sensor set 26 is able to work with normal light, Infrared and/or Ultraviolet. This Infrared or Ultraviolet light ensures that the user's hand 13 can be detected by the sensor set 26, whatever the ambient light in the passenger compartment of the motor vehicle (summer sunlight, night conditions, etc.).
  • the Infrared/Ultraviolet projector 28 and the cameras 27 of the sensor set 26, are oriented and configured to cover motions of the user's hands 13 when he is in a driving position. Besides, the sensor set 26 should cover a zone of the cabin from the lower part of the steering wheel of the motor vehicle to the top of the windshield 18, to ensure that it can detect the user's hand 13.
  • the area that is covered both by the Infrared/Ultraviolet projector 28 and by the sensor set 26 can form a detection area 30, i.e. an area in the passenger compartment that is reachable by the user's hand 13 when the user is in a driving position and where a motion of the user's hand 13 can be detected.
  • a restricted area called contactless control area 32
  • the contactless control area 32 is intangible and allows the user to control the display system without the need to contact with his hands or fingers a control member.
  • This contactless control area 32 can be invisible to the naked eye or partially visible in such a way that, for instance, only the limits of the contactless control area 32 can be made visible thanks to a specific projector (not represented) in order to make the contactless control area 32 easier to find by the user.
  • a method for selecting display data comprising configuration steps for determining the contactless control area 32, will be disclosed thereafter.
  • the display system 10 also comprises means for correlating hand motions detected in the contactless control area 32, with a position in the display area 16.
  • the display system 10 comprises at least one electronic control unit, arranged between the sensor set 26 and the display means 14, intended to convert detected motions of the user's hand 13 into input information for the display means 14.
  • the electronic control unit is preferably part of the control device 20.
  • the electronic control unit may comprise means for assigning coordinates to the contactless control area 32, and means for correlating the coordinates of the contactless control area 32 with the dimensions of the display area 16. In this case, when a motion or a position of the user's hand 13 is detected in the contactless control area 32, coordinates are assigned to that motion or position. These coordinates are processed by the electronic control unit to determine the corresponding position in the display area 16. Then, according to this corresponding position, it is determined, for instance by the same electronic control unit, which display data 50 is currently selected.
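Such a coordinate correlation amounts to normalising the hand position over the contactless control area 32 and scaling it to the display area 16. A non-limiting sketch, with all names and the 2D model as illustrative assumptions:

```python
def hand_to_display(hand_xy, control_origin, control_size, display_size):
    """Map a hand position detected in the contactless control area 32
    to the corresponding position in the display area 16 by normalising
    its coordinates over the control area, then scaling to the display."""
    nx = (hand_xy[0] - control_origin[0]) / control_size[0]
    ny = (hand_xy[1] - control_origin[1]) / control_size[1]
    # clamp to [0, 1] so positions at the area border stay on the display
    nx, ny = min(max(nx, 0.0), 1.0), min(max(ny, 0.0), 1.0)
    return nx * display_size[0], ny * display_size[1]
```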
  • the control device 20 may be configured to divide the contactless control area 32 into sub-areas. In this case the control device 20 is also configured to assign to each sub-area a corresponding position on the display area 16. The control device 20 detects in which sub-area of the contactless control area 32 the user's hand 13 is currently positioned. Then, the position on the display area 16 that corresponds to said sub-area is determined. According to this corresponding position, a selected display data 50 is determined.
  • the contactless control area 32 may be divided into sub-areas thanks to, for instance, a laser projector 22 that is part of the control device 20 and that uses a visible laser or an invisible wavelength of light in order to display a control matrix 24 that matches the contactless control area.
  • the control matrix 24 is used to divide the contactless control area 32 into sub-areas, and the control device 20, thanks to its sensor set 26, is configured to detect a position or a motion of the user's hand 13 relative to the matrix 24.
  • the choice of the wavelength of the laser projector 22 depends on the capacity of the sensor set 26 to detect this wavelength. In particular, if the sensor set 26 is able to detect wavelengths invisible to the eye, such an invisible wavelength is preferred.
  • squares and/or line-crossing points of the control matrix 24 allow defining a map with a coordinate system.
  • the amount of coordinates (sub-areas and/or line-crossing points) depends on the required precision. This amount can be low if low precision is sufficient, for instance for simple controls, or high if high precision is needed, for instance for fine controls.
  • the displayed control matrix 24 does not depend on the image displayed by the display means 14. Thus, this displayed control matrix 24 can remain the same during the use of the system.
  • each selectable display data 50 is associated with a specific and unique code
  • the control device 20 assigns to each specific code at least one corresponding sub-area or at least one line-crossing point of the matrix.
  • the sensor set 26 is able to determine in which sub-area of the control matrix 24, or more generally at which coordinates, the user's hand is positioned, or between which sub-areas or coordinates the user's hand is moving. Then the electronic control unit of the control device 20 is able to determine whether this sub-area or these coordinates correspond to a specific code. If so, the corresponding specific code is considered as activated and the selectable display data associated with it is considered as selected.
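The matrix-based lookup could be sketched as follows: locate the hand in a cell of the control matrix 24, then look up the code assigned to that cell. The function name, the grid model and the `code_map` dictionary are illustrative assumptions, not the patent's implementation:

```python
def select_data(hand_xy, origin, size, rows, cols, code_map):
    """Determine which cell of the control matrix the hand is in, then
    look up the selectable-display-data code assigned to that cell.
    code_map is a hypothetical {(row, col): code} mapping built by the
    control device; cells with no entry are inactive."""
    col = int((hand_xy[0] - origin[0]) / size[0] * cols)
    row = int((hand_xy[1] - origin[1]) / size[1] * rows)
    if not (0 <= row < rows and 0 <= col < cols):
        return None                     # hand is outside the matrix
    return code_map.get((row, col))     # None -> inactive sub-area
```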
  • the display system 10 preferably comprises means for displaying a pointer 34 on the display area 16.
  • Said means for displaying the pointer are connected to the sensor set 26, so as to correlate a displacement of the pointer 34 with a motion of the user's hand 13 in the contactless control area 32.
  • each position of the user's hand in the contactless control area 32 corresponds to a respective position of the pointer 34 in the display area 16.
  • the user 12 can see where, on the display area 16, he can interact, depending on the motions of his hand 13.
  • when the user's hand 13 is outside the contactless control area 32, the pointer 34 does not appear on the display area 16, so that the user 12 cannot interact with the display system 10.
  • only a particular motion of the user's hand, for instance a particular motion of the fingers, or a motion of the user's hand in a different direction in the contactless control area 32, allows the confirmation of the selection of a selectable display data 50, in order to avoid selection errors.
  • when the pointer 34 is targeting a selectable display data 50 in the display area 16, only a motion of the user's hand in a different direction, for instance perpendicular to the 2D plane, allows the confirmation of the selectable display data 50.
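Such a confirmation gesture can be sketched as detecting a displacement mainly along the control plane's normal ("pushing" towards the plane). The threshold value and all names are illustrative assumptions:

```python
def is_confirmation(prev, curr, normal, threshold=0.03):
    """Treat a hand displacement mostly along the control plane's unit
    normal as a selection confirmation. threshold (metres) is an
    illustrative value, not from the patent."""
    d = [c - p for p, c in zip(prev, curr)]
    along = sum(di * ni for di, ni in zip(d, normal))   # signed depth move
    return abs(along) >= threshold
```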
  • the selected display data 50 may also be modified by the user via the contactless control area 32.
  • the user can increment or decrement the value of a selectable display data such as, for instance, the target speed of a cruise control system.
  • when the user has selected a selectable display data thanks to, for instance, a first motion of his hand in the contactless control area 32, he can then modify this selected display data 50 by a further motion of his hand in a direction that is different from the first motion.
  • the user 12 can set the contactless control area 32 in any part of the detection area 30, in particular in a part that is the most comfortable for him.
  • he can set the contactless control area 32 in a part that is outside his field of view, i.e. that is out of a line 36 between the driver's eyes and the road and/or the display area 16, so as to avoid being in the way of his vision.
  • the invention relates to a method for selecting displayed data, comprising configuration steps for determining the contactless control area 32, and operating steps for selecting displayed data according the determined contactless control area 32.
  • the configuration steps comprise a step 100 of activating a configuration mode of the display system 10.
  • This activation step 100 can be triggered in any way, such as by being chosen from a menu, being controlled by voice, being activated by a switch, being automatically activated when the user 12 enters the vehicle, etc.
  • the method comprises an initialisation step 110, wherein each element of the system 10 is activated.
  • the initialisation step 110 can also be activated at the same time and in the same way as step 100.
  • the status of all these elements can be displayed on the display area 16 during a status verification step 120.
  • the user 12 can notice if there is a malfunction in any of these elements.
  • the method then comprises a step 130 of defining limits of the detection area 30.
  • said limits of the detection area 30 can be determined by the field of detection of the sensor set 26.
  • the detection area 30 can be defined as the detection field of one camera 27, as an intersection area between the detection fields of at least two cameras 27, or as an intersection area between the detection field of at least one camera 27 and the projection field of at least one Infrared/Ultraviolet projector 28.
  • the detection area 30 can be determined automatically, without an intervention of the user 12. For instance, it could be considered that the whole field of view of the sensor set 26 forms the detection area 30.
  • the user can also determine a restricted zone of detection inside the field of detection of the sensor set 26.
  • the user 12 can for instance be asked to move his hand 13 in several directions. The user then moves his hand until it stops, for instance because an obstacle inside the cabin obliges him to stop moving his hand while his hand is still in the field of detection of the sensor set 26.
  • the last point reachable by the user's hand 13 in said field of detection of the sensor set 26 can be set as a point of the limits of the detection area 30.
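This reach-based definition of the detection area 30 can be sketched as follows. This is a minimal hypothetical illustration, not taken from the patent: the point values and the axis-aligned-box simplification are assumptions.

```python
# Hypothetical sketch: derive the limits of the detection area 30 from the
# last reachable positions of the user's hand 13, as an axis-aligned 3D box.

def detection_area_from_reach(points):
    """Return (min corner, max corner) of a 3D box enclosing the extreme
    hand positions recorded by the sensor set 26."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Example: hand moved left/right/down/up/forward/back until it stopped
reach = [(-0.4, 0.1, 0.3), (0.3, 0.1, 0.3), (0.0, -0.2, 0.3),
         (0.0, 0.5, 0.3), (0.0, 0.1, 0.1), (0.0, 0.1, 0.7)]
lo, hi = detection_area_from_reach(reach)
```

Each extreme point the hand reached becomes a point on the limits of the detection area, matching the step described above.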
  • the step 130 of defining limits of the detection area 30 comprises:
  • the neutral position comprises at least a neutral orientation of the sensor set 26; more precisely, the neutral position comprises at least a neutral orientation of at least one camera 27.
  • the step 130 of defining limits of the detection area 30 comprises:
  • placing the sensor set 26 in a neutral position, wherein said neutral position comprises at least a neutral orientation of the sensor set 26 and an intermediate zoom position of the sensor set 26, more precisely an intermediate zoom position of at least one camera 27, wherein the zoom of the camera can be digital or optical.
  • the method may comprise a step 135 of displaying the detection area 30 as previously defined.
  • the invention allows the user to define a personalized contactless control area 32.
  • the method comprises a step 140 of detecting at least a first motion or position of the user's hand 13 in the detection area 30.
  • This step of detecting 140 is followed by a step 150 of identifying at least a first local point 40 in the detection area 30, corresponding to said at least first motion or position of the user's hand 13. For instance, the user 12 chooses the emplacement of the first local point 40 by pointing out a zone in the detection area 30. Such a local point 40 is shown on figure 5.
  • the method then comprises a step 160 of defining the contactless control area 32, wherein the contactless control area 32 is delimited and oriented according to said at least one first local point 40.
  • this contactless control area 32 is delimited by positioning a predetermined 2D or 3D shape 41 according to the position of said first local point 40 and is oriented by orienting said predetermined 2D or 3D shape according to a determined direction 42.
  • the first local point 40 is for example the centre of the predetermined 2D or 3D shape.
  • the predetermined 2D shape may have the shape of a rectangle or of a disc and the predetermined 3D shape 41 may have the shape of a parallelepiped (figure 5), a truncated pyramid (figure 6) or a truncated cone centred on said first local point 40.
  • the determined direction 42 is preferably determined by the display system, in particular by the control device 20, so that the 2D or 3D shape is oriented according to a direction that extends from the theoretical position of the user's shoulder (on the side of the first local point), the user's head or the user's torso, and that passes through the first local point.
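The determined direction 42 can be sketched as a unit vector from the theoretical shoulder position through the first local point 40. This is a hypothetical illustration; the shoulder coordinates below are assumptions.

```python
# Hypothetical sketch: compute the direction 42 used to orient the
# predetermined 2D or 3D shape, from an assumed shoulder position through
# the first local point 40 chosen by the user.
import math

def orientation_direction(shoulder, local_point):
    """Unit vector extending from the shoulder towards the first local point."""
    d = [lp - s for s, lp in zip(shoulder, local_point)]
    norm = math.sqrt(sum(c * c for c in d))
    return tuple(c / norm for c in d)

shoulder = (0.2, 0.6, -0.4)   # assumed theoretical shoulder position
point_40 = (0.2, 0.6, 0.2)    # first local point identified in step 150
direction_42 = orientation_direction(shoulder, point_40)
```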
  • the step 140 of detection of the first motion and/or position of the user's hand 13 also comprises a detection of at least one second motion and/or position of the user's hand 13.
  • the step 150 of identifying a first local point 40 also comprises an identification of at least a second local point 43, 53 in the detection area 30, corresponding to said at least one second motion and/or position of the user's hand 13.
  • the step 160 of defining the contactless control area 32 comprises delimiting the contactless control area 32 by determining a 3D shape 41 according to the position of said first and second points.
  • the step 160 of defining the contactless control area 32 comprises delimiting the contactless control area 32 by positioning a predetermined 3D shape 41 according to said first local point 40, and orienting the contactless control area 32 by orienting said predetermined 3D shape according to at least one direction 44 that extends from the first local point 40 to the second local point 43.
  • the step 160 of defining the contactless control area 32 comprises (figure 8a) determining an intermediate 2D shape 45 according to said at least first and second local points 40, 53.
  • said intermediate 2D shape 45 is centred on said first local point 40, and the periphery of the intermediate 2D shape 45 is defined so that the second local point 53 is part of the periphery of this intermediate 2D shape 45.
  • said intermediate 2D shape is a rectangle 45 centred on said first local point 40, and the periphery of the rectangle is defined so that the second local point 53 may correspond to a corner of the rectangle.
  • the step 160 also comprises orienting said intermediate 2D shape 45 according to a determined direction 42 (figure 8b).
  • the orientation of the intermediate 2D shape can be performed by rotating said intermediate 2D shape 45 around said first local point 40.
  • said intermediate 2D shape 45 is projected, for instance, according to the determined direction 42 or according to a direction that is perpendicular to the intermediate 2D shape, in order to form a 3D shape 41 of the contactless control area 32.
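The second example above (rectangle 45 centred on the first local point 40 with the second local point 53 on its periphery, then oriented and projected into the 3D shape 41) can be sketched as follows. This is a hypothetical illustration; the rotation angle, depth and numeric values are assumptions.

```python
# Hypothetical sketch of the second example of step 160: build the
# intermediate rectangle 45 from centre and corner, rotate it about the
# first local point 40, then extrude it along its normal into the shape 41.
import math

def rectangle_from_centre_and_corner(centre, corner):
    """Half-extents of a rectangle centred on `centre` whose corner is `corner`."""
    return abs(corner[0] - centre[0]), abs(corner[1] - centre[1])

def rotate_about(point, centre, angle):
    """Rotate `point` around `centre` by `angle` radians (orientation step)."""
    dx, dy = point[0] - centre[0], point[1] - centre[1]
    c, s = math.cos(angle), math.sin(angle)
    return (centre[0] + dx * c - dy * s, centre[1] + dx * s + dy * c)

def extrude(half_w, half_h, depth):
    """Project the 2D rectangle along its normal: box (w, h, d) dimensions."""
    return 2 * half_w, 2 * half_h, depth

# Point 40 as centre, point 53 as a corner of the rectangle 45
hw, hh = rectangle_from_centre_and_corner((0.0, 0.0), (0.2, 0.1))
rotated = rotate_about((0.2, 0.1), (0.0, 0.0), math.pi)  # orient the shape
dims = extrude(hw, hh, 0.15)  # parallelepiped 41 of the control area 32
```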
  • the step 160 of defining the contactless control area 32 comprises determining an intermediate 2D shape according to said at least first and second local points, for instance, in the same manner as in the second example.
  • the step 160 then comprises orienting the intermediate 2D shape according to a direction determined by a third local point corresponding to a third motion and/or position of the user's hand 13, and then projecting the intermediate 2D shape according to this direction or to a direction that is perpendicular to this intermediate 2D shape in order to form a 3D shape 41 of the contactless control area 32.
  • the step 160 of defining the contactless control area 32 comprises determining an intermediate 2D shape 45 according to said at least first and second local points 40, 53.
  • said intermediate 2D shape 45 is expanded from said first local point 40 and around said first local point 40 until the periphery of the intermediate 2D shape 45 reaches said second local point 53 (figure 9a) or reaches at least one limit 46 of the detection area 30 (figure 9b).
  • the intermediate 2D shape 45 is then oriented and the 3D shape 41 is formed, for instance, in the same manner as in the previous examples.
  • the intermediate 2D shape 45 can have the shape of a rectangle, the shape of a disc or any other shape.
  • the 3D shape 41 of the contactless control area 32 can have a corresponding shape, for instance, the shape of a parallelepiped, a truncated pyramid, a truncated cone or any other corresponding shape.
  • the contactless control area 32 can be defined according to three, four, five or more local points identified according to corresponding motions and/or positions of the user's hand 13.
  • the limits of the contactless control area 32 are confined within the detection area 30. In other words, the limits of the contactless control area 32 cannot go beyond the limits of the detection area 30. If they do, a warning is displayed and the user has to choose at least one other point for defining the limits of the contactless control area 32.
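The expansion example, together with the confinement of the contactless control area 32 within the detection area 30, can be sketched one axis at a time. This is a hypothetical illustration; the coordinate values are assumptions.

```python
# Hypothetical sketch: expand an interval symmetrically around the first
# local point 40 until its periphery reaches the second local point 53 or
# a limit 46 of the detection area 30, whichever comes first.

def expand_until(centre, target, lo, hi):
    """Grow an interval around `centre`; stop when the periphery reaches
    `target` or a detection-area limit [lo, hi]. Returns (min, max)."""
    half = min(abs(target - centre), centre - lo, hi - centre)
    return centre - half, centre + half

# Periphery reaches the second local point before any limit
span_x = expand_until(0.5, 0.75, 0.0, 2.0)
# Expansion stops early because it reaches a limit of the detection area
span_y = expand_until(0.25, 2.0, 0.0, 0.75)
```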
  • the method then optionally comprises a step 170 of displaying, in the display area 16, the contactless control area 32 and/or the intermediate 2D shape 45 as seen on figures 5 to 9.
  • the method comprises a step 175 of warning when at least a part of the previously defined contactless control area 32 is disposed in a predetermined area.
  • said predetermined area corresponds to a display area of the head-up display system 10, or to an area in front of the driver's eyes when he watches the road.
  • the driver 12 is warned that his hand can disturb his field of view when he controls the display system 10 while driving.
  • the method then comprises a step 180 of validating the contactless control area 32, wherein the control device 20 asks the user 12 if he validates the previously defined contactless control area 32 or not.
  • if the user does not validate it, the method goes back to step 140 for defining a new contactless control area 32, as shown on figure 4.
  • the method then comprises a step 185 of correlating the contactless control area 32 with the display area 16.
  • this step of correlating 185 comprises: assigning coordinates to the delimited and oriented contactless control area 32, and correlating the coordinates of the delimited and oriented contactless control area 32 with the dimensions of the display area 16.
  • the coordinates in the contactless control area 32 can be defined by considering the first local point 40 as an origin.
  • any other point, such as the centre of the contactless control area 32, can be considered as an origin for the coordinates.
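A minimal sketch of this correlation (step 185), assuming linear scaling per axis and the first local point 40 as the coordinate origin; the dimensions used are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of step 185: correlate coordinates of the contactless
# control area 32 (origin at the first local point 40) with the dimensions
# of the display area 16, via linear scaling on each axis.

def control_to_display(hand_xy, ctrl_min, ctrl_max, disp_w, disp_h):
    """Map a hand position in the control area 32 to a point in the display
    area 16 (e.g. for the pointer 34)."""
    u = (hand_xy[0] - ctrl_min[0]) / (ctrl_max[0] - ctrl_min[0])
    v = (hand_xy[1] - ctrl_min[1]) / (ctrl_max[1] - ctrl_min[1])
    return u * disp_w, v * disp_h

# Hand at the middle of a 0.4 x 0.2 control area, 800 x 400 display area
pos = control_to_display((0.2, 0.1), (0.0, 0.0), (0.4, 0.2), 800, 400)
```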
  • in a variant, this step of correlating 185 comprises: dividing the delimited and oriented contactless control area 32 into sub-areas, and assigning to each sub-area a corresponding position on the display area 16.
  • the method then comprises a step 190 of storing coordinates of the previously defined contactless control area 32 in adapted storage means.
  • the contactless control area 32 is stored in combination with a user ID of the user 12.
  • each driver 12 can retrieve his contactless control area 32 from the storage means, so he does not need to define the contactless control area 32 each time he enters the vehicle.
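A hypothetical sketch of this per-user storage (step 190): an in-memory store and the user ID format below stand in for the vehicle's actual storage means.

```python
# Hypothetical sketch of step 190: store the coordinates of the defined
# contactless control area 32 against the user ID, so each driver 12 can
# retrieve his own area instead of reconfiguring on every entry.

class ControlAreaStore:
    def __init__(self):
        self._areas = {}

    def save(self, user_id, area_coords):
        self._areas[user_id] = area_coords

    def load(self, user_id):
        """Return the stored area, or None so the configuration mode can run."""
        return self._areas.get(user_id)

store = ControlAreaStore()
store.save("driver_42", ((0.0, 0.0, 0.1), (0.4, 0.2, 0.3)))
```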
  • the method comprises a final step 195, wherein the configuration mode is ended.
  • once the configuration mode is ended, the user 12 can control the head-up display system 10 by moving his hand 13 in the contactless control area 32.
  • the method comprises a step 200 of activating an operating mode of the display system 10.
  • the operating mode can be a default mode, automatically activated when the configuration mode is not activated.
  • the method then comprises a step 210 of detecting at least one further motion or position of the user's hand 13 in the contactless control area 32 as determined during the configuration steps.
  • the method then comprises a step 220 of assigning a position to the detected further motion or position of the user's hand 13.
  • the step of assigning 220 comprises: assigning coordinates to said further motion or position of the user's hand 13, processing these coordinates, and determining the corresponding position in the display area 16 that corresponds to them.
  • in a variant, the step of assigning 220 comprises: detecting in which sub-area of the contactless control area 32 the user's hand 13 is currently positioned, and determining the corresponding position in the display area 16 that corresponds to said sub-area.
  • the method then comprises a step 230 of determining which selectable display data 50 is selected, according to said further motion or position of the user's hand 13.
  • this step 230 is performed according to the corresponding position in the display area 16 as determined in step 185.
  • each selectable display data 50 may be associated with a specific and unique code to which corresponding coordinates of the contactless control area 32 are assigned.
  • the display system 10 may use it to know which selectable display data 50 is selected depending on the motion or position of the user's hand 13 in the contactless control area 32.
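A hypothetical sketch of this code-based selection (step 230): the codes, coordinate ranges and half-open-interval convention below are assumptions for illustration, not values from the patent.

```python
# Hypothetical sketch of step 230: each selectable display data 50 carries a
# unique code mapped to a coordinate range of the contactless control area 32;
# the hand position then selects the matching code.

SELECTABLE = {
    "SPEED": ((0.0, 0.0), (0.2, 0.2)),   # code -> (min corner, max corner)
    "GPS":   ((0.2, 0.0), (0.4, 0.2)),
}

def selected_code(hand_xy):
    """Return the code of the selectable display data 50 under the hand,
    or None if the hand is over an inactive part of the display area 16."""
    for code, (mn, mx) in SELECTABLE.items():
        if mn[0] <= hand_xy[0] < mx[0] and mn[1] <= hand_xy[1] < mx[1]:
            return code
    return None
```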
  • the operating steps comprise a step of displaying a pointer 34 on the display area, corresponding to the user's hand 13, as long as this user's hand 13 is in the contactless control area 32.
  • the user can know which position in the display area 16 corresponds to the position of his hand 13 as long as this user's hand 13 is in the contactless control area 32.
  • the display system 10 comprises means, such as an embedded computer, for carrying out the previously disclosed method.
  • to this end, the display system 10 notably comprises means for activating a configuration mode, means for detecting at least a first motion or position of the user's hand 13 in the detection area 30, means for identifying at least a first local point in the detection area 30, corresponding to said at least first motion or position of the user's hand 13, and means for defining the contactless control area 32.
  • said means for defining the contactless control area 32 are connected to the sensor set 26, so that the limits are defined based on information given by the sensor set 26, said information relating to the motions of the user's hand 13.
  • the electronic control unit comprises means for selecting displayed data, comprising: means for activating an operating mode of the display system 10, means for detecting at least one further motion or position of the user's hand 13 in the contactless control area 32, and
  • means for determining which displayed data is selected, according to said further motion or position of the user's hand 13.

It should be noticed that the invention is not limited to the previously disclosed embodiment, but can comprise several variants.


Abstract

The method comprises the following configuration steps, for determining a contactless control area (32) of the display system (10) in the detection area (30): a step of activating a configuration mode of the display system (10), a step of detecting at least a first motion or position of the user's hand (13) in the detection area (30), a step (150) of identifying at least a first local point (40) in the detection area (30), corresponding to said at least first motion or position of the user's hand (13), and a step (160) of defining the contactless control area (32), wherein the contactless control area (32) is delimited and oriented according to said at least one first local point (40). The method also comprises the following operating steps, for selecting displayed data: a step (200) of activating an operating mode of the display system (10), a step (210) of detecting at least one further motion or position of the user's hand (13) in the contactless control area (32) as determined during the configuration steps, and a step (230) of determining which displayed data is selected, according to said further motion or position of the user's hand (13).

Description

A METHOD OF SELECTING DISPLAY DATA IN A DISPLAY SYSTEM OF A VEHICLE
Field of the invention

The invention relates to a display system for a vehicle, and more particularly to a method for selecting display data in this display system.
Technological Background

Several display systems for vehicles are well known in the prior art. Such a display system comprises means for displaying an image in a display area, for instance a screen.
An example of display system is a Head-up display system (also called HUD system). Such a display system is intended to display an image in the field of view of a driver of a motor vehicle, such as a truck or a car. To this end, the displaying means use a translucent screen. This translucent screen can be formed by an additional translucent screen arranged in front of the windshield of the vehicle, or can be formed by this windshield.
In any display system, the displayed image provides useful information for the driver, such as the speed of the vehicle, a representation of the environment of the vehicle such as a map for a GPS (Global Positioning System), or any type of visual alarm in general. Usually this information is alphanumeric, or it is presented as pictograms and/or graphics. This information can be static or dynamic. In case of a HUD system, this information is displayed in the field of view of the driver so that the driver does not have to move his eyes off the road to see it.
The display system is usually associated with a control device for controlling said display system. This control device is intended to allow the driver to interact with the displayed image, for instance by choosing what information has to be displayed, or by navigating in a menu of a GPS, or by choosing an option in a list of options, or any other feature.
A control device is already known from prior art, in particular from FR 2 952 012.
This control device comprises at least a sensor for detecting a motion of the driver's hand in a contactless control area. Control means are connected to the sensor so as to correlate motion of the driver's hand with instructions given to the head-up display system.
In such a control device, the contactless control area is arranged between the driver and the display area of the head-up display system, so that the head-up display system is controlled by having a driver's finger pointing towards the display area. Consequently, when the driver controls the head-up display system, his hand is moved in his field of view, thus his hand may disturb his vision.
Summary
One of the objects of the invention is to overcome this disadvantage by proposing a method for selecting display data in a display system, which allows the driver to control the display system without having his vision disturbed by his hand, while retaining hand-motion detection as the way to control the system.
To this end, the invention relates to a method for selecting display data in a display system of a vehicle, characterized in that the display system comprises:
• display means for displaying data in a display area,
• a contactless control device including at least a sensor set that is arranged in a passenger compartment of the vehicle, so as to detect motions of at least one user's hand in a detection area reachable by the user's hand at least when the user is in a driving position, and
• at least one electronic control unit, arranged between said sensor set and said display means, intended to convert detected motions of the user's hand into input information for the display means,
the method comprising the following configuration steps, for determining a contactless control area of the display system in the detection area:
• a step of activating a configuration mode of the display system,
• a step of detecting at least a first motion or position of the user's hand in the detection area,
• a step of identifying at least a first local point in the detection area, corresponding to said at least first motion or position of the user's hand, and
• a step of defining the contactless control area, wherein the contactless control area is delimited and oriented according to said at least one first local point,
the method also comprising the following operating steps, for selecting displayed data:
• a step of activating an operating mode of the display system,
• a step of detecting at least one further motion or position of the user's hand in the contactless control area as determined during the configuration steps, and
• a step of determining which displayed data is selected, according to said further motion or position of the user's hand.
Such a method allows the user to define a contactless control area disposed at a distance from his field of view. Thus, the user can control the system without his hand disturbing his vision.
Moreover, it should be noticed that this method allows each user to define a contactless control area that is the most comfortable for him, and not an imposed contactless control area as in prior art.
A method according to the invention can comprise one or several of the following features, alone or in combination:
- The configuration steps comprise a step of correlating the contactless control area with the display area, comprising:
• assigning coordinates to the delimited and oriented contactless control area, and
• correlating the coordinates of the delimited and oriented contactless control area with dimensions of the display area.
- The operating steps comprise a step of assigning a position in the display area to the detected further motion or position of the user's hand, comprising:
• assigning coordinates to said further motion or position of the user's hand,
• processing the coordinates of the further motion or position of the user's hand, and
• determining the corresponding position in the display area that corresponds to said coordinates of the further motion or position of the user's hand,
the step of determining which displayed data is selected being performed according to said corresponding position determined in the display area.
- The configuration steps comprise a step of correlating the contactless control area with the display area, comprising:
• dividing the delimited and oriented contactless control area into sub-areas, and
• assigning to each sub-area a corresponding position on the display area,
wherein the operating steps comprise a step of assigning a position in the display area to the detected further motion or position of the user's hand, comprising:
• detecting in which sub-area of the contactless control area the user's hand is currently positioned, and
• determining the corresponding position in the display area that corresponds to said sub-area where the user's hand is detected,
the step of determining which displayed data is selected being performed according to the corresponding position determined in the display area.
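The sub-area variant above can be sketched as a hypothetical illustration; the grid size and coordinates below are assumptions.

```python
# Hypothetical sketch: the contactless control area is divided into a grid of
# sub-areas, each pre-assigned a display position; assigning a position then
# reduces to finding the grid cell currently containing the user's hand.

def sub_area_index(hand_xy, ctrl_w, ctrl_h, cols, rows):
    """(column, row) of the sub-area in which the hand is currently detected."""
    col = min(int(hand_xy[0] / ctrl_w * cols), cols - 1)
    row = min(int(hand_xy[1] / ctrl_h * rows), rows - 1)
    return col, row

# Hand near the right edge of a 0.4 x 0.2 control area split into 4 x 2 cells
cell = sub_area_index((0.35, 0.05), 0.4, 0.2, 4, 2)
```

Each cell would map to one pre-assigned position of the display area, so the selected displayed data follows directly from the cell index.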
- The operating steps comprise a step of displaying a pointer on the display area, corresponding to the user's hand, as long as this user's hand is in the contactless control area.
- During the step of defining the contactless control area, this contactless control area is delimited by a predetermined 2D or 3D shape that is positioned according to the position of said first local point in the detection area and is oriented by orienting said predetermined 2D or 3D shape according to a determined direction. A determined direction is preferably determined by the display system so that the contactless control area is oriented according to a direction that extends from the theoretical position of the user's shoulder on the side of the first local point, the user's head or the user's torso and that passes through the first local point.
- In the method:
• the step of detection of the first motion and/or position of the user's hand then comprises a detection of at least one second motion and/or position of the user's hand,
• the step of identifying a first local point then comprises an identification of at least a second local point in the detection area, corresponding to said at least one second motion and/or position of the user's hand, and
• the step of defining the contactless control area comprises delimiting the contactless control area by positioning a predetermined 2D or 3D shape according to the position of said first local point in the detection area, and orienting the contactless control area by orienting said predetermined 2D or 3D shape depending on the position of the second local point in the detection area. Preferably, the predetermined 2D or 3D shape is oriented according to at least one direction that extends from the first local point to the second local point.
- The predetermined 2D shape has the shape of a rectangle or of a disc and the predetermined 3D shape has the shape of a parallelepiped, a truncated pyramid or a truncated cone centred on said first local point.
- In this method:
• the step of detecting the first motion and/or position of the user's hand also comprises a detection of at least one second motion and/or position of the user's hand,
• the step of identifying a first local point also comprises an identification of at least a second local point in the detection area, corresponding to said at least one second motion and/or position of the user's hand, and
• the step of defining the contactless control area comprises delimiting and orienting the contactless control area by determining a 2D or a 3D shape according to the position of said first and second local points in the detection area.
- The step of defining a 3D shape of the contactless control area comprises determining an intermediate 2D shape according to the position of said at least first and second local points in the detection area, orienting said intermediate 2D shape according to a determined direction, and then projecting said intermediate 2D shape according to a direction that is perpendicular to the intermediate 2D shape, in order to form a 3D shape of the contactless control area. The term "intermediate 2D shape" refers to an intermediate shape that is determined before obtaining the 3D shape of the contactless control area.
- The step of defining a 3D shape of the contactless control area comprises determining an intermediate 2D shape according to the position of said at least first and second local points in the detection area, orienting the intermediate 2D shape depending on the position of at least a third local point corresponding to a third motion and/or position of the user's hand in the detection area, and then projecting the intermediate 2D shape according to a direction that is perpendicular to this intermediate 2D shape in order to form a 3D shape of the contactless control area.
- During the step of defining the contactless control area, said intermediate 2D shape is centred on said first local point, and the periphery of the intermediate 2D shape is defined so that the second local point is part of the periphery of this intermediate 2D shape.
- During the step of defining the contactless control area, said intermediate 2D shape is a rectangle centred on said first local point, and the periphery of the rectangle is defined so that the second local point corresponds to a corner of the rectangle.
- During the step of defining the contactless control area, said 2D shape or intermediate 2D shape is expanded from said first local point and around said first local point until the periphery of the 2D shape or intermediate 2D shape reaches said second local point or reaches at least one limit of the detection area.
- During the step of defining the contactless control area, if the periphery of the 2D shape or intermediate 2D shape reaches a limit of the detection area, the 2D shape or intermediate 2D shape continues to be expanded in the opposite direction to said limit until reaching the second local point.
- The 2D shape or the intermediate 2D shape has the shape of a rectangle or of a disc, and the 3D shape of the contactless control area has the shape of a parallelepiped, a truncated pyramid or a truncated cone that corresponds to the intermediate 2D shape.
- The contactless control area, the 3D and the 2D shapes are intangible areas and shapes.
- The method comprises a preliminary step of determining the detection area, comprising:
• detecting at least one position or one motion of the user's hand in the detection field of the sensor set, and
• defining in the detection field of the sensor set and according to the at least one position or motion of the user's hand a detection area.
- The method comprises a preliminary step of determining the detection area, comprising:
• placing the sensor set in a neutral position, wherein said neutral position comprises at least a neutral orientation of the sensor set,
• detecting at least one position of the user's hand in a detection field of the sensor set, and
• modifying the orientation of the sensor set in order to focus the detection field of the sensor set on said position of the user's hand.
- The method comprises a preliminary step of determining the detection area, comprising:
• placing the sensor set in a neutral position, wherein said neutral position comprises at least a neutral orientation of the sensor set and an intermediate zoom position of the sensor set,
• detecting at least a first and a second position of the user's hand in the detection field of the sensor set, and
• defining a detection area by modifying the orientation of the sensor set in order to focus the detection field of the sensor set on the first position of the user's hand and adjusting the zoom position in order to detect the second position of the user's hand.
The invention also relates to a display system of a vehicle, characterized in that it comprises:
• display means for displaying data in a display area,
• a contactless control device including at least a sensor set that is arranged in a passenger compartment of the vehicle, so as to detect motions of at least one user's hand in a detection area reachable by the user's hand when the user is in a driving position, and
• at least one electronic control unit, arranged between said sensor set and said display means, intended to convert detected motions of the user's hand into input information for the display means,
the display system also comprising means for defining a contactless control area in the detection area, said determining means comprising:
• means for activating a configuration mode of the display system,
• means for detecting at least a first motion or position of the user's hand in the detection area,
• means for identifying at least a first local point in the detection area, corresponding to said at least first motion or position of the user's hand, and
• means for defining the contactless control area, wherein the contactless control area is delimited and oriented according to said at least one first local point,
the display system also comprising means for selecting displayed data, comprising:
• means for activating an operating mode of the display system,
• means for detecting at least one further motion or position of the user's hand in the contactless control area such as determined during the configuration steps, and
• means for determining which displayed data is selected, according to said further motion or position of the user's hand.
Such a display system is adapted to perform the method disclosed above.
A display system according to the invention can comprise one or several of the following features, alone or in combination:
- The display area is projected in the field of view of the driver.
- The sensor set is able to detect 3D motions of user's hand in the detection area.
- The sensor set comprises at least one camera, preferably two cameras.
- The sensor set comprises at least an Infra-Red or Ultra-Violet projector able to capture a motion of the user's hand.
- The detection area is determined by the arrangement of the sensor set in the cabin and by the characteristics of the sensor set.

These and other aspects and advantages will become apparent upon reading the following description in view of the drawing attached hereto representing, as non-limiting examples, embodiments of a vehicle according to the invention.

Brief description of the drawings
The following detailed description of several embodiments of the invention is better understood when read in conjunction with the appended drawings, it being however understood that the invention is not limited to the specific embodiments disclosed.
- Figure 1 is a schematic representation of a head up display system for a motor vehicle, comprising a control device according to an example of embodiment of the invention;
- Figure 2 is a schematic representation of a matrix delimiting a contactless control area, projected by the control device of figure 1, in a non-limiting example of embodiment;
- Figure 3 is a schematic representation of a contactless control area and a display area of the display system of figure 1 ;
- Figure 4a is a schematic representation of configuration steps of a method for selecting display data in the display system of figure 1 ;
- Figure 4b is a schematic representation of operating steps of a method for selecting display data in the display system of figure 1 ;
- Figures 5 and 6 are schematic representations of contactless control areas such as defined during the configuration steps of the method represented of figure 4a;
- Figure 7 is a schematic representation of a contactless control area such as defined according to a first example of a first variant of the configuration method of figure 4a;
- Figures 8a, 8b, 8c represent steps to define the contactless control area according to a second example of the first variant; and
- Figures 9a, 9b, 9c are schematic representations of contactless control areas such as defined according to a fourth example of the first variant.
Detailed description of the invention
With reference to Figure 1, we describe a display system 10 according to an example of embodiment of the invention, for a motor vehicle, particularly for a truck. In this example, the display system 10 is a HUD system, but it can be any other display system in a variant. This display system 10 is intended to display data 50, such as for instance an image, in the field of view 11 of a driver 12 of the motor vehicle. To this end, the display system 10 comprises means 14 for displaying data in a predetermined display area 16 (visible on figure 3), by displaying this data on a screen of the dashboard or, in case of a HUD system, by projecting this image or data on the windscreen 18 of the vehicle or on a translucent screen located between the driver and the windscreen 18. These means 14 for displaying data 50 typically comprise a video or picture projector. The display means 14 may also comprise at least a mirror to convey light beams emitted by the projector towards the windscreen 18.
Preferentially, the means 14 for displaying data comprise means for manually defining a location of the display area 16 on the windscreen 18.
Among the data that are displayed on the display area 16, some are selectable display data 50 that can be selected or modified by the user via the display system 10. Any selectable display data 50 may be associated with a unique code in the system 10. In this case, the system 10 may know which part of the display area 16 must be considered as an active part that is usable as a selectable display data 50. Other parts of the display area 16 are considered as inactive by the system 10.
Preferably, the display area 16 is set on the windscreen 18 in the field of view of the driver, so as to help the driver 12 keep his eyes on the road.
The display system 10 comprises a control device 20 for controlling said display system 10. This control device 20 is intended to allow the driver or any other user 12 to interact with the displayed data, for instance by choosing what information has to be displayed, or by navigating in a menu of a GPS, or by choosing an option in a list of options, or any other feature.
According to the invention, the control device 20 allows a contactless control, so that the driver or any other passenger or user 12 can activate features just by moving his hand 13 or his fingers. In the current disclosure, the wording "hand" refers to any part of the user's hand 13 that can be used to control the control device 20, generally the user's fingers, or to any tool held in the user's hand 13. Hereinafter, the term "user" refers to the driver or to any other passenger or user 12.
To this end, the control device 20 comprises a sensor set 26, that may comprise at least one camera 27, preferably two cameras, intended to analyse a motion of the user's hand 13. In particular, the sensor set 26 is able to detect 3D motions of the user's hand 13 in a detection area 30. Therefore, if only one camera is used, this camera is preferably a 3D camera, and if two cameras are used, these cameras can be 2D cameras arranged and oriented so as to allow the control device 20 to detect 3D motions of the user's hand 13. In a preferred embodiment, the sensor set 26 can also comprise an Infrared and/or Ultraviolet projector 28, projecting Infrared and/or Ultraviolet light towards the user's hand 13. In that case, the sensor set 26 is able to work with normal light, Infrared and/or Ultraviolet. This Infrared or Ultraviolet light ensures that the user's hand 13 can be detected by the sensor set 26, whatever the ambient light in the passenger compartment of the motor vehicle (summer sunlight, night conditions, etc.).
The Infrared/Ultraviolet projector 28 and the cameras 27 of the sensor set 26 are oriented and configured to cover motions of the user's hands 13 when he is in a driving position. Besides, the sensor set 26 should cover a zone of the cabin from the lower part of the steering wheel of the motor vehicle to the top of the windscreen 18, to ensure that it can detect the user's hand 13.
The area that is covered both by the Infrared/Ultraviolet projector 28 and by the sensor set 26 can form a detection area 30, i.e. an area in the passenger compartment that is reachable by the user's hand 13 when the user is in a driving position and where a motion of the user's hand 13 can be detected.
Within this detection area 30, a restricted area, called the contactless control area 32, is considered for controlling the display system 10. Any motion outside of the contactless control area 32 is not taken into account by the control device 20. The contactless control area 32 according to the invention is intangible and allows the user to control the display system without needing to touch a control member with his hands or fingers. This contactless control area 32 can be invisible to the naked eye or partially visible, in such a way that, for instance, only the limits of the contactless control area 32 are made visible thanks to a specific projector (not represented), in order to make the contactless control area 32 easier to find by the user. A method for selecting display data, comprising configuration steps for determining the contactless control area 32, will be disclosed hereafter.
The display system 10 also comprises means for correlating hand motions detected in the contactless control area 32, with a position in the display area 16. In particular, the display system 10 comprises at least one electronic control unit, arranged between the sensor set 26 and the display means 14, intended to convert detected motions of the user's hand 13 into input information for the display means 14. The electronic control unit is preferably part of the control device 20.
The electronic control unit may comprise means for assigning coordinates to the contactless control area 32, and means for correlating the coordinates of the contactless control area 32 with the dimensions of the display area 16. In this case, when a motion or a position of the user's hand 13 is detected in the contactless control area 32, coordinates are assigned to this motion or position. These coordinates are processed by the electronic control unit to determine the position in the display area 16 that corresponds to said coordinates. Then, according to this corresponding position, it is determined, for instance by the same electronic control unit, which display data 50 is currently selected.
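The coordinate correlation described above can be sketched as a simple linear mapping. The function name, the rectangular-bounds representation and the normalised-coordinate convention are illustrative assumptions, not taken from the patent:

```python
def correlate(hand_xy, control_area, display_area):
    """Map a hand position inside the contactless control area to the
    corresponding position in the display area."""
    (cx0, cy0, cx1, cy1) = control_area   # bounds of the control area
    (dx0, dy0, dx1, dy1) = display_area   # bounds of the display area
    hx, hy = hand_xy
    # Normalise the hand position within the control area (0..1)
    u = (hx - cx0) / (cx1 - cx0)
    v = (hy - cy0) / (cy1 - cy0)
    # Scale the normalised position to the display-area dimensions
    return (dx0 + u * (dx1 - dx0), dy0 + v * (dy1 - dy0))
```

With such a mapping, the origin of the coordinates in the control area is arbitrary; only the relative position of the hand within the area matters.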
In a variant, the control device 20 may be configured to divide the contactless control area 32 into sub-areas. In this case, the control device 20 is also configured to assign to each sub-area a corresponding position on the display area 16. The control device 20 detects in which sub-area of the contactless control area 32 the user's hand 13 is currently positioned. Then, the position on the display area 16 that corresponds to said sub-area where the user's hand 13 has been detected is determined. According to this corresponding position, a selected display data 50 is determined.
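The sub-area variant can be sketched as follows, under the assumption (not stated in the text) that the control area is divided into a regular grid:

```python
def sub_area_index(hand_xy, control_area, cols, rows):
    """Return the (column, row) sub-area of the contactless control
    area in which the user's hand is currently positioned."""
    (x0, y0, x1, y1) = control_area
    hx, hy = hand_xy
    # Clamp to the last sub-area when the hand touches the far edge
    col = min(int((hx - x0) / (x1 - x0) * cols), cols - 1)
    row = min(int((hy - y0) / (y1 - y0) * rows), rows - 1)
    return (col, row)
```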
The contactless control area 32 may be divided into sub-areas thanks to, for instance, a laser projector 22 that is part of the control device 20 and that uses a visible laser or an invisible wavelength light in order to display a control matrix 24 that matches the contactless control area. The control matrix 24 is used to divide the contactless control area 32 into sub-areas, and the control device 20, thanks to its sensor set 26, is configured to detect a position or a motion of the user's hand 13 relative to the matrix 24. The choice of the wavelength of the laser projector 22 depends on the capacity of the sensor set 26 to detect this wavelength. In particular, if the sensor set 26 is able to detect wavelengths that are invisible to the eye, such an invisible wavelength is preferred.
Any motion of the user's hand 13 in the control matrix 24 is analysed and taken into account whereas any motion outside of the control matrix 24, and so outside of the contactless control area, is not taken into account.
In particular, squares and/or crossing-line points of the control matrix 24 allow defining a map with a coordinate system. The amount of coordinates (sub-areas and/or crossing-line points) depends on the required precision. This amount can be low if a low precision is needed, for instance for simple controls, or high if a high precision is needed, for instance for fine controls.
It should be noticed that the displayed control matrix 24 does not depend on the image displayed by the display means 14. Thus, this displayed control matrix 24 can remain the same during the use of the system.
When, for instance and as previously explained, each selectable display data 50 is associated with a specific and unique code, the control device 20 assigns to each specific code at least one corresponding sub-area or at least one crossing-line point of the matrix. The sensor set 26 is able to determine in which sub-area of the control matrix 24, or more generally at which coordinates, the user's hand is positioned in the control matrix, or between which sub-areas or coordinates the user's hand is moving in the control matrix. Then the electronic control unit of the control device 20 is able to determine if this sub-area or these coordinates correspond to a specific code. If it is the case, the corresponding specific code is considered as activated and the selectable display data associated with the specific code is considered as selected.

As shown on figure 3, the display system 10 preferably comprises means for displaying a pointer 34 on the display area 16. Said means for displaying the pointer are connected to the sensor set 26, so as to correlate a displacement of the pointer 34 with a motion of the user's hand 13 in the contactless control area 32. In other words, each position of the user's hand in the contactless control area 32 corresponds to a respective position of the pointer 34 in the display area 16. Thus, the user 12 can see where, on the display area 16, he can interact, depending on the motions of his hand 13. Besides, when the user's hand 13 moves out of the contactless control area 32, the pointer 34 does not appear on the display area 16, so that the user 12 cannot interact with the display system 10.
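The association between unique codes and matrix sub-areas could be held in a simple lookup table. The code names and grid positions below are purely hypothetical:

```python
# Hypothetical mapping of matrix sub-areas to the unique codes of
# selectable display data; inactive sub-areas have no entry.
CODE_MAP = {
    (0, 0): "CRUISE_SPEED",
    (1, 0): "NAV_MENU",
    (2, 1): "AUDIO_VOLUME",
}

def selected_code(sub_area):
    """Return the code activated by the hand's current sub-area, or
    None when that sub-area corresponds to an inactive part."""
    return CODE_MAP.get(sub_area)
```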
Preferably, only a particular motion of the user's hand, for instance a particular motion of the fingers, or a motion of the user's hand in a different direction in the contactless control area 32, allows the confirmation of the selection of a selectable display data 50, in order to avoid selection errors. For instance, when the user's hand is moving in a 2D plane of the contactless control area 32 in order to navigate in the display area 16, if the pointer 34 is targeting a selectable display data 50 in the display area 16, only a motion of the user's hand in a different direction, for instance perpendicular to the 2D plane, allows the confirmation of the selectable display data 50.
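A minimal sketch of such a confirmation gesture, assuming hand positions are sampled as (x, y, z) tuples with z perpendicular to the 2D navigation plane, and with arbitrary threshold values:

```python
def is_confirmation(positions, z_threshold=30.0):
    """Detect a selection-confirmation gesture: a push of the hand
    along the axis perpendicular to the navigation plane (here z),
    larger than a threshold, while x/y stay roughly still."""
    (x0, y0, z0) = positions[0]
    (x1, y1, z1) = positions[-1]
    planar = abs(x1 - x0) + abs(y1 - y0)   # in-plane drift
    depth = z1 - z0                        # perpendicular push
    return depth > z_threshold and planar < z_threshold / 2
```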
In the display system 10 according to the invention, some selectable display data 50 may also be modified by the user via the contactless control area 32. For instance, the user can increment or decrement the value of a selectable display data such as, for instance, the target speed of a cruise control system. In this case, and in a manner similar to that previously explained for the confirmation of a selection of a selectable display data, when the user has selected a selectable display data thanks to, for instance, a first motion of his hand in the contactless control area 32, he can then modify this selected display data 50 by a further motion of his hand in a direction that is different from the first motion.
It should be noticed that any other way to confirm the selection of a selectable display data 50 or to modify a selectable display data 50 by moving the hand 13 can be considered.

According to the invention, the user 12 can set the contactless control area 32 in any part of the detection area 30, in particular in the part that is the most comfortable for him. In particular, he can set the contactless control area 32 in a part that is outside his field of view, i.e. that is out of a line 36 between the driver's eyes and the road and/or the display area 16, so that his hand does not obstruct his vision.
To this end, the invention relates to a method for selecting displayed data, comprising configuration steps for determining the contactless control area 32, and operating steps for selecting displayed data according to the determined contactless control area 32.
Configuration steps of this method will now be disclosed regarding figure 4a.
The configuration steps comprise a step 100 of activating a configuration mode of the display system 10. This configuration mode can be activated in any way, such as by being chosen from a menu, being controlled by voice, being activated by a switch, being automatically activated when the user 12 enters the vehicle, etc.
Following this activation step 100, the method comprises an initialisation step 110, wherein each element of the system 10 is activated. In particular, the video/picture projector 14 and the sensor set 26, which comprises at least one camera and, if provided, the Infrared and/or Ultraviolet projector 28, are activated. The initialisation step 110 can also be activated at the same time and in the same way as step 100.
Optionally, the status of all these elements can be displayed on the display area 16 during a status verification step 120. Thus, the user 12 can notice if there is a malfunction in any of these elements.
The method then comprises a step 130 of defining limits of the detection area 30. As previously disclosed, said limits of the detection area 30 can be determined by the field of detection of the sensor set 26. For instance, and depending on the camera(s) 27 and/or projector(s) 28 that are in the sensor set 26, the detection area 30 can be defined as the detection field of one camera 27, as an intersection area between the detection fields of at least two cameras 27, or as an intersection area between the detection field of at least one camera 27 and the projection field of at least one Infrared/Ultraviolet projector 28.
In this case, the detection area 30 can be determined automatically, without an intervention of the user 12. For instance, it could be considered that the whole field of view of the sensor set 26 forms the detection area 30.
In another variant, the user can also determine a restricted zone of detection inside the field of detection of the sensor set 26. In this case, the user 12 can for instance be asked to move his hand 13 in several directions. The user then moves his hand until it stops, for instance because of an obstacle inside the cabin that obliges the user to stop moving his hand while it is still in the field of detection of the sensor set 26. The last point reachable by the user's hand 13 in said field of detection of the sensor set 26 can be set as a point of the limits of the detection area 30.
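This reachable-extent variant amounts to taking a bounding box of the farthest points the hand reached. A sketch, assuming each sampled position is an (x, y, z) tuple:

```python
def detection_limits(reach_points):
    """Derive the detection-area bounding box from the farthest points
    the user's hand reached while sweeping in several directions."""
    xs = [p[0] for p in reach_points]
    ys = [p[1] for p in reach_points]
    zs = [p[2] for p in reach_points]
    # The limits are the minimal and maximal coordinates reached
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```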
In another variant, the step 130 of defining limits of the detection area 30 comprises:
- placing the sensor set 26 in a neutral position, wherein said neutral position comprises at least a neutral orientation of the sensor set 26, to be more precise a neutral orientation of at least one camera 27,
- detecting at least one position of the user's hand 13 in a detection field of the sensor set 26, and
- modifying the orientation of the sensor set 26 in order to focus the detection field of the sensor set 26 on said position of the user's hand 13.
In another variant, the step 130 of defining limits of the detection area 30 comprises:
- placing the sensor set 26 in a neutral position, wherein said neutral position comprises at least a neutral orientation of the sensor set 26 and an intermediate zoom position of the sensor set 26, to be more precise an intermediate zoom position of at least one camera 27, wherein the zoom of the camera can be a digital or an optical one,
- detecting at least a first and a second position of the user's hand 13 in the detection field of the sensor set 26, and
- defining a new detection area by modifying the orientation of the sensor set 26 in order to focus the detection field of the sensor set 26 on the first position of the user's hand 13 and adjusting the zoom position in order to detect the second position of the user's hand 13.
The method may comprise a step 135 of displaying the detection area 30 as previously defined.
The invention allows the user to define a personalized contactless control area 32.
To this end, the method comprises a step 140 of detecting at least a first motion or position of the user's hand 13 in the detection area 30.
This step of detecting 140 is followed by a step 150 of identifying at least a first local point 40 in the detection area 30, corresponding to said at least first motion or position of the user's hand 13. For instance, the user 12 chooses the location of the first local point 40 by pointing out a zone in the detection area 30. Such a local point 40 is shown on figure 5.

The method then comprises a step 160 of defining the contactless control area 32, wherein the contactless control area 32 is delimited and oriented according to said at least one first local point 40.
For instance, and as illustrated on figures 5 and 6, during this step 160 of defining the contactless control area 32, this contactless control area 32 is delimited by positioning a predetermined 2D or 3D shape 41 according to the position of said first local point 40, and is oriented by orienting said predetermined 2D or 3D shape according to a determined direction 42. In particular, the first local point 40 is for example the centre of the predetermined 2D or 3D shape. For instance, the predetermined 2D shape may have the shape of a rectangle or of a disc, and the predetermined 3D shape 41 may have the shape of a parallelepiped (figure 5), a truncated pyramid (figure 6) or a truncated cone centred on said first local point 40. The determined direction 42 is preferably determined by the display system, in particular by the control device 20, so that the 2D or 3D shape is oriented according to a direction that extends from the theoretical position of the user's shoulder on the side of the first local point, the user's head or the user's torso, and that passes through the first local point.
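A sketch of the parallelepiped case of figure 5, centred on the first local point. For brevity the box is kept axis-aligned, whereas the patent also orients it along the determined direction 42; the function name and half-size parameters are assumptions:

```python
def control_box(local_point, half_sizes):
    """Delimit the contactless control area as a parallelepiped
    centred on the first local point (orientation along the
    determined direction is omitted for brevity)."""
    (cx, cy, cz) = local_point
    (hx, hy, hz) = half_sizes
    # Return the two opposite corners of the axis-aligned box
    return (cx - hx, cy - hy, cz - hz), (cx + hx, cy + hy, cz + hz)
```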
Several variants can be considered for defining the contactless control area 32. In a variant, the step 140 of detection of the first motion and/or position of the user's hand 13 also comprises a detection of at least one second motion and/or position of the user's hand 13. Then, the step 150 of identifying a first local point 40 also comprises an identification of at least a second local point 43, 53 in the detection area 30, corresponding to said at least one second motion and/or position of the user's hand 13.
Thus, in this variant, the step 160 of defining the contactless control area 32 comprises delimiting the contactless control area 32 by determining a 3D shape 41 according to the position of said first and second points.
In particular, in a first example, such as illustrated on figure 7, the step 160 of defining the contactless control area 32 comprises delimiting the contactless control area 32 by positioning a predetermined 3D shape 41 according to said first local point 40, and orienting the contactless control area 32 by orienting said predetermined 3D shape according to at least one direction 44 that extends from the first local point 40 to the second local point 43.
In a second example such as illustrated on figures 8a, 8b and 8c, the step 160 of defining the contactless control area 32 comprises (figure 8a) determining an intermediate 2D shape 45 according to said at least first and second local points 40, 53. In particular, said intermediate 2D shape 45 is centred on said first local point 40, and the periphery of the intermediate 2D shape 45 is defined so that the second local point 53 is part of the periphery of this intermediate 2D shape 45. For instance, said intermediate 2D shape is a rectangle 45 centred on said first local point 40, and the periphery of the rectangle is defined so that the second local point 53 may correspond to a corner of the rectangle. In this case, the step 160 also comprises orienting said intermediate 2D shape 45 according to a determined direction 42 (figure 8b). The orientation of the intermediate 2D shape can be performed by rotating said intermediate 2D shape 45 around said first local point 40. Then (figure 8c) said intermediate 2D shape 45 is projected, for instance, according to the determined direction 42 or according to a direction that is perpendicular to the intermediate 2D shape, in order to form a 3D shape 41 of the contactless control area 32.
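The second example (figures 8a to 8c) can be sketched as follows: a rectangle centred on the first local point with the second local point as a corner, rotated around the centre, then extruded perpendicular to its plane into a 3D volume. The function name and the angle/depth parameters are illustrative assumptions:

```python
import math

def extruded_area(center, corner, angle_deg, depth):
    """Build a rectangle centred on `center` whose corner is `corner`,
    rotate it by `angle_deg` around the centre, then extrude it by
    `depth` to form the 3D shape of the contactless control area."""
    cx, cy = center
    # Half extents from the centre to the corner point
    hx, hy = abs(corner[0] - cx), abs(corner[1] - cy)
    a = math.radians(angle_deg)
    corners = []
    for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        x, y = sx * hx, sy * hy
        # Rotate each corner around the centre of the rectangle
        rx = cx + x * math.cos(a) - y * math.sin(a)
        ry = cy + x * math.sin(a) + y * math.cos(a)
        corners.append((rx, ry))
    # Extrude perpendicular to the plane: front and back faces
    return [(x, y, z) for z in (0.0, depth) for (x, y) in corners]
```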
In a third example, the step 160 of defining the contactless control area 32 comprises determining an intermediate 2D shape according to said at least first and second local points, for instance, in the same manner as in the second example. The step 160 then comprises orienting the intermediate 2D shape according to a direction determined by a third local point corresponding to a third motion and/or position of the user's hand 13, and then projecting the intermediate 2D shape according to this direction or to a direction that is perpendicular to this intermediate 2D shape in order to form a 3D shape 41 of the contactless control area 32.
In a fourth example, such as illustrated in figures 9a and 9b, the step 160 of defining the contactless control area 32 comprises determining an intermediate 2D shape 45 according to said at least first and second local points 40, 53. In particular, said intermediate 2D shape 45 is expanded from said first local point 40 and around said first local point 40 until the periphery of the intermediate 2D shape 45 reaches said second local point 53 (figure 9a) or reaches at least one limit 46 of the detection area 30 (figure 9b).
In a variant of the fourth example (figure 9c), during the step 160 of defining the contactless control area 32, if the periphery of the intermediate 2D shape reaches a limit 46 of the detection area 30, the intermediate 2D shape continues to be expanded in the opposite direction to said limit 46 until reaching the second local point 53. In this variant, the intermediate 2D shape 45, and so the contactless control area 32, are not centred on the first local point 40.
In this fourth example, the intermediate 2D shape 45 is oriented and the 3D shape 41 of the contactless control area 32 is formed as explained in the preceding examples.
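The fourth-example expansion can be illustrated in one dimension: an interval grows symmetrically around the first local point until it contains the second local point, and a side that meets a detection-area limit is clamped there, so the resulting area is no longer centred on the first local point (as in figure 9c). The names are assumptions:

```python
def expand_interval(center, target, lo, hi):
    """One-axis sketch of the fourth example: grow an interval
    symmetrically around the first local point (`center`) until its
    periphery reaches the second local point (`target`); a side that
    hits a detection-area limit (`lo`/`hi`) is clamped to it."""
    r = abs(target - center)
    left, right = center - r, center + r
    # Clamp to the detection-area limits; the area then extends from
    # the reached limit to the second local point and is off-centre.
    return (max(left, lo), min(right, hi))
```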
In all these examples, it should be noticed that the intermediate 2D shape 45 can have the shape of a rectangle, the shape of a disc or any other shape, and the 3D shape 41 of the contactless control area 32 can have a corresponding shape, for instance, the shape of a parallelepiped, a truncated pyramid, a truncated cone or any other corresponding shape.

In another variant, the contactless control area 32 can be defined according to three, four, five or more local points identified according to corresponding motions and/or positions of the user's hand 13.
In any case, the limits of the contactless control area 32 are confined within the detection area 30. In other words, the limits of the contactless control area 32 cannot go beyond the limits of the detection area 30. If they do, a warning is displayed, and the user has to choose at least one other point for defining the limits of the contactless control area 32.
The method then optionally comprises a step 170 of displaying, in the display area 16, the contactless control area 32 and / or the intermediate 2D shape 45 as seen on figures 5 to 9.
Optionally, the method comprises a step 175 of warning when at least a part of the previously defined contactless control area 32 is disposed in a predetermined area. For instance, said predetermined area corresponds to a display area of the head-up display system 10, or to an area in front of the driver's eyes when he watches the road. Thus, the driver 12 is warned that his hand can disturb his field of view when he controls the display system 10 while driving.
The method then comprises a step 180 of validating the contactless control area 32, wherein the control device 20 asks the user 12 if he validates the previously defined contactless control area 32 or not.
If the user 12 does not validate this contactless control area 32, the method goes back to step 140 for defining a new contactless control area 32, as shown on figure 4a.
If the user 12 validates this contactless control area 32, the method then comprises a step 185 of correlating the contactless control area 32 with the display area 16.
In a first variant, this step of correlating 185 comprises:
- assigning coordinates to the delimited and oriented contactless control area 32, and
- correlating the coordinates of the delimited and oriented contactless control area 32 with dimensions of the display area 16.
It should be noticed that the coordinates in the contactless control area 32 can be defined by considering the first local point 40 as an origin. In a variant, any other point, such as the centre of the contactless control area 32, can be considered as an origin for the coordinates.
In a second variant, this step of correlating 185 comprises:
- dividing the delimited and oriented contactless control area 32 into sub-areas, and
- assigning to each sub-area a corresponding position on the display area 16.
The method then comprises a step 190 of storing coordinates of the previously defined contactless control area 32 in adapted storage means. Preferentially, the contactless control area 32 is stored in combination with a user ID of the user 12. Thus, for a motor vehicle that can have several drivers, such as a transport truck, each driver 12 can retrieve his contactless control area 32 from the storage means, and he does not need to define the contactless control area 32 each time he enters the vehicle.
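The per-driver storage of step 190 could be sketched as a small in-memory store keyed by user ID; the class name and the tuple representation of an area are assumptions (a real system would persist the areas to non-volatile storage):

```python
class ControlAreaStore:
    """Minimal per-driver storage of configured contactless control
    areas, keyed by a user ID, so that each driver can retrieve his
    own area without reconfiguring it."""

    def __init__(self):
        self._areas = {}

    def save(self, user_id, area):
        # `area` is, for instance, the two corners of the 3D shape
        self._areas[user_id] = area

    def load(self, user_id):
        # Returns None when this driver has not configured an area yet
        return self._areas.get(user_id)
```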
At last, the method comprises a final step 195, wherein the configuration mode is ended.
Then, the user 12 can control the head-up display system 10 by moving his hand 13 in the contactless control area 32 that he has defined. This control is performed during the operating steps of the method for selecting displayed data, which will now be disclosed. These operating steps are shown on figure 4b.
The method comprises a step 200 of activating an operating mode of the display system 10. It should be noticed that the operating mode can be a default mode, automatically activated when the configuration mode is not activated.
The method then comprises a step 210 of detecting at least one further motion or position of the user's hand 13 in the contactless control area 32 such as determined during the configuration steps.
The method then comprises a step 220 of assigning a position to the detected further motion or position of the user's hand 13.
For instance, in case coordinates have been assigned to the contactless control area 32 during the step of correlating 185, the step of assigning 220 comprises:
- assigning coordinates to said further motion or position of the user's hand 13, and
- processing the coordinates of the further motion or position of the user's hand 13, and determining the position in the display area 16 that corresponds to said coordinates.
In a variant, in case the contactless control area has been divided into sub-areas during the step of correlating 185, the step of assigning 220 comprises:
- detecting in which sub-area of the contactless control area 32 the user's hand 13 is currently positioned, and
- determining the position in the display area 16 that corresponds to said sub-area where the user's hand is detected.
The method then comprises a step 230 of determining which selectable display data 50 is selected, according to said further motion or position of the user's hand 13. In particular, this step 230 is performed according to the corresponding position in the display area 16 as determined in step 220.
For instance, and as previously disclosed, each selectable display data 50 may be associated with a specific and unique code to which corresponding coordinates of the contactless control area 32 are assigned. The display system 10 may use this to know which selectable display data 50 is selected depending on the motion or position of the user's hand 13 in the contactless control area 32.
Preferably, the operating steps comprise a step of displaying a pointer 34 on the display area, corresponding to the user's hand 13, as long as this user's hand 13 is in the contactless control area 32. Thus, the user can know which position in the display area 16 corresponds to the position of his hand 13 as long as this user's hand 13 is in the contactless control area 32.
Naturally, the display system 10 comprises means for carrying out the previously disclosed method, comprising an embedded computer.
In particular, it comprises means for defining the contactless control area 32, comprising:
• means for activating a configuration mode of the display system 10,
• means for detecting at least a first motion or position of the user's hand 13 in the detection area 30,
• means for identifying at least a first local point in the detection area 30, corresponding to said at least first motion or position of the user's hand 13, and
• means for delimiting and orienting the contactless control area 32, wherein the contactless control area 32 is delimited and oriented according to said at least one first local point.
It should be noticed that said means for defining the contactless control area 32 are connected to the sensor set 26, so that the limits are defined based on information given by the sensor set 26, said information relating to the motions of the user's hand 13.
Besides, the electronic control unit comprises means for selecting displayed data, comprising:
• means for activating an operating mode of the display system 10,
• means for detecting at least one further motion or position of the user's hand 13 in the contactless control area 32 such as determined during the configuration steps, and
• means for determining which displayed data is selected, according to said further motion or position of the user's hand 13.

It should be noticed that the invention is not limited to the previously disclosed embodiment, but it can comprise several variants.

Claims

1. A method for selecting display data in a display system (10) of a vehicle, characterized in that the display system (10) comprises:
• display means (14) for displaying data (50) in a display area (16),
• a contactless control device (20) including at least a sensor set (26) that is arranged in a passenger compartment of the vehicle, so as to detect motions of at least one user's hand (13) in a detection area (30) reachable by the user's hand (13) at least when the user is in a driving position, and
• at least one electronic control unit, arranged between said sensor set (26) and said display means (14), intended to convert detected motions of the user's hand (13) into input information for the display means (14), the method comprising the following configuration steps, for determining a contactless control area (32) of the display system (10) in the detection area (30):
• a step (100) of activating a configuration mode of the display system (10),
• a step (140) of detecting at least a first motion or position of the user's hand (13) in the detection area (30),
• a step (150) of identifying at least a first local point (40) in the detection area (30), corresponding to said at least first motion or position of the user's hand (13), and
• a step (160) of defining the contactless control area (32), wherein the contactless control area (32) is delimited and oriented according to said at least one first local point (40),
the method also comprising the following operating steps, for selecting displayed data:
• a step (200) of activating an operating mode of the display system (10),
• a step (210) of detecting at least one further motion or position of the user's hand (13) in the contactless control area (32) such as determined during the configuration steps, and
• a step (230) of determining which displayed data (50) is selected, according to said further motion or position of the user's hand (13).
2. The method according to claim 1, wherein the configuration steps comprise a step (185) of correlating the contactless control area (32) with the display area (16), comprising:
• assigning coordinates to the delimited and oriented contactless control area (32), and
• correlating the coordinates of the delimited and oriented contactless control area (32) with dimensions of the display area (16).
3. The method according to claim 2, wherein the operating steps comprise a step (220) of assigning a position in the display area (16) to the detected further motion or position of the user's hand (13), comprising:
• assigning coordinates to said further motion or position of the user's hand (13),
• processing the coordinates of the further motion or position of the user's hand (13), and
• determining the position in the display area (16) that corresponds to said coordinates of the further motion or position of the user's hand (13),
the step (230) of determining which displayed data (50) is selected being performed according to said corresponding position determined in the display area (16).
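The coordinate correlation of claims 2 and 3 amounts to normalising a hand position over the delimited area and scaling it to the display dimensions. A minimal sketch (the function name and the clamping behaviour are assumptions):

```python
def correlate(hand_xy, area_origin, area_size, display_size):
    """Map a hand position in the delimited control area to a position in
    the display area: assign coordinates, normalise them over the area,
    then scale to the display dimensions (all names are illustrative)."""
    nx = (hand_xy[0] - area_origin[0]) / area_size[0]   # 0..1 across the area
    ny = (hand_xy[1] - area_origin[1]) / area_size[1]
    nx = min(max(nx, 0.0), 1.0)   # clamp so edge positions stay on screen
    ny = min(max(ny, 0.0), 1.0)
    return (round(nx * (display_size[0] - 1)),
            round(ny * (display_size[1] - 1)))
```

The step of determining which displayed data is selected can then be a simple hit test at the returned display position.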
4. The method according to claim 1, wherein the configuration steps comprise a step (185) of correlating the contactless control area (32) with the display area (16), comprising:
• dividing the delimited and oriented contactless control area (32) into sub-areas, and
• assigning to each sub-area a corresponding position on the display area (16),
wherein the operating steps comprise a step (220) of assigning a position in the display area (16) to the detected further motion or position of the user's hand (13), comprising:
• detecting in which sub-area of the contactless control area (32) the user's hand (13) is currently positioned, and
• determining the corresponding position in the display area (16) that corresponds to said sub-area where the user's hand is detected, the step (230) of determining which displayed data (50) is selected being performed according to said corresponding position determined in the display area (16).
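The sub-area variant of claim 4 discretises the control area into cells, each mapped to a display position. A sketch (the 3x2 grid and every name are assumptions, not from the claim):

```python
def hand_to_cell(hand_xy, area_origin, area_size, grid=(3, 2)):
    """Divide the control area into a grid of sub-areas and report the
    (column, row) sub-area the hand is currently positioned in."""
    col = int((hand_xy[0] - area_origin[0]) / area_size[0] * grid[0])
    row = int((hand_xy[1] - area_origin[1]) / area_size[1] * grid[1])
    # clamp so a hand exactly on the far edge stays in the last sub-area
    return (min(max(col, 0), grid[0] - 1), min(max(row, 0), grid[1] - 1))
```

Each cell would be assigned one position on the display area during the configuration steps; selection then reduces to a table lookup.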
5. The method according to any one of the preceding claims, wherein the operating steps comprise a step of displaying a pointer (34) on the display area, corresponding to the user's hand (13), as long as this user's hand (13) is in the contactless control area (32).
6. The method according to any one of the preceding claims, wherein, during the step (160) of defining the contactless control area (32), this contactless control area (32) is delimited by a predetermined 2D or 3D shape (41) that is positioned according to the position of said first local point (40) in the detection area (30) and that is oriented by orienting said predetermined 2D or 3D shape (41) according to a determined direction (42).
7. The method according to any one of claims 1 to 6, wherein:
• the step of detection of the first motion and/or position of the user's hand (13) then comprises a detection of at least one second motion and/or position of the user's hand (13),
• the step of identifying a first local point (40) then comprises an identification of at least a second local point (43) in the detection area (30), corresponding to said at least one second motion and/or position of the user's hand (13), and
• the step of defining the contactless control area (32) comprises delimiting the contactless control area (32) by positioning a predetermined 2D or 3D shape according to the position of said first local point (40) in the detection area (30), and orienting the contactless control area (32) by orienting said predetermined 2D or 3D shape depending on the position of the second local point (43) in the detection area.
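One way to read claims 6 and 7 is that the predetermined shape is anchored at the first local point and oriented by the direction toward the second local point. A planar sketch (the angle representation and names are assumptions):

```python
import math

def orient_shape(first_point, second_point):
    """Position a predetermined 2D shape at the first local point and take
    its orientation from the direction toward the second local point."""
    dx = second_point[0] - first_point[0]
    dy = second_point[1] - first_point[1]
    return {"center": first_point,
            "angle_deg": math.degrees(math.atan2(dy, dx))}
```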
8. The method according to claim 6 or 7, wherein the predetermined 2D shape has the shape of a rectangle or of a disc and the predetermined 3D shape has the shape of a parallelepiped, a truncated pyramid or a truncated cone centred on said first local point (40).
9. The method according to any one of the claims 1 to 6, wherein:
• the step (140) of detecting the first motion and/or position of the user's hand (13) also comprises a detection of at least one second motion and/or position of the user's hand (13),
• the step (150) of identifying a first local point (40) also comprises an identification of at least a second local point (53) in the detection area (30), corresponding to said at least one second motion and/or position of the user's hand (13), and
• the step (160) of defining the contactless control area (32) comprises delimiting and orienting the contactless control area (32) by determining a 2D or a 3D shape (41) according to the position of said first and second local points (40, 53) in the detection area (30).
10. The method according to claim 9, wherein the step (160) of defining a 3D shape (41) of the contactless control area (32) comprises determining an intermediate 2D shape (45) according to the position of said at least first and second local points (40, 53) in the detection area (30), orienting said intermediate 2D shape (45) according to a determined direction, and then projecting said intermediate 2D shape (45) according to a direction that is perpendicular to the intermediate 2D shape (45), in order to form a 3D shape (41) of the contactless control area (32).
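The projection step of claim 10 is, in effect, an extrusion of the intermediate 2D shape along its normal. A sketch for a rectangular intermediate shape (function and parameter names are assumptions):

```python
def extrude_rectangle(center, half_w, half_h, depth):
    """An intermediate 2D rectangle in the x-y plane is projected along
    its normal (the z axis here) over `depth`, forming a parallelepiped;
    returns its eight corner points."""
    corners_2d = [(center[0] + sx * half_w, center[1] + sy * half_h)
                  for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
    # projecting perpendicular to the 2D shape yields the 3D control area
    return [(x, y, z) for (x, y) in corners_2d for z in (0.0, depth)]
```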
11. The method according to claim 9, wherein the step (160) of defining a 3D shape (41) of the contactless control area (32) comprises determining an intermediate 2D shape (45) according to said at least first and second local points (40, 53), orienting the intermediate 2D shape depending on the position of at least a third local point corresponding to a third motion and/or position of the user's hand (13) in the detection area (30), and then projecting the intermediate 2D shape according to a direction that is perpendicular to this intermediate 2D shape in order to form a 3D shape of the contactless control area (32).
12. The method according to claim 10 or 11, wherein, during the step (160) of defining the contactless control area (32), said intermediate 2D shape (45) is centred on said first local point (40), and the periphery of the intermediate 2D shape (45) is defined so that the second local point (53) is part of the periphery of this intermediate 2D shape (45).
13. The method according to claim 12, wherein, during the step (160) of defining the contactless control area (32), said intermediate 2D shape (45) is a rectangle centred on said first local point (40), and the periphery of the rectangle is defined so that the second local point (53) corresponds to a corner of the rectangle.
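The center-plus-corner construction of claims 12 and 13 fixes the rectangle completely: the half-extents are the coordinate offsets between the two local points. A sketch (the function name is an assumption):

```python
def rectangle_from_points(center, corner):
    """Build the intermediate rectangle centred on the first local point
    and sized so the second local point is one of its corners; returns
    the four corner points."""
    half_w = abs(corner[0] - center[0])
    half_h = abs(corner[1] - center[1])
    return [(center[0] - half_w, center[1] - half_h),
            (center[0] + half_w, center[1] - half_h),
            (center[0] + half_w, center[1] + half_h),
            (center[0] - half_w, center[1] + half_h)]
```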
14. The method according to any one of claims 9 to 11, wherein, during the step (160) of defining the contactless control area (32), said 2D shape or intermediate 2D shape (45) is expanded from said first local point (40) and around said first local point (40) until the periphery of the 2D shape or intermediate 2D shape (45) reaches said second local point (53) or reaches at least one limit (46) of the detection area (30).
15. The method according to claim 14, wherein, during the step (160) of defining the contactless control area (32), if the periphery of the 2D shape or intermediate 2D shape (45) reaches a limit (46) of the detection area (30), the 2D shape or intermediate 2D shape continues to be expanded in the opposite direction to said limit (46) until reaching the second local point (53).
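Claims 14 and 15 describe a bounded expansion: growth stops at the second local point, and growth blocked by a limit of the detection area is continued on the opposite side. A one-dimensional sketch per axis (the shift-based strategy is an assumption for this sketch):

```python
def expand_interval(center, target, lo, hi):
    """Grow an interval around `center` until its periphery reaches
    `target`; if growth hits a limit of the detection area, take up the
    remainder on the opposite side, as in claim 15."""
    half = abs(target - center)
    low, high = center - half, center + half
    if low < lo:                  # hit the lower limit of the detection area
        high += lo - low          # keep expanding away from that limit
        low = lo
    if high > hi:                 # hit the upper limit
        low -= high - hi
        high = hi
    return (max(low, lo), min(high, hi))
```

Applying the same rule on each axis of a rectangle reproduces the expansion of a 2D or intermediate 2D shape inside the detection area.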
16. The method according to any one of claims 9 to 15, wherein the 2D shape or the intermediate 2D shape (45) has the shape of a rectangle or of a disc, and the 3D shape (41) of the contactless control area (32) has the shape of a parallelepiped, a truncated pyramid or a truncated cone that corresponds to the intermediate 2D shape (45).
17. The method according to any one of the preceding claims, wherein the contactless control area (32) is an intangible area.
18. The method according to any one of claims 6 to 16, wherein the 2D or 3D shape (41) is an intangible shape.
19. The method according to any one of claims 1 to 18, comprising a preliminary step (130) of determining the detection area (30), comprising:
• detecting at least one position or one motion of the user's hand (13) in the detection field of the sensor set (26), and
• defining, in the detection field of the sensor set (26) and according to the at least one position or motion of the user's hand (13), a detection area (30).
20. The method according to any one of claims 1 to 18, comprising a preliminary step (130) of determining the detection area (30), comprising:
• placing the sensor set (26) in a neutral position, wherein said neutral position comprises at least a neutral orientation of the sensor set (26),
• detecting at least one position of the user's hand (13) in a detection field of the sensor set (26), and
• modifying the orientation of the sensor set (26) in order to focus the detection field of the sensor set (26) on said position of the user's hand (13).
21. The method according to any one of claims 1 to 18, comprising a preliminary step (130) of determining the detection area (30), comprising:
• placing the sensor set (26) in a neutral position, wherein said neutral position comprises at least a neutral orientation of the sensor set (26) and an intermediate zoom position of the sensor set (26),
• detecting at least a first and a second position of the user's hand (13) in the detection field of the sensor set (26), and
• defining a detection area by modifying the orientation of the sensor set (26) in order to focus the detection field of the sensor set (26) on the first position of the user's hand (13) and adjusting the zoom position in order to detect the second position of the user's hand (13).
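The sensor-placement variants of claims 20 and 21 can be read as a pan-then-zoom adjustment: aim the sensor at the first detected hand position, then widen the field of view just enough to also cover the second. A planar sketch (angles in degrees; the 60-degree neutral field of view and all names are assumptions):

```python
import math

def focus_sensor(first_pos, second_pos, neutral_fov_deg=60.0):
    """From a neutral orientation, pan the sensor toward the first detected
    hand position, then widen the field of view (the zoom) just enough to
    keep the second position inside the detection field."""
    pan = math.degrees(math.atan2(first_pos[1], first_pos[0]))
    ang2 = math.degrees(math.atan2(second_pos[1], second_pos[0]))
    needed_fov = 2.0 * abs(ang2 - pan)   # symmetric field about the pan axis
    return {"pan_deg": pan, "fov_deg": max(neutral_fov_deg, needed_fov)}
```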
22. A display system (10) of a vehicle, characterized in that it comprises:
• display means (14) for displaying data (50) in a display area (16),
• a contactless control device (20) including at least a sensor set (26) that is arranged in a passenger compartment of the vehicle, so as to detect motions of at least one user's hand (13) in a detection area (30) reachable by the user's hand (13) when the user is in a driving position, and
• at least one electronic control unit, arranged between said sensor set (26) and said display means (14), intended to convert detected motions of the user's hand (13) into input information for the display means (14), the display system (10) also comprising means for defining a contactless control area (32) in the detection area (30), said defining means comprising:
• means for activating a configuration mode of the display system (10),
• means for detecting at least a first motion or position of the user's hand (13) in the detection area (30),
• means for identifying at least a first local point in the detection area (30), corresponding to said at least first motion or position of the user's hand (13), and
• means for defining the contactless control area (32), wherein the contactless control area (32) is delimited and oriented according to said at least one first local point,
the display system (10) also comprising means for selecting displayed data, comprising:
• means for activating an operating mode of the display system (10),
• means for detecting at least one further motion or position of the user's hand (13) in the contactless control area (32) as determined during the configuration steps, and
• means for determining which displayed data (50) is selected, according to said further motion or position of the user's hand (13).
23. The display system (10) according to claim 22, wherein the display area (16) is projected in the field of view of the driver.
24. The display system (10) according to claim 22 or 23, wherein the sensor set (26) is able to detect 3D motions of the user's hand (13) in the detection area (30).
25. The display system (10) according to any one of claims 22 to 24, wherein the sensor set (26) comprises at least one camera (27), preferably two cameras.
26. The display system (10) according to any one of claims 22 to 25, wherein the sensor set (26) comprises at least an infrared or ultraviolet projector (28) able to capture a motion of the user's hand (13).
27. The display system (10) according to any one of claims 22 to 26, wherein the detection area (30) is determined by the arrangement of the sensor set (26) in the cabin and by the characteristics of the sensor set (26).
PCT/IB2012/003057 2012-12-20 2012-12-20 A method of selecting display data in a display system of a vehicle WO2014096896A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2012/003057 WO2014096896A1 (en) 2012-12-20 2012-12-20 A method of selecting display data in a display system of a vehicle

Publications (1)

Publication Number Publication Date
WO2014096896A1 true WO2014096896A1 (en) 2014-06-26

Family

ID=48045590

Country Status (1)

Country Link
WO (1) WO2014096896A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2258587A1 (en) * 2008-03-19 2010-12-08 Denso Corporation Operation input device for vehicle
FR2952012A1 (en) 2009-11-04 2011-05-06 Bosch Gmbh Robert VEHICLE DRIVER ASSISTANCE SYSTEM EQUIPPED WITH DATA ENTRY DEVICE
US20110260965A1 (en) * 2010-04-22 2011-10-27 Electronics And Telecommunications Research Institute Apparatus and method of user interface for manipulating multimedia contents in vehicle
US20120207345A1 (en) * 2011-02-10 2012-08-16 Continental Automotive Systems, Inc. Touchless human machine interface

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015006614A1 (en) * 2015-05-21 2016-11-24 Audi Ag Method for operating an operating device and operating device for a motor vehicle
JP2022020704A (en) * 2017-10-24 2022-02-01 マクセル株式会社 Information displaying device
JP7360433B2 (en) 2017-10-24 2023-10-12 マクセル株式会社 information display device
US11878586B2 (en) 2017-10-24 2024-01-23 Maxell, Ltd. Information display apparatus and spatial sensing apparatus


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 12839157
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 12839157
Country of ref document: EP
Kind code of ref document: A1