WO2021175749A1 - Method and system for triggering picture-taking of the interior of a vehicle based on a detection of a free-space gesture - Google Patents
Method and system for triggering picture-taking of the interior of a vehicle based on a detection of a free-space gesture Download PDFInfo
- Publication number
- WO2021175749A1 WO2021175749A1 PCT/EP2021/054958 EP2021054958W WO2021175749A1 WO 2021175749 A1 WO2021175749 A1 WO 2021175749A1 EP 2021054958 W EP2021054958 W EP 2021054958W WO 2021175749 A1 WO2021175749 A1 WO 2021175749A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- free
- vehicle
- space gesture
- sensor
- space
- Prior art date
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 79
- 238000000034 method Methods 0.000 title claims abstract description 51
- 238000012545 processing Methods 0.000 claims description 34
- 238000005286 illumination Methods 0.000 claims description 20
- 238000004590 computer program Methods 0.000 claims description 11
- 238000001228 spectrum Methods 0.000 claims description 10
- 230000000694 effects Effects 0.000 claims description 8
- 230000004044 response Effects 0.000 claims description 2
- 238000004891 communication Methods 0.000 description 10
- 230000006854 communication Effects 0.000 description 10
- 230000002411 adverse Effects 0.000 description 4
- 230000000875 corresponding effect Effects 0.000 description 4
- 238000011156 evaluation Methods 0.000 description 4
- 230000005855 radiation Effects 0.000 description 4
- 230000001960 triggered effect Effects 0.000 description 3
- 230000005670 electromagnetic radiation Effects 0.000 description 2
- 238000012797 qualification Methods 0.000 description 2
- 238000007792 addition Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000001276 controlling effect Effects 0.000 description 1
- 238000001816 cooling Methods 0.000 description 1
- 230000003111 delayed effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000000638 solvent extraction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
- B60K35/10—Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
- B60K35/20—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
- B60K35/28—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; characterised by the purpose of the output information, e.g. for attracting the attention of the driver
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/146—Instrument input by gesture
- B60K2360/1464—3D-gesture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/16—Type of output information
- B60K2360/176—Camera images
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/20—Optical features of instruments
- B60K2360/21—Optical features of instruments using cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04808—Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
Definitions
- the present invention relates to the field of human machine interfaces (HMI) for vehicles, in particular automobiles.
- the invention is directed to a method and a system for triggering taking of a picture of the interior of a vehicle based on a detection of a free-space gesture being performed by a human user, e.g. a driver or other passenger of the vehicle.
- Modern HMIs for vehicles are no longer restricted to providing various push buttons or switches or other conventional input devices for receiving user inputs.
- Recently, the detection of free-space gestures being performed by a human user has been added by some car manufacturers to the set of available input methodologies for automotive HMIs.
- such gestures are detected by a single 2D or 3D-camera provided in the vehicle, when they are being performed in the camera’s field of view, and are used to control a functionality, like a sound volume, of an entertainment system, e.g. head unit, of a vehicle or to turn on or off a light in the vehicle.
- a solution to this problem is provided by the teaching of the independent claims.
- Various preferred embodiments of the present invention are provided by the teachings of the dependent claims.
- a first aspect of the invention is directed to a method of triggering picture-taking of the interior of a vehicle, in particular of a selfie-picture or of a selfie-video of one or more passengers within the vehicle.
- the method comprises (i) detecting, using a free-space gesture detection sensor being mounted in or on a vehicle, a specific predetermined free-space gesture being performed by a human user in a field of view of the free-space gesture detection sensor; and (ii) upon detecting said specific predetermined free-space gesture, generating a signal to trigger an image-sensor not coinciding with the free-space gesture detection sensor and being mounted in or on the vehicle such that its field of view covers the interior of the vehicle at least in parts, to take a picture or a sequence of pictures, e.g. to record a video or to take a limited number of one or more individual pictures.
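The two claimed steps, (i) detection of a qualifying free-space gesture and (ii) generation of a trigger signal for the separate image sensor, can be expressed as a minimal control loop. The following Python sketch is purely illustrative; all class names, method names and gesture labels are hypothetical stand-ins and not part of the disclosure.

```python
# Illustrative sketch only; names and gesture labels are hypothetical.

QUALIFYING_GESTURES = frozenset({"V_SHAPE", "OPEN_HAND"})

class GestureSensor:
    """Stand-in for the free-space gesture detection sensor (e.g. a TOF camera)."""
    def __init__(self, observed_gesture):
        self.observed_gesture = observed_gesture  # what the sensor currently "sees"

    def detect(self):
        return self.observed_gesture

class ImageSensor:
    """Stand-in for the separate image sensor; counts received trigger signals."""
    def __init__(self):
        self.pictures_taken = 0

    def trigger(self):
        self.pictures_taken += 1  # take a picture (or start a picture sequence)

def process_frame(gesture_sensor, image_sensor):
    """Step (i): detect a gesture; step (ii): trigger picture-taking if it qualifies."""
    gesture = gesture_sensor.detect()
    if gesture in QUALIFYING_GESTURES:
        image_sensor.trigger()
        return True
    return False

cam = ImageSensor()
process_frame(GestureSensor("V_SHAPE"), cam)  # qualifying gesture: picture taken
process_frame(GestureSensor("WAVE"), cam)     # non-qualifying gesture: ignored
print(cam.pictures_taken)                     # prints 1
```

Keeping the gesture sensor and the image sensor as two separate objects mirrors the claim language, where the image sensor does not coincide with the gesture detection sensor.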
- free-space gesture refers particularly to a bodily motion or state of a human user, i.e. a gesture, being performed in ordinary three-dimensional space to control or otherwise interact with one or more devices, but without a need to physically touch them.
- field of view refers to the spatial extent of the observable world that is detectable at any given moment by the sensor.
- it is usually a solid angle through which a detector is sensitive to electromagnetic radiation.
- the term “interior of the vehicle”, as used herein, refers in particular to the interior of a passenger compartment of the vehicle, e.g. automobile.
- the above method provides an easy-to-use way of taking pictures of the interior of the vehicle, in particular of selfies of one or more passengers of the vehicle, wherein triggering of the picture-taking is based on detection of a specific predefined free-space gesture to be performed by a user.
- Performing such a free-space gesture is generally easier than operating a specific button, switch or other classical HMI element, because it suffices that the gesture be performed within the field of view of the free-space gesture detection sensor without the need to exactly position a finger on a typically small particular spot of a classical HMI element, e.g. a particular surface section of a pushbutton or toggle switch.
- a free-space gesture detection sensor being separate from the image sensor allows for an individual configuration, optimization, enhancement and replacement of each of the sensors, independently from the respective other sensor.
- the two sensors may be of a different type, each being optimized for its particular purpose within the context of the present invention, i.e. gesture detection on the one hand and picture taking on the other hand.
- any software may be used to configure or operate either one of the sensors or to process their respective sensor data.
- each of the free-space gesture detection sensor and the image sensor has an associated individual illumination unit for illuminating their respective field of view, at least in parts, during their respective sensing activity, and the method further comprises de-synchronizing the activities of the illumination units of the free-space gesture detection sensor and the image-sensor such that the illumination units are not simultaneously active.
- This de-synchronizing has the effect that the operation of neither one of the sensors is adversely affected by the illumination associated with the respective other sensor.
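The de-synchronization of the two illumination units can be illustrated by a simple arbiter that grants the illumination slot to at most one unit at a time. This is a hedged sketch with hypothetical class names, not the claimed implementation.

```python
# Hedged sketch (hypothetical API): at most one illumination unit active at
# any time, so neither sensor is disturbed by the other sensor's light source.

class IlluminationUnit:
    def __init__(self, name):
        self.name = name
        self.active = False

class IlluminationArbiter:
    """Grants the illumination slot to at most one registered unit at a time."""
    def __init__(self, *units):
        self.units = units

    def activate_only(self, unit):
        for u in self.units:   # deactivate every unit first ...
            u.active = False
        unit.active = True     # ... then activate only the requested one

    def active_count(self):
        return sum(u.active for u in self.units)

gesture_light = IlluminationUnit("gesture-sensor illumination")
camera_light = IlluminationUnit("image-sensor illumination")
arbiter = IlluminationArbiter(gesture_light, camera_light)

arbiter.activate_only(gesture_light)  # while listening for gestures
arbiter.activate_only(camera_light)   # while taking the picture
print(arbiter.active_count())         # prints 1 (never both at once)
```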
- the free-space gesture detection sensor operates in a specific range of the electromagnetic spectrum, e.g.
- the free-space gesture detection sensor is or comprises a time-of-flight, TOF, camera, which is used to detect said specific predetermined free-space gesture.
- a TOF camera is a 3D camera system that measures distances on the basis of a time-of-flight (TOF) method. The measuring principle is based on the fact that the scene to be recorded is illuminated by a light pulse, and the camera measures for each pixel the time that the light takes to travel to the object and back again. This time is directly proportional to the distance due to the constancy of the speed of light. The camera thus provides the distance of the object imaged on each pixel.
- a TOF camera system represents a particularly effective and high-resolution implementation option for a 3D image sensor and may particularly be used to reliably and efficiently detect free-space gestures.
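The TOF measuring principle quoted above reduces to the relation d = c · t / 2, where t is the per-pixel round-trip time of the light pulse and the factor 2 accounts for the out-and-back path. A small numeric illustration (the example time value is chosen arbitrarily, not taken from the patent):

```python
# Worked illustration of the TOF principle: distance is proportional to the
# measured round-trip time of the light pulse, d = c * t / 2.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_time_s):
    """Distance to the object imaged on a pixel, from its round-trip time."""
    return C * round_trip_time_s / 2.0

# A pulse returning after 4 nanoseconds corresponds to roughly 0.6 m, a
# plausible in-cabin distance for a roof-mounted sensor observing a hand.
print(round(tof_distance_m(4e-9), 3))  # prints 0.6
```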
- the image sensor is or comprises a 2D-photo camera, which is configured and used to take said picture in the visible or infrared part of the electromagnetic spectrum.
- This allows for taking ordinary two-dimensional photos, in particular digital photos.
- if the photo camera is adapted to operate in the infrared part of the electromagnetic spectrum, it may be used for picture-taking in relatively dark environments, without a flashlight or another artificial illumination of its field of view.
- the photo camera is adapted to operate both in the visible and infrared part of the spectrum, so that it can be used in both light and dark environments, in particular during the day, when daylight is available for illumination, and at night, when infrared radiation becomes more relevant relative to visible radiation.
- detecting said specific predetermined free-space gesture comprises at least one of: (i) processing sensor data being generated by the free-space gesture detection sensor to examine whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture; and (ii) communicating sensor data being generated by the free-space gesture detection sensor to a vehicle-external processing platform, e.g. a server that can be reached over the internet or another communication link, and receiving in response result data being generated by the processing platform and indicating whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture.
- option (i) is particularly useful for an autonomous solution that can be fully implemented by a system to be integrated into a vehicle.
- option (ii) has the advantage that external processing power, e.g. that of a powerful server, which processing power might otherwise not be available in the vehicle or would add to the complexity and cost of it, may be used instead. This applies, in particular, to the often calculation-intensive processing needed in connection with properly recognizing a specific gesture based on sensor data provided by the free-space gesture detection sensor.
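The two detection options can be contrasted in a short sketch: option (i) evaluates the sensor data on board, while option (ii) delegates the evaluation to a vehicle-external platform and consumes the returned result data. All functions and the result-data shape below are hypothetical stand-ins, not an API defined by the patent.

```python
# Sketch of the two claimed detection options with toy stand-in interfaces.

def detect_local(sensor_data, classifier):
    """Option (i): evaluate raw sensor data with an on-board classifier."""
    return classifier(sensor_data)

def detect_remote(sensor_data, send_and_receive):
    """Option (ii): delegate evaluation to an external platform over a
    communication link; `send_and_receive` stands in for the round trip."""
    result = send_and_receive(sensor_data)   # e.g. HTTP request to a backend
    return result["gesture_detected"]        # hypothetical result-data field

# Toy stand-ins: trivial on-board classifier and fake backend round trip.
def on_board(data):
    return data == "V_SHAPE_SAMPLE"

def backend(data):
    return {"gesture_detected": data == "V_SHAPE_SAMPLE"}

print(detect_local("V_SHAPE_SAMPLE", on_board))  # prints True
print(detect_remote("NOISE", backend))           # prints False
```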
- the processing of the sensor data is performed separately from a process for controlling the free-space gesture detection sensor and the image sensor.
- the processing of the sensor data in connection with the detection of said specific predetermined free-space gesture may be performed by a vehicle-external processing platform, such as a server in a backend that can be reached from the vehicle over the Internet or another communication link, while an application defining the HMI and which is used to control the operation of the free-space gesture detection sensor and the image sensor may be run on a vehicle-internal processor platform, e.g. in a head unit or another control unit of the vehicle.
- each one of a predetermined limited set of multiple different predetermined free-space gestures that may potentially be performed by the user is defined as qualifying as said specific predetermined free-space gesture.
- this may be used to enable an implementation of the possibility for the user to select among various picture-taking options, such as without limitation timing options or color options or other modes or configurations of the image sensor, e.g. for selecting specific exposure or filter parameters, and the like.
- detecting the specific free-space gesture comprises detecting which of the multiple predetermined free-space gestures in the set is being performed by the user.
- the timing of triggering the image sensor to take a picture or a sequence of pictures is determined based on which of the multiple predetermined free-space gestures in the set is being detected.
- different timing scenarios may be implemented such that a first gesture in the set of gestures triggers an immediate snapshot, while a second, different gesture in the set only triggers a pre-set timer such that the picture-taking is delayed and is only initiated when the pre-set timer has expired.
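The gesture-dependent timing described above amounts to a simple mapping from the detected gesture type to a pre-trigger delay. A hedged sketch follows; the gesture names and the 5-second value echo examples given elsewhere in this document, while the mapping itself is illustrative:

```python
# Sketch: each qualifying gesture type maps to a delay before the image
# sensor is triggered; non-qualifying gestures map to no trigger at all.

GESTURE_DELAYS_S = {
    "V_SHAPE": 0.0,    # immediate snapshot
    "OPEN_HAND": 5.0,  # delayed: trigger only after the pre-set timer expires
}

def trigger_delay_s(detected_gesture):
    """Return the pre-trigger delay in seconds, or None if the detected
    gesture does not qualify for picture-taking at all."""
    return GESTURE_DELAYS_S.get(detected_gesture)

print(trigger_delay_s("V_SHAPE"))    # prints 0.0
print(trigger_delay_s("OPEN_HAND"))  # prints 5.0
print(trigger_delay_s("WAVE"))       # prints None
```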
- a gesture being performed by the user is defined as qualifying as said specific predetermined free-space gesture, if it is one of: (i) a gesture where the user spreads two fingers of a hand such as to exhibit a V-shape in the field of view of the free-space gesture detection sensor; (ii) a gesture where the user exhibits an open hand in the field of view of the free-space gesture detection sensor.
- a second aspect of the present invention is directed to a system for triggering picture-taking of the interior of a vehicle.
- the system comprises: (i) a free-space gesture detection sensor being mounted or configured to be mounted in or on a vehicle and being configured to detect a specific predetermined free-space gesture being performed by a human user; and (ii) an image-sensor not coinciding with the free-space gesture detection sensor and being mounted or configured for being mounted in or on the vehicle such that its field of view covers the interior of the vehicle at least in parts.
- the system is configured to perform the method of said first aspect of the invention.
- the free-space gesture detection sensor is arranged relative to the image-sensor in such a way that its field of view within the interior of the vehicle is located at least in parts outside of the field of view of the image sensor.
- it is possible for the user to trigger picture-taking by performing said specific predetermined free-space gesture, or, in the case that multiple such gestures qualify as such, any one of those qualifying gestures, in a special area that lies within the field of view of the free-space gesture detection sensor, but not within the field of view of the image sensor.
- the one or more pictures taken will not show the body part or other object the user applies to perform the specific free-space gesture for triggering the picture-taking.
- the content shown on the one or more pictures is not impacted by the operation of the system.
- At least one of the free-space gesture detection sensor and the image sensor (i) is mounted or configured to be mounted at the vehicle outside of a passenger compartment being located in the interior of the vehicle and (ii) has a respective field of view extending at least in parts into the passenger compartment.
- the at least one of the sensors need not be located in the passenger compartment and thus does not consume any space therein.
- the field-of-view and in particular the perspective that can be captured by the respective sensor can be defined as an outward-in perspective. This may be particularly relevant for the image sensor, where in this way images can be taken from outside of the vehicle, e.g.
- said at least one of the free-space gesture detection sensor and the image sensor is mounted or configured to be mounted at an exterior mirror assembly of the vehicle.
- the free-space gesture detection sensor is mounted at or integrated into or configured to be mounted at or to be integrated into a roof module of the vehicle; and
- the image sensor is mounted at or integrated into or configured to be mounted at or to be integrated into a rear-view mirror inside the vehicle.
- each of the fields of view may specifically have the shape of a cone being centered around a symmetry axis defining the main direction in the respective field of view. While thus the two fields of view may be overlapping, they may be positioned relative to each other in such a way that the user can perform free-space gestures to be detected within the field of view of the free-space gesture detection sensor in a spatial area that is at the same time outside of and below the field of view of the image sensor.
- the spatial area may be located above a middle console of the vehicle (esp. car) but below the field of view of the image sensor. Again, this enables taking pictures the content of which does not show the performance of the specific free-space gesture being used to trigger the picture-taking.
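Whether a candidate gesture position lies inside one cone-shaped field of view but outside the other can be decided with elementary geometry: a point is inside a cone if the angle between the apex-to-point vector and the cone's symmetry axis does not exceed the cone's half-angle. All coordinates, axes and half-angles in the following sketch are invented for illustration only.

```python
# Point-in-cone test for cone-shaped fields of view (toy coordinates: metres,
# x forward, z up; positions and angles are illustrative assumptions).
import math

def in_cone(point, apex, axis, half_angle_deg):
    """True if `point` lies within the cone defined by apex, axis direction
    and half-angle."""
    v = [p - a for p, a in zip(point, apex)]
    dot = sum(vi * ai for vi, ai in zip(v, axis))
    norm_v = math.sqrt(sum(vi * vi for vi in v))
    norm_a = math.sqrt(sum(ai * ai for ai in axis))
    cos_angle = max(-1.0, min(1.0, dot / (norm_v * norm_a)))  # clamp rounding
    return math.degrees(math.acos(cos_angle)) <= half_angle_deg

# Roof-mounted gesture sensor looking straight down; mirror-mounted camera
# looking back and slightly down towards the seats.
gesture_apex, gesture_axis = (0.5, 0.0, 1.2), (0.0, 0.0, -1.0)
camera_apex, camera_axis = (1.2, 0.0, 1.1), (-1.0, 0.0, -0.2)

hand_over_console = (0.5, 0.0, 0.6)  # low above the middle console
print(in_cone(hand_over_console, gesture_apex, gesture_axis, 30.0))  # True
print(in_cone(hand_over_console, camera_apex, camera_axis, 20.0))    # False
```

With these toy numbers the hand position is detectable by the gesture sensor yet outside the camera cone, so the triggering hand would not appear in the picture.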
- a third aspect of the present invention is directed to a vehicle comprising a system according to the second aspect for triggering a picture-taking of the interior of the vehicle, at least in parts.
- a fourth aspect of the present invention is directed to a computer program, or a computer program product, comprising instructions to cause the system of the second aspect to perform the method of the first aspect of the present invention.
- the computer program (product) may in particular be implemented in the form of a data carrier on which one or more programs for performing the method are stored.
- this is a data carrier, such as a CD, a DVD or a flash memory module.
- This may be advantageous if the computer program product is meant to be traded as an individual product independent from the processor platform on which the one or more programs are to be executed.
- the computer program (product) is provided as a file on a data processing unit, in particular on a server, and can be downloaded via a data connection, e.g. the Internet or a dedicated data connection, such as a proprietary or local area network.
- the system of the second aspect may accordingly have a program memory in which the computer program is stored.
- the system may also be set up to access a computer program available externally, for example on one or more servers or other data processing units, via a communication link, in particular to exchange with it data used during the course of the execution of the computer program or representing outputs of the computer program.
- Fig. 1 schematically illustrates an exemplary vehicle comprising a system according to an embodiment of the present invention
- Fig. 2 schematically illustrates a general concept underlying the method of the present invention
- Fig. 3 shows a flowchart illustrating an exemplary embodiment of the method of the present invention.
- an exemplary vehicle 100 comprises a system 105 for triggering picture-taking of the interior of the vehicle, at least in parts.
- system 105 comprises a free-space gesture detection sensor 110 which is arranged within the passenger compartment at an interior surface of the roof of the vehicle 100.
- Free-space gesture detection sensor 110 is designed as a 3D image sensor of the “time-of-flight” type (TOF camera). It has a field-of-view (FOV) 115, which has substantially the form of a cone having its tip at free-space gesture detection sensor 110 and extending predominantly downwards, but optionally also with a horizontal directional component, towards a middle console located next to the driver seat of vehicle 100.
- Free-space gesture detection sensor 110 is configured to detect free-space gestures G being performed within its field of view 115 by a person (user) U, such as a driver of vehicle 100.
- user U performs a specific predetermined free-space gesture G with two fingers of her right hand, wherein these two fingers form a “V”-shape within a virtual plane that is substantially horizontal and thus predominantly perpendicular to the predominantly vertical central direction of the cone defining the FOV 115 of gesture detection sensor 110. Accordingly, the “V”-shape is “visible” to and thus detectable as such by gesture detection sensor 110.
- System 105 further comprises an image sensor 120 in the form of a 2D photo camera which is sensitive both in the visible and at least parts of the infrared range of the electromagnetic spectrum, so that it can take pictures not only when the interior of vehicle 100 is illuminated, whether artificially or by daylight, but also when it is relatively dark, in particular at night.
- Image sensor 120 is arranged at the rear-view mirror of the vehicle inside the passenger compartment and has a field of view 125, which predominantly extends towards and covers, at least in parts, the location of the driver seat and optionally also of one or more further passenger seats of vehicle 100. Accordingly, image sensor 120 is arranged to take photos or videos of one or more passengers of vehicle 100 while seated in their respective passenger seats, i.e. in particular “selfies”.
- While the field of view 115 of gesture detection sensor 110 and the field of view 125 of image sensor 120 overlap in parts, they do not fully coincide, such that particularly a spatial area above the middle console of vehicle 100 is only located within FOV 115 of the gesture detection sensor 110, but not within FOV 125 of the image sensor 120. Accordingly, when a gesture G is performed within that spatial area by user U, body parts or other objects user U may use to perform gesture G will not be pictured within photos or videos taken by image sensor 120.
- Each one of sensors 110 and 120 may have its own associated illumination unit 111 or 121, respectively, for illuminating the related field of view 115 or 125, respectively, with electromagnetic radiation to which the respective sensor is sensitive, e.g. in the visible or infrared part of the electromagnetic spectrum.
- Each of these illumination units 111 and 121 may be activated and deactivated individually. Specifically, they may be controlled such that at any given point in time only one or none of the two illumination units 111 and 121 is active.
- system 105 comprises a processing unit 130 being signal-coupled to both ges ture detection sensor 110 and image sensor 120 in order to control them and receive their respective sensor data for further processing.
- processing unit 130 may comprise in a respective memory one or more computer programs being configured as an application for free-space-gesture-triggered picture-taking, such that a user U can initiate picture-taking by performing a specific predetermined gesture corresponding to such picture-taking within FOV 115 of gesture detection sensor 110.
- processing unit 130 may be configured to recognize one or more different predetermined gestures, if such are represented by the sensor data provided by gesture detection sensor 110. Accordingly, gesture detection sensor 110 in combination with processing unit 130 is then capable of detecting one or more different predetermined free-space gestures being performed by user U within the FOV 115 of sensor 110. In particular, one or more of these gestures may be predetermined as being gestures which, when properly detected, trigger picture-taking by image sensor 120.
- system 105 may further comprise a communication unit 135 being configured to exchange data via a communication link with a vehicle-external processing platform 145, such as a backend server.
- the communication link may be a wireless link and accordingly, communication unit 135 may be signal-connected to an antenna 140 of the vehicle 100 to therewith send and receive RF signals for communication over the wireless link.
- the setup may be used to communicate sensor data being generated by gesture detection sensor 110 to processing platform 145 for performing a similar processing as described above with reference to processing unit 130 in the context of gesture detection.
- This may be specifically advantageous when a significant number of different and sometimes complex gestures needs to be reliably detected, such that a high processing power is needed to perform the recognition and discrimination of these various gestures.
- it might be easier and more efficient to provide such high processing power outside of the vehicle on a dedicated processing platform 145 instead of designing processing unit 130 as a respective high processing power unit.
- the latter approach might have all kinds of negative effects, including higher average cost of gesture detection per vehicle and gesture, or more demanding space requirements, power requirements, or cooling requirements, etc.
- a general concept 200 underlying the method of the present invention comprises (i) a process of detecting 205, by means of a free-space gesture detection sensor 110, a specific (i.e. “qualifying” according to a respective predefined gesture discrimination criterion) predetermined free-space gesture G being performed by a user U, and (ii) upon detection of such a qualifying gesture G, triggering 210 picture-taking 215, e.g. of one or more distinct photos, or a sequence of photos (e.g. a video), by an image sensor 120, e.g. a 2D or 3D photo or video camera.
- Fig. 3 illustrates a method 300 according to an embodiment of the present invention.
- Method 300 comprises a step 305, wherein a free-space gesture detection sensor 110 of the TOF camera type continuously “listens”, i.e. monitors, its field-of-view 115, for free-space gestures being performed by a user U, e.g. a driver or other passenger of vehicle 100.
- When gesture detection sensor 110 is active, its illumination unit 111 may also be activated to illuminate field-of-view 115, while illumination unit 121 of image sensor 120 is deactivated at that time. Accordingly, the sensing activity of gesture detection sensor 110 is not adversely affected by radiation being emitted by illumination unit 121 of image sensor 120 (de-synchronization).
- the sensor data provided by gesture detection sensor 110 is being processed for the purposes of determining (i) whether any predetermined gesture can be recognized based upon the sensor data (evaluation sub-step 315), (ii) if so, whether such recognized gesture is a qualified gesture according to a predetermined qualification criterion for discriminating gestures associated with picture-taking from any other potential predetermined gestures for other purposes (evaluation sub-step 320), and (iii) in the case that there is a predefined set of multiple qualifying gestures for picture-taking, of which type the recognized qualified gesture is, i.e. to which of the predetermined gestures in the set it corresponds (evaluation sub-step 325).
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Chemical & Material Sciences (AREA)
- Combustion & Propulsion (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention is directed to a method and a corresponding system for triggering taking of a picture of the interior of a vehicle based on a detection of a free-space gesture being performed by a human user, e.g. a driver or other passenger of the vehicle. The method comprises detecting, using a free-space gesture detection sensor being mounted in or on a vehicle, a specific predetermined free-space gesture being performed by a human user in a field of view of the free-space gesture detection sensor; and upon detecting said specific predetermined free-space gesture, generating a signal to trigger an image-sensor not coinciding with the free-space gesture detection sensor and being mounted in or on the vehicle such that its field of view covers the interior of the vehicle at least in parts, to take a picture or a sequence of pictures, e.g. to record a video or to take a limited number of one or more individual pictures.
Description
METHOD AND SYSTEM FOR TRIGGERING PICTURE-TAKING OF THE INTERIOR OF A VEHICLE BASED ON A DETECTION OF A FREE-SPACE GESTURE
The present invention relates to the field of human machine interfaces (HMI) for vehicles, in particular automobiles. Specifically, the invention is directed to a method and a system for triggering taking of a picture of the interior of a vehicle based on a detection of a free-space gesture being performed by a human user, e.g. a driver or other passenger of the vehicle.
Modern HMIs for vehicles are no longer restricted to providing various push buttons or switches or other conventional input devices for receiving user inputs. Specifically, in addition to touch-sensitive surfaces, including in particular touchscreens, the detection of free-space gestures being performed by a human user has recently been added by some car manufacturers to the set of available input methodologies for automotive HMIs. Typically, such gestures are detected by a single 2D or 3D camera provided in the vehicle, when they are being performed in the camera's field of view, and are used to control a functionality, like a sound volume, of an entertainment system, e.g. a head unit, of a vehicle or to turn on or off a light in the vehicle.
Another trend that has recently become important is the taking of so-called "selfies", i.e. digital snapshots (photos or videos) of oneself and optionally also others, often for use on social networks. Usually, handheld electronic devices such as smartphones or tablet computers are used for that purpose. While taking such pictures in stationary environments, such as for example restaurants, offices or private homes, has become ubiquitous based on the use of such handheld electronic devices, taking such pictures in the same way inside a moving vehicle, such as a car, might conflict with safety requirements and given space constraints, in particular if the driver himself or herself is meant to take the picture or to be on the picture, or if multiple persons occupying different seats within the car are all meant to be on the picture.
It is an object of the present invention to provide a secure and easy-to-use way of taking pictures of the interior of a vehicle, in particular selfies of one or more passengers of the vehicle. Specifically, it is desirable to address one or more of the above-mentioned drawbacks of conventional selfie-taking using handheld electronic devices in an automotive environment.
A solution to this problem is provided by the teaching of the independent claims. Various preferred embodiments of the present invention are provided by the teachings of the dependent claims.
A first aspect of the invention is directed to a method of triggering picture-taking of the interior of a vehicle, in particular of a selfie-picture or of a selfie-video of one or more passengers within the vehicle. The method comprises (i) detecting, using a free-space gesture detection sensor being mounted in or on a vehicle, a specific predetermined free-space gesture being performed by a human user in a field of view of the free-space gesture detection sensor; and (ii) upon detecting said specific predetermined free-space gesture, generating a signal to trigger an image sensor not coinciding with the free-space gesture detection sensor and being mounted in or on the vehicle such that its field of view covers the interior of the vehicle at least in parts, to take a picture or a sequence of pictures, e.g. to record a video or to take a limited number of one or more individual pictures.
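The two claimed steps can be illustrated with a minimal control-flow sketch. All names below (the gesture labels, the callback interface) are hypothetical stand-ins chosen for illustration and are not part of the claim language:

```python
# Illustrative sketch of the claimed two-step method: (i) detect a
# qualifying free-space gesture, (ii) generate a trigger signal for the
# separate image sensor. All identifiers here are assumed, not claim terms.

QUALIFYING_GESTURES = {"open_hand", "v_shape"}  # assumed example gesture set

def handle_detection(recognized_gesture, trigger_image_sensor):
    """Trigger the image sensor if the recognized gesture qualifies.

    recognized_gesture: label produced by the gesture sensor's processing
    chain, or None if nothing was recognized.
    trigger_image_sensor: callable generating the trigger signal.
    """
    if recognized_gesture in QUALIFYING_GESTURES:
        trigger_image_sensor()  # step (ii): picture-taking is triggered
        return True
    return False  # keep listening for further gestures
```

In this sketch an unrecognized motion simply leaves the image sensor idle, while a qualifying gesture fires the trigger callback once.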
The term "free-space gesture", as used herein, refers particularly to a bodily motion or state of a human user, i.e. a gesture, being performed in ordinary three-dimensional space to control or otherwise interact with one or more devices, but without a need to physically touch them.
The term “field of view” (FOV) of a sensor, as used herein, refers to the spatial extent of the observable world that is detectable at any given moment by the sensor. In the case of optical instruments or sensors, e.g. cameras, it is usually a solid angle through which a detector is sensitive to electromagnetic radiation.
The term "interior of the vehicle", as used herein, refers, in particular, to the interior of a passenger compartment of the vehicle, e.g. an automobile.
The terms “first”, “second”, “third” and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.
Where the term "comprising" or "including" is used in the present description and claims, it does not exclude other elements or steps. Where an indefinite or definite article is used when referring to a singular noun, e.g. "a", "an" or "the", this includes a plural of that noun unless something else is specifically stated.
Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
Accordingly, the above method provides an easy-to-use way of taking pictures of the interior of the vehicle, in particular of selfies of one or more passengers of the vehicle, wherein triggering of the picture-taking is based on detection of a specific predefined free-space gesture to be performed by a user.
Performing such a free-space gesture is generally easier than operating a specific button, switch or other classical HMI element, because it suffices that the gesture be performed within the field of view of the free-space gesture detection sensor, without the need to exactly position a finger on a typically small particular spot of a classical HMI element, e.g. a particular surface section of a pushbutton or toggle switch. Furthermore, there is no need for the user to watch the performance of the gesture in order to carry it out correctly. This enables the user, in particular a driver of a vehicle, to continue watching the traffic around him while performing the gesture and thus triggering the picture-taking. Taking pictures in the interior of the vehicle may thus be made more secure than if it were based on an operation of classical HMI elements.
In addition, there are further advantages, including in particular that the use of a free-space gesture detection sensor being separate from the image sensor allows for an individual configuration, optimization, enhancement and replacement of each of the sensors, independently from the respective other sensor. Specifically, the two sensors may be of a different type, each being optimized for its particular purpose within the context of the present invention, i.e. gesture detection on the one hand and picture-taking on the other hand. The same applies to any software that may be used to configure or operate either one of the sensors or to process their respective sensor data.
Furthermore, using two separate sensors instead of a single one for both detecting a gesture and taking pictures enables the use of two different fields of view for these two sensors, such that the performance of the gesture to be detected may be located particularly outside of the field of view of the image sensor. Accordingly, unlike in cases where the same sensor is used for both gesture detection and image-taking and where thus the object, e.g. body part, performing the gesture is inevitably on the pictures being taken, this can be avoided when two separate sensors are used, one for each of these different purposes.
In the following, preferred embodiments of the method are described, which can be arbitrarily combined with each other or with other aspects of the present invention, unless such combination is explicitly excluded or technically impossible.
In some embodiments, each of the free-space gesture detection sensor and the image sensor has an associated individual illumination unit for illuminating their respective field of view, at least in parts, during their respective sensing activity, and the method further comprises de-synchronizing the activities of the illumination units of the free-space gesture detection sensor and the image sensor such that the illumination units are not simultaneously active. This de-synchronizing has the effect that the operation of neither one of the sensors is adversely affected by the illumination associated with the respective other sensor. In particular, if for example the free-space gesture detection sensor operates in a specific range of the electromagnetic spectrum, e.g. in the visible or infrared range of the spectrum, its operation and reliability might be adversely affected if, simultaneously, the illumination unit associated with the image sensor were to emit light in that same range of the spectrum. Vice versa, radiation emitted from the illumination unit associated with the free-space gesture detection sensor in a range of the spectrum to which the image sensor is sensitive might have an adverse effect on the quality of pictures being taken by the image sensor, if operated simultaneously.
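The de-synchronization constraint described above, namely that the two illumination units are never active at the same time, can be sketched as a small state controller. The class names and the two-state model below are assumptions made only for illustration, not a description of an actual control unit:

```python
# Sketch of de-synchronized illumination control: at any point in time
# at most one of the two illumination units is active. The unit
# abstraction below is hypothetical.

class IlluminationUnit:
    def __init__(self):
        self.active = False

class DesyncController:
    """Switches between gesture sensing and picture taking, always
    deactivating one unit before activating the other."""

    def __init__(self, gesture_unit, image_unit):
        self.gesture_unit = gesture_unit
        self.image_unit = image_unit

    def enter_gesture_sensing(self):
        self.image_unit.active = False    # switch off first: no overlap
        self.gesture_unit.active = True

    def enter_picture_taking(self):
        self.gesture_unit.active = False  # switch off first: no overlap
        self.image_unit.active = True
```

The invariant of the sketch is that after either transition exactly one unit is on, so neither sensor sees stray radiation from the other's illumination.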
In some embodiments, the free-space gesture detection sensor is or comprises a time-of-flight (TOF) camera, which is used to detect said specific predetermined free-space gesture. A "TOF camera" is a 3D camera system that measures distances on the basis of a time-of-flight (TOF) method. The measuring principle is based on the fact that the scene to be recorded is illuminated by a light pulse, and the camera measures, for each pixel, the time that the light takes to reach the object and travel back again. This time is directly proportional to the distance due to the constancy of the speed of light. The camera thus provides the distance of the object imaged on each pixel. The use of a TOF camera system represents a particularly effective and high-resolution implementation option for a 3D image sensor and may particularly be used to reliably and efficiently detect free-space gestures.
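The stated proportionality between measured time and distance amounts to the relation d = c·t/2, since the pulse travels the distance twice. The one-liner below is a sketch of that per-pixel computation only, not of an actual TOF processing pipeline:

```python
# Per-pixel TOF distance: the light pulse travels to the object and
# back, so the one-way distance is half the round trip at light speed.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s):
    """Distance (in metres) to the object imaged on a pixel, from the
    measured round-trip time of the light pulse (in seconds)."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0
```

A round-trip time of roughly 6.7 nanoseconds thus corresponds to an object about one metre away, which illustrates the timing resolution a TOF camera must achieve at in-cabin distances.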
In some embodiments, the image sensor is or comprises a 2D photo camera, which is configured and used to take said picture in the visible or infrared part of the electromagnetic spectrum. This allows for taking ordinary two-dimensional photos, in particular digital photos. Specifically, if the photo camera is adapted to operate in the infrared part of the electromagnetic spectrum, it may be used for picture-taking in relatively dark environments, without a flashlight or other artificial illumination of its field of view. Ideally, the photo camera is adapted to operate both in the visible and in the infrared part of the spectrum, so that it can be used in both light and dark environments, in particular during the day, when daylight is available for illumination, and at night, when infrared radiation becomes more relevant relative to visible radiation.
In some embodiments, detecting said specific predetermined free-space gesture comprises at least one of: (i) processing sensor data being generated by the free-space gesture detection sensor to examine whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture; and (ii) communicating sensor data being generated by the free-space gesture detection sensor to a vehicle-external processing platform, e.g. a server that can be reached over the Internet or another communication link, and receiving in response result data being generated by the processing platform and indicating whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture. While option (i) is particularly useful for an autonomous solution that can be fully implemented by a system to be integrated into a vehicle, option (ii) has the advantage that external processing power, e.g. that of a powerful server, which processing power might otherwise not be available in the vehicle or would add to the complexity and cost of it, may be used instead. This applies, in particular, to the often calculation-intensive processing needed in connection with properly recognizing a specific gesture based on sensor data provided by the free-space gesture detection sensor.
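The two detection options (i) and (ii) can be sketched as a simple dispatch over two recognizer callables. Both callables are hypothetical placeholders; a real option (ii) would wrap a network client for the backend communication link:

```python
# Sketch of detection options (i) on-board processing and (ii)
# offloading to a vehicle-external platform. The recognizers are
# assumed callables returning whether the sensor data represents a
# qualifying free-space gesture.

def detect_qualifying_gesture(sensor_data, local_recognizer=None,
                              remote_recognizer=None):
    if local_recognizer is not None:
        # option (i): autonomous, fully in-vehicle processing
        return local_recognizer(sensor_data)
    if remote_recognizer is not None:
        # option (ii): result data produced by the external platform
        return remote_recognizer(sensor_data)
    raise ValueError("no recognizer configured")
```

The sketch prefers the on-board path when both are available, which is one plausible policy; the description leaves the choice between (i) and (ii) open.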
Specifically, in some related embodiments, the processing of the sensor data is performed separately from a process for controlling the free-space gesture detection sensor and the image sensor. Particularly, for example, the processing of the sensor data in connection with the detection of said specific predetermined free-space gesture may be performed by a vehicle-external processing platform, such as a server in a backend that can be reached from the vehicle over the Internet or another communication link, while an application defining the HMI and which is used to control the operation of the free-space gesture detection sensor and the image sensor may be run on a vehicle-internal processor platform, e.g. in a head unit or another control unit of the vehicle. This enables, in particular, on the one hand an optimal partitioning of the processing needed to perform the method and on the other hand the possibility to separately configure, optimize, scale, extend, maintain, or replace the respective processing means, be it on the hardware side or on the software side, or both.
In some embodiments, each one of a predetermined limited set of multiple different predetermined free-space gestures that may potentially be performed by the user is defined as qualifying as said specific predetermined free-space gesture. This means that there may be more than one specific predetermined free-space gesture qualifying for triggering the picture-taking by the image sensor. This may be advantageous in at least two ways: Firstly, this may be used to increase the ease of use and thus also the reliability of the method by also defining further similar but not identical gestures as qualifying gestures for triggering the picture-taking, as long as they can be safely distinguished from other gestures that are not intended to trigger such picture-taking. Secondly, this may be used to enable an implementation of the possibility for the user to select among various picture-taking options, such as, without limitation, timing options or color options or other modes or configurations of the image sensor, e.g. for selecting specific exposure or filter parameters, and the like.
Specifically, in some related embodiments, detecting the specific free-space gesture comprises detecting which of the multiple predetermined free-space gestures in the set is being performed by the user. In addition, the timing of triggering the image sensor to take a picture or a sequence of pictures is determined based on which of the multiple predetermined free-space gestures in the set is being detected. In this way, for example, different timing scenarios may be implemented such that a first gesture in the set of gestures triggers an immediate snapshot, while a second, different gesture in the set only triggers a pre-set timer, such that the picture-taking is delayed and is only initiated when the pre-set timer has expired.
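The gesture-dependent timing can be sketched as a lookup from the detected gesture type to a trigger delay. Which gesture maps to immediate versus delayed triggering, the gesture labels, and the 5-second preset are all illustrative assumptions made for this sketch:

```python
# Sketch of gesture-dependent trigger timing: one qualifying gesture
# triggers an immediate snapshot, the other first arms a pre-set timer.
# The labels and the mapping below are assumed for illustration only.

TIMER_PRESET_S = 5.0  # assumed pre-set timer duration

def trigger_delay_s(gesture_type):
    """Delay in seconds before the image sensor is triggered."""
    delays = {
        "v_shape": 0.0,              # immediate snapshot
        "open_hand": TIMER_PRESET_S, # delayed until the timer expires
    }
    if gesture_type not in delays:
        raise ValueError(f"not a qualifying gesture: {gesture_type!r}")
    return delays[gesture_type]
```

A real implementation would pass the returned delay to a timer of the control unit; here it only makes the two timing scenarios of the embodiment explicit.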
In some embodiments, a gesture being performed by the user is defined as qualifying as said specific predetermined free-space gesture if it is one of: (i) a gesture where the user spreads two fingers of a hand such as to exhibit a V-shape in the field of view of the free-space gesture detection sensor; (ii) a gesture where the user exhibits an open hand in the field of view of the free-space gesture detection sensor. These gestures have in common that they are relatively easy to perform, even by an inexperienced user, and that they may be easily and reliably discriminated from each other and from other free-space gestures a user may perform. It is noted that these specific gestures are only particularly well-suited exemplary gestures and that of course other definitions of one or more specific predetermined free-space gestures are possible in addition or instead.
A second aspect of the present invention is directed to a system for triggering picture-taking of the interior of a vehicle. The system comprises: (i) a free-space gesture detection sensor being mounted or configured to be mounted in or on a vehicle and being configured to detect a specific predetermined free-space gesture being performed by a human user; and (ii) an image-sensor not coinciding with the free-space gesture detection sensor and being mounted or configured for being mounted in or on the vehicle such that its field of view covers the interior of the vehicle at least in parts. The system is configured to perform the method of said first aspect of the invention.
In the following, preferred embodiments of the system will be described, which can be arbitrarily combined with each other or with other aspects of the present invention, unless such combination is explicitly excluded or technically impossible.
In some embodiments, the free-space gesture detection sensor is arranged relative to the image sensor in such a way that its field of view within the interior of the vehicle is located at least in parts outside of the field of view of the image sensor. In this way, it is possible for the user to trigger picture-taking by performing said specific predetermined free-space gesture, or, in the case that multiple such gestures qualify as such, any one of those qualifying gestures, in a special area that lies within the field of view of the free-space gesture detection sensor, but not within the field of view of the image sensor. Accordingly, the one or more pictures taken will not show the body part or other object the user applies to perform the specific free-space gesture for triggering the picture-taking. Thus, the content shown on the one or more pictures is not impacted by the operation of the system.
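The spatial relationship above, namely a trigger area inside the gesture sensor's field of view but outside the image sensor's, can be sketched geometrically by idealizing each field of view as a circular cone. All apex positions, axes and angles below are invented example values, not measurements from any embodiment:

```python
# Sketch: a point is a valid "gesture zone" point when it lies inside
# the gesture sensor's (idealized conical) field of view but outside
# the image sensor's. All coordinates and angles are assumptions.
import math

def in_cone(point, apex, unit_axis, half_angle_deg):
    """True if `point` lies inside the cone given by apex, unit axis
    direction and half opening angle."""
    v = [p - a for p, a in zip(point, apex)]
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0.0:
        return True  # the apex itself counts as inside
    cos_to_axis = sum(c * a for c, a in zip(v, unit_axis)) / norm
    return cos_to_axis >= math.cos(math.radians(half_angle_deg))

def in_gesture_zone(point, gesture_cone, image_cone):
    return in_cone(point, *gesture_cone) and not in_cone(point, *image_cone)

# Example geometry: gesture FOV pointing down from the roof, image FOV
# pointing rearwards from a mirror position (numbers invented).
GESTURE_CONE = ((0.0, 0.0, 1.5), (0.0, 0.0, -1.0), 30.0)
IMAGE_CONE = ((0.0, -1.0, 1.2), (0.0, 1.0, 0.0), 20.0)
```

With this example geometry, a point low above the console falls inside the gesture cone but outside the image cone, so a gesture performed there never appears in the pictures.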
In some embodiments, at least one of the free-space gesture detection sensor and the image sensor (i) is mounted or configured to be mounted at the vehicle outside of a passenger compartment being located in the interior of the vehicle and (ii) has a respective field of view extending at least in parts into the passenger compartment. In this way, multiple further advantages may be realized. In particular, the at least one of the sensors need not be located in the passenger compartment and thus does not consume any space therein. Furthermore, the field of view, and in particular the perspective that can be captured by the respective sensor, can be defined as an outward-in perspective. This may be particularly relevant for the image sensor, where in this way images can be taken from outside of the vehicle, e.g. similar to a view someone would have when standing on a boardwalk or being located in another vehicle next to the vehicle in question. Specifically, in some related embodiments, said at least one of the free-space gesture detection sensor and the image sensor is mounted or configured to be mounted at an exterior mirror assembly of the vehicle.
In some embodiments, (i) the free-space gesture detection sensor is mounted at or integrated into, or configured to be mounted at or to be integrated into, a roof module of the vehicle; and (ii) the image sensor is mounted at or integrated into, or configured to be mounted at or to be integrated into, a rear-view mirror inside the vehicle. This enables, in particular, a specifically suitable spatial configuration of the sensors, where the field of view of the free-space gesture detection sensor extends predominantly downwards from the vehicle's roof, while the field of view of the image sensor extends predominantly horizontally, ideally including also a downward component, from the rear-view mirror. Each of the fields of view may specifically have the shape of a cone being centered around a symmetry axis defining the main direction in the respective field of view. While thus the two fields of view may be overlapping, they may be positioned relative to each other in such a way that the user can perform free-space gestures to be detected within the field of view of the free-space gesture detection sensor in a spatial area that is at the same time outside of and below the field of view of the image sensor. For example, the spatial area may be located above a middle console of the vehicle (esp. a car) but below the field of view of the image sensor. Again, this enables taking pictures the content of which does not show the performance of the specific free-space gesture being used to trigger the picture-taking.
A third aspect of the present invention is directed to a vehicle comprising a system according to the second aspect for triggering a picture-taking of the interior of the vehicle, at least in parts.
A fourth aspect of the present invention is directed to a computer program, or a computer program product, comprising instructions to cause the system of the second aspect to perform the method of the first aspect of the present invention.
The computer program (product) may in particular be implemented in the form of a data carrier on which one or more programs for performing the method are stored. Preferably, this is a data carrier, such as a CD, a DVD or a flash memory module. This may be advantageous if the computer program product is meant to be traded as an individual product independent from the processor platform on which the one or more programs are to be executed. In another implementation, the computer program (product) is provided as a file on a data processing unit, in particular on a server, and can be downloaded via a data connection, e.g. the Internet or a dedicated data connection, such as a proprietary or local area network.
The system of the second aspect may accordingly have a program memory in which the computer program is stored. Alternatively, the system may also be set up to access a computer program available externally, for example on one or more servers or other data processing units, via a communication link, in particular to exchange with it data used during the course of the execution of the computer program or representing outputs of the computer program.
The various embodiments and advantages described above in connection with the first aspect of the present invention similarly apply to the other aspects of the invention. In the same way, the various embodiments and advantages described above in connection with the second aspect of the present invention apply to the third aspect of the invention.
Further advantages, features and applications of the present invention are provided in the following detailed description and the appended figures, wherein:
Fig. 1 schematically illustrates an exemplary vehicle comprising a system according to an embodiment of the present invention;
Fig. 2 schematically illustrates a general concept underlying the method of the present invention; and
Fig. 3 shows a flowchart illustrating an exemplary embodiment of the method of the present invention.
In the figures, identical reference signs are used for the same or mutually corresponding elements of the systems described herein.
Referring to Fig. 1, an exemplary vehicle 100 according to an embodiment of the present invention comprises a system 105 for triggering picture-taking of the interior of the vehicle, at least in parts. To that purpose, system 105 comprises a free-space gesture detection sensor 110, which is arranged within the passenger compartment at an interior surface of the roof of the vehicle 100. Free-space gesture detection sensor 110 is designed as a 3D image sensor of the "time-of-flight" type (TOF camera). It has a field of view (FOV) 115, which has substantially the form of a cone having its tip at free-space gesture detection sensor 110 and extending predominantly downwards, but optionally also with a horizontal directional component, towards a middle console located next to the driver seat of vehicle 100. Free-space gesture detection sensor 110 is configured to detect free-space gestures G being
performed within its field of view 115 by a person (user) U, such as a driver of vehicle 100. In the specific example illustrated in Fig. 1, user U performs a specific predetermined free-space gesture G with two fingers of her right hand, wherein these two fingers form a "V"-shape within a virtual plane that is substantially horizontal and thus predominantly perpendicular to the predominantly vertical central direction of the cone defining the FOV 115 of gesture detection sensor 110. Accordingly, the "V"-shape is "visible" to and thus detectable as such by gesture detection sensor 110.
System 105 further comprises an image sensor 120 in the form of a 2D photo camera which is sensitive both in the visible and in at least parts of the infrared range of the electromagnetic spectrum, so that it can take pictures not only when the interior of vehicle 100 is illuminated, whether artificially or by daylight, but also when it is relatively dark, in particular at night. Image sensor 120 is arranged at the rear-view mirror of the vehicle inside the passenger compartment and has a field of view 125, which predominantly extends towards and covers, at least in parts, the location of the driver seat and optionally also of one or more further passenger seats of vehicle 100. Accordingly, image sensor 120 is arranged to take photos or videos of one or more passengers of vehicle 100 while seated in their respective passenger seats, i.e. in particular "selfies". While the field of view 115 of gesture detection sensor 110 and the field of view 125 of image sensor 120 overlap in parts, they do not fully coincide, such that particularly a spatial area above the middle console of vehicle 100 is only located within FOV 115 of the gesture detection sensor 110, but not within FOV 125 of the image sensor 120. Accordingly, when a gesture G is performed within that spatial area by user U, body parts or other objects user U may use to perform gesture G will not be pictured within photos or videos taken by image sensor 120.
Each one of sensors 110 and 120 may have its own associated illumination unit 111 or 121, respectively, for illuminating the related field of view 115 or 125, respectively, with electromagnetic radiation to which the respective sensor is sensitive, e.g. in the visible or infrared part of the electromagnetic spectrum. Each of these illumination units 111 and 121 may be activated and deactivated individually. Specifically, they may be controlled such that at any given point in time only one or none of the two illumination units 111 and 121 is active.
In addition, system 105 comprises a processing unit 130 being signal-coupled to both gesture detection sensor 110 and image sensor 120 in order to control them and receive their respective sensor data for further processing. Specifically, processing unit 130 may comprise in a respective memory one or more computer programs being configured as an application for free-space-gesture-triggered picture-taking, such that a user U can initiate picture-taking by performing a specific predetermined gesture corresponding to such picture-taking within FOV 115 of gesture detection sensor 110.
Furthermore, processing unit 130 may be configured to recognize one or more different predetermined gestures, if such are represented by the sensor data provided by gesture detection sensor 110. Accordingly, gesture detection sensor 110 in combination with processing unit 130 is then capable of detecting one or more different predetermined free-space gestures being performed by user U within the FOV 115 of sensor 110. In particular, one or more of these gestures may be predetermined as being gestures which, when properly detected, trigger picture-taking by image sensor 120.
Optionally, system 105 may further comprise a communication unit 135 being configured to exchange data via a communication link with a vehicle-external processing platform 145, such as a backend server. For example, as illustrated in Fig. 1, the communication link may be a wireless link and, accordingly, communication unit 135 may be signal-connected to an antenna 140 of the vehicle 100 to therewith send and receive RF signals for communication over the wireless link. Specifically, this setup may be used to communicate sensor data being generated by gesture detection sensor 110 to processing platform 145 for performing a similar processing as described above with reference to processing unit 130 in the context of gesture detection. This may be specifically advantageous when a significant number of different and sometimes complex gestures needs to be reliably detected, such that a high processing power is needed to perform the recognition and discrimination of these various gestures. Typically, it might be easier and more efficient to provide such high processing power outside of the vehicle on a dedicated processing platform 145 instead of designing processing unit 130 as a respective high processing power unit. The latter approach might have all kinds of negative effects, including higher average cost of gesture detection per vehicle and gesture, or more demanding space requirements, power requirements, or cooling requirements etc.
Referring now to Fig. 2, a general concept 200 underlying the method of the present invention comprises (i) a process of detecting 205, by means of a free-space gesture detection sensor 110, a specific (i.e. "qualifying" according to a respective predefined gesture discrimination criterion) predetermined free-space gesture G being performed by a user U, and (ii) upon detection of such a qualifying gesture G, triggering 210 picture-taking 215, e.g. of one or more distinct photos, or a sequence of photos (e.g. a video), by an image sensor 120, e.g. a 2D or 3D photo or video camera.
Reference is now made to Fig. 3, which illustrates a method 300 according to an embodiment of the present invention. In addition, for the sake of better explanation and without limitation, additional reference is made again to the vehicle shown in Fig. 1. Method 300 comprises a step 305, wherein a free-space gesture detection sensor 110 of the TOF camera type continuously "listens", i.e. monitors, its field-of-view 115 for free-space gestures being performed by a user U, e.g. a driver or other passenger of vehicle 100. When gesture detection sensor 110 is active, its illumination unit 111 may also be activated to illuminate field-of-view 115, while illumination unit 121 of image sensor 120 is deactivated at that time. Accordingly, the sensing activity of gesture detection sensor 110 is not adversely affected by radiation being emitted by illumination unit 121 of image sensor 120 (de-synchronization).
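The de-synchronization of the two illumination units can be captured as a small state machine, as in the following sketch. The class and attribute names are illustrative assumptions; only the invariant — units 111 and 121 are never active simultaneously — reflects the description:

```python
class IlluminationScheduler:
    """Minimal sketch of the de-synchronization: the illumination units
    of gesture detection sensor 110 (unit 111) and image sensor 120
    (unit 121) are never active at the same time."""

    def __init__(self):
        self.unit_111_active = False  # gesture-sensor illumination
        self.unit_121_active = False  # image-sensor illumination

    def enter_listening_mode(self):
        # Step 305: gesture sensor monitors its FOV, so unit 121 is off.
        self.unit_121_active = False
        self.unit_111_active = True

    def enter_picture_taking_mode(self):
        # Step 335: image sensor takes pictures, so unit 111 is off.
        self.unit_111_active = False
        self.unit_121_active = True
```

Switching the outgoing unit off before enabling the other one ensures the invariant holds even mid-transition.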
In a further step 310, the sensor data provided by gesture detection sensor 110 is being processed for the purposes of determining, (i) whether any predetermined gesture can be recognized based upon the sensor data (evaluation sub-step 315), (ii) if so, whether such recognized gesture is a qualified gesture according to a predetermined qualification criterion for discriminating gestures associated with picture-taking from any other potential predetermined gestures for other purposes (evaluation sub-step 320), and (iii) in the case that there is a predefined set of multiple qualifying gestures for picture-taking, of which type the recognized qualified gesture is, i.e. to which of the predetermined gestures in the set it corresponds (evaluation sub-step 325).
If, according to the results of such processing, either no predetermined gesture has been recognized (315 - no) or a recognized gesture does not qualify according to the qualification criterion (320 - no), the method loops back to step 305 into listening mode. Otherwise (315 - yes and 320 - yes), the method branches depending on the type of qualified gesture that was determined in evaluation sub-step 325. In the present example, there are two predefined free-space gestures in the set of qualifying gestures, namely a first gesture corresponding to the user U forming an "open hand" with one of his hands, and a second gesture corresponding to the user U forming a "V-shape" with two fingers of one of his hands.
If the detected qualified gesture is of the "open hand" type, then, in a step 330, a preset timer (e.g. 5 seconds) is triggered and method 300 then waits until expiration of the timer before continuing with a step 335, in which picture-taking by image sensor 120 is triggered while the illumination unit 111 of gesture detection sensor 110 is deactivated. If, however, the detected qualified gesture is of the "V-shape" type, then step 335 immediately follows sub-step 325, thereby omitting step 330.
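The gesture-dependent trigger timing of sub-steps 325 to 335 can be sketched as a small dispatch function. The gesture labels, the injectable `sleep`, and the `take_picture` callback are assumptions for illustration:

```python
import time

PRESET_DELAY_S = 5  # example timer value from the description

def handle_qualified_gesture(gesture, take_picture, sleep=time.sleep):
    """Trigger picture-taking (step 335) according to the qualified
    gesture type determined in sub-step 325: "open hand" starts a preset
    timer first, while "V-shape" triggers immediately."""
    if gesture == "open_hand":
        sleep(PRESET_DELAY_S)  # step 330: wait for the preset timer
        take_picture()         # step 335
    elif gesture == "v_shape":
        take_picture()         # step 335 immediately, omitting step 330
```

Injecting `sleep` as a parameter keeps the timing behavior testable without real delays.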
Finally, image data being generated by image sensor 120 and representing the one or more pictures taken accordingly of the interior of vehicle 100 upon execution of triggering step 335 is stored in a memory or output at a respective interface, e.g. at a data interface or a monitor in vehicle 100. Method 300 then loops back to step 305 for another run.
While above at least one exemplary embodiment of the present invention has been described, it has to be noted that a great number of variations thereto exist. Furthermore, it is appreciated that the described exemplary embodiments only illustrate non-limiting examples of how the present invention can be implemented and that it is not intended to limit the scope, the application or the configuration of the herein-described apparatuses and methods. Rather, the preceding description will provide the person skilled in the art with constructions for implementing at least one exemplary embodiment of the invention, wherein it has to be understood that various changes of functionality and the arrangement of the elements of the exemplary embodiment can be made without deviating from the subject-matter defined by the appended claims and their legal equivalents.
LIST OF REFERENCE SIGNS
100 vehicle
105 system for free-space gesture-based triggering of picture-taking
110 free-space gesture detection sensor of TOF type
111 illumination unit for free-space gesture detection sensor 110
115 field-of-view of free-space gesture detection sensor 110
120 image sensor, photo camera
121 illumination unit for image sensor 120
125 field-of-view of image sensor 120
130 processing unit
135 communication unit
140 antenna for wireless communication link
145 vehicle-external processing platform, e.g. server
200 general concept underlying the method, e.g. method 300
205 gesture detection process
210 trigger process
215 picture-taking process
300 exemplary method of free-space gesture-based triggering of picture-taking
305-340 steps of method 300
Claims
1. A method (300) of triggering picture-taking of the interior of a vehicle, the method (300) comprising: detecting, using a free-space gesture detection sensor (110) being mounted in or on a vehicle, a specific predetermined free-space gesture (G) being performed by a human user in a field of view (115) of the free-space gesture detection sensor; and upon detecting said specific predetermined free-space gesture (G), generating a signal to trigger an image-sensor not coinciding with the free-space gesture detection sensor (110) and being mounted in or on the vehicle (100) such that its field of view (125) covers the interior of the vehicle (100) at least in parts, to take a picture or a sequence of pictures.
2. The method (300) of claim 1, wherein each of the free-space gesture detection sensor (110) and the image-sensor have an associated individual illumination unit (111; 121) for illuminating their respective field of view at least in parts during their respective sensing activity, the method (300) further comprising: de-synchronizing the activities of the illumination units of the free-space gesture detection sensor (110) and the image-sensor such that the illumination units are not simultaneously active.
3. The method (300) of any one of the preceding claims, wherein the free-space gesture detection sensor (110) is or comprises a time-of-flight, TOF, camera, which is used to detect said specific predetermined free-space gesture (G).
4. The method (300) of any one of the preceding claims, wherein the image sensor (120) is or comprises a 2D-photo camera, which is configured and used to take said picture in the visible or infrared part of the electromagnetic spectrum.
5. The method (300) of any one of the preceding claims, wherein detecting said specific predetermined free-space gesture (G) comprises at least one of: processing sensor data being generated by the free-space gesture detection sensor (110) to examine whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture (G); and communicating sensor data being generated by the free-space gesture detection sensor (110) to a vehicle-external processing platform and receiving in response result data being generated by the processing platform and indicating whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture (G).
6. The method (300) of claim 5, wherein the processing of the sensor data is performed separately from a process for controlling the free-space gesture detection sensor (110) and the image sensor (120).
7. The method (300) of any one of the preceding claims, wherein each one of a predetermined limited set of multiple different predetermined free-space gestures that may potentially be performed by the user is defined as qualifying as said specific predetermined free-space gesture (G).
8. The method (300) of claim 7, wherein: detecting the specific free-space gesture (G) comprises detecting which of the multiple predetermined free-space gestures in the set is being performed by the user; and the timing of triggering the image sensor (120) to take a picture or sequence of pictures is determined based on which of the multiple predetermined free-space gestures in the set is being detected.
9. The method (300) of any one of the preceding claims, wherein a gesture (G) being performed by the user is defined as qualifying as said specific predetermined free-space gesture (G) if it is one of: a gesture where the user spreads two fingers of a hand such as to exhibit a V-shape in the field of view of the free-space gesture detection sensor; a gesture where the user exhibits an open hand in the field of view of the free-space gesture detection sensor.
10. System for triggering picture-taking of the interior of a vehicle, the system comprising: a free-space gesture detection sensor (110) being mounted or configured to be mounted in or on a vehicle (100) and being configured to detect a specific predetermined free-space gesture (G) being performed by a human user; and an image-sensor not coinciding with the free-space gesture detection sensor (110) and being mounted or configured for being mounted in or on the vehicle (100) such that its field of view covers the interior of the vehicle (100) at least in parts;
wherein the system is configured to perform the method (300) of any one of the preceding claims.
11. The system of claim 10, wherein the free-space gesture detection sensor (110) is arranged relative to the image-sensor in such a way that its field of view within the interior of the vehicle (100) is located at least in parts outside of the field of view of the image sensor (120).
12. The system of claim 10 or 11, wherein at least one of the free-space gesture detection sensor (110) and the image sensor (120) is mounted or configured to be mounted at the vehicle (100) outside of a passenger compartment being located in the interior of the vehicle (100) and has a respective field of view extending at least in parts into the passenger compartment.
13. The system of claim 12, wherein said at least one of the free-space gesture detection sensor (110) and the image sensor (120) is mounted or configured to be mounted at an exterior mirror assembly of the vehicle.
14. The system of claim 10 or 11, wherein: the free-space gesture detection sensor (110) is mounted at or integrated into or configured to be mounted at or to be integrated into a roof module of the vehicle; and the image sensor (120) is mounted at or integrated into or configured to be mounted at or to be integrated into a rear-view mirror inside the vehicle.
15. A vehicle (100) comprising a system according to any one of claims 10 to 14 for triggering a picture-taking of the interior of the vehicle, at least in parts.
16. Computer program comprising instructions to cause the system of any one of claims 10 to 14 to perform the method (300) of any one of claims 1 to 9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
DE102020106003.3A (DE102020106003A1) | 2020-03-05 | 2020-03-05 | METHOD AND SYSTEM FOR TRIGGERING A PICTURE RECORDING OF THE INTERIOR OF A VEHICLE BASED ON THE DETERMINATION OF A GESTURE OF CLEARANCE
DE102020106003.3 | 2020-03-05 | |
Publications (1)
Publication Number | Publication Date
---|---
WO2021175749A1 (en) | 2021-09-10
Family
ID=74853627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/EP2021/054958 (WO2021175749A1) | Method and system for triggering picture-taking of the interior of a vehicle based on a detection of a free-space gesture | 2020-03-05 | 2021-03-01

Country Status (2)

Country | Link
---|---
DE | DE102020106003A1 (en)
WO | WO2021175749A1 (en)
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN113923355A | 2021-09-30 | 2022-01-11 | 上海商汤临港智能科技有限公司 | Vehicle, image shooting method, device, equipment and storage medium
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
DE102020106021A1 | 2020-03-05 | 2021-09-09 | Gestigon Gmbh | Method and system for operating a selection menu of a graphical user interface based on the detection of a rotating free-space gesture
CN114347788B | 2021-11-30 | 2023-10-13 | 岚图汽车科技有限公司 | Intelligent cabin man-machine interaction key control system based on service
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US20170313248A1 | 2014-09-19 | 2017-11-02 | Be Topnotch, Llc | Display rear passenger view on a display screen in vehicle
US20190246036A1 | 2018-02-02 | 2019-08-08 | Futurewei Technologies, Inc. | Gesture- and gaze-based visual data acquisition system
DE102018211908A1 | 2018-07-17 | 2020-01-23 | Audi Ag | Method for capturing a digital image of an environment of a motor vehicle and motor vehicle with an image capturing device
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2015011703A1 | 2013-07-21 | 2015-01-29 | Pointgrab Ltd. | Method and system for touchless activation of a device
US9712741B2 | 2014-09-19 | 2017-07-18 | Be Topnotch, Llc | Smart vehicle sun visor
US11436844B2 | 2017-04-28 | 2022-09-06 | Klashwerks Inc. | In-vehicle monitoring system and devices
- 2020-03-05: DE application DE102020106003.3A filed (publication DE102020106003A1), status: active, Pending
- 2021-03-01: PCT application PCT/EP2021/054958 filed (publication WO2021175749A1), status: active, Application Filing
Also Published As
Publication number | Publication date
---|---
DE102020106003A1 | 2021-09-09
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21709366; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21709366; Country of ref document: EP; Kind code of ref document: A1