WO2017123226A1 - Driver-identification system and method - Google Patents

Driver-identification system and method

Info

Publication number
WO2017123226A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
human
modified
driver
vehicle
Application number
PCT/US2016/013351
Other languages
French (fr)
Inventor
Hui Lin
Madeline Jane SCHRIER
Vidya Nariyambut Murali
Gintaras Vincent Puskorius
Original Assignee
Ford Global Technologies, Llc
Application filed by Ford Global Technologies, Llc filed Critical Ford Global Technologies, Llc
Priority to PCT/US2016/013351 priority Critical patent/WO2017123226A1/en
Publication of WO2017123226A1 publication Critical patent/WO2017123226A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 — Classification, e.g. identification
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/59 — Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60R — VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00 — Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02 — Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/037 — Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A system for identifying a human driver is disclosed. The system may include a vehicle with an onboard camera. An image of a human seated in the driver's seat of the vehicle may be captured by the camera. A database remote to the vehicle may store the image. A computing device that is also remote to the vehicle may obtain the image from the database and generate a plurality of modified images based thereon. Each modified image may comprise the image modified to obscure the human in a different manner. The computing device may be further programmed to generate an identification model by training on at least the image and the plurality of modified images. The identification model may be stored onboard the vehicle and used by a computer system onboard the vehicle to determine if subsequent images captured by the camera correspond to the same human.

Description

DRIVER-IDENTIFICATION SYSTEM AND METHOD
BACKGROUND
FIELD OF THE INVENTION
[001] This invention relates to vehicular systems and more particularly to systems and methods for using facial recognition technologies to identify a driver in a vehicle.
BACKGROUND OF THE INVENTION
[002] When a driver occupying a driver's seat of a vehicle is properly identified, various changes, customizations, or preferences corresponding to that driver may be automatically implemented. Accordingly, what is needed are systems and methods that enable or support robust identification of a driver occupying a driver's seat of a vehicle.
BRIEF DESCRIPTION OF THE DRAWINGS
[003] In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:
[004] Figure 1 is a schematic diagram illustrating one embodiment of a system in accordance with the present invention;
[005] Figure 2 is a schematic block diagram illustrating one embodiment of the onboard computer system of Figure 1;
[006] Figure 3 is a schematic block diagram illustrating one embodiment of the remote database of Figure 1;
[007] Figure 4 is a schematic block diagram illustrating one embodiment of the remote computer system of Figure 1;
[008] Figure 5 is a schematic diagram illustrating an original image and selected modified images that may be generated based on the original image by the modification module of Figure 4;
[009] Figure 6 is a schematic block diagram of one embodiment of a driver- identification method in accordance with the present invention; and
[0010] Figure 7 is a schematic block diagram of one embodiment of an identification-model-generation method in accordance with the present invention.
DETAILED DESCRIPTION
[0011] It will be readily understood that the components of the present invention, as generally described and illustrated in the Figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the invention, as represented in the Figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of certain examples of presently contemplated embodiments in accordance with the invention. The presently described embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout.
[0012] Referring to Figure 1, in selected embodiments, a system 10 in accordance with the present invention may identify a human driver seated within a vehicle 12. By so doing, a system 10 may provide, enable, or support various features or options relating to personal mobility. For example, various different drivers 14 may drive a particular vehicle 12. By identifying which driver 14 is driving at a particular time, a system 10 may enable or support personal customization for each of the different drivers 14.
[0013] Such customizations may include or relate to driver-seat configurations, wireless pairing (e.g., BLUETOOTH) settings, vehicle head-up-display (HUD) wallpaper or content configurations, volume settings for audio devices, driving behavior or preference settings, or the like or any combinations or sub-combinations thereof. (The term "sub-combination" as used in this specification means a combination comprising less than all of the listed components or elements.) Accordingly, by identifying a driver 14, a system 10 in accordance with the present invention may enable other systems of a vehicle 12 to automatically adjust to or incorporate customizations and preferences of each driver 14.
[0014] A system 10 in accordance with the present invention may identify a driver seated within a vehicle 12 in any suitable manner. For example, a system 10 may be a facial recognition system embodied as hardware, software, or some combination thereof. In certain embodiments, a system 10 may include an onboard computer system 16 (i.e., a computer system 16 that is carried onboard the vehicle 12), an onboard camera 18 (i.e., a camera 18 that is carried onboard the vehicle 12), a remote database 20 (i.e., a database 20 that is not carried onboard the vehicle 12), a remote computer system 22 (i.e., a computer system 22 that is not carried onboard the vehicle 12), or the like or any combination or sub-combination thereof.
[0015] An onboard camera 18 may form part of an onboard computer system 16. Alternatively, an onboard camera 18 may be operably connected to an onboard computer system 16. An onboard camera 18 may be positioned to face a human driver 14 seated in a driver's seat of the vehicle 12. Suitable locations for mounting such a camera 18 may include on a steering wheel, dashboard, pillar (e.g., an A-pillar), headliner, sun visor, or the like of the vehicle 12.
[0016] One or more images captured by an onboard camera 18 may be passed (e.g., in the form of one or more digital photograph files) by an onboard computer system 16 directly into a communication system 24. For example, an onboard computer system 16 may feed one or more images into a cellular telephone network by sending one or more signals directly to a nearby cellular tower 26. Alternatively, an onboard computer system 16 may feed one or more images to a satellite or other wireless communication hub (e.g., a local area wireless computer network such as a WiFi network located at the home of a driver 14). In still other embodiments, an onboard computer system 16 may feed one or more images into a communication system 24 through a less direct path.
[0017] For example, an onboard computer system 16 may feed one or more images to a mobile telephone (e.g., of a driver 14) via a direct, hardwire link, a wireless pairing (e.g., a BLUETOOTH pairing), or the like. The mobile telephone may then feed the one or more images into a cellular telephone network by sending one or more signals to a nearby cellular tower 26. Alternatively, the mobile telephone may feed the one or more images into a local area wireless computer network (e.g., a WiFi network) located at the home, place of employment, or the like of a driver 14. Thus, one or more images captured by an onboard camera 18 may be passed by an onboard computer system 16 into a communication system 24 in real time (e.g., substantially immediately after they are captured or taken) or later (e.g., minutes or hours later) when a suitable network connection becomes available.
[0018] In selected embodiments, one or more images fed into a communications system 24 by an onboard computer system 16 may be routed through a computer network 28 (e.g., an intranet, an extranet, the Internet, or the like or any combinations or subcombinations thereof) to a remote database 20. Alternatively, one or more images fed in by an onboard computer system 16 may be routed through a computer network 28 to a remote computer system 22, processed by the remote computer system 22, fed by the remote computer system 22 back into the computer network 28, and routed to a remote database 20.
[0019] The processing performed by a remote computer system 22 on one or more images may include assigning identifying names thereto. Alternatively, an onboard computer system 16 may assign identifying names to the images before they are sent to a database 20 or remote computer system 22. In either case, the name assigned to each such image may be unique within a system 10.
[0020] In selected embodiments, the assigned names may correspond or be linked to an identification corresponding to the vehicle 12 from which the image originates. For example, a name assigned to an image may include a vehicle identification number (VIN), or some portion thereof, with a serial number appended thereto. The serial number may assist in differentiating among the various images corresponding to (e.g., originating within) a particular vehicle 12.
[0021] A system 10 in accordance with the present invention may store images (e.g., photograph files) corresponding to more than one vehicle 12. For example, one or more first photograph files may be sent to a remote database 20 by a first computer system 16 onboard a first vehicle 12, while one or more second photograph files may be sent to the remote database 20 by a second computer system 16 onboard a second vehicle 12. A system 10 in accordance with the present invention may be scaled to receive photograph files from any number of different vehicles 12.
[0022] Referring to Figure 2, in selected embodiments, an onboard computer system 16 may include one or more processors 30, memory 32, a user interface 34, an onboard camera 18, a transmitter 36, a receiver 38, other hardware 40, or the like or any combination or sub-combination thereof. The memory 32 may be operably connected to the one or more processors 30 and store the computer software. This may enable the one or more processors 30 to execute the computer software.
[0023] A user interface 34 of an onboard computer system 16 may enable a user (e.g., technician, engineer, employee or contractor of a manufacturer of the vehicle 12, contractor hired to repair or diagnose the vehicle 12, driver 14 of the vehicle 12, or the like or any combination or sub-combination thereof) to interact with, run, customize, or control various aspects of an onboard computer system 16. For example, a user interface 34 may enable a new driver 14 to initiate a "photo session" with an onboard camera 18 in order to capture one or more images suitable for future facial recognition of the driver 14. In selected embodiments, a user interface 34 of an onboard computer system 16 may include one or more ports, keypads, keyboards, touch screens, pointing devices, or the like or any combination or sub-combination thereof.
[0024] An onboard camera 18 may be a digital camera selected to capture images in the form of digital photograph files having a desired resolution. The resolution of the camera 18 may be high enough to capture sufficient detail to enable facial recognition analysis, yet low enough to avoid generating photograph files that are too large to send via wireless communication (e.g., wireless communication with a nearby cellular tower 26). In selected embodiments, an onboard camera 18 may have a pixel count within a range from about 1 megapixel to about 4 megapixels. As the technology deployed in real-world wireless communication systems advances, an onboard camera 18 may have a pixel count greater than 4, 8, or 12 megapixels or even higher.
[0025] A transmitter 36 and receiver 38 may enable a corresponding onboard computer system 16 to send signals to, and receive signals from, a communication system 24. For example, a transmitter 36 and receiver 38 may enable a corresponding onboard computer system 16 to send signals to, and receive signals from, a nearby cellular tower 26, an orbiting satellite, other wireless communication hub (e.g., WiFi network), or the like.
[0026] In selected embodiments, the memory 32 of an onboard computer system 16 may store a camera-control module 42, communication module 44, recognition module 46, interpretation module 48, implementation module 50, other data or software 52 (e.g., operating system software), or the like or any combinations or sub-combinations thereof. Accordingly, in certain embodiments, an onboard computer system 16 may include a camera-control module 42, communication module 44, recognition module 46, interpretation module 48, and implementation module 50. In other embodiments, however, an onboard computer system 16 may include less than that.
[0027] For example, an onboard computer system 16 may include a camera-control module 42, communication module 44, and implementation module 50. In such embodiments, a recognition module 46 and/or interpretation module 48 and the executables associated therewith may be run on a different computer system. For example, a recognition module 46 and/or interpretation module 48 and the executables associated therewith may be stored in the memory of a remote computer system 22 and run on the remote computer system 22.
[0028] A camera-control module 42 may comprise executables that control when a corresponding onboard camera 18 takes pictures. In certain embodiments, an onboard camera 18 may act as a sensor. Accordingly, the executables of a camera-control module 42 may monitor the output of an onboard camera 18 in order to determine when an image suitable for facial recognition analysis should be taken.
[0029] For example, the executables of a camera-control module 42 may instruct an onboard camera 18 to take a photograph at the time the vehicle 12 is started or when a photograph is requested by a driver 14 (e.g., by a new driver 14 in an initiation or set-up process). Alternatively, or in addition thereto, the executables of a camera-control module 42 may instruct an onboard camera 18 to take a photograph any time the driver's seat of the vehicle 12 transitions from unoccupied to occupied.
[0030] In selected embodiments, such a transition from unoccupied to occupied may be detected solely by the executables of a camera-control module 42 monitoring outputs of an onboard camera 18. Alternatively, or in addition thereto, a transition from unoccupied to occupied may be detected by the executables of a camera-control module 42 monitoring outputs of other hardware 40 (e.g., a pressure sensor in a driver's seat or the like) that are indicative of an unoccupied driver's seat being filled.
[0031] A communication module 44 may comprise executables that enable or support communication between an onboard computer system 16 and at least one of a remote database 20 and a remote computer system 22. In selected embodiments, a communication module 44 may be or include a background application running on an onboard computer system 16 to provide wireless (e.g., cellular or the like) communication between an onboard computer system 16 and a wireless communication hub (e.g., a nearby cellular tower 26, WiFi network), a mobile telephone of a driver 14, or the like.
[0032] A recognition module 46 may comprise, store, and/or apply one or more driver-identification models 54. In selected embodiments, a driver-identification model 54 may be a parameterized model developed through machine learning to quantify how closely an input photograph corresponds to a particular driver 14. Accordingly, when applied by the executables of a recognition module 46 to a digital photograph, a driver-identification model 54 may output a score indicative of how likely it is that the digital photograph is a photograph of a particular person.
[0033] For example, in a hypothetical, exemplary embodiment, a recognition module 46 may include two driver-identification models 54. A first driver-identification model 54 may correspond to "Bob," while a second driver-identification model 54 may correspond to "Alice." Accordingly, when an onboard camera 18 is instructed to take a picture by a camera-control module 42, a digital photograph of an unidentified driver 14 may be produced. The executables of a recognition module 46 may apply the first driver-identification model 54 to the digital photograph to quantify how likely it is that the unidentified driver 14 is Bob. Similarly, the executables of a recognition module 46 may apply the second driver-identification model 54 to the digital photograph to quantify how likely it is that the unidentified driver 14 is Alice.
[0034] Alternatively, a driver-identification model 54 may be a parameterized model developed through machine learning to quantify how closely an input photograph corresponds to a plurality of drivers 14. Accordingly, when applied by the executables of a recognition module 46 to a digital photograph, a driver-identification model 54 may output a plurality of scores, each score thereof indicative of how likely it is that the digital photograph is a photograph of a different person.
[0035] For example, in another hypothetical, exemplary embodiment, a recognition module 46 may include a single driver-identification model 54. The driver-identification model 54 may correspond to both "Bob" and "Alice." Accordingly, when an onboard camera 18 is instructed to take a picture by a camera-control module 42, a digital photograph of an unidentified driver 14 may be produced. The executables of a recognition module 46 may apply the driver-identification model 54 to the digital photograph to produce first and second scores. The first score may quantify how likely it is that the unidentified driver 14 is Bob, while the second score may quantify how likely it is that the unidentified driver 14 is Alice.
[0036] An interpretation module 48 may include executables that interpret the one or more scores output when a recognition module 46 applies one or more driver-identification models 54 to a digital photograph of an unidentified driver 14. In certain embodiments, one or more driver-identification models 54 may produce one or more scores in a particular range. A score at one end of the range may correspond to or indicate a 0% chance of a match. Conversely, a score at the other end of the range may correspond to or indicate a 100% chance of a match. Accordingly, the executables of an interpretation module 48 may interpret those one or more scores and determine whether a match should be declared.
[0037] For example, in a first hypothetical situation, the one or more scores output when a recognition module 46 applies one or more driver-identification models 54 to a digital photograph of an unidentified driver 14 may be in the range from 0 to 1 with a score of 0.75 for Bob and 0.16 for Alice. In such a situation, an interpretation module 48 may determine that the unidentified driver 14 is Bob. In a second hypothetical situation, the one or more scores output may be a 0.25 for Bob and 0.81 for Alice. In such a situation, an interpretation module 48 may determine that the unidentified driver 14 is Alice. In a third hypothetical situation, the one or more scores output may be a 0.27 for Bob and a 0.30 for Alice. In such a situation, an interpretation module 48 may determine that the unidentified driver 14 is neither Bob nor Alice.
[0038] In selected embodiments, the criteria used by an interpretation module 48 to make a determination of identity may include one or more threshold comparisons. For example, an interpretation module 48 may compare a score output by a recognition module 46 to a predetermined "match" threshold. The executables of an interpretation module 48 may characterize a score above that match threshold as being a match. Alternatively, or in addition thereto, an interpretation module 48 may compare a score output by a recognition module 46 to a predetermined "no match" threshold. The executables of an interpretation module 48 may characterize a score below that no match threshold as not being a match.
[0039] In selected embodiments, an interpretation module 48 may ensure that all thresholds are properly met by the available scores before declaring the identity of an unidentified driver 14. For example, in the first hypothetical situation set forth above, an interpretation module 48 may ensure that the 0.75 score for Bob is above a match threshold and that the 0.16 score for Alice is below a no match threshold before determining that the unidentified driver 14 is Bob.
[0040] Similarly, in the second hypothetical situation, an interpretation module 48 may ensure that the 0.25 score for Bob is below a no match threshold and that the 0.81 score for Alice is above a match threshold before determining that the unidentified driver 14 is Alice. Finally, in the third hypothetical situation, an interpretation module 48 may ensure that the 0.27 score for Bob and the 0.30 score for Alice are both below the no match threshold before determining that the unidentified driver 14 is neither Bob nor Alice (e.g., that the unidentified driver 14 is a new driver 14).
[0041] If less than all thresholds are properly met by the available scores, an interpretation module 48 may declare the identity of an unidentified driver 14 as unknown. For example, when a score for Bob is below a no match threshold and the score for Alice is above a no match threshold, but below a match threshold, an interpretation module 48 may declare that the unidentified driver 14 is indeterminate. Similarly, when a score for Bob is above a match threshold and the score for Alice is above a no match threshold, but below a match threshold, an interpretation module 48 may declare that the unidentified driver 14 is indeterminate.
[0042] An implementation module 50 may comprise executables that control how an onboard computer system 16 responds to the determinations of an interpretation module 48. For example, if an interpretation module 48 determines that an unidentified driver 14 is Bob, an implementation module 50 may institute or request the implementation of one or more customizations or preferences of Bob. Conversely, if an interpretation module 48 determines that an unidentified driver 14 is Alice, an implementation module 50 may institute or request the implementation of one or more customizations or preferences of Alice.
[0043] If an interpretation module 48 determines that an unidentified driver 14 is a new driver 14, an implementation module 50 may begin, institute, or request the collection of information regarding the preferences or the like of the new driver 14.
Additionally, an implementation module 50 may work with a communication module 44 to send a photograph of the new driver 14 to at least one of a remote database 20 and a remote computer system 22 so that a driver-identification model 54 covering the new driver 14 may be generated. Accordingly, in the future, the customizations or preferences may be implemented when the new driver 14 is recognized as again driving the vehicle 12.
[0044] On the other hand, if an interpretation module 48 determines that an unidentified driver 14 is of indeterminate identity, an implementation module 50 may institute or request the initiation of one or more additional attempts to identify the driver 14. For example, an implementation module 50 may work with a camera-control module 42 to take another picture of the unidentified driver 14. The resulting digital photograph may be analyzed like the one or more previous photographs of the driver 14 to see if the identity of the driver 14 may be determined.
[0045] Referring to Figure 3, a database 20 in accordance with the present invention may store data, files, or the like needed by a system 10. In selected embodiments, a database 20 may store one or more digital photograph files such as recognized photograph files 56, unrecognized photograph files 58, other photograph files 60, or the like or any combinations or sub-combinations thereof. Alternatively, or in addition thereto, a database 20 may store one or more driver-identification models 54, other data 62, or the like or any combinations thereof.
[0046] A recognized photograph file 56 stored within a database 20 may be a digital picture of a driver 14 (e.g., an image of a driver 14 in a driver's seat of a vehicle 12). The driver 14 in a recognized photograph file 56 may be known by the system 10. That is, the system 10 may include at least one driver-identification model 54 specifically trained to identify the driver 14 pictured in a recognized photograph file 56.
[0047] In selected embodiments, a recognized photograph file 56 may comprise, or be linked within the database 20 to, certain data such as vehicle identification 64, person identification 66, date and time information 68, other data 62, or the like or any combinations or sub-combinations thereof. Vehicle-identification data 64 may uniquely identify within a system 10 the vehicle 12 in which the corresponding recognized photograph file 56 was generated. Similarly, person-identification data 66 may uniquely identify within a system 10 the driver 14 pictured in the corresponding recognized photograph file 56. Date and time information 68 may identify when the corresponding recognized photograph 56 was generated.
[0048] An unrecognized photograph file 58 stored within a database 20 may also be a digital picture of a driver 14 (e.g., an image of a driver 14 in a driver's seat of a vehicle 12). The driver 14 in an unrecognized photograph file 58 may be unknown by the system 10. That is, the system 10 may not include or know of at least one driver-identification model 54 specifically trained to identify the driver 14 pictured in an unrecognized photograph file 58.
[0049] In selected embodiments, an unrecognized photograph file 58 may comprise, or be linked within the database 20 to, certain data such as vehicle identification 64, date and time information 68, other data 62, or the like or any combinations or sub-combinations thereof. Vehicle-identification data 64 may uniquely identify within a system 10 the vehicle 12 in which the corresponding unrecognized photograph file 58 was generated. Date and time information 68 may identify when the corresponding unrecognized photograph 58 was generated.
[0050] The various recognized photograph files 56 and unrecognized photograph files 58 stored within a database 20 may originate within various vehicles 12 forming part of a system 10 in accordance with the present invention. That is, pictures captured by various cameras 18 within various vehicles 12 may be packaged as recognized or unrecognized photograph files 56, 58 and passed from the vehicles 12 to the database 20 for storage. One or more other photograph files 60 stored within a system 10 may originate with different sources.
[0051] For example, in selected embodiments, one or more other photograph files 60 may be or comprise "head shots" or the like generated by one or more cameras that are not within any vehicle 12 or that do not form part of a system 10 in accordance with the present invention. Thus, the one or more other photograph files 60 may not correspond to drivers 14 that are to be identified within a system 10. Rather, the one or more photographs 60 may provide a rich array of training data. Accordingly, by using the one or more other photograph files 60 (e.g., a large number of other photograph files 60), a system 10 in accordance with the present invention may provide more robust driver-identification models 54.
[0052] In selected embodiments, one or more driver-identification models 54 stored within a database 20 may comprise, or be linked within the database 20 to, certain data such as vehicle identification 64, person identification 66, other data 62, or the like or any combinations or sub-combinations thereof. Vehicle-identification data 64 may uniquely identify within a system 10 the vehicle 12 to which the driver-identification model 54 pertains. Similarly, person-identification data 66 may uniquely identify within a system 10 the driver 14 or drivers 14 the corresponding driver-identification model 54 is trained to recognize. Other data 62 associated with a driver-identification model 54 may include a version identification or identifier or the like that may assist in determining a more current and updated version of the driver-identification model 54 or models 54 corresponding to a particular vehicle 12 and/or driver 14.
[0053] In certain embodiments, other data 62 that may be stored in a database 20 or associated or linked to various records 54, 56, 58 within a database 20 may be useful in linking two or more vehicles 12 together. For example, a particular household may have more than one vehicle 12. Thus, the various drivers 14 within that household may at various times drive each of those multiple vehicles 12. Accordingly, the driver-identification model 54 or models 54 generated for one vehicle 12 corresponding to that household may be useful within another vehicle 12 corresponding to that household.
[0054] Accordingly, in selected embodiments, other data 62 may enable a system 10 to determine when two or more vehicles 12 pertain to the same household. For example, other data 62 may include data identifying a location of a vehicle 12, telephone numbers of one or more drivers 14, or the like or combinations thereof. Thus, when the other data 62 indicates that two or more vehicles 12 pertain to the same household, a system 10 (e.g., a remote computer system 22) may provide or apply the same driver-identification models 54 to those vehicles 12.
[0055] Referring to Figures 4 and 5, in selected embodiments, a remote computer system 22 may include one or more processors 70, memory 72, a user interface 74, other hardware 76, or the like or any combination or sub-combination thereof. The memory 72 may be operably connected to the one or more processors 70 and store the computer software. This may enable the one or more processors 70 to execute the computer software.
[0056] A user interface 74 of a remote computer system 22 may enable a user (e.g., technician, engineer, employee or contractor of a manufacturer of the vehicle 12, contractor hired to repair or diagnose the vehicle 12, or the like or any combination or sub-combination thereof) to interact with, run, customize, or control various aspects of a remote computer system 22. In selected embodiments, a user interface 74 of a remote computer system 22 may include one or more ports, keypads, keyboards, touch screens, pointing devices, or the like or any combination or sub-combination thereof.
[0057] In selected embodiments, the memory 72 of a remote computer system 22 may store a communication module 78, modification module 80, comparison module 88, training module 90, other data or software 92 (e.g., operating system software), or the like or any combinations or sub-combinations thereof. Accordingly, in certain embodiments, a remote computer system 22 may include a communication module 78, modification module 80, comparison module 88, and training module 90. In other embodiments, however, a remote computer system 22 may include less than that. For example, a modification module 80 and/or comparison module 88 may be omitted from a remote computer system 22.
[0058] A communication module 78 may comprise executables that enable or support communication between a remote computer system 22 and at least one of a remote database 20 and an onboard computer system 16. In selected embodiments, a communication module 78 may be or include a background application running on a remote computer system 22 to provide such communication.
[0059] A modification module 80 may comprise executables that modify an image 82 in order to obtain one or more modified images 84 therefrom or based thereon. An image 82 may be the picture represented within or conveyed by a photograph file (e.g., an unrecognized photograph file 58). Accordingly, an image 82 may be the picture captured by an onboard camera 18 and include a representation 86 of at least part of the face of a driver 14 (e.g., an unknown driver 14).
[0060] In selected embodiments, each modified image 84 may comprise the image 82 modified to obscure the human driver 14 in a different manner. The modifications applied by a modification module 80 to an image 82 may include adding glasses, adding sunglasses, changing style of glasses, removing glasses, adding a hat, removing a hat, adding facial hair, removing facial hair, changing length of facial hair, changing style of facial hair (e.g., changing from beard to mustache or goatee, etc.), changing hairstyle, changing length of hair, changing orientation of face, changing lighting intensity or level, changing (e.g., degrading) focus level, or the like or any combinations or sub-combinations thereof.
[0061] For example, an image 82 may be of a human driver 14 at a first clarity level. Accordingly, a modified image 84a corresponding thereto may comprise the image 82 modified to be at a second clarity level. The second clarity level may be different (and typically lower or more degraded) than the first clarity level. For example, the first clarity level may correspond to or be a first level of focus, first level of lighting, first orientation with respect to the onboard camera 18, or the like or any combinations or subcombinations thereof. Accordingly, the second clarity level may correspond to or be a second level of focus, second level of lighting, second orientation with respect to the onboard camera 18, or the like or any combinations or sub-combinations thereof.
[0062] Alternatively, or in addition thereto, an image 82 may be of a human driver 14 in a first state. Accordingly, a modified image 84b, 84c corresponding thereto may comprise the image 82 modified to be in a second, different or opposite state. For example, an image 82 may correspond to a driver 14 in a first state with respect to wearing glasses, wearing a hat, facial hair, wearing a particular hairstyle, or the like or any combinations or sub-combinations thereof. Thus, a modified image 84b, 84c may comprise the image 82 modified to include the human driver 14 in a second, opposite state with respect to wearing glasses, wearing a hat, facial hair, wearing a particular hairstyle, or the like or any combinations or sub-combinations thereof.
[0063] A comparison module 88 may comprise executables that identify one or more "close negatives" within a database 20. A close negative may be a photograph (e.g., recognized photograph file 56, other photograph file 60, or the like) that conveys an image 82 of a human that is not the driver 14 for whom a driver-identification model 54 is being developed, but has a similar appearance to that driver 14. Thus, the close negatives identified by a comparison module 88 may be heavily weighted as important negative images and provide for a more robust driver-identification model 54.
[0064] A training module 90 may generate one or more driver-identification models 54. In selected embodiments, a training module 90 may comprise executables performing machine learning using one or more images 82, one or more modified images 84, one or more close negatives, one or more other negatives, or any combinations or sub-combinations thereof as training data.
[0065] Referring to Figure 6, a system 10 (e.g., an onboard computer system 16, or an onboard computer system 16 acting in cooperation with a remote computer system 22 and/or remote database 20) may support, enable, or execute a driver-identification process 94 in accordance with the present invention. In selected embodiments, such a process 94 may begin when an onboard computer system 16 senses 96 the presence of a driver 14. For example, an onboard computer system 16 may sense 96 that a driver's seat of a vehicle 12 has transitioned from an unoccupied condition to an occupied condition. Accordingly, at some point after (e.g., immediately after, within a few seconds, or the like) sensing 96 the presence of a driver 14, an onboard camera 18 may be activated 98 to collect one or more images 82 of a driver 14.
[0066] A determination may be made 100 as to whether a driver-identification model 54 is present or available to analyze the one or more images 82. If no driver-identification model 54 is present, one or more of the images 82 collected by the onboard camera 18 may be stored 102 in a remote database 20. Meanwhile, the customizations, preferences, or the like of the driver 14 may be recorded 104. Accordingly, the customizations and preferences of the driver 14 may be used or implemented should he or she later be found to be driving the vehicle 12.
[0067] When one or more driver-identification models 54 are obtained 106 or it is determined 100 that one or more driver-identification models 54 are available for use, the one or more driver-identification models 54 may be used to analyze 108 one or more images 82 collected by an onboard camera 18. This analysis 108 may generate one or more scores or the like that may be interpreted in order to determine 110 whether the driver 14 in the driver's seat of the vehicle 12 is a known driver 14.
[0068] If the driver 14 is not known, one or more of the images 82 collected by the onboard camera 18 may be stored 102 in a remote database 20 and the customizations, preferences, or the like of the driver 14 may be recorded 104. Eventually, a driver-identification model 54 trained to recognize the unknown driver 14 may be obtained 106 for future use. Conversely, if the driver 14 is known, the one or more customizations or preferences of the driver 14 may be retrieved 112 and implemented 114. The system 10 may then wait for a driver 14 to again be sensed 96 (e.g., by detecting that a driver's seat of a vehicle 12 has transitioned from an unoccupied condition to an occupied condition).
[0069] In selected embodiments, one or more driver-identification models 54 may be periodically updated. Accordingly, even after a driver-identification model 54 capable of recognizing a particular driver 14 is obtained 106 or available for use, additional images of that driver 14 may be periodically collected and stored 116 in a remote database 20. These more current images of that driver 14 may be used to obtain 118, generate 118, train 118, or retrain 118 a driver-identification model 54. Accordingly, a system 10 in accordance with the present invention may stay current and periodically improve or update one or more driver-identification models 54.
[0070] The various steps of this method 94 in accordance with the present invention may be cyclical in nature. That is, they may be repeated. However, this repeating or cycling through the various steps need not be continuous or immediate. Certain steps may occur over or be delayed by a significant period of time (e.g., minutes, hours, days, etc.). For example, the process of recording 104 the preferences of a driver 14 may require or involve hours of monitoring. Similarly, the delay between activating 98 an onboard camera 18 to collect an image 82 of a particular driver 14 and obtaining 106 a driver-identification model 54 trained to recognize that driver 14 may be several hours or even days. Thus, the flow of a method 94 in accordance with the present invention may be more periodic than continuous.
[0071] Referring to Figure 7, a system 10 (e.g., a remote computer system 22, or a remote computer system 22 acting in cooperation with an onboard computer system 16 and/or remote database 20) may support, enable, or execute a model-generation process 120 in accordance with the present invention. In selected embodiments, such a process 120 may begin when one or more original images 82 (e.g., one or more digital photograph files containing or conveying one or more images 82 of one or more drivers 14) are obtained 122 from a remote database 20. The one or more images 82 may be used to generate 124 a plurality of modified images 84. Alternatively, or in addition thereto, one or more "similar" images (e.g., close negatives) may be obtained 126. Such similar images may be obtained 126 from a remote database 20 and correspond to one or more persons that are not pictured in the original images 82, but that are similar in appearance to the one or more drivers 14 pictured in the original images 82.
[0072] Once obtained 122, 126 or generated 124, the one or more original images 82, modified images 84, similar images, or the like or any combinations or sub-combinations thereof may be used 128 to train or generate one or more driver-identification models 54. The one or more driver-identification models 54 may be specifically trained to identify the one or more drivers 14 pictured in the original images 82. The modified images 84 and/or similar images may be included in the training to produce more robust driver-identification models 54. Once the one or more driver-identification models 54 are trained 128 or generated 128, they may be stored 130 in a remote database 20 to be downloaded by an onboard computer system 16 or to be used by one or more recognition modules 46 located elsewhere.
[0073] The flowcharts in Figures 6 and 7 illustrate the functionality and operation of possible implementations of systems, methods, and computer program products according to one embodiment of the present invention. In this regard, selected blocks or combinations of blocks in the flowcharts may represent a module, segment, or portion of code (e.g., a module, segment, or portion of code stored in a non-transitory medium), which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that certain blocks of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
[0074] It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. In certain embodiments, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Alternatively, certain steps or functions may be omitted if not needed.
[0075] An example of the invention may include one or more of the following steps, functions, or structures:
[0076] a vehicle having onboard a camera and a computer system;
[0077] a database storing an image of a human captured by the camera; and
[0078] a computing device programmed to obtain the image and generate an identification model by training on at least the image.
[0079] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the computing device further programmed to generate a plurality of modified images, each comprising the image modified to obscure the human in a different manner.
[0080] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the computing device further programmed to generate the identification model by training on at least the image and the plurality of modified images.
[0081] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the image comprising a digital photograph of the human as the human is seated in a driver's seat of the vehicle.
[0082] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the image being of the human in a first state with respect to wearing glasses and at least one image of the plurality of modified images comprising the image modified to include the human in a second, opposite state with respect to wearing the glasses.
[0083] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the image being of the human in a first state with respect to wearing a hat and at least one image of the plurality of modified images comprising the image modified to include the human in a second, opposite state with respect to wearing the hat.
[0084] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the image being of the human at a first lighting level and at least one image of the plurality of modified images comprising the image modified to be at a second lighting level that is darker than the first lighting level.
[0085] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the image being of the human at a first clarity level and at least one image of the plurality of modified images comprising the image modified to be at a second clarity level that is lower than the first clarity level.
[0086] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the database being off board with respect to the vehicle.
[0087] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the computing device being off board with respect to the vehicle.
[0088] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the database further storing a plurality of different images, each comprising an image of a different human that is not the human.
[0089] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the computing device further programmed to identify within the plurality of different images at least one similar image corresponding to a first different human having one or more features similar to those of the human.
[0090] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the computing device further programmed to generate the identification model by training on at least the image, the plurality of modified images, and the at least one similar image.
[0091] Another example of the invention may include one or more of the following steps, functions, or structures:
[0092] a first vehicle having onboard a first camera and a first computer system;
[0093] a second vehicle having onboard a second camera and a second computer system; and
[0094] a remote database off board with respect to the first and second vehicles.
[0095] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the remote database storing a first image of a first human captured by the first camera as the first human is seated in a driver's seat of the first vehicle.
[0096] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the remote database storing a second image of a second human captured by the second camera as the second human is seated in a driver's seat of the second vehicle.
[0097] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the system further comprising a third computer system off board with respect to the first and second vehicles.
[0098] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the third computer system being programmed to obtain the first and second images.
[0099] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the third computer system being further programmed to generate a plurality of modified first images, each comprising the first image modified to obscure the first human in a different manner.
[00100] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the third computer system being further programmed to generate a first identification model by training on at least the first image, the plurality of modified first images, and the second image.
[00101] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the first vehicle, wherein the first computer system stores the first identification model.
[00102] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the third computer system being further programmed to generate a plurality of modified second images, each comprising the second image modified to obscure the second human in a different manner.
[00103] The example of the invention may also include one or more steps, functions, or structures set forth above combined with the third computer system being further programmed to generate a second identification model by training on at least the second image and the plurality of modified second images.
[00104] Another example of the invention may include one or more of the following steps, functions, or structures:
[00105] capturing, by a camera onboard a vehicle, a first image of a first human seated in a driver's seat of the vehicle;
[00106] sending, by a first computer system onboard the vehicle, the first image to a database remote from the vehicle; and
[00107] storing the first image within the database.
[00108] The example of the invention may also include one or more steps, functions, or structures set forth above combined with obtaining, by a second computer system remote from the vehicle, the first image from the database.
[00109] The example of the invention may also include one or more steps, functions, or structures set forth above combined with generating, by the second computer system, a plurality of modified first images, each comprising the first image modified to obscure the first human in a different manner.
[00110] The example of the invention may also include one or more steps, functions, or structures set forth above combined with generating, by the second computer system, a first identification model by training on at least the first image and the plurality of modified first images.
[00111] The example of the invention may also include one or more steps, functions, or structures set forth above combined with storing, by the first computer system, the first identification model onboard the vehicle.
[00112] The example of the invention may also include one or more steps, functions, or structures set forth above combined with capturing, by the camera, a second image of an unidentified human seated in the driver's seat of the vehicle.
[00113] The example of the invention may also include one or more steps, functions, or structures set forth above combined with using, by the first computer system, the first identification model to identify the unidentified human as the first human.
[00114] The example of the invention may also include one or more steps, functions, or structures set forth above combined with capturing, by the camera, a second image of an unidentified human seated in the driver's seat of the vehicle.
[00115] The example of the invention may also include one or more steps, functions, or structures set forth above combined with using, by the first computer system, the first identification model to determine that the unidentified human is not the first human.
[00116] The example of the invention may also include one or more steps, functions, or structures set forth above combined with sending, by the first computer system, the second image to the database.
[00117] The example of the invention may also include one or more steps, functions, or structures set forth above combined with storing the second image within the database.
[00118] The example of the invention may also include one or more steps, functions, or structures set forth above combined with obtaining, by the second computer system, the second image from the database.
[00119] The example of the invention may also include one or more steps, functions, or structures set forth above combined with generating, by the second computer system, a plurality of modified second images, each comprising the second image modified to obscure the unidentified human in a different manner.
[00120] The example of the invention may also include one or more steps, functions, or structures set forth above combined with generating, by the second computer system, a second identification model by training on at least the second image and the plurality of modified second images.
[00121] The example of the invention may also include one or more steps, functions, or structures set forth above combined with storing, by the first computer system, the second identification model onboard the vehicle.
[00122] The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative, and not restrictive. The scope of the invention is, therefore, indicated by the appended claims, rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims

1. A system comprising:
a vehicle having onboard a camera and a computer system storing an identification model;
a database storing an image of a human captured by the camera; and
a computing device programmed to
obtain the image,
generate a plurality of modified images, each comprising the image modified to obscure the human in a different manner, and
generate the identification model by training on at least the image and the plurality of modified images.
2. The system of claim 1, wherein the image comprises a digital photograph of the human as the human is seated in a driver's seat of the vehicle.
3. The system of claim 2, wherein:
the image is of the human in a first state with respect to wearing glasses; and
a first image of the plurality of modified images comprises the image modified to include the human in a second, opposite state with respect to wearing the glasses.
4. The system of claim 3, wherein:
the image is of the human in a first state with respect to wearing a hat; and
a second image of the plurality of modified images comprises the image modified to include the human in a second, opposite state with respect to wearing the hat.
5. The system of claim 4, wherein:
the image is of the human at a first lighting level; and
a third image of the plurality of modified images comprises the image modified to be at a second lighting level that is darker than the first lighting level.
6. The system of claim 5, wherein:
the image is of the human at a first clarity level; and
a fourth image of the plurality of modified images comprises the image modified to be at a second clarity level that is lower than the first clarity level.
7. The system of claim 2, wherein the database is off board with respect to the vehicle.
8. The system of claim 7, wherein the computing device is off board with respect to the vehicle.
9. The system of claim 2, wherein the database further stores a plurality of different images, each comprising an image of a different human that is not the human.
10. The system of claim 10, wherein the computing device is further programmed to identify within the plurality of different images at least one similar image corresponding to a first different human having one or more features similar to those of the human.
11. The system of claim 10, wherein the computing device is further programmed to generate the identification model by training on at least the image, the plurality of modified images, and the at least one similar image.
12. The system of claim 11, wherein:
the image is of the human in a first state with respect to wearing at least one of glasses and a hat; and
a first image of the plurality of modified images comprises the image modified to include the human in a second, opposite state with respect to wearing the at least one of the glasses and the hat.
13. The system of claim 11, wherein:
the image is of the human at a first lighting level; and
a first image of the plurality of modified images comprises the image modified to be at a second lighting level that is darker than the first lighting level.
14. The system of claim 1, wherein:
the image is of the human at a first clarity level; and
a first image of the plurality of modified images comprises the image modified to be at a second clarity level that is lower than the first clarity level.
15. The system of claim 1, wherein the database and the computing device are off board with respect to the vehicle.
16. A driver-identification system comprising:
a first vehicle having onboard a first camera and a first computer system;
a second vehicle having onboard a second camera and a second computer system;
a remote database off board with respect to the first and second vehicles;
the remote database storing a first image of a first human captured by the first camera as the first human is seated in a driver's seat of the first vehicle;
the remote database further storing a second image of a second human captured by the second camera as the second human is seated in a driver's seat of the second vehicle;
a third computer system off board with respect to the first and second vehicles;
the third computer system programmed to
obtain the first and second images,
generate a plurality of modified first images, each comprising the first image modified to obscure the first human in a different manner, and
generate a first identification model by training on at least the first image, the plurality of modified first images, and the second image; and
the first vehicle, wherein the first computer system stores the first identification model.
17. The system of claim 16, wherein the third computer system is further programmed to:
generate a plurality of modified second images, each comprising the second image modified to obscure the second human in a different manner; and
generate a second identification model by training on at least the second image and the plurality of modified second images.
18. A method comprising:
capturing, by a camera onboard a vehicle, a first image of a first human seated in a driver's seat of the vehicle;
sending, by a first computer system onboard the vehicle, the first image to a database remote from the vehicle;
storing the first image within the database;
obtaining, by a second computer system remote from the vehicle, the first image from the database;
generating, by the second computer system, a plurality of modified first images, each comprising the first image modified to obscure the first human in a different manner;
generating, by the second computer system, a first identification model by training on at least the first image and the plurality of modified first images; and
storing, by the first computer system, the first identification model onboard the vehicle.
19. The method of claim 18, further comprising:
capturing, by the camera, a second image of an unidentified human seated in the driver's seat of the vehicle; and
using, by the first computer system, the first identification model to identify the unidentified human as the first human.
20. The method of claim 18, further comprising:
capturing, by the camera, a second image of an unidentified human seated in the driver's seat of the vehicle;
using, by the first computer system, the first identification model to determine that the unidentified human is not the first human;
sending, by the first computer system, the second image to the database;
storing the second image within the database;
obtaining, by the second computer system, the second image from the database;
generating, by the second computer system, a plurality of modified second images, each comprising the second image modified to obscure the unidentified human in a different manner;
generating, by the second computer system, a second identification model by training on at least the second image and the plurality of modified second images; and
storing, by the first computer system, the second identification model onboard the vehicle.
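As a closing illustration of the similar-image selection recited in claims 9 through 11: one simple realization (though not necessarily the claimed one) is a nearest-neighbor search over flattened image features. The sketch below reuses the hypothetical `to_feature` helper from the training sketch above.

```python
# An illustrative realization of selecting "at least one similar image" of
# a different human: nearest-neighbor search over flattened features.
# Assumes the to_feature helper defined in the earlier training sketch.
import numpy as np

def select_similar_images(human_image, different_images, k=5):
    """Return the k images of other humans closest to the enrolled human."""
    target = to_feature(human_image)
    distances = [np.linalg.norm(to_feature(img) - target)
                 for img in different_images]
    nearest = np.argsort(distances)[:k]
    return [different_images[i] for i in nearest]
```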

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2016/013351 WO2017123226A1 (en) 2016-01-14 2016-01-14 Driver-identification system and method

Publications (1)

Publication Number Publication Date
WO2017123226A1 (en) 2017-07-20

Family

ID=59311619

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/013351 WO2017123226A1 (en) 2016-01-14 2016-01-14 Driver-identification system and method

Country Status (1)

Country Link
WO (1) WO2017123226A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10204261B2 (en) * 2012-08-24 2019-02-12 Jeffrey T Haley Camera in vehicle reports identity of driver
WO2020027752A3 (en) * 2018-05-08 2020-04-16 Havelsan Hava Elektronik Sanayi Ve Ticaret Anonim Sirketi A method for detecting child passengers in front seats of vehicles

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070183635A1 (en) * 2004-09-16 2007-08-09 Bayerische Motoren Werke Aktiengsellschaft Method for image-based driver identification in a motor vehicle
US20110109462A1 (en) * 2009-11-10 2011-05-12 Gm Global Technology Operations, Inc. Driver Configurable Drowsiness Prevention
US20130041521A1 (en) * 2011-08-09 2013-02-14 Otman A. Basir Vehicle monitoring system with automatic driver identification
US20140324281A1 (en) * 2011-09-16 2014-10-30 Lytx, Inc. Driver identification based on face data

Similar Documents

Publication Publication Date Title
US11498573B2 (en) Pacification method, apparatus, and system based on emotion recognition, computer device and computer readable storage medium
CN108725357B (en) Parameter control method and system based on face recognition and cloud server
US10049284B2 (en) Vision-based rain detection using deep learning
CN110047487B (en) Wake-up method and device for vehicle-mounted voice equipment, vehicle and machine-readable medium
US9950681B2 (en) Method for setting internal usage scenario of vehicle, vehicle-mounted device, and network device
US10214221B2 (en) System and method for identifying a vehicle driver by a pattern of movement
CN110865705B (en) Multi-mode fusion communication method and device, head-mounted equipment and storage medium
US20180208208A1 (en) System and method for identifying at least one passenger of a vehicle by a pattern of movement
CN104816694A (en) Intelligent driving state adjustment device and method
US20220277558A1 (en) Cascaded Neural Network-Based Attention Detection Method, Computer Device, And Computer-Readable Storage Medium
CN110254393A (en) A kind of automotive self-adaptive control method based on face recognition technology
EP3132433A2 (en) Trainable transceiver and mobile communications device training systems and methods
CN103324904A (en) Face recognition system and method thereof
CN110171271A (en) A kind of automobile perfume (or spice) atmosphere control method and device
KR20140072734A (en) System and method for providing a user interface using hand shape trace recognition in a vehicle
EP3789250B1 (en) Management system and method for identifying and bio-monitoring in vehicles
CN110389744A (en) Multimedia music processing method and system based on recognition of face
CN113591659B (en) Gesture control intention recognition method and system based on multi-mode input
CN110217176A (en) A kind of meeting pedal regulation method, apparatus, equipment, storage medium and system
US20240010146A1 (en) Technologies for using image analysis to facilitate adjustments of vehicle components
WO2017123226A1 (en) Driver-identification system and method
CN112883417A (en) New energy automobile control method and system based on face recognition and new energy automobile
CN111717147A (en) Automatic adjusting system and method for vehicle seat
CN112036468A (en) Driving operation system adjusting method, vehicle and storage medium
CN112511746A (en) In-vehicle photographing processing method and device and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 16885328
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 16885328
    Country of ref document: EP
    Kind code of ref document: A1