WO2019232973A1 - Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium

Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium

Info

Publication number
WO2019232973A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
user
face image
information
data set
Prior art date
Application number
PCT/CN2018/105809
Other languages
English (en)
French (fr)
Inventor
孟德
李轲
于晨笛
秦仁波
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Priority to EP18919403.8A (EP3628549B1)
Priority to KR1020207012404A (KR102297162B1)
Priority to SG11201911197VA
Priority to JP2019564878A (JP6916307B2)
Priority to KR1020217027087A (KR102374507B1)
Priority to US16/233,064 (US10970571B2)
Publication of WO2019232973A1


Classifications

    • B60W50/12: Limiting control by the driver depending on vehicle state, e.g. interlocking means for the control input for preventing unsafe operation
    • B60R25/25: Means to switch the anti-theft system on or off using biometry
    • B60R25/01: Fittings or systems for preventing or indicating unauthorised use or theft of vehicles, operating on vehicle systems or fittings, e.g. on doors, seats or windscreens
    • B60R25/2018: Means to switch the anti-theft system on or off; central base unlocks or authorises unlocking
    • B60R25/305: Detection related to theft or to other events relevant to anti-theft systems, using a camera
    • B60R25/34: Detection related to theft or to other events relevant to anti-theft systems, of conditions of vehicle components, e.g. of windows, door locks or gear selectors
    • B60W40/09: Estimation of driving parameters related to drivers or passengers; driving style or behaviour
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W60/0051: Drive control systems for autonomous road vehicles; handover processes from occupants to vehicle
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06V10/751: Image or video pattern matching; comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V20/597: Context or environment of the image inside of a vehicle; recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V40/166: Human faces; detection, localisation, normalisation using acquisition arrangements
    • G06V40/168: Human faces; feature extraction, face representation
    • G06V40/172: Human faces; classification, e.g. identification
    • G06V40/45: Spoof detection, e.g. liveness detection; detection of the body part being alive
    • B60R2325/10: Communication protocols, communication systems of vehicle anti-theft devices
    • B60W2040/0872: Driver physiology
    • B60W2050/0005: Processor details or data handling, e.g. memory registers or chip architecture
    • B60W2050/143: Alarm means
    • B60W2540/225: Input parameters relating to occupants; direction of gaze
    • B60W2540/26: Input parameters relating to occupants; incapacity

Definitions

  • The present application relates to intelligent vehicle recognition technology, and in particular to a vehicle control method and system, a vehicle-mounted intelligent system, an electronic device, and a medium.
  • An intelligent vehicle is a comprehensive system integrating functions such as environmental perception, planning and decision-making, and multi-level assisted driving, and it brings together computer, modern sensing, information fusion, communication, artificial intelligence, and automatic control technologies. At present, research on intelligent vehicles focuses mainly on improving the safety and comfort of automobiles and on providing excellent human-vehicle interaction interfaces. In recent years, intelligent vehicles have become a research hotspot in the field of vehicle engineering worldwide and a new driving force for the growth of the automotive industry, and many developed countries have incorporated them into the intelligent transportation systems they are focusing on.
  • the embodiments of the present application provide a vehicle control method and system, a vehicle-mounted intelligent system, an electronic device, and a medium.
  • a vehicle control method including:
  • the use of the vehicle includes one or any combination of the following: reserving a vehicle, driving, riding, cleaning the vehicle, maintaining the vehicle, repairing the vehicle, refueling the vehicle, or charging the vehicle.
  • the data set stores at least one pre-stored face image of a user who has reserved a ride
  • the controlling the vehicle movement to allow the user to use the vehicle includes controlling opening of a vehicle door.
  • the data set stores at least one pre-stored face image of a user who has reserved a car;
  • the controlling the vehicle movement to allow the user to use the vehicle includes controlling opening of a door and releasing driving control of the vehicle.
  • the data set stores at least one recorded pre-stored face image of a user who is allowed to ride a car;
  • the controlling the vehicle movement to allow the user to use the vehicle includes controlling opening of a vehicle door.
  • the data set stores at least one recorded pre-stored face image of a user who is allowed to use the car;
  • the controlling the vehicle movement to allow the user to use the vehicle includes controlling opening of a door and releasing driving control of the vehicle.
  • the data set stores at least one pre-stored face image of a user who has reserved unlocking or has recorded permission to unlock;
  • the controlling the vehicle motion to allow the user to use the vehicle includes controlling a car lock to be unlocked.
  • the data set stores at least one pre-stored face image of a user who has reserved fuel for the vehicle or has been recorded to allow the vehicle to be fueled;
  • the controlling the vehicle movement to allow the user to use the vehicle includes controlling opening of a vehicle fueling port.
  • the data set stores at least one pre-stored face image of a user who has reserved a vehicle for charging or has been recorded to allow the vehicle to be charged;
  • the controlling the vehicle motion to allow the user to use the vehicle includes controlling a charging device to allow a battery of the vehicle to be connected.
  • the method further includes: controlling the vehicle to issue prompt information indicating that the user is allowed to use the vehicle.
  • obtaining a face image of a user who currently requests to use the vehicle includes:
  • a face image of the user is collected by a photographing component provided on the vehicle.
  • the method further includes:
  • the method further includes:
  • if the feature matching result indicates that the feature matching is successful, obtaining the identity information of the user according to the pre-stored face image whose features matched successfully;
  • the method further includes: obtaining a living body detection result of the face image;
  • controlling the vehicle action to allow the user to use the vehicle according to the feature matching result includes: if the feature matching result indicates that the feature matching is successful and the living body detection result indicates a living body, the vehicle action is controlled to allow the user to use the vehicle.
  • the method further includes:
  • the data set is acquired by the mobile terminal device from a cloud server and sent to the vehicle when receiving the data set download request.
  • the method further includes:
  • the method further includes:
  • a data set is established according to the reserved face image.
  • the method further includes:
  • an early warning prompt for an abnormal state is performed.
  • the user state detection includes any one or more of the following: user fatigue state detection, user distraction state detection, and user predetermined distraction action detection.
  • performing user fatigue state detection based on the face image includes:
  • the state information of at least a part of the face includes any one or more of the following: eye opening and closing state information, and mouth opening and closing state information;
  • a result of detecting the fatigue state of the user is determined according to a parameter value of an index used to represent the fatigue state of the user.
  • the indicator used to characterize the fatigue state of the user includes any one or more of the following: the degree of eyes closed and the degree of yawning.
  • the parameter value of the degree of eye closure includes any one or more of the following: number of eye closures, eye-closure frequency, eye-closure duration, eye-closure amplitude, number of half-closed eyes, and half-closed-eye frequency; and/or,
  • the parameter value of the yawning degree includes any one or more of the following: yawning state, number of yawns, yawning duration, and yawning frequency.
  • performing user distraction state detection based on the face image includes:
  • the index used to characterize the distraction state of the user includes any one or more of the following: degree of deviation of the face orientation, and degree of deviation of the line of sight;
  • a result of detecting the distraction state of the user is determined according to a parameter value of an index used to characterize the distraction state of the user.
  • the parameter value of the face orientation deviation degree includes any one or more of the following: the number of head turns, the duration of head turns, and the frequency of head turns; and/or,
  • the parameter value of the degree of sight line deviation includes any one or more of the following: the sight line direction deviation angle, the sight line direction deviation duration, and the sight line direction deviation frequency.
  • detecting the user's face orientation and / or line of sight direction in the face image includes:
  • detecting face key points of the face image, and performing face orientation detection and/or line-of-sight detection according to the face key points.
  • performing face orientation detection according to the key points of the face to obtain the face orientation information includes:
  • the predetermined distraction action includes any one or more of the following: smoking action, drinking action, eating action, calling action, and entertaining action.
  • performing a user's predetermined distraction detection based on the face image includes:
  • the method further includes:
  • if it is determined that a predetermined distraction action occurs, obtaining, according to a determination result of whether the predetermined distraction action occurs within a period of time, a parameter value of an index used to characterize the degree of distraction of the user;
  • a result of detecting a user's predetermined distraction action is determined according to a parameter value of the index for characterizing the degree of distraction of the user.
  • the parameter value of the index for characterizing the degree of distraction of the user includes any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action.
  • the method further includes:
  • the detected predetermined distraction action is prompted.
  • the method further includes:
  • a control operation corresponding to a result of the user state detection is performed.
  • the performing a control operation corresponding to a result of the user state detection includes at least one of the following:
  • the driving mode is switched to an automatic driving mode.
  • the method further includes:
  • the at least part of the results include: abnormal vehicle state information determined according to user state detection.
  • the method further includes:
  • a vehicle-mounted intelligent system including:
  • a user image obtaining unit configured to obtain a face image of a user who currently requests to use the vehicle
  • a matching unit configured to obtain a feature matching result between the face image and at least one pre-stored face image in a data set of the vehicle, wherein the data set stores at least one pre-stored face image of a user who is allowed to use the vehicle;
  • a vehicle control unit is configured to control a vehicle action to allow the user to use the vehicle if the feature matching result indicates that the feature matching is successful.
  • a vehicle control method including:
  • the method further includes:
  • the data set storing at least one pre-stored face image of a user recorded as being allowed to use the vehicle;
  • the method further includes:
  • a data set is established according to the reserved face image.
  • obtaining the feature matching result between the face image and at least one pre-stored face image in the data set includes:
  • a feature matching result of the face image and at least one pre-stored face image in a data set is obtained from the vehicle.
  • the method further includes:
  • the at least part of the results include: abnormal vehicle state information determined according to user state detection.
  • the method further includes: performing a control operation corresponding to a result of the user state detection.
  • the performing a control operation corresponding to a result of the user state detection includes:
  • the driving mode is switched to an automatic driving mode.
  • the method further includes: receiving a face image corresponding to the abnormal vehicle status information sent by the vehicle.
  • the method further includes: performing at least one of the following operations based on the abnormal vehicle status information:
  • the performing data statistics based on the abnormal vehicle status information includes:
  • the performing vehicle management based on the abnormal vehicle status information includes:
  • the performing user management based on the abnormal vehicle status information includes:
  • an electronic device including:
  • An image receiving unit configured to receive a face image to be identified sent by a vehicle
  • a matching result obtaining unit is configured to obtain a feature matching result between the face image and at least one pre-stored face image in a data set, wherein the data set stores at least one pre-stored face image of a user who is allowed to use the vehicle;
  • An instruction sending unit is configured to: if the feature matching result indicates that the feature matching is successful, send an instruction to the vehicle to allow control of the vehicle.
  • a vehicle control system including: a vehicle and / or a cloud server;
  • the vehicle is used to execute the vehicle control method according to any one of the above;
  • the cloud server is configured to execute the vehicle control method according to any one of the foregoing.
  • the vehicle control system further includes: a mobile terminal device, configured to:
  • an electronic device including: a memory for storing executable instructions;
  • a processor configured to communicate with the memory to execute the executable instructions to complete any one of the vehicle control methods described above.
  • a computer program including computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the vehicle control method according to any one of the above.
  • a computer storage medium for storing computer-readable instructions, and when the instructions are executed, the vehicle control method according to any one of the foregoing is implemented.
  • Based on the vehicle control method and system, in-vehicle intelligent system, electronic device, and medium provided by the foregoing embodiments of the present application, a face image of a user currently requesting to use the vehicle is obtained; a feature matching result between the face image and at least one pre-stored face image in the vehicle's data set is obtained; and if the feature matching result indicates that the feature matching is successful, the vehicle action is controlled to allow the user to use the vehicle. In this way, the rights of pre-registered users are guaranteed, and feature matching can be performed without a network, which overcomes dependence on the network and further improves the safety of vehicles.
  • FIG. 1 is a flowchart of a vehicle control method according to some embodiments of the present application.
  • FIG. 2 is a flowchart of detecting a user fatigue state based on a face image in some embodiments of the present application.
  • FIG. 3 is a flowchart of detecting a user's distraction state based on a face image in some embodiments of the present application.
  • FIG. 4 is a flowchart of detecting a user's predetermined distraction action based on a face image in some embodiments of the present application.
  • FIG. 5 is a flowchart of a user state detection method according to some embodiments of the present application.
  • FIG. 6 is a schematic structural diagram of a vehicle-mounted intelligent system according to some embodiments of the present application.
  • FIG. 7 is a flowchart of a vehicle control method according to another embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • FIG. 9 is a flowchart of using a vehicle management system according to some embodiments of the present application.
  • FIG. 10 is a schematic structural diagram of an application example of an electronic device according to some embodiments of the present application.
  • Embodiments of the present invention can be applied to electronic equipment such as terminal equipment, computer systems, servers, etc., which can operate with many other general or special-purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems, and more.
  • Electronic devices such as a terminal device, a computer system, and a server can be described in the general context of computer system executable instructions (such as program modules) executed by a computer system.
  • program modules may include routines, programs, target programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types.
  • the computer system / server can be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on a local or remote computing system storage medium including a storage device.
  • FIG. 1 is a flowchart of a vehicle control method according to some embodiments of the present application.
  • the execution subject of the vehicle control method in this embodiment may be a vehicle-end device.
  • the execution subject may be an in-vehicle intelligent system or another device with similar functions.
  • the method in this embodiment includes:
  • an image acquisition device provided outside or inside the vehicle may be used to capture an image of the person who appears, so as to obtain the face image.
  • operations such as face detection, face quality screening, and living body recognition may be performed on the collected image.
  • the operation 110 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a user image acquisition unit 61 executed by the processor.
  • the data set stores at least one pre-stored face image of a user who is allowed to use the vehicle.
  • In feature matching between the face image and the pre-stored face images in the data set, features of the face image and of each pre-stored face image may be extracted through a convolutional neural network and then matched, in order to identify whether the data set contains a pre-stored face image corresponding to the same face, thereby recognizing the identity of the user whose face image was collected.
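  • As a minimal illustrative sketch (not the patent's specified implementation), the matching step can be pictured as extracting an embedding for each face and comparing embeddings by cosine similarity against a threshold; here `extract_features` merely stands in for the convolutional neural network a real deployment would use, and the threshold value is an assumption:

```python
import numpy as np

def extract_features(face_image):
    """Stand-in for the convolutional neural network named in the text;
    a real system maps an aligned face crop to a discriminative embedding.
    Flattening pixels here only keeps the sketch runnable end to end."""
    v = np.asarray(face_image, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def match_face(face_image, dataset, threshold=0.6):
    """Return (user_id, score) for the best-matching pre-stored face,
    or (None, score) if no pre-stored face reaches the threshold.
    `dataset` maps user_id -> normalized embedding of a pre-stored image."""
    query = extract_features(face_image)
    best_id, best_score = None, -1.0
    for user_id, stored in dataset.items():
        score = float(np.dot(query, stored))  # cosine similarity
        if score > best_score:
            best_id, best_score = user_id, score
    if best_score >= threshold:
        return best_id, best_score   # feature matching successful
    return None, best_score          # feature matching unsuccessful
```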
  • the operation 120 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a matching unit 62 executed by the processor.
  • the feature matching result includes two cases: the feature matching is successful and the feature matching is unsuccessful.
  • If the feature matching is successful, it indicates that the user has reserved or is allowed to use the vehicle; at this time, the vehicle action is controlled to allow the user to use the vehicle.
  • the operation 130 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a vehicle control unit 63 executed by the processor.
  • In this embodiment, a face image of the user currently requesting to use the vehicle is obtained; a feature matching result between the face image and at least one pre-stored face image in the vehicle's data set is obtained; and if the feature matching result indicates that the feature matching is successful, the vehicle action is controlled to allow the user to use the vehicle. The rights of pre-registered users are thus guaranteed based on feature matching, and feature matching can be performed without a network, which overcomes dependence on the network and further improves vehicle safety.
  • the use of the vehicle may include, but is not limited to, one or any combination of the following: reserving a vehicle, driving, riding, cleaning the vehicle, maintaining the vehicle, repairing the vehicle, refueling the vehicle, or charging the vehicle.
  • The corresponding vehicle action can be fixed for each type of use. For cleaning personnel, it is usually only necessary to open the door, although automatic washing may require granting the cleaning personnel driving control of the vehicle; for maintenance and repair, the door can be opened for the corresponding person; for refueling, the opening of the fueling port needs to be controlled; and for charging (for electric vehicles), the charging device (such as a charging gun) needs to be allowed to connect to the vehicle's battery. A dispatch from use type to vehicle action is sketched below.
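  • A minimal sketch of that dispatch follows; the use-type keys and action names are illustrative placeholders, not interfaces defined by the patent:

```python
# Illustrative mapping from use type to vehicle actions; the strings
# stand in for whatever door/lock/fueling/charging interfaces a
# concrete vehicle platform actually exposes.
ACTIONS_BY_USE_TYPE = {
    "ride":     ["open_door"],
    "drive":    ["open_door", "release_driving_control"],
    "clean":    ["open_door"],
    "maintain": ["open_door"],
    "repair":   ["open_door"],
    "unlock":   ["unlock_car_lock"],
    "refuel":   ["open_fueling_port"],
    "charge":   ["allow_charging_connection"],
}

def control_vehicle(use_type):
    """Execute the vehicle actions registered for the given use type."""
    for action in ACTIONS_BY_USE_TYPE.get(use_type, []):
        print(f"vehicle action: {action}")  # stand-in for real actuation
```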
  • a pre-stored face image of at least one user who has reserved a ride is stored in the data set
  • Operation 130 may include controlling opening of a vehicle door.
  • The door is opened for a user who has made a reservation for a ride (e.g., an online booking).
  • a pre-stored face image of at least one user who has reserved a car is stored in the data set
  • Operation 130 may include controlling opening of a vehicle door and releasing driving control of the vehicle.
  • When a reserved user makes an appointment to drive a vehicle (for example, reserving a rental car), the user is granted driving control rights, while other non-reserved users cannot enter the vehicle, and even if they enter the vehicle illegally, they cannot drive it, ensuring the safety of the vehicle.
  • the data set stores at least one recorded pre-stored face image of a user who is allowed to ride;
  • Operation 130 may include controlling opening of a vehicle door.
  • the door is opened for the user so that the user can ride safely.
  • the data set stores at least one recorded pre-stored face image of a user who is allowed to use the car;
  • Operation 130 includes controlling opening of a door and releasing driving control of the vehicle.
  • When the user is a recorded user who is allowed to use the car (for example, a driving member corresponding to a private car), the user is granted vehicle driving control rights, while other non-recorded users cannot enter the vehicle, and even if they enter the vehicle illegally, they cannot drive it, which guarantees the safety of the vehicle.
  • the data set stores at least one pre-stored face image of a user who has been scheduled to unlock or has recorded permission to unlock;
  • Operation 130 includes controlling unlocking of the car lock.
  • The user can be a reserved user (including temporary or long-term reservations) or a recorded user who is allowed to unlock; the car lock is controlled to unlock for that user, ensuring the safety of the vehicle.
  • the data set stores at least one pre-stored face image of a user who has reserved fuel for the vehicle or has recorded a user who is allowed to fuel the vehicle;
  • Operation 130 includes controlling the opening of the vehicle fueling port.
  • the data set stores at least one pre-stored face image of a user who has reserved for charging the vehicle or has been recorded to allow the vehicle to be charged;
  • Operation 130 includes controlling the charging device to allow connection to the battery of the vehicle.
  • When the vehicle needs to be charged (such as an electric car or electric bicycle), the charging device must be controlled so that it is allowed to connect to the vehicle's battery and charge the vehicle, ensuring the safety of the vehicle battery.
  • the vehicle control method further includes: controlling the vehicle to issue prompt information indicating that the user is permitted to use the vehicle.
  • operation 110 may include:
  • a user's face image is collected by a photographing component provided on the vehicle.
  • Since this embodiment provides services for using the vehicle, which may involve operations inside the vehicle (such as driving) or outside the vehicle (such as opening doors or unlocking), the photographing component may be installed outside or inside the vehicle, and may be fixed or movable.
  • the vehicle control method further includes:
  • the data set is usually stored in a cloud server.
  • Face matching needs to be implemented on the vehicle side even when there is no network. To this end, the data set may be downloaded from the cloud server while a network is available and saved on the vehicle side; then, even without a network connection to the cloud server, face matching can still be performed on the vehicle side, which also makes it convenient for the vehicle side to manage the data set.
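  • A sketch of this download-and-cache idea, assuming a hypothetical `fetch_from_cloud` request and a hypothetical local cache path (neither is specified by the patent):

```python
import json
import os

DATASET_CACHE = "/var/vehicle/face_dataset.json"  # hypothetical path

def fetch_from_cloud():
    """Placeholder for the data set download request to the cloud server."""
    raise NotImplementedError

def load_dataset():
    """Prefer a fresh copy from the cloud server when the network is up,
    caching it on the vehicle side; fall back to the local cache so that
    face matching still works without a network."""
    try:
        dataset = fetch_from_cloud()
        with open(DATASET_CACHE, "w") as f:
            json.dump(dataset, f)            # cache on the vehicle side
        return dataset
    except Exception:                         # no network: use local copy
        if os.path.exists(DATASET_CACHE):
            with open(DATASET_CACHE) as f:
                return json.load(f)
        raise RuntimeError("no cached data set and no network available")
```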
  • the vehicle control method may further include:
  • If the feature matching is successful, it means that the user has reserved or is allowed to use the vehicle; the user's identity information can then be obtained from the data set according to the pre-stored face image whose features matched successfully, and the face image and identity information can be sent to the cloud server.
  • In this way, real-time tracking of vehicle use can be established on the cloud server (for example, when and where a user rides a certain vehicle); when a network is present, the face image can be uploaded to the cloud server in real time, enabling analysis and statistics of users' vehicle use.
  • the vehicle control method further includes: acquiring a living body detection result of a face image;
  • Operation 130 may include:
  • the vehicle motion is controlled to allow the user to use the vehicle.
  • The living body detection is used to determine whether the image comes from a real person (or a living person), and identity verification of the driver can be made more accurate through living body detection.
  • This embodiment does not limit the specific method of living body detection; for example, it may be implemented through three-dimensional depth analysis of image information, facial optical flow analysis, Fourier spectrum analysis, edge or reflection anti-spoofing clue analysis, comprehensive analysis of multiple video image frames in a video stream, and other methods, which are not repeated here.
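  • As one hedged illustration of the multi-frame analysis option (not a method prescribed by the patent), a liveness cue can be derived from natural blinking: a live face shows eye-openness variation across video frames, while a static photo does not. The threshold below is an assumed placeholder:

```python
def looks_live(eye_openness_per_frame, min_spread=0.15):
    """Crude multi-frame liveness cue: normalized eye openness of a live
    face varies over time (blinking); a printed photo stays nearly flat.
    `min_spread` is an illustrative threshold, not a value from the text."""
    if not eye_openness_per_frame:
        return False
    spread = max(eye_openness_per_frame) - min(eye_openness_per_frame)
    return spread >= min_spread
```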
  • the vehicle control method further includes:
  • the data set is acquired by the mobile terminal device from the cloud server and sent to the vehicle when the data set download request is received.
  • the mobile terminal device may be a mobile phone, a PAD, or a terminal device on another vehicle.
  • When the mobile terminal device receives the data set download request, it sends the request to the cloud server, obtains the data set, and sends it to the vehicle. When the vehicle cannot download the data set from the cloud server over its own network (such as a 2G, 3G, or 4G network), the network of the mobile terminal device can be used instead, avoiding the problem of the vehicle being unable to perform face matching because network restrictions prevent it from downloading the data set from the cloud server.
  • the vehicle control method further includes: if the feature matching result indicates that the feature matching is unsuccessful, controlling the vehicle action to reject the user from using the vehicle.
  • An unsuccessful feature matching indicates that the user has not made a reservation or is not allowed to use the vehicle, and the vehicle will refuse to let the user use it.
  • the vehicle control method further includes:
  • receiving a reservation request of the user, the reservation request including a reservation face image of the user;
  • a data set is established according to the reservation face image.
  • In some embodiments, a reservation request sent by a user is received by the vehicle, and the reservation face image of the user is saved; a data set is established on the vehicle side based on the reservation face images, and individual face matching can be achieved on the vehicle side through this data set, without downloading the data set from a cloud server.
  • the vehicle control method further includes:
  • an early warning prompt for an abnormal state is performed.
  • the results of the user status detection may be output.
  • intelligent driving control of the vehicle may be performed according to a result of detection of the user state.
  • the result of the user state detection may be output, and at the same time, intelligent driving control of the vehicle may be performed according to the result of the user state detection.
  • the result of the user state detection may be output locally and/or remotely.
  • Outputting the result of the user state detection locally means outputting it through the user state detection device or the user monitoring system, or outputting it to a central control system in the vehicle, so that the vehicle can perform intelligent driving control based on the result.
  • Outputting the result of the user state detection remotely means, for example, sending it to a cloud server or management node, so that the cloud server or management node can collect, analyze, and/or manage the results of user state detection, or remotely control the vehicle based on those results.
  • an early warning prompt for an abnormal state may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by an output module run by the processor.
  • the foregoing operations may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a user state detection unit run by the processor.
  • the user state detection may include, but is not limited to, any one or more of the following: user fatigue state detection, user distraction state detection, user predetermined distraction motion detection, and user gesture detection.
  • the results of the user state detection accordingly include but are not limited to any one or more of the following: the results of the user fatigue state detection, the results of the user distracted state detection, the results of the user's scheduled distracted motion detection, and the results of the user gesture detection.
  • the predetermined distraction action may be any action that may distract the user's attention, such as: a smoking action, a drinking action, an eating action, a calling action, an entertainment action, and the like.
  • eating actions include actions such as eating fruits and snacks
  • entertainment actions include actions such as sending messages, playing games, and singing with any electronic device; such electronic devices include mobile phones, handheld computers, game machines, and so on.
  • User state detection can be performed on the face image and the result of the user state detection output, thereby realizing real-time detection of the user's state while using the vehicle; corresponding measures can then be taken in time, which helps ensure safe driving and reduce or avoid road traffic accidents.
  • FIG. 2 is a flowchart of user fatigue state detection based on a face image in some embodiments of the present application.
  • the embodiment shown in FIG. 2 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
  • a method for detecting a fatigue state of a user based on a face image may include:
  • 202: detect at least a part of the face in the face image to obtain state information of the at least part of the face.
  • the at least part of the face may include at least one of an eye area of a user's face, a mouth area of a user's face, and an entire area of a user's face.
  • the state information of at least a part of the face may include any one or more of the following: eye opening and closing state information, and mouth opening and closing state information.
  • The above-mentioned eye opening and closing state information may be used to perform closed-eye detection of the user, for example, detecting whether the user is in a half-closed-eye state ("half" indicates an incompletely closed eye state, such as squinting in a doze), whether the eyes are closed, the number of eye closures, the amplitude of eye closure, and so on.
  • the eye opening and closing state information may be information obtained by normalizing the height of the eyes opened.
  • the mouth opening and closing state information may be used to perform yawn detection of a user, for example, detecting whether a user yawns, the number of yawns, and the like.
  • the mouth opening and closing state information may be information obtained by normalizing the height of the mouth opening.
  • face keypoint detection may be performed on a face image, and eye keypoints in the detected face keypoints are directly used for calculation, so as to obtain eye opening and closing state information according to the calculation result.
  • An eye key point among the face key points (for example, the coordinate information of the eye key point in the user image) may first be used to locate the eye in the user image to obtain an eye image; the upper eyelid line and the lower eyelid line are obtained from the eye image, and the eye opening and closing state information is obtained by calculating the interval between the upper eyelid line and the lower eyelid line.
  • the mouth key points in the face key points can be directly used for calculation, so as to obtain the mouth opening and closing state information according to the calculation results.
  • The mouth key point among the face key points (for example, the coordinate information of the mouth key point in the user image) can be used to locate the mouth in the user image, and a mouth image can be obtained by cropping or other methods; the upper lip line and the lower lip line are obtained from this mouth image, and the mouth opening and closing state information is obtained by calculating the interval between the upper lip line and the lower lip line.
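  • A minimal sketch of the interval computation, assuming the upper and lower eyelid (or lip) lines are given as arrays of y-coordinates sampled at matching x-positions, and that normalization divides by the eye (or mouth) width; both assumptions go beyond what the text fixes:

```python
import numpy as np

def normalized_openness(upper_line, lower_line, width):
    """Mean vertical gap between the upper and lower eyelid (or lip)
    lines, divided by the eye (mouth) width so the value is comparable
    across face sizes and camera distances."""
    upper = np.asarray(upper_line, dtype=float)
    lower = np.asarray(lower_line, dtype=float)
    gap = np.mean(lower - upper)   # image y grows downward
    return max(gap, 0.0) / width

# Example: nearly closed eye -> small normalized openness
print(normalized_openness([10.0, 9.5, 10.0], [11.0, 11.5, 11.0], 30.0))  # ~0.04
```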
  • the indicator used to characterize the fatigue state of the user may include, but is not limited to, any one or more of the following: the degree of eyes closed, the degree of yawning.
  • The parameter value of the degree of eye closure may include, but is not limited to, any one or more of the following: number of eye closures, eye-closure frequency, eye-closure duration, eye-closure amplitude, number of half-closed eyes, and half-closed-eye frequency; and/or, the parameter value of the yawning degree may include, for example, but is not limited to, any one or more of the following: yawning state, number of yawns, yawning duration, and yawning frequency.
  • the result of the user fatigue state detection may include: no fatigue state detected, and fatigue state detected.
  • the result of the fatigue state detection of the user may also be a degree of fatigue driving, where the degree of fatigue driving may include a normal driving level (also referred to as a non-fatigue driving level) and a fatigue driving level.
  • the fatigue driving level may be one level, or may be divided into multiple different levels.
  • the above-mentioned fatigue driving level may be divided into: a prompt fatigue driving level (also referred to as a mild fatigue driving level) and a warning fatigue driving level (also referred to as a severe fatigue driving level);
  • of course, the degree of fatigue driving can be divided into more levels, such as: a mild fatigue driving level, a moderate fatigue driving level, and a severe fatigue driving level. This embodiment does not limit the different levels included in the degree of fatigue driving.
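  • To make the index-to-level step concrete, a hedged sketch: classify the fatigue driving level from an eye-closure duration ratio (a PERCLOS-style statistic) and a yawn count over a recent time window. The thresholds are illustrative assumptions, not values from the patent:

```python
def fatigue_driving_level(closed_eye_ratio, yawns_per_minute):
    """Map fatigue-state index values to a fatigue driving level.
    closed_eye_ratio: fraction of recent frames with eyes closed.
    All thresholds below are illustrative placeholders."""
    if closed_eye_ratio > 0.4 or yawns_per_minute >= 3:
        return "warning (severe) fatigue driving level"
    if closed_eye_ratio > 0.15 or yawns_per_minute >= 1:
        return "prompt (mild) fatigue driving level"
    return "normal driving level"
```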
  • FIG. 3 is a flowchart of detecting a user's distraction state based on a face image in some embodiments of the present application.
  • the embodiment shown in FIG. 3 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
  • a method for detecting a user's distraction state based on a face image may include:
  • the foregoing face orientation information may be used to determine whether the user's face direction is normal, for example, determining whether the user turns the face sideways or turns the head back.
  • the face orientation information may be the angle between the direction directly in front of the user's face and the front of the vehicle.
  • the above-mentioned line of sight direction information may be used to determine whether the line of sight direction of the user is normal, for example, determining whether the user is looking ahead, etc., and the line of sight direction information may be used to determine whether the line of sight of the user has deviated.
  • the line of sight information may be an angle between the line of sight of the user and the front of the vehicle used by the user.
  • the index used to characterize the distraction state of the user may include, but is not limited to, any one or more of the following: the degree of deviation of the face orientation, and the degree of deviation of the line of sight.
  • The parameter value of the degree of deviation of the face orientation may include, but is not limited to, any one or more of the following: the number of head turns, the duration of head turns, and the frequency of head turns; and/or, the parameter value of the degree of line-of-sight deviation may include, but is not limited to, any one or more of the following: the line-of-sight direction deviation angle, the line-of-sight direction deviation duration, and the line-of-sight direction deviation frequency.
  • The above-mentioned degree of line-of-sight deviation may include, for example, at least one of whether the line of sight deviates and whether the line of sight deviates severely; the above-mentioned degree of face orientation deviation (also referred to as the degree of turning the face or turning back) may include, for example, at least one of whether the head is turned, whether it is a short-duration turn, and whether it is a long-duration turn.
  • If it is determined that the face orientation information is greater than the first orientation and this phenomenon persists for N1 frames (for example, 9 or 10 frames), it is determined that the user has had a long-duration large-angle head turn, and a long-duration large-angle head turn, or the duration of the current turn, can be recorded; if it is determined that the face orientation information is not greater than the first orientation but greater than the second orientation, and this persists for N1 frames, it is determined that the user has had a long-duration small-angle head turn, and a small-angle head turn, or the duration of this turn, can be recorded.
  • If it is determined that the included angle between the line-of-sight direction information and the front of the car is greater than the first included angle, and this phenomenon persists for N2 frames (for example, 8 or 9 frames), it is determined that the user has had a severe line-of-sight deviation, and a severe line-of-sight deviation, or its duration, can be recorded; if it is determined that this included angle is not greater than the first included angle but greater than the second included angle, and this persists for N2 frames, it is determined that the user has had a line-of-sight deviation, and a line-of-sight deviation, or its duration, can be recorded.
  • The values of the first orientation, the second orientation, the first included angle, the second included angle, N1, and N2 may be set according to actual conditions; this embodiment does not limit these values.
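  • The persistence rule above (a deviation only counts as an event once it has lasted N1 or N2 consecutive frames) can be sketched as a simple run-length counter over per-frame deviation angles; the thresholds and frame counts are exactly the configurable values the text leaves open:

```python
def deviation_events(angles, large_thresh, small_thresh, n_frames):
    """Scan per-frame deviation angles and record an event whenever a
    deviation persists for `n_frames` consecutive frames: 'large' if the
    angle exceeds `large_thresh`, 'small' if it lies between
    `small_thresh` and `large_thresh`."""
    events, run_kind, run_len = [], None, 0
    for angle in angles:
        kind = ("large" if angle > large_thresh
                else "small" if angle > small_thresh
                else None)
        if kind == run_kind:
            run_len += 1
        else:
            run_kind, run_len = kind, 1
        if run_kind is not None and run_len == n_frames:
            events.append(run_kind)   # persisted long enough: record once
    return events
```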
  • The result of the user's distraction state detection may include, for example: the user's attention is focused (not distracted), or the user's attention is distracted; or the result may be a user attention distraction level, which may include, for example: the user's attention is focused (not distracted), slightly distracted, moderately distracted, severely distracted, and so on.
  • The user's attention distraction level can be determined by preset conditions satisfied by the parameter values of the indices used to characterize the user's distraction state. For example: if neither the line-of-sight direction deviation angle nor the face orientation deviation angle reaches the first preset angle, the level is that the user's attention is focused; if either the line-of-sight direction deviation angle or the face orientation deviation angle is greater than or equal to the first preset angle and the duration is greater than the first preset duration but not greater than the second preset duration, the user's attention is slightly distracted; if either angle is greater than or equal to the first preset angle and the duration is greater than the second preset duration but not greater than the third preset duration, the user's attention is moderately distracted; and if either angle is greater than or equal to the first preset angle and the duration is greater than the third preset duration, the user's attention is severely distracted; where the first preset duration is shorter than the second preset duration, and the second preset duration is shorter than the third preset duration.
  • This embodiment determines parameter values of the indices for characterizing the user's distraction state by detecting the face orientation and/or line-of-sight direction in user images when the user is a driver, and determines the result of the user's distraction state detection accordingly, i.e., whether the user is concentrating on driving. By quantifying the indices of the user's distraction state, the degree of driving concentration is quantified as at least one of the line-of-sight deviation and head-turning indices, which helps to measure the user's attentive driving state in a timely and objective manner, as sketched below.
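  • A sketch of the duration-tiered mapping just described, with the preset angle and the three preset durations as parameters; the default values are illustrative assumptions:

```python
def distraction_level(deviation_angle, deviation_duration,
                      preset_angle=15.0,          # degrees, illustrative
                      d1=2.0, d2=5.0, d3=10.0):   # seconds, d1 < d2 < d3
    """Map a gaze/face-orientation deviation (angle, duration) to an
    attention distraction level following the tiered-duration scheme."""
    if deviation_angle < preset_angle or deviation_duration <= d1:
        return "attention focused"
    if deviation_duration <= d2:
        return "slightly distracted"
    if deviation_duration <= d3:
        return "moderately distracted"
    return "severely distracted"
```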
  • operation 302 for detecting a face orientation and / or a line of sight direction on a face image may include:
  • Face orientation and / or line of sight detection is performed based on key points of the face.
  • Since facial key points usually contain head pose feature information, face orientation detection is performed based on the facial key points to obtain the face orientation information; this includes: obtaining feature information of the head pose according to the facial key points, and determining face orientation (also called head pose) information based on the head pose feature information, where the face orientation information may indicate, for example, the direction and angle of rotation of the face; the direction of rotation may be turning left, turning right, turning down, and/or turning up, etc.
  • The face orientation (head pose) may be expressed as (yaw, pitch), where yaw represents the horizontal deflection angle (yaw angle) and pitch represents the vertical deflection angle (pitch angle) of the head in normalized spherical coordinates (the camera coordinate system where the camera is located).
  • the horizontal deflection angle and / or the vertical deflection angle are greater than a preset angle threshold and the duration is greater than a preset time threshold, it may be determined that the result of the user's distracted state detection is inattention.
  • a corresponding neural network may be used to obtain face orientation information of at least one user image.
  • The detected facial key points may be input to a first neural network, which extracts head pose feature information based on the received facial key points and inputs it to a second neural network; the second neural network performs head pose estimation based on the head pose feature information to obtain the face orientation information.
  • Using existing, mature, real-time-capable neural networks for extracting head pose feature information and for estimating face orientation, the face orientation information corresponding to at least one image frame (that is, at least one user image) of the video captured by the camera can be detected accurately and in a timely manner, thereby helping to improve the accuracy of determining the user's degree of attention.
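  • A structural sketch of this two-network pipeline; both networks are placeholders, since the patent does not fix their architectures:

```python
import numpy as np

class HeadPoseFeatureNet:
    """Placeholder for the first neural network: extracts head-pose
    feature information from the detected facial key points."""
    def extract(self, face_keypoints):
        return np.asarray(face_keypoints, dtype=float).ravel()

class HeadPoseEstimator:
    """Placeholder for the second neural network: estimates the head
    pose (yaw, pitch) from the head-pose features."""
    def estimate(self, features):
        return 0.0, 0.0   # (yaw, pitch) in degrees, stand-in output

def face_orientation(face_keypoints, net1=None, net2=None):
    """Key points -> head-pose features -> face orientation information."""
    net1 = net1 or HeadPoseFeatureNet()
    net2 = net2 or HeadPoseEstimator()
    yaw, pitch = net2.estimate(net1.extract(face_keypoints))
    return yaw, pitch
```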
  • The gaze direction detection is performed according to the facial key points to obtain the gaze direction information, including: determining the pupil edge position according to an eye image located by the eye key points among the facial key points, and calculating the pupil center position according to the pupil edge position; the gaze direction information is then calculated based on the pupil center position and the eye center position. For example, a vector between the pupil center position and the eye center position in the eye image can be calculated, and this vector can be used as the gaze direction information.
  • the direction of the line of sight can be used to determine whether the user is focusing on driving.
• the line-of-sight direction can be expressed as (yaw, pitch), where yaw represents the horizontal deflection angle (yaw angle) and pitch represents the vertical deflection angle (pitch angle) of the line of sight in normalized spherical coordinates (the camera coordinate system in which the camera is located).
• if the horizontal deflection angle and/or the vertical deflection angle of the line of sight is greater than a preset angle threshold for a duration greater than a preset time threshold, it may be determined that the result of the user's distraction state detection is inattention.
• determining the pupil edge position according to the eye image located by the eye keypoints among the facial keypoints can be achieved as follows: pupil edge detection is performed on the eye area image based on a third neural network, and the pupil edge position is obtained based on information output by the third neural network.
  • FIG. 4 is a flowchart of detecting a user's predetermined distraction based on a face image in some embodiments of the present application.
  • the embodiment shown in FIG. 4 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
• a method for detecting a user's predetermined distraction action based on a face image may include: 402, performing target object detection corresponding to the predetermined distraction action on the face image to obtain a detection frame of the target object; 404, determining whether the predetermined distraction action occurs according to the detection frame of the target object.
• in this embodiment, the user's predetermined distraction action detection is performed by detecting the target object corresponding to the predetermined distraction action and determining whether the predetermined distraction action occurs according to the detection frame of the detected target object, so as to determine whether the user is distracted. This helps to obtain an accurate detection result of the user's predetermined distraction action, thereby improving the accuracy of the user state detection result.
• for example, for a smoking action, operations 402 to 404 may include: performing face detection on the user image via a fourth neural network to obtain a face detection frame, and extracting feature information of the face detection frame; and determining, via the fourth neural network, whether a smoking action occurs based on the feature information of the face detection frame.
• for eating, drinking, phone-call, or entertainment actions, operations 402 to 404 may include: performing, via a fifth neural network, detection of a preset target object corresponding to the eating, drinking, phone-call, or entertainment action on the user image to obtain a detection frame of the preset target object, where the preset target object includes: hands, mouth, eyes, and the target object, and the target object may include, but is not limited to, any one or more of the following: containers, food, electronic devices; and determining the detection result of the predetermined distraction action according to the detection frame of the preset target object.
• the detection result of the predetermined distraction action may include one of the following: no eating/drinking/phone-call/entertainment action occurs; an eating action occurs; a drinking action occurs; a phone-call action occurs; an entertainment action occurs.
• in some embodiments, determining the detection result of the predetermined distraction action according to the detection frame of the preset target object may include: determining whether a detection frame of the hand, a detection frame of the mouth, a detection frame of the eye, and a detection frame of the target object are detected, and determining the detection result of the predetermined distraction action according to whether the detection frame of the hand overlaps the detection frame of the target object, the type of the target object, and whether the distance between the detection frame of the target object and the detection frame of the mouth or the detection frame of the eye satisfies preset conditions.
• if the detection frame of the hand overlaps the detection frame of the target object, the type of the target object is a container or food, and the detection frame of the target object overlaps the detection frame of the mouth, it is determined that an eating or drinking action occurs; and/or, if the detection frame of the hand overlaps the detection frame of the target object, the type of the target object is an electronic device, and the minimum distance between the detection frame of the target object and the detection frame of the mouth is less than a first preset distance, or the minimum distance between the detection frame of the target object and the detection frame of the eye is less than a second preset distance, it is determined that an entertainment action or a phone-call action occurs.
• if the detection frame of the hand, the detection frame of the mouth, and the detection frame of any target object are not detected simultaneously, and the detection frame of the hand, the detection frame of the eye, and the detection frame of any target object are not detected simultaneously, the detection result of the distraction action is determined to be that no eating, drinking, phone-call, or entertainment action is detected; and/or, if the detection frame of the hand does not overlap the detection frame of the target object, the detection result of the distraction action is determined to be that no eating, drinking, phone-call, or entertainment action is detected; and/or, if the type of the target object is a container or food and there is no overlap between the detection frame of the target object and the detection frame of the mouth, and/or the type of the target object is an electronic device and the minimum distance between the detection frame of the target object and the detection frame of the mouth is not less than the first preset distance, or the minimum distance between the detection frame of the target object and the detection frame of the eye is not less than the second preset distance, the detection result of the distraction action is determined to be that no eating, drinking, phone-call, or entertainment action is detected.
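• The overlap and distance rules in the preceding paragraphs might be sketched as follows; the (x1, y1, x2, y2) box format, the distance values, and the label strings are assumptions rather than the patent's specification:

```python
def boxes_overlap(a, b):
    """Do two (x1, y1, x2, y2) boxes overlap at all?"""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def box_min_distance(a, b):
    """Minimum gap between two boxes (0 if they overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def classify_action(hand, mouth, eye, obj, obj_type, d1=20, d2=20):
    """Apply the overlap/distance rules described above; d1/d2 stand in for the
    first and second preset distances (placeholder pixel values)."""
    if hand is None or obj is None or not boxes_overlap(hand, obj):
        return "none"
    if obj_type in ("container", "food") and mouth is not None and boxes_overlap(obj, mouth):
        return "eating_or_drinking"
    if obj_type == "electronic_device":
        if mouth is not None and box_min_distance(obj, mouth) < d1:
            return "calling"
        if eye is not None and box_min_distance(obj, eye) < d2:
            return "entertainment"
    return "none"
```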
• the method may further include: if the result of the user's predetermined distraction action detection is that a predetermined distraction action is detected, prompting the detected predetermined distraction action; for example, when a smoking action is detected, prompting that smoking is detected; when a drinking action is detected, prompting that drinking is detected; when a phone-call action is detected, prompting that a phone call is detected.
• the operation of prompting the detected predetermined distraction action may be performed by a processor calling corresponding instructions stored in a memory, or may be performed by a prompt unit run by the processor.
• performing the user's predetermined distraction action detection on the user image may optionally further include: if a predetermined distraction action occurs, obtaining, according to the determination result of whether the predetermined distraction action occurs within a period of time, the parameter value of an index used to characterize the user's degree of distraction; and determining the result of the user's predetermined distraction action detection according to that parameter value.
  • the index used to characterize the degree of distraction of the user may include, but is not limited to, any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action.
• for example: the number, duration, and frequency of smoking actions; the number, duration, and frequency of drinking actions; the number, duration, and frequency of phone-call actions; and so on.
• the result of the user's predetermined distraction action detection may include: no predetermined distraction action detected, or a predetermined distraction action detected.
  • the result of the user's predetermined distracted motion detection may also be a distraction level.
• for example, the distraction level may be divided into: an undistracted level (also referred to as a focused driving level), a prompt distracted driving level (also referred to as a mildly distracted driving level), and a warning distracted driving level (also referred to as a severely distracted driving level).
  • the level of distraction can also be divided into more levels, such as: undistracted driving level, mildly distracted driving level, moderately distracted driving level, and severely distracted driving level.
  • the distraction level of at least one of the embodiments may also be divided according to other situations, and is not limited to the above-mentioned level division.
• the distraction level may be determined by the preset condition satisfied by the parameter value of the index used to characterize the user's degree of distraction. For example: if no predetermined distraction action is detected, the distraction level is the undistracted level (also referred to as the focused driving level); if the duration of the detected predetermined distraction action is less than a first preset duration and the frequency is less than a first preset frequency, the distraction level is the mildly distracted driving level; if the duration of the detected predetermined distraction action is greater than the first preset duration and/or the frequency is greater than the first preset frequency, the distraction level is the severely distracted driving level.
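• A minimal sketch of this level division, with placeholder thresholds standing in for the first preset duration and first preset frequency:

```python
def distraction_level(detected, duration_s, freq_per_min,
                      max_duration_s=3.0, max_freq_per_min=2.0):
    """Map distraction-action statistics to a level; threshold values are placeholders."""
    if not detected:
        return "focused"                # undistracted level
    if duration_s < max_duration_s and freq_per_min < max_freq_per_min:
        return "mild_distraction"       # mildly distracted driving level
    return "severe_distraction"         # severely distracted driving level
```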
  • the user state detection method may further include: outputting distraction prompt information according to a result of the user's distraction state detection and / or a result of the user's predetermined distraction action detection.
• the distraction prompt information may be output to remind the user to concentrate on driving.
• the operation of outputting the distraction prompt information may be performed by a processor calling corresponding instructions stored in a memory, or may be performed by a prompt unit run by the processor.
  • FIG. 5 is a flowchart of a user state detection method according to some embodiments of the present application.
  • the embodiment shown in FIG. 5 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
  • the user state detection method in this embodiment includes:
• each user state level corresponds to a preset condition; the preset condition satisfied by the result of the user's fatigue state detection, the result of the user's distraction state detection, and the result of the user's predetermined distraction action detection can be determined, and the state level corresponding to the satisfied preset condition can be determined as the result of the user state detection for the user.
• the user state level may include, for example, a normal driving state (also referred to as a focused driving level), a prompt driving state (a poor driving state), and a warning driving state (a very poor driving state).
  • the embodiment shown in FIG. 5 above may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by an output module run by the processor.
• the preset conditions corresponding to a normal driving state may include:
• Condition 1, the result of the user's fatigue state detection is: no fatigue state detected;
• Condition 2, the result of the user's distraction state detection is: the user's attention is focused;
• Condition 3, the result of the user's predetermined distraction action detection is: no predetermined distraction action detected, or the undistracted level.
• if all of the above preset conditions are satisfied, the driving state level is the normal driving state (also referred to as the focused driving level).
• the preset conditions corresponding to the prompt driving state may include:
• Condition 11, the result of the user's fatigue state detection is: a prompt fatigue driving level (also referred to as a mildly fatigued driving level);
• Condition 22, the result of the user's distraction state detection is: the user's attention is mildly distracted;
• Condition 33, the result of the user's predetermined distraction action detection is: a prompt distracted driving level (also referred to as a mildly distracted driving level).
• if any of the above preset conditions is satisfied, the driving state level is the prompt driving state (a poor driving state).
  • the preset conditions corresponding to the warning driving state may include:
  • Condition 111 The result of detecting the fatigue state of the user is: a warning fatigue driving level (also referred to as a severe fatigue driving level);
  • Condition 222 the result of the user's distraction detection is: the user's attention is seriously distracted;
• Condition 333, the result of the user's predetermined distraction action detection is: a warning distracted driving level (also referred to as a severely distracted driving level).
• if any of the above preset conditions is satisfied, the driving state level is the warning driving state (a very poor driving state).
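• As an illustrative sketch (the labels and the mapping are assumptions), the three detection results might be fused into a single driving state level as follows:

```python
def driving_state_level(fatigue, distraction, action):
    """Combine the three detection results into one state level.
    Each argument is 'normal', 'prompt', or 'warning' (labels are placeholders)."""
    results = (fatigue, distraction, action)
    if "warning" in results:
        return "warning_driving_state"  # very poor driving state
    if "prompt" in results:
        return "prompt_driving_state"   # poor driving state
    return "normal_driving_state"       # focused driving level
```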
  • the user state detection method may further include:
  • a control operation corresponding to the result of the user state detection is performed.
  • the execution of the control operation corresponding to the result of the user state detection may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a control unit executed by the processor.
  • performing a control operation corresponding to the result of the user state detection may include at least one of the following:
• if the determined result of the user state detection satisfies a prompt/alarm predetermined condition, output prompt/alarm information corresponding to the prompt/alarm predetermined condition, for example, reminding the user by sound (such as voice or ringing), light (such as a light turning on or flashing), and/or vibration, so as to remind the user to pay attention, prompt the user to return distracted attention to driving, or urge the user to rest, in order to achieve safe driving and avoid road traffic accidents; and/or,
• if the determined result of the user state detection satisfies a predetermined driving-mode switching condition, for example, the preset condition corresponding to the warning driving state (for example, a very poor driving state) is satisfied or the driving state level is the warning distracted driving level (also referred to as the severely distracted driving level), switch the driving mode to the automatic driving mode to achieve safe driving and avoid road traffic accidents; at the same time, the user may also be reminded by sound (such as voice or ringing), light (such as a light turning on or flashing), and/or vibration, so as to prompt the user to return distracted attention to driving or urge the user to rest; and/or,
• if the determined result of the user state detection satisfies a predetermined information sending condition, send predetermined information to a predetermined contact or establish a communication connection with the predetermined contact. For example, certain predetermined actions of the user may indicate that the user is in a dangerous state or needs help; when these actions are detected, predetermined information (such as alarm information, prompt information, or a dialed phone call) is sent to predetermined contact information (for example, an alarm number, the nearest contact's number, or a preset emergency contact's number), and a communication connection (such as a video call, voice call, or phone call) may also be established directly with the predetermined contact through the in-vehicle device, so as to protect the user's personal and/or property safety.
  • the vehicle control method further includes: sending at least part of a result of the user status detection to the cloud server.
  • At least part of the results include: abnormal vehicle state information determined according to user state detection.
• sending some or all of the results obtained from the user state detection to the cloud server enables backup of abnormal vehicle state information. Since a normal vehicle state does not need to be recorded, this embodiment sends only the abnormal vehicle state information to the cloud server: when the obtained user state detection results include both normal and abnormal vehicle state information, part of the results is transmitted, that is, only the abnormal vehicle state information is sent to the cloud server; and when all results of the user state detection are abnormal vehicle state information, all of the abnormal vehicle state information is transmitted to the cloud server.
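• A non-authoritative sketch of this "upload only abnormal states" policy, assuming a JSON-over-HTTP endpoint and record schema that are not specified in the original:

```python
import json
import urllib.request

def upload_abnormal_states(results, server_url):
    """Send only abnormal vehicle-state records to the cloud server.
    `results` is a list of dicts with an 'abnormal' flag; URL and schema are assumptions."""
    abnormal = [r for r in results if r.get("abnormal")]
    if not abnormal:
        return 0  # normal states need not be recorded
    req = urllib.request.Request(
        server_url,
        data=json.dumps(abnormal).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # consume the server response
    return len(abnormal)
```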
• the vehicle control method further includes: storing a face image corresponding to the abnormal vehicle state information; and/or, sending the face image corresponding to the abnormal vehicle state information to the cloud server.
  • the face image corresponding to the abnormal vehicle status information is stored locally on the vehicle side to realize evidence preservation.
• when a problem occurs later, responsibility can be determined by retrieving the saved face images: if an abnormal vehicle state related to the problem is found in the saved face images, the responsibility can be attributed to the corresponding user. In order to prevent the data on the vehicle side from being deleted by mistake or deliberately, the face image corresponding to the abnormal vehicle state information can be uploaded to the cloud server for backup; when the information is needed, it can be downloaded from the cloud server to the vehicle side, or to other clients, for viewing.
• one of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
  • FIG. 6 is a schematic structural diagram of a vehicle-mounted intelligent system according to some embodiments of the present application.
  • the in-vehicle intelligent system of this embodiment may be used to implement each of the foregoing vehicle control method embodiments of the present application.
  • the vehicle-mounted intelligent system of this embodiment includes:
  • the user image obtaining unit 61 is configured to obtain a face image of a user who currently requests to use the vehicle.
  • the matching unit 62 is configured to obtain a feature matching result between a face image and at least one pre-stored face image in a data set of the vehicle.
• the data set stores at least one pre-recorded pre-stored face image of a user who is allowed to use the vehicle.
  • the vehicle control unit 63 is configured to control the action of the vehicle to allow the user to use the vehicle if the feature matching result indicates that the feature matching is successful.
• based on the in-vehicle intelligent system provided by the foregoing embodiments of this application, a face image of a user currently requesting to use the vehicle is obtained; a feature matching result between the face image and at least one pre-stored face image in the data set of the vehicle is obtained; and if the feature matching result indicates that feature matching is successful, the vehicle action is controlled to allow the user to use the vehicle. Feature-based matching guarantees the rights of pre-recorded persons, and feature matching can be achieved without a network, which overcomes dependence on the network and further improves the security of the vehicle.
  • the used vehicle includes one or any combination of the following: reservation of a vehicle, driving, riding, cleaning a vehicle, maintaining a vehicle, repairing a vehicle, refueling a vehicle, and charging a vehicle.
  • a pre-stored face image of at least one user who has reserved a ride is stored in the data set
  • the vehicle control unit 63 is configured to control opening of a vehicle door.
  • a pre-stored face image of at least one user who has reserved a car is stored in the data set
  • the vehicle control unit 63 is configured to control the opening of a door and release the driving control right of the vehicle.
  • the data set stores at least one recorded pre-stored face image of a user who is allowed to ride;
  • the vehicle control unit 63 is configured to control opening of a vehicle door.
  • the data set stores at least one recorded pre-stored face image of a user who is allowed to use the car;
  • the vehicle control unit 63 is configured to control the opening of a door and release the driving control right of the vehicle.
  • the data set stores at least one pre-stored face image of a user who has been scheduled to unlock or has recorded permission to unlock;
  • the vehicle control unit 63 is configured to control unlocking of a vehicle lock.
• the data set stores at least one pre-stored face image of a user who has reserved refueling of the vehicle or who has been recorded as allowed to refuel the vehicle;
  • the vehicle control unit 63 is configured to control opening of a fuel filler of the vehicle.
• the data set stores at least one pre-stored face image of a user who has reserved charging of the vehicle or who has been recorded as allowed to charge the vehicle;
• the vehicle control unit 63 is configured to control to allow a charging device to connect to the battery of the vehicle.
  • the vehicle control unit 63 is further configured to control the vehicle to issue prompt information indicating that the user is permitted to use the vehicle.
  • the user image obtaining unit 61 is configured to collect a face image of the user through a photographing component provided on the vehicle.
• the in-vehicle intelligent system further includes: a first data downloading unit, configured to send a data set download request to the cloud server when the vehicle and the cloud server are in a communication connection state, and to receive and store the data set sent by the cloud server.
  • the vehicle-mounted intelligent system may further include:
  • An information storage unit is configured to: if the feature matching result indicates that the feature matching is successful, obtain the identity information of the user according to the pre-stored face image of the successful feature matching; and send the face image and identity information to the cloud server.
  • the in-vehicle intelligent system may further include: a living body detection unit configured to obtain a living body detection result of a face image;
  • the vehicle control unit 63 is configured to control a vehicle motion to allow a user to use the vehicle according to a feature matching result and a living body detection result.
  • the vehicle-mounted intelligent system further includes:
  • the second data downloading unit is configured to send a data set download request to the mobile terminal device when the vehicle and the mobile terminal device are in a communication connection state; receive and store the data set sent by the mobile terminal device.
  • the data set is acquired by the mobile terminal device from the cloud server and sent to the vehicle when the data set download request is received.
  • the vehicle control unit 63 is further configured to, if the feature matching result indicates that the feature matching is unsuccessful, control the vehicle action to reject the user from using the vehicle.
  • the vehicle-mounted intelligent system further includes:
• a reservation unit, configured to send out reservation prompt information; receive a user's reservation request according to the reservation information, where the user's reservation request includes the user's reservation face image; and establish a data set based on the reservation face image.
  • the vehicle-mounted intelligent system further includes:
  • a state detection unit for detecting a user state based on a face image
  • An output unit is used to provide an early warning prompt for an abnormal state according to the result of the user state detection.
  • the results of the user status detection may be output.
  • intelligent driving control of the vehicle may be performed according to a result of detection of the user state.
  • the result of user state detection may be output, and at the same time, intelligent driving control of the vehicle may be performed according to the result of user state detection.
  • the user state detection includes any one or more of the following: user fatigue state detection, user distraction state detection, and user predetermined distraction action detection.
  • the state detection unit when the state detection unit performs user fatigue state detection based on the face image, the state detection unit is configured to:
• detect at least a partial region of the face in the face image to obtain state information of the at least partial face region, where the state information of the at least partial face region includes any one or more of the following: eye open/closed state information, mouth open/closed state information; obtain, according to the state information of the at least partial face region within a period of time, the parameter value of an index used to characterize the user's fatigue state;
  • a result of detecting the fatigue state of the user is determined according to a parameter value of an index used to represent the fatigue state of the user.
  • the index used to characterize the fatigue state of the user includes any one or more of the following: the degree of eyes closed, the degree of yawning.
• the parameter value of the degree of eye closure includes any one or more of the following: number of eye closures, eye-closure frequency, eye-closure duration, eye-closure amplitude, number of half-closures, half-closure frequency; and/or,
  • the parameter values of the yawning degree include any one or more of the following: yawning status, number of yawning, duration of yawning, and frequency of yawning.
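• For illustration only, parameter values such as the eye-closure statistics listed above might be derived from a sequence of per-frame eye-openness scores; the score itself, the thresholds, and the returned field names are assumptions:

```python
def eye_closure_stats(eye_open_probs, fps, closed_thresh=0.2, half_thresh=0.5):
    """Derive eye-closure parameter values from per-frame eye-openness scores."""
    closed = [p < closed_thresh for p in eye_open_probs]
    half = [closed_thresh <= p < half_thresh for p in eye_open_probs]
    # Count closure events (runs of consecutive closed frames) and the longest run.
    events, run, longest = 0, 0, 0
    for c in closed:
        run = run + 1 if c else 0
        longest = max(longest, run)
        if run == 1:
            events += 1
    duration_s = len(eye_open_probs) / fps
    return {
        "closure_count": events,
        "closure_freq_per_min": events / duration_s * 60,
        "longest_closure_s": longest / fps,
        "half_closed_ratio": sum(half) / len(eye_open_probs),
    }
```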
  • the state detection unit when the state detection unit performs user distraction state detection based on the face image, the state detection unit is configured to:
• detect the face orientation and/or line-of-sight direction of the face image to obtain face orientation information and/or line-of-sight direction information, and determine, according to the face orientation information and/or line-of-sight direction information within a period of time, the parameter value of an index used to characterize the user's distraction state, where the index used to characterize the user's distraction state includes any one or more of the following: degree of face orientation deviation, degree of line-of-sight deviation;
  • the detection result of the user's distraction state is determined according to a parameter value of an index for characterizing the user's distraction state.
  • the parameter value of the deviation degree of the face orientation includes any one or more of the following: the number of turns, the duration of the turn, and the frequency of the turn; and / or,
  • the parameter values of the degree of line of sight deviation include any one or more of the following: the angle of line of sight deviation, the length of time of line of sight deviation, and the frequency of line of sight deviation.
• when the state detection unit detects the face orientation and/or line-of-sight direction of the face image, it is configured to: detect facial keypoints of the face image; and perform face orientation and/or line-of-sight direction detection based on the facial keypoints.
• when the state detection unit performs face orientation detection based on the facial keypoints to obtain the face orientation information, it is configured to: obtain feature information of the head pose according to the facial keypoints; and determine the face orientation information according to the feature information of the head pose.
  • the predetermined distraction action includes any one or more of the following: smoking action, drinking action, eating action, calling action, and entertaining action.
• when the state detection unit performs the user's predetermined distraction action detection based on the face image, it is configured to: perform target object detection corresponding to the predetermined distraction action on the face image to obtain a detection frame of the target object; and determine whether the predetermined distraction action occurs according to the detection frame of the target object.
  • the state detection unit is further configured to:
• if a predetermined distraction action occurs, obtain, according to the determination result of whether the predetermined distraction action occurs within a period of time, the parameter value of an index used to characterize the user's degree of distraction;
  • a result of detecting a user's predetermined distraction action is determined according to a parameter value of an index used to represent the degree of distraction of the user.
  • the parameter value of the index for characterizing the degree of distraction of the user includes any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action.
  • the vehicle-mounted intelligent system further includes:
• the prompt unit is configured to prompt the detected predetermined distraction action if the result of the user's predetermined distraction action detection is that a predetermined distraction action is detected.
  • the vehicle-mounted intelligent system further includes:
  • the control unit is configured to perform a control operation corresponding to a result of the user state detection.
• the control unit is configured to: if the determined result of the user state detection satisfies a prompt/alarm predetermined condition, output prompt/alarm information corresponding to the prompt/alarm predetermined condition; and/or, if the determined result of the user state detection satisfies a predetermined information sending condition, send predetermined information to a predetermined contact or establish a communication connection with the predetermined contact; and/or, if the determined result of the user state detection satisfies a predetermined driving-mode switching condition, switch the driving mode to the automatic driving mode.
  • the vehicle-mounted intelligent system further includes:
  • the result sending unit is configured to send at least part of the results of the user status detection to the cloud server.
  • At least part of the results include: abnormal vehicle state information determined according to user state detection.
  • the vehicle-mounted intelligent system further includes:
• an image storage unit, configured to store a face image corresponding to the abnormal vehicle state information; and/or, to send the face image corresponding to the abnormal vehicle state information to the cloud server.
  • FIG. 7 is a flowchart of a vehicle control method according to some embodiments of the present application.
  • the execution subject of the vehicle control method in this embodiment may be a cloud server.
  • the execution subject may be an electronic device or other device with similar functions.
  • the method in this embodiment includes:
  • the face image to be identified is collected by the vehicle, and the process of obtaining the face image may include: face detection, face quality screening, and living body recognition.
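• As a sketch of the acquisition pipeline just described (face detection, face quality screening, and living-body recognition performed in turn), where the detector, quality, and liveness callables are placeholders rather than components of the original disclosure:

```python
def acquire_face_image(camera, detector, quality_fn, liveness_fn,
                       quality_thresh=0.8):
    """Run face detection, quality screening, and living-body recognition in turn."""
    frame = camera.read()
    faces = detector(frame)                    # face detection
    if not faces:
        return None
    face = max(faces, key=lambda f: f.score)   # keep the most confident face
    crop = frame[face.y1:face.y2, face.x1:face.x2]
    if quality_fn(crop) < quality_thresh:      # face quality screening
        return None
    if not liveness_fn(crop):                  # living-body recognition
        return None
    return crop
```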
  • the operation 710 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by an image receiving unit 81 executed by the processor.
• the data set stores at least one pre-recorded pre-stored face image of a user who is allowed to use the vehicle; optionally, the cloud server may directly obtain, from the vehicle, the feature matching result between the face image and the at least one pre-stored face image in the data set; in this case, the feature matching process is implemented on the vehicle side.
  • the operation 720 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a matching result obtaining unit 82 executed by the processor.
  • the operation 730 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by an instruction sending unit 83 executed by the processor.
  • the vehicle control method further includes:
• the data set is usually stored in a cloud server, but face matching needs to be implemented on the vehicle side. In order to be able to match faces even when there is no network, the data set can be downloaded from the cloud server when a network is available and saved on the vehicle side. Then, even if there is no network and communication with the cloud server is impossible, face matching can still be achieved on the vehicle side, and it is convenient for the vehicle side to manage the data set.
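• A minimal sketch of this download-when-networked, match-locally-otherwise behavior; the file path, download callable, and error handling are assumptions:

```python
import os

def ensure_local_dataset(local_path, download_fn, network_available):
    """Refresh the local copy of the data set from the cloud when a network is
    available; otherwise fall back to whatever local copy already exists."""
    if network_available:
        try:
            data = download_fn()              # fetch data set from the cloud server
            with open(local_path, "wb") as f:
                f.write(data)
        except OSError:
            pass  # keep the existing local copy on any I/O failure
    return local_path if os.path.exists(local_path) else None
```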
  • the vehicle control method further includes:
• a reservation request sent by the vehicle or a mobile terminal device is received, where the reservation request includes the user's reservation face image, and a data set is established based on the reservation face image.
• in this embodiment, a data set is established in the cloud server for the reservation face images, and multiple reservation face images are stored in the data set.
  • the user's reserved face image is saved by the cloud server, ensuring the security of the data.
  • the vehicle control method further includes:
  • At least part of the results include: abnormal vehicle state information determined according to user state detection.
• sending some or all of the results obtained from the user state detection to the cloud server enables backup of abnormal vehicle state information. Since a normal vehicle state does not need to be recorded, this embodiment sends only the abnormal vehicle state information to the cloud server: when the obtained user state detection results include both normal and abnormal vehicle state information, part of the results is transmitted, that is, only the abnormal vehicle state information is sent to the cloud server; and when all results of the user state detection are abnormal vehicle state information, all of the abnormal vehicle state information is transmitted to the cloud server.
  • the vehicle control method further includes: performing a control operation corresponding to a result of the user state detection.
• if the determined result of the user state detection satisfies a prompt/alarm predetermined condition, for example, the preset condition corresponding to the prompt driving state (e.g., a poor driving state) is satisfied or the state level is the prompt driving state (e.g., a poor driving state), output prompt/alarm information corresponding to the prompt/alarm predetermined condition, for example, reminding the user by sound (such as voice or ringing), light (such as a light turning on or flashing), and/or vibration, so as to remind the user to pay attention, prompt the user to return distracted attention to driving, or urge the user to rest, in order to achieve safe driving and avoid road traffic accidents; and/or,
• if the determined result of the user state detection satisfies a predetermined driving-mode switching condition, for example, the preset condition corresponding to the warning driving state (for example, a very poor driving state) is satisfied or the driving state level is the warning distracted driving level (also referred to as the severely distracted driving level), switch the driving mode to the automatic driving mode to achieve safe driving and avoid road traffic accidents; at the same time, the user may also be reminded by sound (such as voice or ringing), light (such as a light turning on or flashing), and/or vibration, so as to prompt the user to return distracted attention to driving or urge the user to rest; and/or,
• if the determined result of the user state detection satisfies a predetermined information sending condition, send predetermined information to a predetermined contact or establish a communication connection with the predetermined contact. For example, certain predetermined actions of the user may indicate that the user is in a dangerous state or needs help; when these actions are detected, predetermined information (such as alarm information, prompt information, or a dialed phone call) is sent to predetermined contact information (for example, an alarm number, the nearest contact's number, or a preset emergency contact's number), and a communication connection (such as a video call, voice call, or phone call) may also be established directly with the predetermined contact through the in-vehicle device, so as to protect the user's personal and/or property safety.
  • the vehicle control method further includes: receiving a face image corresponding to the abnormal vehicle state information sent by the vehicle.
  • the face image corresponding to the abnormal vehicle status information can be uploaded to the cloud server for backup.
• when the information is needed, it can be downloaded from the cloud server to the vehicle side for viewing, or downloaded from the cloud server to other clients for viewing.
  • the vehicle control method further includes: performing at least one of the following operations based on the abnormal vehicle status information:
• the cloud server can receive abnormal vehicle state information from multiple vehicles, and can perform big-data-based statistics, vehicle management, and user management, so as to better serve vehicles and users.
  • performing data statistics based on abnormal vehicle status information includes:
  • the received facial images corresponding to the abnormal vehicle status information are counted, so that the facial images are classified according to different abnormal vehicle statuses, and the statistical status of each abnormal vehicle status is determined.
• through the classification and statistics of each different abnormal vehicle state, the abnormal vehicle states that users frequently exhibit can be obtained based on big data. This can provide reference data for vehicle developers, so that settings or devices in the vehicle can be made more suitable for these abnormal states, providing users with a more comfortable in-vehicle environment.
  • performing vehicle management based on abnormal vehicle status information includes:
  • the received facial images corresponding to the abnormal vehicle state information are counted, so that the facial images are classified according to different vehicles, and the abnormal vehicle statistics of each vehicle are determined.
• the abnormal vehicle state information of all users corresponding to a vehicle can be processed; for example, when a problem occurs in a vehicle, responsibility can be determined by viewing all the abnormal vehicle state information corresponding to that vehicle.
  • performing user management based on abnormal vehicle status information includes:
  • the received facial images corresponding to the abnormal vehicle state information are processed, so that the facial images are classified according to different users, and the abnormal vehicle statistics of each user are determined.
• each user's vehicle-use habits and frequently occurring problems can be obtained, and personalized services can be provided for each user, achieving the purpose of safe vehicle use without disturbing users with good habits. For example, if the statistics of the abnormal vehicle state information determine that a driver often yawns while driving, prompt information can be provided at a higher volume for that driver.
• one of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
  • FIG. 8 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • the electronic device in this embodiment may be used to implement the foregoing embodiments of the vehicle control methods of the present application.
  • the electronic device of this embodiment includes:
  • Image receiving unit 81 for receiving a face image to be identified sent by a vehicle.
  • the matching result obtaining unit 82 is configured to obtain a feature matching result between the face image and at least one pre-stored face image in the data set.
• the data set stores at least one pre-recorded pre-stored face image of a user who is allowed to use the vehicle.
  • the cloud server may directly obtain the feature matching result of the face image and at least one pre-stored face image in the data set from the vehicle. At this time, the feature matching process is implemented on the vehicle side.
  • An instruction sending unit 83 is configured to: if the feature matching result indicates that the feature matching is successful, send an instruction to the vehicle allowing the vehicle to be controlled.
  • the electronic device further includes:
• a data sending unit, configured to receive a data set download request sent by the vehicle, where the data set stores at least one pre-recorded pre-stored face image of a user who is allowed to use the vehicle, and to send the data set to the vehicle.
  • the electronic device further includes:
• a reservation request receiving unit, configured to receive a reservation request sent by a vehicle or a mobile terminal device, where the reservation request includes a user's reservation face image, and to establish a data set based on the reservation face image.
  • the electronic device further includes:
  • the detection result receiving unit is configured to receive at least part of the results of the user status detection sent by the vehicle, and provide an early warning prompt for abnormal vehicle status.
  • At least part of the results include: abnormal vehicle state information determined according to user state detection.
  • the electronic device further includes: an execution control unit, configured to execute a control operation corresponding to a result of the user state detection.
• the execution control unit is configured to: if the determined result of the user state detection satisfies a prompt/alarm predetermined condition, output prompt/alarm information corresponding to the prompt/alarm predetermined condition; and/or, if the determined result of the user state detection satisfies a predetermined information sending condition, send predetermined information to a predetermined contact or establish a communication connection with the predetermined contact; and/or, if the determined result of the user state detection satisfies a predetermined driving-mode switching condition, switch the driving mode to the automatic driving mode.
  • the electronic device further includes:
  • the state image receiving unit is configured to receive a face image corresponding to the abnormal vehicle state information sent by the vehicle.
  • the electronic device further includes:
  • the abnormality processing unit is configured to perform at least one of the following operations based on abnormal vehicle status information: data statistics, vehicle management, and user management.
• when the abnormality processing unit performs data statistics based on the abnormal vehicle state information, it is configured to perform statistics on the received face images corresponding to the abnormal vehicle state information, so that the face images are classified by abnormal vehicle state, and the statistical situation of each abnormal vehicle state is determined.
• when the abnormality processing unit performs vehicle management based on the abnormal vehicle state information, it is configured to perform statistics on the received face images corresponding to the abnormal vehicle state information, so that the face images are classified by vehicle, and the abnormal vehicle-use statistics of each vehicle are determined.
• when the abnormality processing unit performs user management based on the abnormal vehicle state information, it is configured to process the received face images corresponding to the abnormal vehicle state information, so that the face images are classified by user, and the abnormal vehicle-use statistics of each user are determined.
  • a vehicle management system including: a vehicle and / or a cloud server;
  • the vehicle is used to execute the vehicle control method of any one of the embodiments shown in FIGS. 1-5;
  • the cloud server is configured to execute a vehicle control method according to any one of the embodiments shown in FIG. 7.
• the vehicle management system further includes: a mobile terminal device, configured to: receive a user registration request, where the user registration request includes a registered face image of the user; and send the user registration request to the cloud server.
• the vehicle management system of this embodiment realizes face matching on the vehicle client side and does not need to rely on the cloud for face matching, reducing the traffic cost of real-time data transmission. It offers high flexibility and low dependence on the network: downloading the vehicle data set is completed when a network is available, and feature extraction is performed when the vehicle is idle, further reducing dependence on the network.
• when users request to use the vehicle, the process can be independent of the network, and the comparison result is uploaded once a network is available after successful authentication.
  • FIG. 9 is a flowchart of using a vehicle management system according to some embodiments of the present application.
• in some optional examples, the reservation process of the foregoing embodiments is implemented on a mobile phone (mobile terminal device); the screened face image and user ID information are uploaded to the cloud server, and the cloud server stores the face image and user ID information in the reservation data set.
• the reservation data set is downloaded by the vehicle to the vehicle client for matching; the vehicle obtains the requester's image, and the requester's image undergoes face detection, quality screening, and living-body recognition in turn.
• the requesting face image is then matched against all the face images in the reservation data set.
  • the matching is realized based on the face features.
• the face features can be obtained through neural network extraction. Based on the comparison result, it is determined whether the requesting face image belongs to a person with a reservation; if so, the reserved use of the vehicle is allowed.
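• As a hedged illustration of comparing neural-network face features (the patent does not specify the metric; cosine similarity and the threshold below are assumptions):

```python
import numpy as np

def match_face(query_feat, dataset_feats, threshold=0.6):
    """Compare a query face feature with all pre-stored features by cosine
    similarity; returns the best-matching index, or None if below threshold."""
    q = query_feat / np.linalg.norm(query_feat)
    d = dataset_feats / np.linalg.norm(dataset_feats, axis=1, keepdims=True)
    sims = d @ q                      # cosine similarity against every stored feature
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None
```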
• the parties involved may include a mobile device (such as a mobile phone), a cloud server, and the vehicle side (such as a car machine).
• the mobile phone takes pictures and performs quality screening, and then the photos and personnel information are uploaded to cloud storage to complete the reservation process.
  • the cloud synchronizes personnel information to the vehicle.
• after the car machine performs face recognition and comparison based on the personnel information, it makes an intelligent judgment and notifies the cloud to update the user status.
  • an electronic device including: a memory for storing executable instructions;
  • a processor for communicating with the memory to execute executable instructions to complete the vehicle control method of any one of the above embodiments.
  • FIG. 10 is a schematic structural diagram of an application example of an electronic device according to some embodiments of the present application.
  • the electronic device includes one or more processors, a communication unit, and the like.
• the one or more processors include, for example, one or more central processing units (CPUs) 1001 and/or one or more acceleration units 1013.
  • the acceleration unit may include, but is not limited to, GPU, FPGA, other types of special-purpose processors, etc.
• the processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 1002 or executable instructions loaded from a storage portion 1008 into a random access memory (RAM) 1003.
  • the communication unit 1012 may include, but is not limited to, a network card.
  • the network card may include, but is not limited to, an IB (Infiniband) network card.
  • the processor may communicate with the read-only memory 1002 and / or the random access memory 1003 to execute executable instructions.
• the processor is connected to the communication unit 1012 via a bus 1004 and communicates with other target devices via the communication unit 1012, thereby completing operations corresponding to any method provided in the embodiments of this application, for example: obtaining a face image of a user currently requesting to use the vehicle; obtaining a feature matching result between the face image and at least one pre-stored face image in the data set of the vehicle; and, if the feature matching result indicates that feature matching is successful, controlling the vehicle action to allow the user to use the vehicle.
  • the RAM 1003 can store various programs and data required for the operation of the device.
  • the CPU 1001, the ROM 1002, and the RAM 1003 are connected to each other through a bus 1004.
  • ROM 1002 is an optional module.
  • the RAM 1003 stores executable instructions, or writes executable instructions to the ROM 1002 at run time, and the executable instructions cause the central processing unit 1001 to perform operations corresponding to any of the foregoing methods in this application.
  • An input / output (I / O) interface 1005 is also connected to the bus 1004.
  • the communication unit 1012 may be integratedly configured, or may be configured to have multiple sub-modules (for example, multiple IB network cards) and be on a bus link.
• the following components are connected to the I/O interface 1005: an input portion 1006 including a keyboard, a mouse, and the like; an output portion 1007 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 1008 including a hard disk and the like; and a communication portion 1009 including a network interface card such as a LAN card or a modem. The communication portion 1009 performs communication processing via a network such as the Internet.
• a drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is installed on the drive 1010 as needed, so that a computer program read therefrom is installed into the storage portion 1008 as needed.
  • FIG. 10 is only an optional implementation manner.
  • the number and types of components in FIG. 10 may be selected, deleted, added or replaced according to actual needs.
  • Different function components can also be implemented in separate settings or integrated settings.
  • the acceleration unit 1013 and the CPU 1001 can be separated or the acceleration unit 1013 can be integrated on the CPU 1001.
• the communication portion can be provided separately, or can be integrated on the CPU 1001 or on the acceleration unit 1013, and so on.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product including a computer program tangibly embodied on a machine-readable medium, the computer program including program code for performing a method shown in a flowchart, and the program code may include a corresponding The instructions corresponding to the steps of the vehicle control method provided by any embodiment of the present application are executed.
  • the computer program may be downloaded and installed from a network through the communication section 1009, and / or installed from a removable medium 1011.
• when the computer program is executed by a processor, the corresponding operations in any method of this application are performed.
  • a computer storage medium for storing a computer-readable instruction, and when the instruction is executed, the operation of the vehicle control method of any one of the foregoing embodiments is performed.
• the methods, apparatuses, systems, and devices of this application may be implemented in many ways.
• for example, the methods, apparatuses, systems, and devices of this application can be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above order of the steps of the method is for illustration only, and the steps of the method of the present application are not limited to the order described above, unless otherwise specifically stated.
  • the present application can also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing the method according to the present application.
  • the present application also covers a recording medium storing a program for executing the method according to the present application.

Abstract

A vehicle control method and system, an in-vehicle intelligent system, an electronic device, and a medium. The method includes: obtaining a face image of a user currently requesting to use a vehicle (110); obtaining a feature matching result between the face image and at least one pre-stored face image in a data set of the vehicle (120), where the data set stores at least one pre-recorded pre-stored face image of a user allowed to use the vehicle; and, if the feature matching result indicates that feature matching is successful, controlling a vehicle action to allow the user to use the vehicle (130). Feature-based matching guarantees the rights of pre-recorded persons, and feature matching can be achieved without a network, which overcomes dependence on the network and further improves the security of the vehicle.

Description

Vehicle control method and system, in-vehicle intelligent system, electronic device, and medium
This application claims priority to the Chinese patent application with application number CN 201810565700.3, entitled "Vehicle control method and system, in-vehicle intelligent system, electronic device, medium", filed with the Chinese Patent Office on June 4, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to vehicle intelligent recognition technology, and in particular to a vehicle control method and system, an in-vehicle intelligent system, an electronic device, and a medium.
Background
An intelligent vehicle is a comprehensive system integrating functions such as environment perception, planning and decision-making, and multi-level driving assistance. It makes concentrated use of technologies such as computing, modern sensing, information fusion, communication, artificial intelligence, and automatic control, and is a typical high-tech complex. Current research on intelligent vehicles is mainly devoted to improving the safety and comfort of automobiles and providing an excellent human-vehicle interaction interface. In recent years, intelligent vehicles have become a research hotspot in the field of vehicle engineering worldwide and a new driving force for the growth of the automotive industry, and many developed countries have incorporated them into the intelligent transportation systems they are each focusing on developing.
Summary
Embodiments of this application provide a vehicle control method and system, an in-vehicle intelligent system, an electronic device, and a medium.
According to one aspect of the embodiments of this application, a vehicle control method is provided, including:
obtaining a face image of a user currently requesting to use a vehicle;
obtaining a feature matching result between the face image and at least one pre-stored face image in a data set of the vehicle, where the data set stores at least one pre-recorded pre-stored face image of a user allowed to use the vehicle;
if the feature matching result indicates that feature matching is successful, controlling a vehicle action to allow the user to use the vehicle.
Optionally, using the vehicle includes one or any combination of the following: reserving a vehicle, driving, riding, washing the vehicle, maintaining the vehicle, repairing the vehicle, refueling the vehicle, and charging the vehicle.
Optionally, the data set stores at least one pre-stored face image of a user who has reserved a ride;
controlling the vehicle action to allow the user to use the vehicle includes: controlling opening of a vehicle door.
Optionally, the data set stores at least one pre-stored face image of a user who has reserved use of the vehicle;
controlling the vehicle action to allow the user to use the vehicle includes: controlling opening of a vehicle door and releasing driving control of the vehicle.
Optionally, the data set stores at least one recorded pre-stored face image of a user allowed to ride;
controlling the vehicle action to allow the user to use the vehicle includes: controlling opening of a vehicle door.
Optionally, the data set stores at least one recorded pre-stored face image of a user allowed to use the vehicle;
controlling the vehicle action to allow the user to use the vehicle includes: controlling opening of a vehicle door and releasing driving control of the vehicle.
Optionally, the data set stores at least one pre-stored face image of a user who has reserved unlocking or whose permission to unlock has been recorded;
controlling the vehicle action to allow the user to use the vehicle includes: controlling unlocking of a vehicle lock.
Optionally, the data set stores at least one pre-stored face image of a user who has reserved refueling of the vehicle or who has been recorded as allowed to refuel the vehicle;
controlling the vehicle action to allow the user to use the vehicle includes: controlling opening of the fuel filler of the vehicle.
Optionally, the data set stores at least one pre-stored face image of a user who has reserved charging of the vehicle or who has been recorded as allowed to charge the vehicle;
controlling the vehicle action to allow the user to use the vehicle includes: controlling to allow a charging device to connect to a battery of the vehicle.
Optionally, the method further includes: controlling the vehicle to issue prompt information indicating that the user is allowed to use the vehicle.
Optionally, obtaining the face image of the user currently requesting to use the vehicle includes:
collecting the face image of the user through a photographing component provided on the vehicle.
Optionally, the method further includes:
when the vehicle is in a communication connection state with a cloud server, sending a data set download request to the cloud server;
receiving and storing the data set sent by the cloud server.
Optionally, the method further includes:
if the feature matching result indicates that feature matching is successful, obtaining identity information of the user according to the pre-stored face image for which feature matching is successful;
sending the face image and the identity information to the cloud server.
Optionally, the method further includes: obtaining a living-body detection result of the face image;
controlling the vehicle action to allow the user to use the vehicle according to the feature matching result includes:
controlling the vehicle action to allow the user to use the vehicle according to the feature matching result and the living-body detection result.
Optionally, the method further includes:
when the vehicle is in a communication connection state with a mobile terminal device, sending a data set download request to the mobile terminal device;
receiving and storing the data set sent by the mobile terminal device.
Optionally, the data set is obtained from the cloud server and sent to the vehicle by the mobile terminal device when the data set download request is received.
Optionally, the method further includes:
if the feature matching result indicates that feature matching is unsuccessful, controlling the vehicle action to refuse the user the use of the vehicle.
Optionally, the method further includes:
sending out reservation prompt information;
receiving the user's reservation request according to the reservation information, where the user's reservation request includes the user's reservation face image;
establishing a data set based on the reservation face image.
Optionally, the method further includes:
performing user state detection based on the face image;
performing an early warning prompt for an abnormal state according to the result of the user state detection.
Optionally, the user state detection includes any one or more of the following: user fatigue state detection, user distraction state detection, and user predetermined distraction action detection.
Optionally, performing user fatigue state detection based on the face image includes:
detecting at least a partial region of the face in the face image to obtain state information of the at least partial face region, where the state information of the at least partial face region includes any one or more of the following: eye open/closed state information, mouth open/closed state information;
obtaining, according to the state information of the at least partial face region within a period of time, the parameter value of an index used to characterize the user's fatigue state;
determining the result of the user fatigue state detection according to the parameter value of the index used to characterize the user's fatigue state.
Optionally, the index used to characterize the user's fatigue state includes any one or more of the following: degree of eye closure, degree of yawning.
Optionally, the parameter value of the degree of eye closure includes any one or more of the following: number of eye closures, eye-closure frequency, eye-closure duration, eye-closure amplitude, number of half-closures, half-closure frequency; and/or,
the parameter value of the degree of yawning includes any one or more of the following: yawning state, number of yawns, yawn duration, yawn frequency.
Optionally, performing user distraction state detection based on the face image includes:
performing face orientation and/or line-of-sight direction detection on the face image to obtain face orientation information and/or line-of-sight direction information;
determining, according to the face orientation information and/or line-of-sight direction information within a period of time, the parameter value of an index used to characterize the user's distraction state, where the index used to characterize the user's distraction state includes any one or more of the following: degree of face orientation deviation, degree of line-of-sight deviation;
determining the result of the user distraction state detection according to the parameter value of the index used to characterize the user's distraction state.
Optionally, the parameter value of the degree of face orientation deviation includes any one or more of the following: number of head turns, head-turn duration, head-turn frequency; and/or,
the parameter value of the degree of line-of-sight deviation includes any one or more of the following: line-of-sight deviation angle, line-of-sight deviation duration, line-of-sight deviation frequency.
Optionally, performing face orientation and/or line-of-sight direction detection on the user in the face image includes:
detecting facial keypoints of the face image;
performing face orientation and/or line-of-sight direction detection based on the facial keypoints.
Optionally, performing face orientation detection based on the facial keypoints to obtain face orientation information includes:
obtaining feature information of the head pose according to the facial keypoints;
determining the face orientation information according to the feature information of the head pose.
Optionally, the predetermined distraction action includes any one or more of the following: a smoking action, a drinking action, an eating action, a phone-call action, an entertainment action.
Optionally, performing user predetermined distraction action detection based on the face image includes:
performing target object detection corresponding to the predetermined distraction action on the face image to obtain a detection frame of the target object;
determining whether the predetermined distraction action occurs according to the detection frame of the target object.
Optionally, the method further includes:
if a predetermined distraction action occurs, obtaining, according to the determination result of whether the predetermined distraction action occurs within a period of time, the parameter value of an index used to characterize the user's degree of distraction;
determining the result of the user predetermined distraction action detection according to the parameter value of the index used to characterize the user's degree of distraction.
Optionally, the parameter value of the index used to characterize the user's degree of distraction includes any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, the frequency of the predetermined distraction action.
Optionally, the method further includes:
if the result of the user predetermined distraction action detection is that a predetermined distraction action is detected, prompting the detected predetermined distraction action.
Optionally, the method further includes:
performing a control operation corresponding to the result of the user state detection.
Optionally, performing a control operation corresponding to the result of the user state detection includes at least one of the following:
if the determined result of the user state detection satisfies a prompt/alarm predetermined condition, outputting prompt/alarm information corresponding to the prompt/alarm predetermined condition; and/or,
if the determined result of the user state detection satisfies a predetermined information sending condition, sending predetermined information to a preset contact or establishing a communication connection with the preset contact; and/or,
if the determined result of the user state detection satisfies a predetermined driving-mode switching condition, switching the driving mode to an automatic driving mode.
Optionally, the method further includes:
sending at least part of the result of the user state detection to the cloud server.
Optionally, the at least part of the result includes: abnormal vehicle state information determined according to the user state detection.
Optionally, the method further includes:
storing the face image corresponding to the abnormal vehicle state information; and/or,
sending the face image corresponding to the abnormal vehicle state information to the cloud server.
According to another aspect of the embodiments of this application, an in-vehicle intelligent system is provided, including:
a user image obtaining unit, configured to obtain a face image of a user currently requesting to use a vehicle;
a matching unit, configured to obtain a feature matching result between the face image and at least one pre-stored face image in a data set of the vehicle, where the data set stores at least one pre-recorded pre-stored face image of a user allowed to use the vehicle;
a vehicle control unit, configured to control a vehicle action to allow the user to use the vehicle if the feature matching result indicates that feature matching is successful.
According to yet another aspect of the embodiments of this application, a vehicle control method is provided, including:
receiving a face image to be identified sent by a vehicle;
obtaining a feature matching result between the face image and at least one pre-stored face image in a data set, where the data set stores at least one pre-recorded pre-stored face image of a user allowed to use the vehicle;
if the feature matching result indicates that feature matching is successful, sending to the vehicle an instruction allowing control of the vehicle.
Optionally, the method further includes:
receiving a data set download request sent by the vehicle, where the data set stores at least one pre-recorded pre-stored face image of a user allowed to use the vehicle;
sending the data set to the vehicle.
Optionally, the method further includes:
receiving a reservation request sent by a vehicle or a mobile terminal device, where the reservation request includes a user's reservation face image;
establishing a data set based on the reservation face image.
Optionally, obtaining the feature matching result between the face image and at least one pre-stored face image in the data set includes:
obtaining, from the vehicle, the feature matching result between the face image and the at least one pre-stored face image in the data set.
Optionally, the method further includes:
receiving at least part of the result of the user state detection sent by the vehicle, and performing an early warning prompt for an abnormal vehicle state.
Optionally, the at least part of the result includes: abnormal vehicle state information determined according to the user state detection.
Optionally, the method further includes: performing a control operation corresponding to the result of the user state detection.
Optionally, performing a control operation corresponding to the result of the user state detection includes:
if the determined result of the user state detection satisfies a prompt/alarm predetermined condition, outputting prompt/alarm information corresponding to the prompt/alarm predetermined condition; and/or,
if the determined result of the user state detection satisfies a predetermined information sending condition, sending predetermined information to a predetermined contact or establishing a communication connection with the predetermined contact; and/or,
if the determined result of the user state detection satisfies a predetermined driving-mode switching condition, switching the driving mode to an automatic driving mode.
Optionally, the method further includes: receiving a face image corresponding to the abnormal vehicle state information sent by the vehicle.
Optionally, the method further includes: performing at least one of the following operations based on the abnormal vehicle state information:
data statistics, vehicle management, user management.
Optionally, performing data statistics based on the abnormal vehicle state information includes:
performing statistics, based on the abnormal vehicle state information, on the received face images corresponding to the abnormal vehicle state information, so that the face images are classified by abnormal vehicle state, and the statistical situation of each abnormal vehicle state is determined.
Optionally, performing vehicle management based on the abnormal vehicle state information includes:
performing statistics, based on the abnormal vehicle state information, on the received face images corresponding to the abnormal vehicle state information, so that the face images are classified by vehicle, and the abnormal vehicle-use statistics of each vehicle are determined.
Optionally, performing user management based on the abnormal vehicle state information includes:
processing, based on the abnormal vehicle state information, the received face images corresponding to the abnormal vehicle state information, so that the face images are classified by user, and the abnormal vehicle-use statistics of each user are determined.
根据本申请实施例的再一个方面,提供的一种电子设备,包括:
图像接收单元,用于接收车辆发送的待识别的人脸图像;
匹配结果获得单元,用于获得所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,其中,所述数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;
指令发送单元,用于如果所述特征匹配结果表示特征匹配成功,向所述车辆发送允许控制车辆的指令。
根据本申请实施例的还一个方面,提供的一种车辆控制系统,包括:车辆和/或云端服务器;
所述车辆用于执行上述任意一项所述的车辆控制方法;
所述云端服务器用于执行上述任意一项所述的车辆控制方法。
可选地,所述车辆控制系统还包括:移动端设备,用于:
接收用户注册请求,所述用户注册请求包括用户的注册人脸图像;
将所述用户注册请求发送给所述云端服务器。
根据本申请实施例的还一个方面,提供的一种电子设备,包括:存储器,用于存储可执行指令;
以及处理器,用于与所述存储器通信以执行所述可执行指令从而完成上述任意一项所述车辆控制方法。
根据本申请实施例的还一个方面,提供的一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现上述任意一项所述的车辆控制方法。
根据本申请实施例的另一个方面,提供的一种计算机存储介质,用于存储计算机可读取的指令,所述指令被执行时实现上述任意一项所述车辆控制方法。
基于本申请上述实施例提供的一种车辆控制方法和系统、车载智能系统、电子设备、介质,获取当前请求使用车辆的用户的人脸图像;获取人脸图像与车辆的数据集中至少一个预存人脸图像的特征匹配结果;如果特征匹配结果表示特征匹配成功,控制车辆动作以允许用户使用车辆,基于特征匹配保证了预先记录的人员的权利,并且可以在无网络的情况下实现特征匹配,克服了对网络的依赖性,进一步提高了车辆的安全保障性。
下面通过附图和实施例,对本申请的技术方案做进一步的详细描述。
附图说明
构成说明书的一部分的附图描述了本申请的实施例,并且连同描述一起用于解释本申请的原理。
参照附图,根据下面的详细描述,可以更加清楚地理解本申请,其中:
图1为本申请一些实施例的车辆控制方法的流程图。
图2为本申请一些实施例中基于人脸图像进行用户疲劳状态检测的流程图;
图3为本申请一些实施例中基于人脸图像进行用户分心状态检测的流程图;
图4为本申请一些实施例中基于人脸图像进行用户预定分心动作检测的流程图;
图5为本申请一些实施例的用户状态检测方法的流程图;
图6为本申请一些实施例的车载智能系统的结构示意图;
图7为本申请另一些实施例的车辆控制方法的流程图;
图8为本申请一些实施例的电子设备的结构示意图;
图9为本申请一些实施例的车辆管理系统的使用流程图;
图10为本申请一些实施例的电子设备的一个应用示例的结构示意图。
具体实施方式
现在将参照附图来详细描述本申请的各种示例性实施例。应注意到:除非另外具体说明,否则在这些实施例中阐述的部件和步骤的相对布置、数字表达式和数值不限制本申请的范围。
同时,应当明白,为了便于描述,附图中所示出的各个部分的尺寸并不是按照实际的比例关系绘制的。
以下对至少一个示例性实施例的描述实际上仅仅是说明性的,决不作为对本申请及其应用或使用的任何限制。
对于相关领域普通技术人员已知的技术、方法和设备可能不作详细讨论,但在适当情况下,所述技术、方法和设备应当被视为说明书的一部分。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步讨论。
本发明实施例可以应用于终端设备、计算机系统、服务器等电子设备,其可与众多其它通用或专用计算系统环境或配置一起操作。适于与终端设备、计算机系统、服务器等电子设备一起使用的众所周知的终端设备、计算系统、环境和/或配置的例子包括但不限于:个人计算机系统、服务器计算机系统、瘦客户机、厚客户机、手持或膝上设备、基于微处理器的系统、机顶盒、可编程消费电子产品、网络个人电脑、小型计算机系统﹑大型计算机系统和包括上述任何系统的分布式云计算技术环境,等等。
终端设备、计算机系统、服务器等电子设备可以在由计算机系统执行的计算机系统可执行指令(诸如程序模块)的一般语境下描述。通常,程序模块可以包括例程、程序、目标程序、组件、逻辑、数据结构等等,它们执行特定的任务或者实现特定的抽象数据类型。计算机系统/服务器可以在分布式云计算环境中实施,分布式云计算环境中,任务是由通过通信网络链接的远程处理设备执行的。在分布式云计算环境中,程序模块可以位于包括存储设备的本地或远程计算系统存储介质上。
图1为本申请一些实施例的车辆控制方法的流程图。如图1所示,本实施例车辆控制方法的执行主体可以为车辆端设备,例如:执行主体可以为车载智能系统或其他具有类似功能的设备,该实施例的方法包括:
110,获取当前请求使用车辆的用户的人脸图像。
可选地,为了获取用户的人脸图像,可以通过设置在车辆外部或内部的图像采集装置对出现的人进行图像采集,以获得人脸图像。可选地,为了获得质量较好的人脸图像,可以对采集到的图像进行人脸检测、人脸质量筛选和活体识别等操作。
在一个可选示例中,该操作110可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的用户图像获取单元61执行。
120,获取人脸图像与车辆的数据集中至少一个预存人脸图像的特征匹配结果。
可选地,数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像。
可选地,将人脸图像与数据集中的预存人脸图像进行特征匹配,可以通过卷积神经网络分别获得人脸图像的特征和预存人脸图像的特征,之后进行特征匹配,以识别与人脸图像具有对应相同人脸的预存人脸图像,从而实现对采集到人脸图像的用户的身份进行识别。
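作为说明,下面给出一段仅作示意的Python代码,表达上述提取特征、归一化、相似度比对的匹配思路(其中的模型接口、阈值与函数名均为示例假设,并非本申请限定的实现方式):

```python
import numpy as np

def extract_feature(face_image, model):
    # model 为假设的、已训练好的卷积神经网络,输出人脸特征向量
    feature = model(face_image)
    return feature / np.linalg.norm(feature)   # 归一化,便于用点积计算余弦相似度

def match_face(face_feature, stored_features, threshold=0.75):
    """stored_features: 数据集中各预存人脸图像的特征矩阵(每行一个、已归一化)。
    返回(是否匹配成功, 最相似预存图像的下标);threshold 为示例阈值,
    实际取值需按误识率/拒识率要求标定。"""
    sims = stored_features @ face_feature       # 各预存特征与待识别特征的余弦相似度
    best = int(np.argmax(sims))
    return bool(sims[best] >= threshold), best
```

匹配成功时,best 即对应数据集中特征匹配成功的预存人脸图像,可据此进一步获取用户的身份信息。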
在一个可选示例中,该操作120可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的匹配单元62执行。
130,如果特征匹配结果表示特征匹配成功,控制车辆动作以允许用户使用车辆。
可选地,特征匹配结果包括两种情况:特征匹配成功和特征匹配不成功,当特征匹配成功时,表示该用户是经过预约或被允许的用户,可以使用车辆,此时,控制车辆动作以允许用户使用车辆。
在一个可选示例中,该操作130可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的车辆控制单元63执行。
基于本申请上述实施例提供的车辆控制方法,获取当前请求使用车辆的用户的人脸图像;获取人脸图像与车辆的数据集中至少一个预存人脸图像的特征匹配结果;如果特征匹配结果表示特征匹配成功,控制车辆动作以允许用户使用车辆,基于特征匹配保证了预先记录的人员的权利,并且可以在无网络的情况下实现特征匹配,克服了对网络的依赖性,进一步提高了车辆的安全保障性。
在一个或多个可选的实施例中,使用车辆可以包括但不限于以下之一或任意组合:预约用车、开车、乘车、清洗车辆、保养车辆、维修车辆、给车辆加油、给车辆充电。
通常对于用车、开车,需要获得对车辆的驾驶控制权;而对于乘车(例如:班车、网约车),只需控制开启车门即可;对于清洗车辆可以有不同的情况:如果是人工洗车,车辆可以固定不动,只需为清洗人员控制开启车门;而对于自动洗车,可能需要为清洗人员提供车辆的驾驶控制权;对于保养车辆和维修车辆,只需为对应人员控制开启车门即可;给车辆加油,需要控制开启加油口;给车辆充电(对于电动车),需要控制允许充电设备(如:充电枪)连接车辆的电池。
在一些实施例中,数据集中存储有至少一个已预约乘车的用户的预存人脸图像;
操作130可以包括:控制开启车门。
为预约乘车(如:网约车)的用户开启车门,使用户可以成功上车,而其他非预约的用户无法开启车门上车,保证了预约用户的权益和车辆的安全。
在一些实施例中,数据集中存储有至少一个已预约用车的用户的预存人脸图像;
操作130可以包括:控制开启车门以及放行车辆驾驶控制权。
当预约用户预约的是驾驶车辆(如:预约租车),为用户提供车辆驾驶控制权,而其他非预约的用户无法进入车辆,即使非法进入车辆也无法驾驶车辆,保证了车辆的安全。
在一些实施例中,数据集中存储有至少一个已记录的允许乘车的用户的预存人脸图像;
操作130可以包括:控制开启车门。
当用户是已记录的允许乘车的用户(如:私家车对应的没有驾驶能力的成员、班车的乘客),为用户控制开启车门,使用户可以安全乘车。
在一些实施例中,数据集中存储有至少一个已记录的允许用车的用户的预存人脸图像;
操作130包括:控制开启车门以及放行车辆驾驶控制权。
当用户是已记录的允许用车的用户(如:私家车对应的具有驾驶能力的成员),为用户提供车辆驾驶控制权,而其他非记录的用户无法进入车辆,即使非法进入车辆也无法驾驶车辆,保证了车辆的安全。
在一些实施例中,数据集中存储有至少一个已预约开锁或已记录允许开锁的用户的预存人脸图像;
操作130包括:控制开启车锁。
对于一些特殊的车辆(如:电动自行车、电动摩托车、共享单车等),只需开启车锁即可使用车辆,此时,用户可以为预约用户(包括临时预约或长期预约)或已记录允许开锁的用户,为该用户控制开启车锁,保证了车辆的安全。
在一些实施例中,数据集中存储有至少一个已预约给车辆加油或已记录的允许给车辆加油的用户的预存人脸图像;
操作130包括:控制开启车辆加油口。
在车辆需要进行加油时,需要开启加油口,对于已预约给车辆加油或已记录的允许给车辆加油的用户,为该用户控制开启车辆加油口,以实现为车辆加油,保证了车辆各方面性能的安全性。
在一些实施例中,数据集中存储有至少一个已预约给车辆充电或已记录的允许给车辆充电的用户的预存人脸图像;
操作130包括:控制允许充电设备连接车辆的电池。
在车辆需要进行充电时(如:电动汽车或电动自行车等),需要控制允许充电设备连接车辆的电池,对于已预约给车辆充电或已记录的允许给车辆充电的用户,为该用户控制允许充电设备连接车辆的电池,以实现为车辆充电,保证了车辆电池的安全性。
在一个或多个可选的实施例中,车辆控制方法还包括:控制车辆发出用于表示用户允许使用车辆的提示信息。
为了给用户提供更好的使用体验,可以通过发出用于表示用户允许使用车辆的提示信息,提示用户可以使用车辆,可以避免用户等待或缩短用户等待的时间,为用户提供更好的服务。
在一个或多个可选的实施例中,操作110可以包括:
通过设置在车辆上的拍摄组件采集用户的人脸图像。
由于本实施例为用户提供的是使用车辆的服务,可以包括在车辆内部的操作(如:开车)或在车辆外部的操作(如:开车门、开车锁),因此,拍摄组件可以设置在车辆外部或内部,可以是固定设置或活动设置。
在一个或多个可选的实施例中,车辆控制方法还包括:
在车辆与云端服务器处于通信连接状态时,向云端服务器发送数据集下载请求;
接收并存储云端服务器发送的数据集。
可选地,通常数据集保存在云端服务器中,本实施例需要实现在车辆端进行人脸匹配,为了可以在无网络的情况下也能对人脸进行匹配,可以在有网络的情况下,从云端服务器下载数据集,并将数据集保存在车辆端,此时,即使没有网络,无法与云端服务器通信,也可以在车辆端实现人脸匹配,并且方便车辆端对数据集的管理。
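以下为一段示意性的Python草图,演示在有网络时下载并缓存数据集、无网络时读取本地缓存的思路(其中的URL路径、超时时间与数据格式均为示例假设):

```python
import json
import os
import urllib.request

def sync_dataset(server_url, local_path):
    """车辆与云端服务器连通时下载数据集并缓存到本地,之后离线也可完成人脸匹配。
    server_url、local_path 及接口路径均为示例假设。"""
    try:
        with urllib.request.urlopen(server_url + "/dataset", timeout=10) as resp:
            data = resp.read()
        with open(local_path, "wb") as f:
            f.write(data)
    except OSError:
        pass  # 无网络时不更新,继续使用已缓存的数据集

def load_dataset(local_path):
    """读取本地缓存的数据集(示例:JSON格式的预存人脸特征列表)。"""
    if os.path.exists(local_path):
        with open(local_path, "r", encoding="utf-8") as f:
            return json.load(f)
    return []
```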
在一个或多个可选的实施例中,车辆控制方法还可以包括:
如果特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取用户的身份信息;
向云端服务器发送人脸图像和身份信息。
在本实施例中,当特征匹配成功时,说明该用户是已经预约或已允许使用车辆的用户,从数据集中可以获得用户对应的身份信息,而将人脸图像和身份信息发送到云端服务器,可以在云端服务器对该用户建立实时追踪(例如:某个用户在什么时间、什么地点乘坐某一车辆),在存在网络的情况下,将人脸图像实时上传到云端服务器,可实现对用户使用车辆状态的分析和统计。
在一个或多个可选的实施例中,车辆控制方法还包括:获取人脸图像的活体检测结果;
操作130可以包括:
根据特征匹配结果和活体检测结果,控制车辆动作以允许用户使用车辆。
在本实施例中,活体检测是用来判断图像是否来自真人(或活人),通过活体检测可以使驾驶员的身份验证更为准确。本实施例对活体检测的具体方式不做限定,例如:可以采用对图像的三维信息深度分析、面部光流分析、傅里叶频谱分析、边缘或反光等防伪线索分析、视频流中多帧视频图像帧综合分析等等方法来实现,故在此不再赘述。
在一个或多个可选的实施例中,车辆控制方法还包括:
在车辆与移动端设备处于通信连接状态时,向移动端设备发送数据集下载请求;
接收并存储移动端设备发送的数据集。
可选地,数据集是由移动端设备在接收到数据集下载请求时,从云端服务器获取并发送给车辆的。
可选地,移动端设备可以为手机、PAD或者其他终端设备等,移动端设备在接收到数据集下载请求时向云端服务器发送数据集下载请求,然后获得数据集再发送给车辆,通过移动端设备下载数据集时,可以应用移动端设备自带的网络(如:2G、3G、4G网络等),避免了车辆受网络限制不能从云端服务器下载到数据集而无法进行人脸匹配的问题。
在一个或多个可选的实施例中,车辆控制方法还包括:如果特征匹配结果表示特征匹配不成功,控制车辆动作以拒绝用户使用车辆。
在本实施例中,特征匹配不成功表示该用户未经过预约或未被允许使用车辆,此时,为了保障已预约或已被允许使用车辆的用户的权益,车辆将拒绝该用户使用车辆。
可选地,车辆控制方法还包括:
发出提示预约信息;
根据预约信息接收用户的预约请求,用户的预约请求包括用户的预约人脸图像;
根据预约人脸图像,建立数据集。
在本实施例中,通过车辆接收用户发出的预约请求,对该用户的预约人脸图像进行保存,在车辆端基于该预约人脸图像建立数据集,通过数据集可实现车辆端的单独人脸匹配,无需从云端服务器下载数据集。
在一个或多个可选的实施例中,车辆控制方法还包括:
基于人脸图像进行用户状态检测;
根据用户状态检测的结果,进行异常状态的预警提示。
在其中一些实施例中,可以输出用户状态检测的结果。
在其中另一些实施例中,当用户为驾驶员时,可以根据用户状态检测的结果,对车辆进行智能驾驶控制。
在其中又一些实施例中,当用户为驾驶员时,可以输出用户状态检测的结果,同时根据用户状态检测的结果,对车辆进行智能驾驶控制。
可选地,可以本地输出用户状态检测的结果和/或远程输出用户状态检测的结果。其中,本地输出用户状态检测的结果即通过用户状态检测装置或者用户监控系统输出用户状态检测的结果,或者向车辆中的中控系统输出用户状态检测的结果,以便车辆基于该用户状态检测的结果对车辆进行智能驾驶控制。远程输出用户状态检测的结果,例如:可以向云端服务器或管理节点发送用户状态检测的结果,以便由云端服务器或管理节点进行用户状态检测的结果的收集、分析和/或管理,或者基于该用户状态检测的结果对车辆进行远程控制。
在一个可选示例中,根据用户状态检测的结果,进行异常状态的预警提示,可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的输出模块执行。
在一个可选示例中,上述操作可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的用户状态检测单元执行。
在一些实施例中,用户状态检测例如可以包括但不限于以下任意一项或多项:用户疲劳状态检测,用户分心状态检测,用户预定分心动作检测,用户手势检测。则用户状态检测的结果相应包括但不限于以下任意一项或多项:用户疲劳状态检测的结果,用户分心状态检测的结果,用户预定分心动作检测的结果,用户手势检测的结果。
在本实施例中,预定分心动作,可以是任意可能分散用户的注意力的分心动作,例如:抽烟动作、喝水动作、饮食动作、打电话动作、娱乐动作等。其中,饮食动作例如为吃水果、零食等食物的动作,娱乐动作例如为发信息、玩游戏、K歌等任意借助于电子设备执行的动作,其中,电子设备例如为手机终端、掌上电脑、游戏机等。
基于本申请上述实施例提供的用户状态检测方法,可以对人脸图像进行用户状态检测,输出用户状态检测的结果,从而实现对用户的使用车辆状态的实时检测,以便于在用户的使用车辆状态较差时及时采取相应的措施,有利于保证安全驾驶,减少或避免发生道路交通事故。
图2为本申请一些实施例中基于人脸图像进行用户疲劳状态检测的流程图。在一个可选示例中,图2所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的状态检测单元执行。如图2所示,基于人脸图像进行用户疲劳状态检测的方法,可以包括:
202,对人脸图像的人脸至少部分区域进行检测,得到人脸至少部分区域的状态信息。
在一个可选示例中,上述人脸至少部分区域可以包括:用户人脸眼部区域、用户人脸嘴部区域以及用户面部整个区域等中的至少一个。其中,该人脸至少部分区域的状态信息可以包括以下任意一项或多项:眼睛睁合状态信息、嘴巴开合状态信息。
可选地,上述眼睛睁合状态信息可以用于进行用户的闭眼检测,例如:检测用户是否半闭眼(“半”表示非完全闭眼的状态,如瞌睡状态下的眯眼等)、是否闭眼、闭眼次数、闭眼幅度等。可选地,眼睛睁合状态信息可以为对眼睛睁开的高度进行归一化处理后的信息。可选地,上述嘴巴开合状态信息可以用于进行用户的打哈欠检测,例如:检测用户是否打哈欠、打哈欠次数等。可选地,嘴巴开合状态信息可以为对嘴巴张开的高度进行归一化处理后的信息。
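作为示意,下面的Python片段演示如何由眼部关键点计算归一化的眼睛睁开程度(关键点的个数与顺序约定为示例假设,阈值亦为示例):

```python
import numpy as np

def eye_openness(eye_landmarks):
    """eye_landmarks: 6个眼部关键点坐标,示例约定顺序为
    [左眼角, 上眼睑两点之一, 上眼睑两点之二, 右眼角, 下眼睑两点之一, 下眼睑两点之二]。
    返回上下眼睑间距与眼睛宽度之比,即与图像尺度无关的归一化睁开程度。"""
    p = np.asarray(eye_landmarks, dtype=float)
    vertical = (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) / 2.0
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / horizontal

# 示例用法:连续多帧 openness 低于某一示例阈值(如0.2)即可计为一次闭眼/半闭眼
```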
在一个可选示例中,可以对人脸图像进行人脸关键点检测,直接利用所检测出的人脸关键点中的眼睛关键点进行计算,从而根据计算结果获得眼睛睁合状态信息。
在一个可选示例中,可以先利用人脸关键点中的眼睛关键点(例如:眼睛关键点在用户图像中的坐标信息)对用户图像中的眼睛进行定位,以获得眼睛图像,并利用该眼睛图像获得上眼睑线和下眼睑线,通过计算上眼睑线和下眼睑线之间的间隔,获得眼睛睁合状态信息。
在一个可选示例中,可以直接利用人脸关键点中的嘴巴关键点进行计算,从而根据计算结果获得嘴巴开合状态信息。
在一个可选示例中,可以先利用人脸关键点中的嘴巴关键点(例如:嘴巴关键点在用户图像中的坐标信息)对用户图像中的嘴巴进行定位,通过剪切等方式可以获得嘴巴图像,并利用该嘴巴图像获得上唇线和下唇线,通过计算上唇线和下唇线之间的间隔,获得嘴巴开合状态信息。
204,根据一段时间内的人脸至少部分区域的状态信息,获取用于表征用户疲劳状态的指标的参数值。
在一些可选示例中,用于表征用户疲劳状态的指标例如可以包括但不限于以下任意一项或多项:闭眼程度、打哈欠程度。
在一个可选示例中,闭眼程度的参数值例如可以包括但不限于以下任意一项或多项:闭眼次数、闭眼频率、闭眼持续时长、闭眼幅度、半闭眼次数、半闭眼频率;和/或,打哈欠程度的参数值例如可以包括但不限于以下任意一项或多项:打哈欠状态、打哈欠次数、打哈欠持续时长、打哈欠频率。
206,根据用于表征用户疲劳状态的指标的参数值确定用户疲劳状态检测的结果。
可选地,上述用户疲劳状态检测的结果可以包括:未检测到疲劳状态和疲劳状态。或者,当用户为驾驶员时,上述用户疲劳状态检测的结果也可以是疲劳驾驶程度,其中,疲劳驾驶程度可以包括:正常驾驶级别(也可以称为非疲劳驾驶级别)以及疲劳驾驶级别。其中,疲劳驾驶级别可以为一个级别,也可以被划分为多个不同的级别,例如:上述疲劳驾驶级别可以被划分为:提示疲劳驾驶级别(也可以称为轻度疲劳驾驶级别)和警告疲劳驾驶级别(也可以称为重度疲劳驾驶级别)。当然,疲劳驾驶程度可以被划分为更多级别,例如:轻度疲劳驾驶级别、中度疲劳驾驶级别以及重度疲劳驾驶级别等。本实施例不限制疲劳驾驶程度所包括的不同级别。
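下面给出一段示意性的Python代码,演示如何由一段时间内的逐帧状态信息统计疲劳指标的参数值并映射为疲劳驾驶级别(其中的阈值、数据结构与级别划分均为示例假设):

```python
def count_events(flags):
    """把连续为True的帧段计为一次事件(简化的事件计数)。"""
    count, prev = 0, False
    for f in flags:
        if f and not prev:
            count += 1
        prev = f
    return count

def fatigue_result(frames, perclos_warn=0.4, perclos_hint=0.2, yawn_hint=2):
    """frames: 一段时间内每帧的状态信息列表,每项含 eye_closed(bool)与 yawning(bool)。
    以闭眼帧占比(类PERCLOS指标)和打哈欠次数作为疲劳指标的示例参数值。"""
    n = len(frames)
    perclos = sum(f["eye_closed"] for f in frames) / max(n, 1)   # 闭眼帧占比
    yawns = count_events(f["yawning"] for f in frames)           # 打哈欠次数
    if perclos >= perclos_warn:
        return "警告疲劳驾驶级别"
    if perclos >= perclos_hint or yawns >= yawn_hint:
        return "提示疲劳驾驶级别"
    return "正常驾驶级别"
```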
图3为本申请一些实施例中基于人脸图像进行用户分心状态检测的流程图。在一个可选示例中,图3所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的状态检测单元执行。如图3所示,基于人脸图像进行用户分心状态检测的方法,可以包括:
302,对人脸图像进行人脸朝向和/或视线方向检测,得到人脸朝向信息和/或视线方向信息。
可选地,上述人脸朝向信息可以用于确定用户的人脸方向是否正常,例如:确定用户是否侧脸朝向前方或者是否回头等。可选地,人脸朝向信息可以为用户人脸正前方与用户所使用的车辆正前方之间的夹角。可选地,上述视线方向信息可以用于确定用户的视线方向是否正常,例如:确定用户是否目视前方等,视线方向信息可以用于判断用户的视线是否发生了偏离现象等。可选地,视线方向信息可以为用户的视线与用户所使用的车辆正前方之间的夹角。
304,根据一段时间内的人脸朝向信息和/或视线方向信息,确定用于表征用户分心状态的指标的参数值。
在一些可选示例中,用于表征用户分心状态的指标例如可以包括但不限于以下任意一项或多项:人脸朝向偏离程度,视线偏离程度。在一个可选示例中,人脸朝向偏离程度的参数值例如可以包括但不限于以下任意一项或多项:转头次数、转头持续时长、转头频率;和/或,视线偏离程度的参数值例如可以包括但不限于以下任意一项或多项:视线方向偏离角度、视线方向偏离时长、视线方向偏离频率。
在一个可选示例中,上述视线偏离程度例如可以包括:视线是否偏离以及视线是否严重偏离等中的至少一个;上述人脸朝向偏离程度(也可以称为转脸程度或者回头程度)例如可以包括:是否转头、是否短时间转头以及是否长时间转头中的至少一个。
在一个可选示例中,在判断出人脸朝向信息大于第一朝向,大于第一朝向的这一现象持续了N1帧(例如:持续了9帧或者10帧等),则确定用户出现了一次长时间大角度转头现象,可以记录一次长时间大角度转头,也可以记录本次转头时长;在判断出人脸朝向信息不大于第一朝向,大于第二朝向,在不大于第一朝向,大于第二朝向的这一现象持续了N1帧(例如:持续了9帧或者10帧等),则确定用户出现了一次长时间小角度转头现象,可以记录一次小角度转头,也可以记录本次转头时长。
在一个可选示例中,在判断出视线方向信息和汽车正前方之间的夹角大于第一夹角,大于第一夹角的这一现象持续了N2帧(例如:持续了8帧或者9帧等),则确定用户出现了一次视线严重偏离现象,可以记录一次视线严重偏离,也可以记录本次视线严重偏离时长;在判断出视线方向信息和汽车正前方之间的夹角不大于第一夹角,大于第二夹角,在不大于第一夹角,大于第二夹角的这一现象持续了N2帧(例如:持续了8帧或者9帧等),则确定用户出现了一次视线偏离现象,可以记录一次视线偏离,也可以记录本次视线偏离时长。
在一个可选示例中,上述第一朝向、第二朝向、第一夹角、第二夹角、N1以及N2的取值可以根据实际情况设置,本实施例不限制其取值的大小。
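以下Python草图示意上述基于连续帧计数的转头检测逻辑(第一朝向、第二朝向与N1的取值均为示例假设):

```python
def detect_long_turn(yaw_angles, first_thresh=45.0, second_thresh=20.0, n1=9):
    """yaw_angles: 连续帧的人脸朝向水平偏转角(度)。
    连续n1帧大于第一朝向,记一次长时间大角度转头;
    连续n1帧介于第二朝向与第一朝向之间,记一次长时间小角度转头。"""
    events, run_large, run_small = [], 0, 0
    for a in yaw_angles:
        run_large = run_large + 1 if a > first_thresh else 0
        run_small = run_small + 1 if second_thresh < a <= first_thresh else 0
        if run_large == n1:                 # 恰好持续到第n1帧时记录一次事件
            events.append("长时间大角度转头")
        if run_small == n1:
            events.append("长时间小角度转头")
    return events
```

对视线方向偏离的判断可采用同样的帧计数结构,只需把输入换成视线与车辆正前方的夹角序列,并换用第一夹角、第二夹角与N2。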
306,根据用于表征用户分心状态的指标的参数值确定用户分心状态检测的结果。
可选地,上述用户分心状态检测的结果例如可以包括:用户注意力集中(用户注意力未分散),用户注意力分散;或者,用户分心状态检测的结果可以为用户注意力分散级别,例如:可以包括:用户注意力集中(用户注意力未分散),用户注意力轻度分散,用户注意力中度分散,用户注意力严重分散等。其中,用户注意力分散级别可以通过用于表征用户分心状态的指标的参数值所满足的预设条件确定。例如:若视线方向偏离角度和人脸朝向偏离角度均小于第一预设角度,用户注意力分散级别为用户注意力集中;若视线方向偏离角度和人脸朝向偏离角度任一大于或等于第一预设角度,且持续时间大于第一预设时长、且小于或等于第二预设时长为用户注意力轻度分散;若视线方向偏离角度和人脸朝向偏离角度任一大于或等于第一预设角度,且持续时间大于第二预设时长、且小于或等于第三预设时长为用户注意力中度分散;若视线方向偏离角度和人脸朝向偏离角度任一大于或等于第一预设角度,且持续时间大于第三预设时长为用户注意力严重分散,其中,第一预设时长小于第二预设时长,第二预设时长小于第三预设时长。
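下面用一段示意性的Python代码表达上述由偏离角度与持续时间到注意力分散级别的映射(阈值单位与取值均为示例假设):

```python
def distraction_level(max_dev_angle, duration,
                      angle_th=15.0, t1=3.0, t2=6.0, t3=10.0):
    """max_dev_angle: 视线方向偏离角度与人脸朝向偏离角度中的较大者(度);
    duration: 偏离持续时间(秒)。按文中预设条件返回注意力分散级别。"""
    if max_dev_angle < angle_th or duration <= t1:
        return "用户注意力集中"
    if duration <= t2:
        return "用户注意力轻度分散"
    if duration <= t3:
        return "用户注意力中度分散"
    return "用户注意力严重分散"
```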
本实施例通过当用户为驾驶员时检测用户图像的人脸朝向和/或视线方向来确定用于表征用户分心状态的指标的参数值,并据此确定用户分心状态检测的结果,可以判断用户是否集中注意力驾驶,通过对用户分心状态的指标进行量化,将驾驶专注程度量化为视线偏离程度和转头程度的指标中的至少一个,有利于及时客观的衡量用户的专注驾驶状态。
在一些实施例中,操作302对人脸图像进行人脸朝向和/或视线方向检测,可以包括:
检测人脸图像的人脸关键点;
根据人脸关键点进行人脸朝向和/或视线方向检测。
由于人脸关键点中通常会包含有头部姿态特征信息,在一些可选示例中,根据人脸关键点进行人脸朝向检测,得到人脸朝向信息,包括:根据人脸关键点获取头部姿态的特征信息;根据头部姿态的特征信息确定人脸朝向(也称为头部姿态)信息,此处的人脸朝向信息例如可以表现出人脸转动的方向以及角度,这里的转动的方向可以为向左转动、向右转动、向下转动和/或者向上转动等。
在一个可选示例中,可以通过人脸朝向判断用户是否集中注意力驾驶。人脸朝向(头部姿态)可以表示为(yaw,pitch),其中,yaw表示头部在归一化球坐标(摄像头所在的相机坐标系)中的水平偏转角度(偏航角),pitch表示头部的垂直偏转角度(俯仰角)。当水平偏转角和/或垂直偏转角大于一个预设角度阈值、且持续时间大于一个预设时间阈值时可以确定用户分心状态检测的结果为注意力不集中。
在一个可选示例中,可以利用相应的神经网络来获得至少一个用户图像的人脸朝向信息。例如:可以将上述检测到的人脸关键点输入第一神经网络,经第一神经网络基于接收到的人脸关键点提取头部姿态的特征信息并输入第二神经网络;由第二神经网络基于该头部姿态的特征信息进行头部姿态估计,获得人脸朝向信息。
在采用现有的发展较为成熟,具有较好的实时性的用于提取头部姿态的特征信息的神经网络和用于估测人脸朝向的神经网络来获取人脸朝向信息的情况下,针对摄像头摄取到的视频,可以准确及时的检测出视频中的至少一个图像帧(即至少一帧用户图像)所对应的人脸朝向信息,从而有利于提高确定用户注意力程度的准确性。
在一些可选示例中,根据人脸关键点进行视线方向检测,得到视线方向信息,包括:根据人脸关键点中的眼睛关键点所定位的眼睛图像确定瞳孔边沿位置,并根据瞳孔边沿位置计算瞳孔中心位置;根据瞳孔中心位置与眼睛中心位置计算视线方向信息。例如:计算瞳孔中心位置与眼睛图像中的眼睛中心位置的向量,该向量即可作为视线方向信息。
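作为示意,下面的Python片段演示由瞳孔边沿位置求瞳孔中心、再与眼睛中心作差得到视线方向向量的计算:

```python
import numpy as np

def gaze_vector(pupil_edge_points, eye_center):
    """pupil_edge_points: 瞳孔边沿点坐标列表;eye_center: 眼睛图像中的眼睛中心位置。
    取边沿点均值作为瞳孔中心,返回瞳孔中心指向的视线方向向量(示意实现)。"""
    pupil_center = np.mean(np.asarray(pupil_edge_points, dtype=float), axis=0)
    return pupil_center - np.asarray(eye_center, dtype=float)
```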
在一个可选示例中,可以通过视线方向判断用户是否集中注意力驾驶。视线方向可以表示为(yaw,pitch),其中,yaw表示视线在归一化球坐标(摄像头所在的相机坐标系)中的水平偏转角度(偏航角),pitch表示视线的垂直偏转角度(俯仰角)。当水平偏转角和/或垂直偏转角大于一个预设角度阈值、且持续时间大于一个预设时间阈值时可以确定用户分心状态检测的结果为注意力不集中。
在一个可选示例中,根据人脸关键点中的眼睛关键点所定位的眼睛图像确定瞳孔边沿位置,可以通过如下方式实现:基于第三神经网络对根据人脸关键点分割出的图像中的眼睛区域图像进行瞳孔边沿位置的检测,并根据第三神经网络输出的信息获取到瞳孔边沿位置。
图4为本申请一些实施例中基于人脸图像进行用户预定分心动作检测的流程图。在一个可选示例中,图4所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的状态检测单元执行。如图4所示,基于人脸图像进行用户预定分心动作检测的方法,可以包括:
402,对人脸图像进行预定分心动作相应的目标对象检测,得到目标对象的检测框。
404,根据目标对象的检测框,确定是否出现预定分心动作。
在本实施例中,对用户进行预定分心动作检测通过检测预定分心动作相应的目标对象、根据检测到的目标对象的检测框确定是否出现预定分心动作,从而判断用户是否分心,有助于获取准确的用户预定分心动作检测的结果,从而有助于提高用户状态检测的结果的准确性。
例如:预定分心动作为抽烟动作时,上述操作402~404可以包括:经第四神经网络对用户图像进行人脸检测,得到人脸检测框,并提取人脸检测框的特征信息;经第四神经网络根据人脸检测框的特征信息确定是否出现抽烟动作。
又例如:预定分心动作为饮食动作/喝水动作/打电话动作/娱乐动作(即,饮食动作和/或喝水动作和/或打电话动作和/或娱乐动作)时,上述操作402~404可以包括:经第五神经网络对用户图像进行饮食动作/喝水动作/打电话动作/娱乐动作相应的预设目标对象检测,得到预设目标对象的检测框,其中的预设目标对象包括:手部、嘴部、眼部、目标物体;目标物体例如可以包括但不限于以下任意一类或多类:容器、食物、电子设备;根据预设目标对象的检测框确定预定分心动作的检测结果,该预定分心动作的检测结果可以包括以下之一:未出现饮食动作/喝水动作/打电话动作/娱乐动作,出现饮食动作,出现喝水动作,出现打电话动作,出现娱乐动作。
在一些可选示例中,预定分心动作为饮食动作/喝水动作/打电话动作/娱乐动作(即,饮食动作和/或喝水动作和/或打电话动作和/或娱乐动作)时,根据预设目标对象的检测框确定预定分心动作的检测结果,可以包括:根据是否检测到手部的检测框、嘴部的检测框、眼部的检测框和目标物体的检测框,以及根据手部的检测框与目标物体的检测框是否重叠、目标物体的类型以及目标物体的检测框与嘴部的检测框或眼部的检测框之间的距离是否满足预设条件,确定预定分心动作的检测结果。
可选地,若手部的检测框与目标物体的检测框重叠,目标物体的类型为容器或食物、且目标物体的检测框与嘴部的检测框之间重叠,确定出现饮食动作或喝水动作;和/或,若手部的检测框与目标物体的检测框重叠,目标物体的类型为电子设备,且目标物体的检测框与嘴部的检测框之间的最小距离小于第一预设距离、或者目标物体的检测框与眼部的检测框之间的最小距离小于第二预设距离,确定出现娱乐动作或打电话动作。
另外,若未同时检测到手部的检测框、嘴部的检测框和任一目标物体的检测框,且未同时检测到手部的检测框、眼部的检测框和任一目标物体的检测框,确定分心动作的检测结果为未检测到饮食动作、喝水动作、打电话动作和娱乐动作;和/或,若手部的检测框与目标物体的检测框未重叠,确定分心动作的检测结果为未检测到饮食动作、喝水动作、打电话动作和娱乐动作;和/或,若目标物体的类型为容器或食物、且目标物体的检测框与嘴部的检测框之间未重叠,和/或,目标物体的类型为电子设备、且目标物体的检测框与嘴部的检测框之间的最小距离不小于第一预设距离、或者目标物体的检测框与眼部的检测框之间的最小距离不小于第二预设距离,确定分心动作的检测结果为未检测到饮食动作、喝水动作、打电话动作和娱乐动作。
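以下Python草图示意上述基于检测框重叠与距离条件的分心动作判定逻辑(检测框格式、距离度量方式与阈值均为示例假设):

```python
def overlap(a, b):
    """判断两个检测框(x1, y1, x2, y2)是否存在重叠区域。"""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def box_distance(a, b):
    """两个检测框之间的最小距离(像素),重叠时为0。"""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5

def classify_action(hand, mouth, eye, obj, obj_type, d1=30.0, d2=30.0):
    """hand/mouth/eye/obj 为手部、嘴部、眼部、目标物体的检测框(未检测到时为None);
    obj_type 取"容器"、"食物"或"电子设备";d1、d2为示例的第一、第二预设距离。"""
    if hand is None or obj is None or not overlap(hand, obj):
        return "未检测到饮食动作、喝水动作、打电话动作和娱乐动作"
    if obj_type in ("容器", "食物") and mouth is not None and overlap(obj, mouth):
        return "出现饮食动作或喝水动作"
    if obj_type == "电子设备" and ((mouth is not None and box_distance(obj, mouth) < d1)
                                   or (eye is not None and box_distance(obj, eye) < d2)):
        return "出现娱乐动作或打电话动作"
    return "未检测到饮食动作、喝水动作、打电话动作和娱乐动作"
```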
另外,在上述对用户图像进行预定分心动作检测的实施例中,还可以包括:若用户分心状态检测的结果为检测到预定分心动作,提示检测到的预定分心动作,例如:检测到抽烟动作时,提示检测到抽烟;检测到喝水动作时,提示检测到喝水;检测到打电话动作时,提示检测到打电话。
在一个可选示例中,上述提示检测到的预定分心动作的操作可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的提示单元执行。
另外,请再参见图4所示,在对用户图像进行用户预定分心动作检测的另一个实施例中,还可以选择性地包括:
406,若出现预定分心动作,根据一段时间内是否出现预定分心动作的确定结果,获取用于表征用户分心程度的指标的参数值。
可选地,用于表征用户分心程度的指标例如可以包括但不限于以下任意一项或多项:预定分心动作的次数、预定分心动作的持续时长、预定分心动作的频率。例如:抽烟动作的次数、持续时长、频率;喝水动作的次数、持续时长、频率;打电话动作的次数、持续时长、频率;等等。
408,根据用于表征用户分心程度的指标的参数值确定用户预定分心动作检测的结果。
可选地,上述用户预定分心动作检测的结果可以包括:未检测到预定分心动作,检测到的预定分心动作。另外,上述用户预定分心动作检测的结果也可以为分心级别,例如:分心级别可以被划分为:未分心级别(也可以称为专注驾驶级别),提示分心驾驶级别(也可以称为轻度分心驾驶级别)和警告分心驾驶级别(也可以称为重度分心驾驶级别)。当然,分心级别也可以被划分为更多级别,例如:未分心驾驶级别,轻度分心驾驶级别、中度分心驾驶级别以及重度分心驾驶级别等。当然,本申请至少一个实施例中的分心级别也可以按照其他情况划分,不限于上述级别划分方式。
在一个可选示例中,分心级别可以通过用于表征用户分心程度的指标的参数值所满足的预设条件确定。例如:若未检测到预定分心动作,分心级别为未分心级别(也可以称为专注驾驶级别);若检测到预定分心动作的持续时间小于第一预设时长、且频率小于第一预设频率,分心级别为轻度分心驾驶级别;若检测到预定分心动作的持续时间大于第一预设时长,和/或频率大于第一预设频率,分心级别为重度分心驾驶级别。
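下面给出一段示意性的Python代码,演示由预定分心动作的持续时长与频率确定分心级别(阈值单位与取值均为示例假设):

```python
def distraction_action_level(duration, freq, t1=5.0, f1=0.2):
    """duration: 一段时间内预定分心动作的持续时间(秒);freq: 出现频率(次/秒)。
    未出现动作为未分心级别;持续时间与频率均低于第一预设值为轻度;任一超过为重度。"""
    if duration == 0 and freq == 0:
        return "未分心级别"
    if duration < t1 and freq < f1:
        return "提示分心驾驶级别"
    return "警告分心驾驶级别"
```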
在一些实施例中,用户状态检测方法还可以包括:根据用户分心状态检测的结果和/或用户预定分心动作检测的结果,输出分心提示信息。
可选地,若用户分心状态检测的结果为用户注意力分散或者用户注意力分散级别,和/或用户预定分心动作检测的结果为检测到预定分心动作,则可以输出分心提示信息,以提醒用户集中注意力驾驶。
在一个可选示例中,上述根据用户分心状态检测的结果和/或用户预定分心动作检测的结果,输出分心提示信息的操作可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的提示单元执行。
图5为本申请一些实施例的用户状态检测方法的流程图。在一个可选示例中,图5所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的状态检测单元执行。如图5所示,该实施例的用户状态检测方法包括:
502,基于人脸图像进行用户疲劳状态检测、用户分心状态检测和用户预定分心动作检测,得到用户疲劳状态检测的结果、用户分心状态检测的结果和用户预定分心动作检测的结果。
504,根据用户疲劳状态检测的结果、用户分心状态检测的结果和用户预定分心动作检测的结果所满足的预设条件确定用户状态等级。
506,将确定的用户状态等级作为用户状态检测的结果。
在一个可选示例中,每一个用户状态等级均对应有预设条件,可以实时地判断用户疲劳状态检测的结果、用户分心状态检测的结果和用户预定分心动作检测的结果所满足的预设条件,并将被满足的预设条件所对应的状态等级确定为用户状态检测的结果。当用户为驾驶员时,用户状态等级例如可以包括:正常驾驶状态(也可以称为专注驾驶级别),提示驾驶状态(驾驶状态较差),警告驾驶状态(驾驶状态非常差)。
在一个可选示例中,上述图5所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的输出模块执行。
例如:当用户为驾驶员时,正常驾驶状态(也可以称为专注驾驶级别)对应的预设条件可以包括:
条件1、用户疲劳状态检测的结果为:未检测到疲劳状态或者非疲劳驾驶级别;
条件2,用户分心状态检测的结果为:用户注意力集中;
条件3,用户预定分心动作检测的结果为:未检测到预定分心动作或者未分心级别。
在上述条件1、条件2、条件3均满足的情况下,驾驶状态等级为正常驾驶状态(也可以称为专注驾驶级别)。
例如:当用户为驾驶员时,提示驾驶状态(驾驶状态较差)对应的预设条件可以包括:
条件11、用户疲劳状态检测的结果为:提示疲劳驾驶级别(也可以称为轻度疲劳驾驶级别);
条件22,用户分心状态检测的结果为:用户注意力轻度分散;
条件33,用户预定分心动作检测的结果为:提示分心驾驶级别(也可以称为轻度分心驾驶级别)。
在上述条件11、条件22、条件33中的任一条件满足,且其他条件中的结果未达到更严重的疲劳驾驶级别、注意力分散级别、分心级别对应的预设条件的情况下,驾驶状态等级为提示驾驶状态(驾驶状态较差)。
例如:当用户为驾驶员时,警告驾驶状态(驾驶状态非常差)对应的预设条件可以包括:
条件111、用户疲劳状态检测的结果为:警告疲劳驾驶级别(也可以称为重度疲劳驾驶级别);
条件222,用户分心状态检测的结果为:用户注意力严重分散;
条件333,用户预定分心动作检测的结果为:警告分心驾驶级别(也可以称为重度分心驾驶级别)。
在上述条件111、条件222、条件333中的任一条件满足时,驾驶状态等级为警告驾驶状态(驾驶状态非常差)。
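以下Python草图示意上述由三项检测结果按预设条件综合得到用户状态等级的判定流程(级别名称的字符串取值为示例约定,且仅覆盖文中列举的级别):

```python
def user_state_level(fatigue, distraction, action):
    """fatigue/distraction/action 分别为疲劳状态、分心状态、预定分心动作检测的结果。
    先判断是否满足警告驾驶状态的任一条件,再判断提示驾驶状态,否则为正常驾驶状态。"""
    if (fatigue == "警告疲劳驾驶级别" or distraction == "用户注意力严重分散"
            or action == "警告分心驾驶级别"):
        return "警告驾驶状态"
    if (fatigue == "提示疲劳驾驶级别" or distraction == "用户注意力轻度分散"
            or action == "提示分心驾驶级别"):
        return "提示驾驶状态"
    return "正常驾驶状态"
```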
在一些实施例中,用户状态检测方法还可以包括:
执行与用户状态检测的结果对应的控制操作。
在一个可选示例中,执行与用户状态检测的结果对应的控制操作可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的控制单元执行。
可选地,执行与用户状态检测的结果对应的控制操作可以包括以下至少之一:
当用户为驾驶员时,如果确定的用户状态检测的结果满足提示/告警预定条件,例如:满足提示状态(如:驾驶状态较差)对应的预设条件或者状态等级为提示驾驶状态(如:驾驶状态较差),输出与该提示/告警预定条件相应的提示/告警信息,例如:通过声(如:语音或者响铃等)/光(如:亮灯或者灯光闪烁等)/震动等方式提示用户,以便于提醒用户注意,促使用户将被分散的注意力回归到驾驶上或者促使用户进行休息等,以实现安全驾驶,避免发生道路交通事故;和/或,
当用户为驾驶员时,如果确定的用户状态检测的结果满足预定驾驶模式切换条件,例如:满足警告驾驶状态(如:驾驶状态非常差)对应的预设条件或者驾驶状态等级为警告分心驾驶级别(也可以称为重度分心驾驶级别)时,将驾驶模式切换为自动驾驶模式,以实现安全驾驶,避免发生道路交通事故;同时,还可以通过声(如:语音或者响铃等)/光(如:亮灯或者灯光闪烁等)/震动等方式提示用户,以便于提醒用户,促使用户将被分散的注意力回归到驾驶上或者促使用户进行休息等;和/或,
如果确定的用户状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;例如:约定用户做出某个或某些动作时,表示用户处于危险状态或需要求助,当检测到这些动作时,向预定联系方式(例如:报警电话、最近联系人的电话或设置的紧急联系人的电话)发送预定信息(如:报警信息、提示信息或拨通电话),还可以直接通过车载设备与预定联系方式建立通信连接(如:视频通话、语音通话或电话通话),以保障用户的人身和/或财产安全。
在一个或多个可选的实施例中,车辆控制方法还包括:向云端服务器发送用户状态检测的至少部分结果。
可选地,至少部分结果包括:根据用户状态检测确定的异常用车状态信息。
将用户状态检测得到的部分结果或全部结果发送到云端服务器,可实现对异常用车状态信息的备份,由于正常用车状态无需进行记录,因此,本实施例仅将异常用车状态信息发送给云端服务器;当得到的用户状态检测结果包括正常用车状态信息和异常用车状态信息时,传输部分结果,即仅将异常用车状态信息发送给云端服务器;而当用户状态检测的全部结果都为异常用车状态信息时,传输全部的异常用车状态信息给云端服务器。
可选地,车辆控制方法还包括:存储与异常用车状态信息对应的人脸图像;和/或,
向云端服务器发送与异常用车状态信息对应的人脸图像。
在本实施例中,通过在车辆端本地保存与异常用车状态信息对应的人脸图像,实现证据保存,通过保存的人脸图像,如果后续由于用户异常用车状态出现安全或其他问题,可以通过调取保存的人脸图像进行责任确定,如果在保存的人脸图像中发现与出现的问题相关的异常用车状态,即可以确定为该用户的责任;而为了防止车辆端的数据被误删或蓄意删除,可以将与异常用车状态信息对应的人脸图像上传到云端服务器进行备份,在需要信息时,可以从云端服务器下载到车辆端进行查看,或从云端服务器下载到其他客户端进行查看。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图6为本申请一些实施例的车载智能系统的结构示意图。该实施例的车载智能系统可用于实现本申请上述各车辆控制方法实施例。如图6所示,该实施例的车载智能系统包括:
用户图像获取单元61,用于获取当前请求使用车辆的用户的人脸图像。
匹配单元62,用于获取人脸图像与车辆的数据集中至少一个预存人脸图像的特征匹配结果。
可选地,数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像。
车辆控制单元63,用于如果特征匹配结果表示特征匹配成功,控制车辆动作以允许用户使用车辆。
基于本申请上述实施例提供的车载智能系统,获取当前请求使用车辆的用户的人脸图像;获取人脸图像与车辆的数据集中至少一个预存人脸图像的特征匹配结果;如果特征匹配结果表示特征匹配成功,控制车辆动作以允许用户使用车辆,基于特征匹配保证了预先记录的人员的权利,并且可以在无网络的情况下实现特征匹配,克服了对网络的依赖性,进一步提高了车辆的安全保障性。
在一个或多个可选的实施例中,使用车辆包括以下之一或任意组合:预约用车、开车、乘车、清洗车辆、保养车辆、维修车辆、给车辆加油、给车辆充电。
在一些实施例中,数据集中存储有至少一个已预约乘车的用户的预存人脸图像;
车辆控制单元63,用于控制开启车门。
在一些实施例中,数据集中存储有至少一个已预约用车的用户的预存人脸图像;
车辆控制单元63,用于控制开启车门以及放行车辆驾驶控制权。
在一些实施例中,数据集中存储有至少一个已记录的允许乘车的用户的预存人脸图像;
车辆控制单元63,用于控制开启车门。
在一些实施例中,数据集中存储有至少一个已记录的允许用车的用户的预存人脸图像;
车辆控制单元63,用于控制开启车门以及放行车辆驾驶控制权。
在一些实施例中,数据集中存储有至少一个已预约开锁或已记录允许开锁的用户的预存人脸图像;
车辆控制单元63,用于控制开启车锁。
在一些实施例中,数据集中存储有至少一个已预约给车辆加油或已记录的允许给车辆加油的用户的预存人脸图像;
车辆控制单元63,用于控制开启车辆加油口。
在一些实施例中,数据集中存储有至少一个已预约给车辆充电或已记录的允许给车辆充电的用户的预存人脸图像;
车辆控制单元63,用于控制允许充电设备连接车辆的电池。
在一个或多个可选的实施例中,车辆控制单元63,还用于控制车辆发出用于表示用户允许使用车辆的提示信息。
在一个或多个可选的实施例中,用户图像获取单元61,用于通过设置在车辆上的拍摄组件采集用户的人脸图像。
在一个或多个可选的实施例中,车载智能系统还包括:第一数据下载单元,用于在车辆与云端服务器处于通信连接状态时,向云端服务器发送数据集下载请求;接收并存储云端服务器发送的数据集。
在一个或多个可选的实施例中,车载智能系统还可以包括:
信息存储单元,用于如果特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取用户的身份信息;向云端服务器发送人脸图像和身份信息。
在一个或多个可选的实施例中,车载智能系统还可以包括:活体检测单元,用于获取人脸图像的活体检测结果;
车辆控制单元63,用于根据特征匹配结果和活体检测结果,控制车辆动作以允许用户使用车辆。
在一个或多个可选的实施例中,车载智能系统还包括:
第二数据下载单元,用于在车辆与移动端设备处于通信连接状态时,向移动端设备发送数据集下载请求;接收并存储移动端设备发送的数据集。
可选地,数据集是由移动端设备在接收到数据集下载请求时,从云端服务器获取并发送给车辆的。
在一个或多个可选的实施例中,车辆控制单元63,还用于如果特征匹配结果表示特征匹配不成功,控制车辆动作以拒绝用户使用车辆。
可选地,车载智能系统还包括:
预约单元,用于发出提示预约信息;根据预约信息接收用户的预约请求,用户的预约请求包括用户的预约人脸图像;根据预约人脸图像,建立数据集。
在一个或多个可选的实施例中,车载智能系统还包括:
状态检测单元,用于基于人脸图像进行用户状态检测;
输出单元,用于根据用户状态检测的结果,进行异常状态的预警提示。
在其中一些实施例中,可以输出用户状态检测的结果。
在其中另一些实施例中,当用户为驾驶员时,可以根据用户状态检测的结果,对车辆进行智能驾驶控制。
在其中又一些实施例中,当用户为驾驶员时,可以输出用户状态检测的结果,同时根据用户状态检测的结果,对车辆进行智能驾驶控制。
可选地,用户状态检测包括以下任意一项或多项:用户疲劳状态检测,用户分心状态检测,用户预定分心动作检测。
可选地,状态检测单元基于人脸图像进行用户疲劳状态检测时,用于:
对人脸图像的人脸至少部分区域进行检测,得到人脸至少部分区域的状态信息,人脸至少部分区域的状态信息包括以下任意一项或多项:眼睛睁合状态信息、嘴巴开合状态信息;
根据一段时间内的人脸至少部分区域的状态信息,获取用于表征用户疲劳状态的指标的参数值;
根据用于表征用户疲劳状态的指标的参数值确定用户疲劳状态检测的结果。
可选地,用于表征用户疲劳状态的指标包括以下任意一项或多项:闭眼程度、打哈欠程度。
可选地,闭眼程度的参数值包括以下任意一项或多项:闭眼次数、闭眼频率、闭眼持续时长、闭眼幅度、半闭眼次数、半闭眼频率;和/或,
打哈欠程度的参数值包括以下任意一项或多项:打哈欠状态、打哈欠次数、打哈欠持续时长、打哈欠频率。
可选地,状态检测单元基于人脸图像进行用户分心状态检测时,用于:
对人脸图像进行人脸朝向和/或视线方向检测,得到人脸朝向信息和/或视线方向信息;
根据一段时间内的人脸朝向信息和/或视线方向信息,确定用于表征用户分心状态的指标的参数值;用于表征用户分心状态的指标包括以下任意一项或多项:人脸朝向偏离程度,视线偏离程度;
根据用于表征用户分心状态的指标的参数值确定用户分心状态检测的结果。
可选地,人脸朝向偏离程度的参数值包括以下任意一项或多项:转头次数、转头持续时长、转头频率;和/或,
视线偏离程度的参数值包括以下任意一项或多项:视线方向偏离角度、视线方向偏离时长、视线方向偏离频率。
可选地,状态检测单元对人脸图像进行人脸朝向和/或视线方向检测时,用于:
检测人脸图像的人脸关键点;
根据人脸关键点进行人脸朝向和/或视线方向检测。
可选地,状态检测单元根据人脸关键点进行人脸朝向检测,得到人脸朝向信息时,用于:
根据人脸关键点获取头部姿态的特征信息;
根据头部姿态的特征信息确定人脸朝向信息。
可选地,预定分心动作包括以下任意一项或多项:抽烟动作,喝水动作,饮食动作,打电话动作,娱乐动作。
可选地,状态检测单元基于人脸图像进行用户预定分心动作检测时,用于:
对人脸图像进行预定分心动作相应的目标对象检测,得到目标对象的检测框;
根据目标对象的检测框,确定是否出现预定分心动作。
可选地,状态检测单元,还用于:
若出现预定分心动作,根据一段时间内是否出现预定分心动作的确定结果,获取用于表征用户分心程度的指标的参数值;
根据用于表征用户分心程度的指标的参数值确定用户预定分心动作检测的结果。
可选地,用于表征用户分心程度的指标的参数值包括以下任意一项或多项:预定分心动作的次数、预定分心动作的持续时长、预定分心动作的频率。
可选地,车载智能系统还包括:
提示单元,用于若用户预定分心动作检测的结果为检测到预定分心动作,提示检测到的预定分心动作。
在一个或多个可选的实施例中,车载智能系统还包括:
控制单元,用于执行与用户状态检测的结果对应的控制操作。
可选地,控制单元,用于:
如果确定的用户状态检测的结果满足提示/告警预定条件,输出与提示/告警预定条件相应的提示/告警信息;和/或,
如果确定的用户状态检测的结果满足预定信息发送条件,向预设联系方式发送预定信息或与预设联系方式建立通信连接;和/或,
如果确定的用户状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
在一个或多个可选的实施例中,车载智能系统还包括:
结果发送单元,用于向云端服务器发送用户状态检测的至少部分结果。
可选地,至少部分结果包括:根据用户状态检测确定的异常用车状态信息。
可选地,车载智能系统还包括:
图像存储单元,用于存储与异常用车状态信息对应的人脸图像;和/或,
向云端服务器发送与异常用车状态信息对应的人脸图像。
本申请实施例提供的车载智能系统任一实施例的工作过程以及设置方式均可以参照本申请上述相应方法实施例的具体描述,限于篇幅,在此不再赘述。
图7为本申请一些实施例的车辆控制方法的流程图。如图7所示,本实施例车辆控制方法的执行主体可以为云端服务器,例如:执行主体可以为电子设备或其他具有类似功能的设备,该实施例的方法包括:
710,接收车辆发送的待识别的人脸图像。
可选地,待识别的人脸图像通过车辆进行采集,获得人脸图像的过程可以包括:人脸检测、人脸质量筛选和活体识别,通过这些过程可以保证获得的待识别的人脸图像是车辆内或外的真实用户的质量较好的人脸图像,保证了后续特征匹配的效果。
在一个可选示例中,该操作710可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的图像接收单元81执行。
720,获得人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
可选地,数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;可选地,云端服务器可以从车辆直接获取到人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,此时,特征匹配的过程在车辆端实现。
在一个可选示例中,该操作720可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的匹配结果获得单元82执行。
730,如果特征匹配结果表示特征匹配成功,向车辆发送允许控制车辆的指令。
在一个可选示例中,该操作730可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的指令发送单元83执行。
基于本申请上述实施例提供的车辆控制方法,通过在车辆端实现人脸特征匹配,减少了用户识别对网络的依赖,可以在无网络的情况下实现特征匹配,进一步提高了车辆的安全保障性。
可选地,车辆控制方法还包括:
接收车辆发送的数据集下载请求,数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;
向车辆发送数据集。
可选地,通常数据集保存在云端服务器中,本实施例需要实现在车辆端进行人脸匹配,为了可以在无网络的情况下也能对人脸进行匹配,可以在有网络的情况下,从云端服务器下载数据集,并将数据集保存在车辆端,此时,即使没有网络,无法与云端服务器通信,也可以在车辆端实现人脸匹配,并且方便车辆端对数据集的管理。
在一个或多个可选的实施例中,车辆控制方法还包括:
接收车辆或移动端设备发送的预约请求,预约请求包括用户的预约人脸图像;
根据预约人脸图像,建立数据集。
为了识别用户是否经过预约,首先需要存储预约的用户对应的预约人脸图像,本实施例中,在云端服务器,为已预约的预约人脸图像建立数据集,在数据集中保存已经预约的多个用户的预约人脸图像,通过云端服务器保存,保证了数据的安全性。
在一个或多个可选的实施例中,车辆控制方法还包括:
接收车辆发送的用户状态检测的至少部分结果,进行异常用车状态的预警提示。
可选地,至少部分结果包括:根据用户状态检测确定的异常用车状态信息。
将用户状态检测得到的部分结果或全部结果发送到云端服务器,可以实现对异常用车状态信息的备份,由于正常用车状态无需进行记录,因此,本实施例仅将异常用车状态信息发送给云端服务器;当得到的用户状态检测结果包括正常用车状态信息和异常用车状态信息时,传输部分结果,即仅将异常用车状态信息发送给云端服务器;而当用户状态检测的全部结果都为异常用车状态信息时,传输全部的异常用车状态信息给云端服务器。
在一个或多个可选的实施例中,车辆控制方法还包括:执行与用户状态检测的结果对应的控制操作。
可选地,当用户为驾驶员时,如果确定的用户状态检测的结果满足提示/告警预定条件,例如:满足提示状态(如:驾驶状态较差)对应的预设条件或者状态等级为提示驾驶状态(如:驾驶状态较差),输出与该提示/告警预定条件相应的提示/告警信息,例如:通过声(如:语音或者响铃等)/光(如:亮灯或者灯光闪烁等)/震动等方式提示用户,以便于提醒用户注意,促使用户将被分散的注意力回归到驾驶上或者促使用户进行休息等,以实现安全驾驶,避免发生道路交通事故;和/或,
当用户为驾驶员时,如果确定的用户状态检测的结果满足预定驾驶模式切换条件,例如:满足警告驾驶状态(如:驾驶状态非常差)对应的预设条件或者驾驶状态等级为警告分心驾驶级别(也可以称为重度分心驾驶级别)时,将驾驶模式切换为自动驾驶模式,以实现安全驾驶,避免发生道路交通事故;同时,还可以通过声(如:语音或者响铃等)/光(如:亮灯或者灯光闪烁等)/震动等方式提示用户,以便于提醒用户,促使用户将被分散的注意力回归到驾驶上或者促使用户进行休息等;和/或,
如果确定的用户状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;例如:约定用户做出某个或某些动作时,表示用户处于危险状态或需要求助,当检测到这些动作时,向预定联系方式(例如:报警电话、最近联系人的电话或设置的紧急联系人的电话)发送预定信息(如:报警信息、提示信息或拨通电话),还可以直接通过车载设备与预定联系方式建立通信连接(如:视频通话、语音通话或电话通话),以保障用户的人身和/或财产安全。
可选地,车辆控制方法还包括:接收车辆发送的与异常用车状态信息对应的人脸图像。
在本实施例中,为了防止车辆端的数据被误删或蓄意删除,可以将与异常用车状态信息对应的人脸图像上传到云端服务器进行备份,在需要信息时,可以从云端服务器下载到车辆端进行查看,或从云端服务器下载到其他客户端进行查看。
可选地,车辆控制方法还包括:基于异常用车状态信息进行以下至少一种操作:
数据统计、车辆管理、用户管理。
云端服务器可以接收多个车辆的异常用车状态信息,可以实现基于大数据的数据统计、对车辆及用户的管理,以实现更好的为车辆和用户服务。
可选地,基于异常用车状态信息进行数据统计,包括:
基于异常用车状态信息对接收的与异常用车状态信息对应的人脸图像进行统计,使人脸图像按不同异常用车状态进行分类,确定每种异常用车状态的统计情况。
对每种不同异常用车状态进行分类统计,可以得到基于大数据的用户经常出现的异常用车状态,可以为车辆开发者提供更多的参考数据,以便在车辆中提供更适合应对异常用车状态的设置或装置,为用户提供更舒适的用车环境。
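作为示意,下面的Python片段演示按异常用车状态对人脸图像进行分类统计的做法;把分组键换成车辆标识或用户标识,即可得到后文所述的按车辆、按用户的统计(其中的数据结构为示例假设):

```python
from collections import defaultdict

def group_records(records, key="state"):
    """records: 每项为字典,示例字段含 state(异常用车状态类型)、vehicle_id、
    user_id 与 face_image(对应人脸图像)。返回 分组键取值 -> 图像列表 的映射。"""
    stats = defaultdict(list)
    for r in records:
        stats[r[key]].append(r["face_image"])
    return dict(stats)

# 示例用法:group_records(records, "state") 按异常用车状态统计;
# group_records(records, "vehicle_id") 按车辆统计;group_records(records, "user_id") 按用户统计
```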
可选地,基于异常用车状态信息进行车辆管理,包括:
基于异常用车状态信息对接收的与异常用车状态信息对应的人脸图像进行统计,使人脸图像按不同车辆进行分类,确定每个车辆的异常用车统计情况。
通过基于车辆对异常用车状态信息进行统计,可以对车辆对应的所有用户的异常用车状态信息进行处理,例如:当某一车辆出现问题,通过查看该车辆对应的所有异常用车状态信息即可实现责任确定。
可选地,基于异常用车状态信息进行用户管理,包括:
基于异常用车状态信息对接收的与异常用车状态信息对应的人脸图像进行处理,使人脸图像按不同用户进行分类,确定每个用户的异常用车统计情况。
通过基于用户对异常用车状态信息进行统计,可获得每个用户的用车习惯及经常出现的问题,可为每个用户提供个性化服务,在达到安全用车的目的的同时,不会对用车习惯良好的用户造成干扰;例如:经过对异常用车状态信息进行统计,确定某个驾驶员经常在开车时打哈欠,针对该驾驶员可提供更高音量的提示信息。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图8为本申请一些实施例的电子设备的结构示意图。该实施例的电子设备可用于实现本申请上述各车辆控制方法实施例。如图8所示,该实施例的电子设备包括:
图像接收单元81,用于接收车辆发送的待识别的人脸图像。
匹配结果获得单元82,用于获得人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
可选地,数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像。可选地,云端服务器可以从车辆直接获取到人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,此时,特征匹配的过程在车辆端实现。
指令发送单元83,用于如果特征匹配结果表示特征匹配成功,向车辆发送允许控制车辆的指令。
基于本申请上述实施例提供的电子设备,通过在车辆端实现人脸特征匹配,减少了用户识别对网络的依赖,可以在无网络的情况下实现特征匹配,进一步提高了车辆的安全保障性。
可选地,电子设备还包括:
数据发送单元,用于接收车辆发送的数据集下载请求,数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;向车辆发送数据集。
在一个或多个可选的实施例中,电子设备还包括:
预约请求接收单元,用于接收车辆或移动端设备发送的预约请求,预约请求包括用户的预约人脸图像;
根据预约人脸图像,建立数据集。
在一个或多个可选的实施例中,电子设备还包括:
检测结果接收单元,用于接收车辆发送的用户状态检测的至少部分结果,进行异常用车状态的预警提示。
可选地,至少部分结果包括:根据用户状态检测确定的异常用车状态信息。
在一个或多个可选的实施例中,电子设备还包括:执行控制单元,用于执行与用户状态检测的结果对应的控制操作。
可选地,执行控制单元,用于:
如果确定的用户状态检测的结果满足提示/告警预定条件,输出与提示/告警预定条件相应的提示/告警信息;和/或,
如果确定的用户状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;和/或,
如果确定的用户状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
可选地,电子设备还包括:
状态图像接收单元,用于接收车辆发送的与异常用车状态信息对应的人脸图像。
可选地,电子设备还包括:
异常处理单元,用于基于异常用车状态信息进行以下至少一种操作:数据统计、车辆管理、用户管理。
可选地,异常处理单元基于异常用车状态信息进行数据统计时,用于基于异常用车状态信息对接收的与异常用车状态信息对应的人脸图像进行统计,使人脸图像按不同异常用车状态进行分类,确定每种异常用车状态的统计情况。
可选地,异常处理单元基于异常用车状态信息进行车辆管理时,用于基于异常用车状态信息对接收的与异常用车状态信息对应的人脸图像进行统计,使人脸图像按不同车辆进行分类,确定每个车辆的异常用车统计情况。
可选地,异常处理单元基于异常用车状态信息进行用户管理时,用于基于异常用车状态信息对接收的与异常用车状态信息对应的人脸图像进行处理,使人脸图像按不同用户进行分类,确定每个用户的异常用车统计情况。
本申请实施例提供的电子设备任一实施例的工作过程以及设置方式均可以参照本申请上述相应方法实施例的具体描述,限于篇幅,在此不再赘述。
根据本申请实施例的另一个方面,提供的一种车辆管理系统,包括:车辆和/或云端服务器;
车辆用于执行如图1-5所示实施例中任一的车辆控制方法;
云端服务器用于执行如图7所示实施例中任一的车辆控制方法。
可选地,车辆管理系统还包括:移动端设备,用于:
接收用户注册请求,用户注册请求包括用户的注册人脸图像;
将用户注册请求发送给云端服务器。
本实施例的车辆管理系统通过在车辆客户端实现人脸匹配,无需依赖云端进行人脸匹配,减少了实时数据传输耗费的流量成本;灵活性高,对网络依赖低,车辆数据集下载可在有网络时完成,车辆空闲时进行特征提取,减小了对网络的依赖。用户请求使用车辆时可以不依赖网络,可在认证成功后待有网络时再上传比对结果。
图9为本申请一些实施例的车辆管理系统的使用流程图。如图9所示,上述实施例实现的预约过程在手机端(移动端设备)实现,并将经过筛选的人脸图像和用户ID信息上传到云端服务器中,云端服务器将人脸图像和用户ID信息存入预约数据集,在采集到请求人员图像时,通过车辆端下载预约数据集到车辆客户端进行匹配;车辆获取请求人员图像,对请求人员图像依次进行人脸检测、质量筛选和活体识别,以经过筛选的请求人脸图像与预约数据集中所有人脸图像进行匹配,匹配基于人脸特征实现,人脸特征可通过神经网络提取获得,基于比对结果确定请求人脸图像是否为预约人员,允许预约人员使用车辆。
本申请具体应用时,可包括三个部分:移动端设备(如:手机端)、云端服务器和车辆(如:车机端),具体地,手机端拍照,做质量筛选,然后将照片和人员信息上传云端存储,完成预约过程。云端将人员信息同步到车机端。车机端根据人员信息进行人脸识别比对后,做出智能判断,同时通知云端更新用户状态。具体优点包括:实时性好,响应速度快,深度学习技术和嵌入式芯片优化技术结合;车机端支持ARM、X86主流平台(支持价格更低的车载芯片IMX6,Cortex-A9 800MHz);灵活性高,对网络依赖低,车机端信息同步可在有网络时完成。用户上车登录使用时不依赖网络,可在认证成功后待有网络时再上传状态信息;流程清晰简单,网络传输图片尺寸可根据人脸位置裁剪,减少了网络开销。图片大小经过JPEG压缩占用几十KB;通过云端存储和管理数据,不易丢失,可扩展性强;人脸识别全流程的衔接优化,保证了最终识别准确率。
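作为示意,下面的Python片段演示按人脸位置裁剪并进行JPEG压缩以减少网络开销的做法(依赖OpenCV,边界余量与压缩质量为示例参数):

```python
import cv2  # 假设使用OpenCV做图像裁剪与JPEG编码

def crop_and_compress(image, face_box, margin=0.2, quality=80):
    """image: BGR图像数组;face_box: 人脸检测框(x1, y1, x2, y2)。
    在人脸框四周保留 margin 比例的余量后裁剪,并按示例质量做JPEG压缩,
    返回可直接上传的字节串(编码失败时返回None)。"""
    x1, y1, x2, y2 = face_box
    h, w = image.shape[:2]
    mx, my = int((x2 - x1) * margin), int((y2 - y1) * margin)
    crop = image[max(y1 - my, 0):min(y2 + my, h), max(x1 - mx, 0):min(x2 + mx, w)]
    ok, buf = cv2.imencode(".jpg", crop, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    return buf.tobytes() if ok else None
```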
根据本申请实施例的另一个方面,提供的一种电子设备,包括:存储器,用于存储可执行指令;
以及处理器,用于与存储器通信以执行可执行指令从而完成上述任一实施例的车辆控制方法。
图10为本申请一些实施例的电子设备的一个应用示例的结构示意图。下面参考图10,其示出了适于用来实现本申请实施例的终端设备或服务器的电子设备的结构示意图。如图10所示,该电子设备包括一个或多个处理器、通信部等,所述一个或多个处理器例如:一个或多个中央处理单元(CPU)1001,和/或一个或多个加速单元1013等,加速单元可包括但不限于GPU、FPGA、其他类型的专用处理器等,处理器可以根据存储在只读存储器(ROM)1002中的可执行指令或者从存储部分1008加载到随机访问存储器(RAM)1003中的可执行指令而执行各种适当的动作和处理。通信部1012可包括但不限于网卡,所述网卡可包括但不限于IB(Infiniband)网卡,处理器可与只读存储器1002和/或随机访问存储器1003通信以执行可执行指令,通过总线1004与通信部1012相连、并经通信部1012与其他目标设备通信,从而完成本申请实施例提供的任一方法对应的操作,例如,获取当前请求使用车辆的用户的人脸图像;获取人脸图像与车辆的数据集中至少一个预存人脸图像的特征匹配结果;如果特征匹配结果表示特征匹配成功,控制车辆动作以允许用户使用车辆。
此外,在RAM1003中,还可存储有装置操作所需的各种程序和数据。CPU1001、ROM1002以及RAM1003通过总线1004彼此相连。在有RAM1003的情况下,ROM1002为可选模块。RAM1003存储可执行指令,或在运行时向ROM1002中写入可执行指令,可执行指令使中央处理单元1001执行本申请上述任一方法对应的操作。输入/输出(I/O)接口1005也连接至总线1004。通信部1012可以集成设置,也可以设置为具有多个子模块(例如多个IB网卡),并在总线链接上。
以下部件连接至I/O接口1005:包括键盘、鼠标等的输入部分1006;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分1007;包括硬盘等的存储部分1008;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分1009。通信部分1009经由诸如因特网的网络执行通信处理。驱动器1010也根据需要连接至I/O接口1005。可拆卸介质1011,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器1010上,以便于从其上读出的计算机程序根据需要被安装入存储部分1008。
需要说明的,如图10所示的架构仅为一种可选实现方式,在具体实践过程中,可根据实际需要对上述图10的部件数量和类型进行选择、删减、增加或替换;在不同功能部件设置上,也可采用分离设置或集成设置等实现方式,例如加速单元1013和CPU1001可分离设置或者可将加速单元1013集成在CPU1001上,通信部可分离设置,也可集成设置在CPU1001或加速单元1013上,等等。这些可替换的实施方式均落入本申请公开的保护范围。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括有形地包含在机器可读介质上的计算机程序,计算机程序包含用于执行流程图所示的方法的程序代码,程序代码可包括用于执行本申请任一实施例提供的车辆控制方法的步骤所对应的指令。在这样的实施例中,该计算机程序可以通过通信部分1009从网络上被下载和安装,和/或从可拆卸介质1011被安装。在该计算机程序被处理器执行时,执行本申请的任一方法中的相应操作。
根据本申请实施例的另一个方面,提供的一种计算机存储介质,用于存储计算机可读取的指令,所述指令被执行时执行上述实施例任意一项车辆控制方法的操作。
本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于系统实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
可能以许多方式来实现本申请的方法和装置、系统、设备。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本申请的方法和装置、系统、设备。用于所述方法的步骤的上述顺序仅是为了进行说明,本申请的方法的步骤不限于以上描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本申请实施为记录在记录介质中的程序,这些程序包括用于实现根据本申请的方法的机器可读指令。因而,本申请还覆盖存储用于执行根据本申请的方法的程序的记录介质。
本申请的描述是为了示例和描述起见而给出的,而并不是无遗漏的或者将本申请限于所公开的形式。很多修改和变化对于本领域的普通技术人员而言是显然的。选择和描述实施例是为了更好说明本申请的原理和实际应用,并且使本领域的普通技术人员能够理解本申请从而设计适于特定用途的带有各种修改的各种实施例。

Claims (105)

  1. 一种车辆控制方法,其特征在于,包括:
    获取当前请求使用车辆的用户的人脸图像;
    获取所述人脸图像与车辆的数据集中至少一个预存人脸图像的特征匹配结果;其中,所述数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;
    如果所述特征匹配结果表示特征匹配成功,控制车辆动作以允许所述用户使用车辆。
  2. 根据权利要求1所述的方法,其特征在于,所述使用车辆包括以下之一或任意组合:预约用车、开车、乘车、清洗车辆、保养车辆、维修车辆、给车辆加油、给车辆充电。
  3. 根据权利要求2所述的方法,其特征在于,所述数据集中存储有至少一个已预约乘车的用户的预存人脸图像;
    所述控制车辆动作以允许所述用户使用车辆,包括:控制开启车门。
  4. 根据权利要求2所述的方法,其特征在于,所述数据集中存储有至少一个已预约用车的用户的预存人脸图像;
    所述控制车辆动作以允许所述用户使用车辆,包括:控制开启车门以及放行车辆驾驶控制权。
  5. 根据权利要求2所述的方法,其特征在于,所述数据集中存储有至少一个已记录的允许乘车的用户的预存人脸图像;
    所述控制车辆动作以允许所述用户使用车辆,包括:控制开启车门。
  6. 根据权利要求2所述的方法,其特征在于,所述数据集中存储有至少一个已记录的允许用车的用户的预存人脸图像;
    所述控制车辆动作以允许所述用户使用车辆,包括:控制开启车门以及放行车辆驾驶控制权。
  7. 根据权利要求2所述的方法,其特征在于,所述数据集中存储有至少一个已预约开锁或已记录允许开锁的用户的预存人脸图像;
    所述控制车辆动作以允许所述用户使用车辆,包括:控制开启车锁。
  8. 根据权利要求2所述的方法,其特征在于,所述数据集中存储有至少一个已预约给车辆加油或已记录的允许给车辆加油的用户的预存人脸图像;
    所述控制车辆动作以允许所述用户使用车辆,包括:控制开启车辆加油口。
  9. 根据权利要求2所述的方法,其特征在于,所述数据集中存储有至少一个已预约给车辆充电或已记录的允许给车辆充电的用户的预存人脸图像;
    所述控制车辆动作以允许所述用户使用车辆,包括:控制允许充电设备连接车辆的电池。
  10. 根据权利要求1-9任一所述的方法,其特征在于,还包括:控制车辆发出用于表示用户允许使用车辆的提示信息。
  11. 根据权利要求1-10任一所述的方法,其特征在于,获取当前请求使用车辆的用户的人脸图像,包括:
    通过设置在所述车辆上的拍摄组件采集所述用户的人脸图像。
  12. 根据权利要求1-11任一所述的方法,其特征在于,还包括:
    在所述车辆与云端服务器处于通信连接状态时,向所述云端服务器发送数据集下载请求;
    接收并存储所述云端服务器发送的数据集。
  13. 根据权利要求1-12任一所述的方法,其特征在于,还包括:
    如果所述特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取所述用户的身份信息;
    向所述云端服务器发送所述人脸图像和所述身份信息。
  14. 根据权利要求1-13任一所述的方法,其特征在于,还包括:获取所述人脸图像的活体检测结果;
    根据所述特征匹配结果,控制车辆动作以允许所述用户使用车辆,包括:
    根据所述特征匹配结果和所述活体检测结果,控制车辆动作以允许所述用户使用车辆。
  15. 根据权利要求1-14任一所述的方法,其特征在于,还包括:
    在所述车辆与移动端设备处于通信连接状态时,向所述移动端设备发送数据集下载请求;
    接收并存储所述移动端设备发送的数据集。
  16. 根据权利要求15所述的方法,其特征在于,所述数据集是由所述移动端设备在接收到所述数据集下载请求时,从云端服务器获取并发送给所述车辆的。
  17. 根据权利要求1-16任一所述的方法,其特征在于,还包括:
    如果所述特征匹配结果表示特征匹配不成功,控制车辆动作以拒绝所述用户使用车辆。
  18. 根据权利要求17所述的方法,其特征在于,还包括:
    发出提示预约信息;
    根据所述预约信息接收所述用户的预约请求,所述用户的预约请求包括用户的预约人脸图像;
    根据所述预约人脸图像,建立数据集。
  19. 根据权利要求1-18任一所述的方法,其特征在于,还包括:
    基于所述人脸图像进行用户状态检测;
    根据用户状态检测的结果,进行异常状态的预警提示。
  20. 根据权利要求19所述的方法,其特征在于,所述用户状态检测包括以下任意一项或多项:用户疲劳状态检测,用户分心状态检测,用户预定分心动作检测。
  21. 根据权利要求20所述的方法,其特征在于,基于所述人脸图像进行用户疲劳状态检测,包括:
    对所述人脸图像的人脸至少部分区域进行检测,得到人脸至少部分区域的状态信息,所述人脸至少部分区域的状态信息包括以下任意一项或多项:眼睛睁合状态信息、嘴巴开合状态信息;
    根据一段时间内的所述人脸至少部分区域的状态信息,获取用于表征用户疲劳状态的指标的参数值;
    根据用于表征用户疲劳状态的指标的参数值确定用户疲劳状态检测的结果。
  22. 根据权利要求21所述的方法,其特征在于,所述用于表征用户疲劳状态的指标包括以下任意一项或多项:闭眼程度、打哈欠程度。
  23. 根据权利要求22所述的方法,其特征在于,所述闭眼程度的参数值包括以下任意一项或多项:闭眼次数、闭眼频率、闭眼持续时长、闭眼幅度、半闭眼次数、半闭眼频率;和/或,
    所述打哈欠程度的参数值包括以下任意一项或多项:打哈欠状态、打哈欠次数、打哈欠持续时长、打哈欠频率。
  24. 根据权利要求20-23任一所述的方法,其特征在于,基于所述人脸图像进行用户分心状态检测,包括:
    对所述人脸图像进行人脸朝向和/或视线方向检测,得到人脸朝向信息和/或视线方向信息;
    根据一段时间内的所述人脸朝向信息和/或视线方向信息,确定用于表征用户分心状态的指标的参数值;所述用于表征用户分心状态的指标包括以下任意一项或多项:人脸朝向偏离程度,视线偏离程度;
    根据用于表征所述用户分心状态的指标的参数值确定用户分心状态检测的结果。
  25. 根据权利要求24所述的方法,其特征在于,所述人脸朝向偏离程度的参数值包括以下任意一项或多项:转头次数、转头持续时长、转头频率;和/或,
    所述视线偏离程度的参数值包括以下任意一项或多项:视线方向偏离角度、视线方向偏离时长、视线方向偏离频率。
  26. 根据权利要求24或25所述的方法,其特征在于,所述对所述人脸图像进行人脸朝向和/或视线方向检测,包括:
    检测所述人脸图像的人脸关键点;
    根据所述人脸关键点进行人脸朝向和/或视线方向检测。
  27. 根据权利要求26所述的方法,其特征在于,根据所述人脸关键点进行人脸朝向检测,得到人脸朝向信息,包括:
    根据所述人脸关键点获取头部姿态的特征信息;
    根据所述头部姿态的特征信息确定人脸朝向信息。
  28. 根据权利要求20-27任一所述的方法,其特征在于,所述预定分心动作包括以下任意一项或多项:抽烟动作,喝水动作,饮食动作,打电话动作,娱乐动作。
  29. 根据权利要求28所述的方法,其特征在于,基于所述人脸图像进行用户预定分心动作检测,包括:
    对所述人脸图像进行所述预定分心动作相应的目标对象检测,得到目标对象的检测框;
    根据所述目标对象的检测框,确定是否出现所述预定分心动作。
  30. 根据权利要求29所述的方法,其特征在于,还包括:
    若出现预定分心动作,根据一段时间内是否出现所述预定分心动作的确定结果,获取用于表征用户分心程度的指标的参数值;
    根据所述用于表征用户分心程度的指标的参数值确定用户预定分心动作检测的结果。
  31. 根据权利要求30所述的方法,其特征在于,所述用于表征用户分心程度的指标的参数值包括以下任意一项或多项:预定分心动作的次数、预定分心动作的持续时长、预定分心动作的频率。
  32. 根据权利要求28-31任一所述的方法,其特征在于,还包括:
    若用户预定分心动作检测的结果为检测到预定分心动作,提示检测到的预定分心动作。
  33. 根据权利要求19-32任一所述的方法,其特征在于,还包括:
    执行与所述用户状态检测的结果对应的控制操作。
  34. 根据权利要求33所述的方法,其特征在于,所述执行与所述用户状态检测的结果对应的控制操作,包括以下至少之一:
    如果确定的所述用户状态检测的结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息;和/或,
    如果确定的所述用户状态检测的结果满足预定信息发送条件,向预设联系方式发送预定信息或与预设联系方式建立通信连接;和/或,
    如果确定的所述用户状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
  35. 根据权利要求19-34任一所述的方法,其特征在于,还包括:
    向云端服务器发送所述用户状态检测的至少部分结果。
  36. 根据权利要求35所述的方法,其特征在于,所述至少部分结果包括:根据用户状态检测确定的异常用车状态信息。
  37. 根据权利要求36所述的方法,其特征在于,还包括:
    存储与所述异常用车状态信息对应的人脸图像;和/或,
    向所述云端服务器发送与所述异常用车状态信息对应的人脸图像。
  38. 一种车载智能系统,其特征在于,包括:
    用户图像获取单元,用于获取当前请求使用车辆的用户的人脸图像;
    匹配单元,用于获取所述人脸图像与车辆的数据集中至少一个预存人脸图像的特征匹配结果;其中,所述数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;
    车辆控制单元,用于如果所述特征匹配结果表示特征匹配成功,控制车辆动作以允许所述用户使用车辆。
  39. 根据权利要求38所述的系统,其特征在于,所述使用车辆包括以下之一或任意组合:预约用车、开车、乘车、清洗车辆、保养车辆、维修车辆、给车辆加油、给车辆充电。
  40. 根据权利要求39所述的系统,其特征在于,所述数据集中存储有至少一个已预约乘车的用户的预存人脸图像;
    所述车辆控制单元,用于控制开启车门。
  41. 根据权利要求39所述的系统,其特征在于,所述数据集中存储有至少一个已预约用车的用户的预存人脸图像;
    所述车辆控制单元,用于控制开启车门以及放行车辆驾驶控制权。
  42. 根据权利要求39所述的系统,其特征在于,所述数据集中存储有至少一个已记录的允许乘车的用户的预存人脸图像;
    所述车辆控制单元,用于控制开启车门。
  43. 根据权利要求39所述的系统,其特征在于,所述数据集中存储有至少一个已记录的允许用车的用户的预存人脸图像;
    所述车辆控制单元,用于控制开启车门以及放行车辆驾驶控制权。
  44. 根据权利要求39所述的系统,其特征在于,所述数据集中存储有至少一个已预约开锁或已记录允许开锁的用户的预存人脸图像;
    所述车辆控制单元,用于控制开启车锁。
  45. 根据权利要求39所述的系统,其特征在于,所述数据集中存储有至少一个已预约给车辆加油或已记录的允许给车辆加油的用户的预存人脸图像;
    所述车辆控制单元,用于控制开启车辆加油口。
  46. 根据权利要求39所述的系统,其特征在于,所述数据集中存储有至少一个已预约给车辆充电或已记录的允许给车辆充电的用户的预存人脸图像;
    所述车辆控制单元,用于控制允许充电设备连接车辆的电池。
  47. 根据权利要求38-46任一所述的系统,其特征在于,所述车辆控制单元还用于:控制车辆发出用于表示用户允许使用车辆的提示信息。
  48. 根据权利要求38-47任一所述的系统,其特征在于,所述用户图像获取单元,具体用于通过设置在所述车辆上的拍摄组件采集所述用户的人脸图像。
  49. 根据权利要求38-48任一所述的系统,其特征在于,还包括:
    第一数据下载单元,用于在所述车辆与云端服务器处于通信连接状态时,向所述云端服务器发送数据集下载请求;接收并存储所述云端服务器发送的数据集。
  50. 根据权利要求38-49任一所述的系统,其特征在于,还包括:
    信息存储单元,用于如果所述特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取所述用户的身份信息;向所述云端服务器发送所述人脸图像和所述身份信息。
  51. 根据权利要求38-50任一所述的系统,其特征在于,还包括:活体检测单元,用于获取所述人脸图像的活体检测结果;
    所述车辆控制单元,用于根据所述特征匹配结果和所述活体检测结果,控制车辆动作以允许所述用户使用车辆。
  52. 根据权利要求38-51任一所述的系统,其特征在于,还包括:
    第二数据下载单元,用于在所述车辆与移动端设备处于通信连接状态时,向所述移动端设备发送数据集下载请求;接收并存储所述移动端设备发送的数据集。
  53. 根据权利要求52所述的系统,其特征在于,所述数据集是由所述移动端设备在接收到所述数据集下载请求时,从云端服务器获取并发送给所述车辆的。
  54. 根据权利要求38-53任一所述的系统,其特征在于,
    所述车辆控制单元,还用于如果所述特征匹配结果表示特征匹配不成功,控制车辆动作以拒绝所述用户使用车辆。
  55. 根据权利要求54所述的系统,其特征在于,还包括:
    预约单元,用于发出提示预约信息;根据所述预约信息接收所述用户的预约请求,所述用户的预约请求包括用户的预约人脸图像;根据所述预约人脸图像,建立数据集。
  56. 根据权利要求38-55任一所述的系统,其特征在于,还包括:
    状态检测单元,用于基于所述人脸图像进行用户状态检测;
    输出单元,用于根据用户状态检测的结果,进行异常状态的预警提示。
  57. 根据权利要求56所述的系统,其特征在于,所述用户状态检测包括以下任意一项或多项:用户疲劳状态检测,用户分心状态检测,用户预定分心动作检测。
  58. 根据权利要求57所述的系统,其特征在于,所述状态检测单元基于所述人脸图像进行用户疲劳状态检测时,用于:
    对所述人脸图像的人脸至少部分区域进行检测,得到人脸至少部分区域的状态信息,所述人脸至少部分区域的状态信息包括以下任意一项或多项:眼睛睁合状态信息、嘴巴开合状态信息;
    根据一段时间内的所述人脸至少部分区域的状态信息,获取用于表征用户疲劳状态的指标的参数值;
    根据用于表征用户疲劳状态的指标的参数值确定用户疲劳状态检测的结果。
  59. 根据权利要求58所述的系统,其特征在于,所述用于表征用户疲劳状态的指标包括以下任意一项或多项:闭眼程度、打哈欠程度。
  60. 根据权利要求59所述的系统,其特征在于,所述闭眼程度的参数值包括以下任意一项或多项:闭眼次数、闭眼频率、闭眼持续时长、闭眼幅度、半闭眼次数、半闭眼频率;和/或,
    所述打哈欠程度的参数值包括以下任意一项或多项:打哈欠状态、打哈欠次数、打哈欠持续时长、打哈欠频率。
  61. 根据权利要求57-60任一所述的系统,其特征在于,所述状态检测单元基于所述人脸图像进行用户分心状态检测时,用于:
    对所述人脸图像进行人脸朝向和/或视线方向检测,得到人脸朝向信息和/或视线方向信息;
    根据一段时间内的所述人脸朝向信息和/或视线方向信息,确定用于表征用户分心状态的指标的参数值;所述用于表征用户分心状态的指标包括以下任意一项或多项:人脸朝向偏离程度,视线偏离程度;
    根据用于表征所述用户分心状态的指标的参数值确定用户分心状态检测的结果。
  62. 根据权利要求61所述的系统,其特征在于,所述人脸朝向偏离程度的参数值包括以下任意一项或多项:转头次数、转头持续时长、转头频率;和/或,
    所述视线偏离程度的参数值包括以下任意一项或多项:视线方向偏离角度、视线方向偏离时长、视线方向偏离频率。
  63. 根据权利要求61或62所述的系统,其特征在于,所述状态检测单元对所述人脸图像进行人脸朝向和/或视线方向检测时,用于:
    检测所述人脸图像的人脸关键点;
    根据所述人脸关键点进行人脸朝向和/或视线方向检测。
  64. 根据权利要求63所述的系统,其特征在于,所述状态检测单元根据所述人脸关键点进行人脸朝向检测,得到人脸朝向信息时,用于:
    根据所述人脸关键点获取头部姿态的特征信息;
    根据所述头部姿态的特征信息确定人脸朝向信息。
  65. 根据权利要求57-64任一所述的系统,其特征在于,所述预定分心动作包括以下任意一项或多项:抽烟动作,喝水动作,饮食动作,打电话动作,娱乐动作。
  66. 根据权利要求65所述的系统,其特征在于,所述状态检测单元基于所述人脸图像进行用户预定分心动作检测时,用于:
    对所述人脸图像进行所述预定分心动作相应的目标对象检测,得到目标对象的检测框;
    根据所述目标对象的检测框,确定是否出现所述预定分心动作。
  67. 根据权利要求66所述的系统,其特征在于,所述状态检测单元,还用于:
    若出现预定分心动作,根据一段时间内是否出现所述预定分心动作的确定结果,获取用于表征用户分心程度的指标的参数值;
    根据所述用于表征用户分心程度的指标的参数值确定用户预定分心动作检测的结果。
  68. 根据权利要求67所述的系统,其特征在于,所述用于表征用户分心程度的指标的参数值包括以下任意一项或多项:预定分心动作的次数、预定分心动作的持续时长、预定分心动作的频率。
  69. 根据权利要求65-68任一所述的系统,其特征在于,还包括:
    提示单元,用于若用户预定分心动作检测的结果为检测到预定分心动作,提示检测到的预定分心动作。
  70. 根据权利要求56-69任一所述的系统,其特征在于,还包括:
    控制单元,用于执行与所述用户状态检测的结果对应的控制操作。
  71. 根据权利要求70所述的系统,其特征在于,所述控制单元,用于:
    如果确定的所述用户状态检测的结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息;和/或,
    如果确定的所述用户状态检测的结果满足预定信息发送条件,向预设联系方式发送预定信息或与预设联系方式建立通信连接;和/或,
    如果确定的所述用户状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
  72. 根据权利要求56-71任一所述的系统,其特征在于,还包括:
    结果发送单元,用于向云端服务器发送所述用户状态检测的至少部分结果。
  73. 根据权利要求72所述的系统,其特征在于,所述至少部分结果包括:根据用户状态检测确定的异常用车状态信息。
  74. 根据权利要求73所述的系统,其特征在于,还包括:
    图像存储单元,用于存储与所述异常用车状态信息对应的人脸图像;和/或,
    向所述云端服务器发送与所述异常用车状态信息对应的人脸图像。
  75. 一种车辆控制方法,其特征在于,包括:
    接收车辆发送的待识别的人脸图像;
    获得所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,其中,所述数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;
    如果所述特征匹配结果表示特征匹配成功,向所述车辆发送允许控制车辆的指令。
  76. 根据权利要求75所述的方法,其特征在于,还包括:
    接收车辆发送的数据集下载请求,所述数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;
    向所述车辆发送所述数据集。
  77. 根据权利要求75或76所述的方法,其特征在于,还包括:
    接收车辆或移动端设备发送的预约请求,所述预约请求包括用户的预约人脸图像;
    根据所述预约人脸图像,建立数据集。
  78. 根据权利要求75-77任一所述的方法,其特征在于,所述获得所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,包括:
    从所述车辆获取所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
  79. 根据权利要求75-78任一所述的方法,其特征在于,还包括:
    接收所述车辆发送的所述用户状态检测的至少部分结果,进行异常用车状态的预警提示。
  80. 根据权利要求79所述的方法,其特征在于,所述至少部分结果包括:根据用户状态检测确定的异常用车状态信息。
  81. 根据权利要求79或80所述的方法,其特征在于,还包括:执行与所述用户状态检测的结果对应的控制操作。
  82. 根据权利要求81所述的方法,其特征在于,所述执行与所述用户状态检测的结果对应的控制操作,包括:
    如果确定的所述用户状态检测的结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息;和/或,
    如果确定的所述用户状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;和/或,
    如果确定的所述用户状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
  83. 根据权利要求80-82任一所述的方法,其特征在于,还包括:接收所述车辆发送的与所述异常用车状态信息对应的人脸图像。
  84. 根据权利要求83所述的方法,其特征在于,还包括:基于所述异常用车状态信息进行以下至少一种操作:
    数据统计、车辆管理、用户管理。
  85. 根据权利要求84所述的方法,其特征在于,所述基于所述异常用车状态信息进行数据统计,包括:
    基于所述异常用车状态信息对接收的与所述异常用车状态信息对应的人脸图像进行统计,使所述人脸图像按不同异常用车状态进行分类,确定每种所述异常用车状态的统计情况。
  86. 根据权利要求84或85所述的方法,其特征在于,所述基于所述异常用车状态信息进行车辆管理,包括:
    基于所述异常用车状态信息对接收的与所述异常用车状态信息对应的人脸图像进行统计,使所述人脸图像按不同车辆进行分类,确定每个所述车辆的异常用车统计情况。
  87. 根据权利要求84-86任一所述的方法,其特征在于,所述基于所述异常用车状态信息进行用户管理,包括:
    基于所述异常用车状态信息对接收的与所述异常用车状态信息对应的人脸图像进行处理,使所述人脸图像按不同用户进行分类,确定每个所述用户的异常用车统计情况。
  88. 一种电子设备,其特征在于,包括:
    图像接收单元,用于接收车辆发送的待识别的人脸图像;
    匹配结果获得单元,用于获得所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,其中,所述数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;
    指令发送单元,用于如果所述特征匹配结果表示特征匹配成功,向所述车辆发送允许控制车辆的指令。
  89. 根据权利要求88所述的电子设备,其特征在于,还包括:
    数据发送单元,用于接收车辆发送的数据集下载请求,所述数据集中存储有至少一个预先记录的允许使用车辆的用户的预存人脸图像;向所述车辆发送所述数据集。
  90. 根据权利要求88或89所述的电子设备,其特征在于,还包括:
    预约请求接收单元,用于接收车辆或移动端设备发送的预约请求,所述预约请求包括用户的预约人脸图像;
    根据所述预约人脸图像,建立数据集。
  91. 根据权利要求88-90任一所述的电子设备,其特征在于,所述匹配结果获得单元,用于从所述车辆获取所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
  92. 根据权利要求88-91任一所述的电子设备,其特征在于,还包括:
    检测结果接收单元,用于接收所述车辆发送的所述用户状态检测的至少部分结果,进行异常用车状态的预警提示。
  93. 根据权利要求92所述的电子设备,其特征在于,所述至少部分结果包括:根据用户状态检测确定的异常用车状态信息。
  94. 根据权利要求92或93所述的电子设备,其特征在于,还包括:执行控制单元,用于执行与所述用户状态检测的结果对应的控制操作。
  95. 根据权利要求94所述的电子设备,其特征在于,所述执行控制单元,用于:
    如果确定的所述用户状态检测的结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息;和/或,
    如果确定的所述用户状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;和/或,
    如果确定的所述用户状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
  96. 根据权利要求93-95任一所述的电子设备,其特征在于,还包括:
    状态图像接收单元,用于接收所述车辆发送的与所述异常用车状态信息对应的人脸图像。
  97. 根据权利要求96所述的电子设备,其特征在于,还包括:
    异常处理单元,用于基于所述异常用车状态信息进行以下至少一种操作:数据统计、车辆管理、用户管理。
  98. 根据权利要求97所述的电子设备,其特征在于,所述异常处理单元基于所述异常用车状态信息进行数据统计时,用于基于所述异常用车状态信息对接收的与所述异常用车状态信息对应的人脸图像进行统计,使所述人脸图像按不同异常用车状态进行分类,确定每种所述异常用车状态的统计情况。
  99. 根据权利要求97或98所述的电子设备,其特征在于,所述异常处理单元基于所述异常用车状态信息进行车辆管理时,用于基于所述异常用车状态信息对接收的与所述异常用车状态信息对应的人脸图像进行统计,使所述人脸图像按不同车辆进行分类,确定每个所述车辆的异常用车统计情况。
  100. 根据权利要求97-99任一所述的电子设备,其特征在于,所述异常处理单元基于所述异常用车状态信息进行用户管理时,用于基于所述异常用车状态信息对接收的与所述异常用车状态信息对应的人脸图像进行处理,使所述人脸图像按不同用户进行分类,确定每个所述用户的异常用车统计情况。
  101. 一种车辆管理系统,其特征在于,包括:车辆和/或云端服务器;
    所述车辆用于执行权利要求1-37任意一项所述的车辆控制方法;
    所述云端服务器用于执行权利要求75-87任意一项所述的车辆控制方法。
  102. 根据权利要求101所述的系统,其特征在于,还包括:移动端设备,用于:
    接收用户注册请求,所述用户注册请求包括用户的注册人脸图像;
    将所述用户注册请求发送给所述云端服务器。
  103. 一种电子设备,其特征在于,包括:存储器,用于存储可执行指令;
    以及处理器,用于与所述存储器通信以执行所述可执行指令从而完成权利要求1至37任意一项所述车辆控制方法或权利要求75至87任意一项所述的车辆控制方法。
  104. 一种计算机程序,包括计算机可读代码,其特征在于,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现权利要求1至37任意一项所述的车辆控制方法或权利要求75至87任意一项所述的车辆控制方法。
  105. 一种计算机存储介质,用于存储计算机可读取的指令,其特征在于,所述指令被执行时实现权利要求1至37任意一项所述车辆控制方法或权利要求75至87任意一项所述的车辆控制方法。
PCT/CN2018/105809 2018-06-04 2018-09-14 车辆控制方法和系统、车载智能系统、电子设备、介质 WO2019232973A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP18919403.8A EP3628549B1 (en) 2018-06-04 2018-09-14 Vehicle control method and system, and in-vehicle intelligent system, electronic device and medium
KR1020207012404A KR102297162B1 (ko) 2018-06-04 2018-09-14 차량 제어 방법 및 시스템, 차량 탑재 지능형 시스템, 전자 기기, 매체
SG11201911197VA SG11201911197VA (en) 2018-06-04 2018-09-14 Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium
JP2019564878A JP6916307B2 (ja) 2018-06-04 2018-09-14 車両制御方法及びシステム、車載インテリジェントシステム、電子機器並びに媒体
KR1020217027087A KR102374507B1 (ko) 2018-06-04 2018-09-14 차량 제어 방법 및 시스템, 차량 탑재 지능형 시스템, 전자 기기, 매체
US16/233,064 US10970571B2 (en) 2018-06-04 2018-12-26 Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810565700.3 2018-06-04
CN201810565700.3A CN108819900A (zh) 2018-06-04 2018-06-04 车辆控制方法和系统、车载智能系统、电子设备、介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/233,064 Continuation US10970571B2 (en) 2018-06-04 2018-12-26 Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium

Publications (1)

Publication Number Publication Date
WO2019232973A1 true WO2019232973A1 (zh) 2019-12-12

Family

ID=64143618

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/105809 WO2019232973A1 (zh) 2018-06-04 2018-09-14 车辆控制方法和系统、车载智能系统、电子设备、介质

Country Status (6)

Country Link
EP (1) EP3628549B1 (zh)
JP (1) JP6916307B2 (zh)
KR (2) KR102374507B1 (zh)
CN (1) CN108819900A (zh)
SG (1) SG11201911197VA (zh)
WO (1) WO2019232973A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113312958A (zh) * 2021-03-22 2021-08-27 广州宸祺出行科技有限公司 一种基于司机状态的派单优先度调整方法及装置
JP2022533885A (ja) * 2020-04-24 2022-07-27 シャンハイ センスタイム リンカン インテリジェント テクノロジー カンパニー リミテッド 車両用ドア制御方法、車両、システム、電子機器及び記憶媒体

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291607B (zh) * 2018-12-06 2021-01-22 广州汽车集团股份有限公司 驾驶员分神检测方法、装置、计算机设备和存储介质
CN109693612A (zh) * 2019-01-09 2019-04-30 北京旷视科技有限公司 一种行车辅助方法、装置、车载终端和计算机可读介质
CN110077361B (zh) * 2019-04-26 2022-02-22 深圳市元征科技股份有限公司 一种车辆控制方法及装置
CN110111535A (zh) * 2019-06-11 2019-08-09 陈乐堂 利用第五代移动通信技术的实时交通监控方法
CN110271557B (zh) * 2019-06-12 2021-05-14 浙江亚太机电股份有限公司 一种车辆用户特征识别系统
CN110390285A (zh) * 2019-07-16 2019-10-29 广州小鹏汽车科技有限公司 驾驶员分神检测方法、系统及车辆
CN110674717B (zh) * 2019-09-16 2022-08-26 杭州奔巴慧视科技有限公司 基于姿态识别的吸烟监测系统
CN110728256A (zh) * 2019-10-22 2020-01-24 上海商汤智能科技有限公司 基于车载数字人的交互方法及装置、存储介质
CN112793570A (zh) * 2019-10-29 2021-05-14 北京百度网讯科技有限公司 自动驾驶车辆的控制方法、装置、设备及存储介质
CN111049802A (zh) * 2019-11-18 2020-04-21 上海擎感智能科技有限公司 无感登录方法、系统、存储介质及车机端
WO2021212504A1 (zh) * 2020-04-24 2021-10-28 上海商汤临港智能科技有限公司 车辆和车舱域控制器
CN112037380B (zh) * 2020-09-03 2022-06-24 上海商汤临港智能科技有限公司 车辆控制方法及装置、电子设备、存储介质和车辆
CN112883417A (zh) * 2021-02-01 2021-06-01 重庆长安新能源汽车科技有限公司 一种基于人脸识别的新能源汽车控制方法、系统及新能源汽车
CN113223227B (zh) * 2021-04-15 2023-01-20 广州爽游网络科技有限公司 通过在短视频图像上执行手势控制门禁开关的实现方法
CN113696853B (zh) * 2021-08-27 2022-05-31 武汉市惊叹号科技有限公司 一种基于物联网的智能汽车中央控制系统
CN113824714B (zh) * 2021-09-17 2022-11-25 珠海极海半导体有限公司 车辆配置方法和系统
CN114296465A (zh) * 2021-12-31 2022-04-08 上海商汤临港智能科技有限公司 一种车辆的控制方法、设备和计算机可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102975690A (zh) * 2011-09-02 2013-03-20 上海博泰悦臻电子设备制造有限公司 汽车锁定系统及方法
CN103303257A (zh) * 2013-06-24 2013-09-18 开平市中铝实业有限公司 一种汽车指纹及人脸认证启动系统
CN104143090A (zh) * 2014-07-30 2014-11-12 哈尔滨工业大学深圳研究生院 一种基于人脸识别的汽车开门方法
CN105843375A (zh) * 2016-02-22 2016-08-10 乐卡汽车智能科技(北京)有限公司 用于车辆的设置方法、装置及车载电子信息系统
CN107316363A (zh) * 2017-07-05 2017-11-03 奇瑞汽车股份有限公司 一种基于生物识别技术的汽车智能互联系统

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090046538A1 (en) * 1995-06-07 2009-02-19 Automotive Technologies International, Inc. Apparatus and method for Determining Presence of Objects in a Vehicle
JP4161963B2 (ja) * 2004-12-16 2008-10-08 トヨタ自動車株式会社 画像情報認証システム
ATE526212T1 (de) * 2005-07-11 2011-10-15 Volvo Technology Corp Verfahren und anordnung zur durchführung von fahreridentitätsüberprüfung
KR100778060B1 (ko) * 2007-06-01 2007-11-21 (주)텔릭스타 얼굴인식기술을 이용한 위난방지장치 및 이를 이용한위난방지장치 시스템
DE102008020915A1 (de) * 2007-11-06 2009-05-14 Klotz, Werner Elektronischer Nachweis- Einrichtungen "Schlüssel" programmierbare Freigabe- Fahrerlaubnis Lizenz- "Mobil"-Funkfernbedienungs- Kontroll- System- oder Schlüsselsteuerung
JP4636171B2 (ja) * 2008-12-17 2011-02-23 トヨタ自動車株式会社 車両用生体認証システム
JP5402365B2 (ja) * 2009-08-04 2014-01-29 大日本印刷株式会社 キーレスエントリー装置
JP5437904B2 (ja) * 2010-05-12 2014-03-12 株式会社東海理化電機製作所 給電プラグロック装置
CN101902619A (zh) * 2010-06-22 2010-12-01 浙江天鸿汽车用品有限公司 车载智能身份识别与监视系统
JP5782726B2 (ja) * 2011-02-04 2015-09-24 日産自動車株式会社 覚醒低下検出装置
US20140094987A1 (en) * 2012-09-28 2014-04-03 Intel Corporation Tiered level of access to a set of vehicles
US9751534B2 (en) * 2013-03-15 2017-09-05 Honda Motor Co., Ltd. System and method for responding to driver state
WO2015056530A1 (ja) * 2013-10-17 2015-04-23 みこらった株式会社 自動運転車、自動運転車の盗難防止システム、自動運転車の盗難防止プログラム、端末制御用プログラム及び自動運転車のレンタル方法
DE102013114394A1 (de) * 2013-12-18 2015-06-18 Huf Hülsbeck & Fürst Gmbh & Co. Kg Verfahren zur Authentifizierung eines Fahrers in einem Kraftfahrzeug
CN103770733B (zh) * 2014-01-15 2017-01-11 中国人民解放军国防科学技术大学 一种驾驶员安全驾驶状态检测方法及装置
JP6150258B2 (ja) * 2014-01-15 2017-06-21 みこらった株式会社 自動運転車
JP2015153258A (ja) * 2014-02-17 2015-08-24 パナソニックIpマネジメント株式会社 車両用個人認証システム及び車両用個人認証方法
CN106575454A (zh) * 2014-06-11 2017-04-19 威尔蒂姆Ip公司 基于生物特征信息帮助用户访问车辆的系统和方法
KR20160133179A (ko) * 2015-05-12 2016-11-22 자동차부품연구원 통합 hvi기반 운전자 위험상황 판단 방법 및 장치
CN105035025B (zh) * 2015-07-03 2018-04-13 郑州宇通客车股份有限公司 一种驾驶员识别管理方法及系统
KR101895485B1 (ko) * 2015-08-26 2018-09-05 엘지전자 주식회사 운전 보조 장치 및 그 제어 방법
WO2017163488A1 (ja) * 2016-03-25 2017-09-28 Necソリューションイノベータ株式会社 車両システム
JP2017206183A (ja) * 2016-05-20 2017-11-24 Necソリューションイノベータ株式会社 車両システム
JP6776681B2 (ja) 2016-07-18 2020-10-28 株式会社デンソー ドライバ状態判定装置、及びドライバ状態判定プログラム
CN106218405A (zh) * 2016-08-12 2016-12-14 深圳市元征科技股份有限公司 疲劳驾驶监控方法及云端服务器
JP6732602B2 (ja) 2016-08-25 2020-07-29 株式会社デンソーテン 入出庫支援装置および入出庫支援方法
CN106335469B (zh) * 2016-09-04 2019-11-26 深圳市云智易联科技有限公司 车载认证方法、系统、车载装置、移动终端及服务器
KR102371591B1 (ko) * 2016-10-06 2022-03-07 현대자동차주식회사 운전자 상태 판단 장치 및 방법
US9963106B1 (en) * 2016-11-07 2018-05-08 Nio Usa, Inc. Method and system for authentication in autonomous vehicles
CN108022451A (zh) * 2017-12-06 2018-05-11 驾玉科技(上海)有限公司 一种基于云端的驾驶员状态预警上报及分发系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102975690A (zh) * 2011-09-02 2013-03-20 上海博泰悦臻电子设备制造有限公司 汽车锁定系统及方法
CN103303257A (zh) * 2013-06-24 2013-09-18 开平市中铝实业有限公司 一种汽车指纹及人脸认证启动系统
CN104143090A (zh) * 2014-07-30 2014-11-12 哈尔滨工业大学深圳研究生院 一种基于人脸识别的汽车开门方法
CN105843375A (zh) * 2016-02-22 2016-08-10 乐卡汽车智能科技(北京)有限公司 用于车辆的设置方法、装置及车载电子信息系统
CN107316363A (zh) * 2017-07-05 2017-11-03 奇瑞汽车股份有限公司 一种基于生物识别技术的汽车智能互联系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3628549A4

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022533885A (ja) * 2020-04-24 2022-07-27 シャンハイ センスタイム リンカン インテリジェント テクノロジー カンパニー リミテッド 車両用ドア制御方法、車両、システム、電子機器及び記憶媒体
CN113312958A (zh) * 2021-03-22 2021-08-27 广州宸祺出行科技有限公司 一种基于司机状态的派单优先度调整方法及装置
CN113312958B (zh) * 2021-03-22 2024-04-12 广州宸祺出行科技有限公司 一种基于司机状态的派单优先度调整方法及装置

Also Published As

Publication number Publication date
CN108819900A (zh) 2018-11-16
EP3628549A4 (en) 2020-08-05
KR20200057768A (ko) 2020-05-26
KR102374507B1 (ko) 2022-03-15
SG11201911197VA (en) 2020-01-30
JP6916307B2 (ja) 2021-08-11
KR102297162B1 (ko) 2021-09-06
EP3628549B1 (en) 2022-05-04
EP3628549A1 (en) 2020-04-01
JP2020525884A (ja) 2020-08-27
KR20210111863A (ko) 2021-09-13

Similar Documents

Publication Publication Date Title
WO2019232973A1 (zh) 车辆控制方法和系统、车载智能系统、电子设备、介质
US10970571B2 (en) Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium
WO2019232972A1 (zh) 驾驶管理方法和系统、车载智能系统、电子设备、介质
US10915769B2 (en) Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
KR102391279B1 (ko) 운전 상태 모니터링 방법 및 장치, 운전자 모니터링 시스템 및 차량
CN111079476B (zh) 驾驶状态分析方法和装置、驾驶员监控系统、车辆
JP7146959B2 (ja) 運転状態検出方法及び装置、運転者監視システム並びに車両
CN108791299B (zh) 一种基于视觉的驾驶疲劳检测及预警系统及方法
CN103434484B (zh) 车载识别认证装置、移动终端、智能车钥控制系统及方法
CN111709264A (zh) 驾驶员注意力监测方法和装置及电子设备
JP2020525884A5 (zh)
US11783600B2 (en) Adaptive monitoring of a vehicle using a camera
CN107284449A (zh) 一种行车安全预警方法及系统、汽车、可读存储介质
CN109770922A (zh) 嵌入式疲劳检测系统及方法
US20190149777A1 (en) System for recording a scene based on scene content
KR101005339B1 (ko) 개인화된 템플릿 기반의 지능형 졸음운전 감시시스템
CN114760417A (zh) 一种图像拍摄方法和装置、电子设备和存储介质
KR20110065304A (ko) 개인화된 템플릿 기반의 지능형 졸음운전 감시시스템 및 방법
JP7060841B2 (ja) 運転評価装置、運転評価方法、及び運転評価プログラム
US20240051465A1 (en) Adaptive monitoring of a vehicle using a camera
WO2020261832A1 (ja) 画像処理装置、モニタリング装置、制御システム、画像処理方法、及びプログラム
US20190149778A1 (en) Method for variable recording of a scene based on scene content
CN117115894A (zh) 一种非接触式驾驶员疲劳状态分析方法、装置和设备

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019564878

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18919403

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20207012404

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE