WO2019232972A1 - Driving management method and system, vehicle-mounted intelligent system, electronic device, and medium - Google Patents
Driving management method and system, vehicle-mounted intelligent system, electronic device, and medium
- Publication number
- WO2019232972A1 (PCT/CN2018/105790)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- driver
- vehicle
- information
- image
- face
- Prior art date
Classifications
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
- B60K28/06—Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, responsive to incapacity of the driver
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W60/0051—Handover processes from occupants to vehicle
- G06V10/757—Matching configurations of points or features
- G06V40/161—Human faces: detection; localisation; normalisation
- G06V40/168—Human faces: feature extraction; face representation
- G06V40/172—Human faces: classification, e.g. identification
- G06V40/45—Spoof detection: detection of the body part being alive
- B60W2040/0818—Inactivity or incapacity of driver
- B60W2040/0863—Inactivity or incapacity of driver due to erroneous selection or response of the driver
- B60W2040/0872—Driver physiology
- B60W2420/403—Image sensing, e.g. optical camera
Definitions
- the present application relates to artificial intelligence technology, in particular to a driving management method and system, a vehicle-mounted intelligent system, electronic equipment, and a medium.
- An intelligent vehicle is a comprehensive system integrating functions such as environmental perception, planning and decision-making, and multi-level assisted driving, drawing together computer, modern sensing, information fusion, communication, artificial intelligence, and automatic control technologies. At present, research on intelligent vehicles is mainly focused on improving the safety and comfort of automobiles and on providing excellent human-vehicle interaction interfaces. In recent years, intelligent vehicles have become a research hotspot in vehicle engineering worldwide and a new driving force for the growth of the automotive industry, and many developed countries have incorporated them into the intelligent transportation systems they are developing.
- the embodiments of the present application provide a driving management method and system, a vehicle-mounted intelligent system, an electronic device, and a medium.
- a driving management method includes:
- controlling the vehicle to execute the operation instruction received by the vehicle.
- the method further includes:
- the method further includes:
- if the feature matching result indicates that the feature matching is successful, obtaining the identity information of the vehicle driver according to the successfully matched pre-stored face image;
- the method further includes: acquiring a living body detection result of the acquired image;
- the controlling the vehicle to execute the operation instruction received by the vehicle according to the result of the feature matching includes:
- the pre-stored face images in the data set are each correspondingly provided with a driving authority;
- the method further includes: if the feature matching result indicates that the feature matching is successful, obtaining a driving authority corresponding to a pre-stored face image with successful feature matching;
- the controlling the vehicle to execute an operation instruction received by the vehicle includes: controlling the vehicle to execute an operation instruction that is received by the vehicle and within the authority range.
- the method further includes:
- an early warning prompt for an abnormal driving state and/or intelligent driving control is performed.
- the driver state detection includes any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction action detection, and driver gesture detection.
- the performing driver fatigue state detection based on the video stream includes:
- the state information of at least a part of the face includes any one or more of the following: eye opening and closing state information, mouth opening and closing state information;
- the result of the driver fatigue state detection is determined according to a parameter value of an index for characterizing the driver fatigue state.
- the indicator used to characterize the fatigue state of the driver includes any one or more of the following: the degree of eyes closed and the degree of yawning.
- the parameter value of the degree of closed eyes includes any one or more of the following: the number of eye closures, eye-closure frequency, eye-closure duration, eye-closure amplitude, the number of half-closed eyes, and half-closed-eye frequency; and/or,
- the parameter value of the yawning degree includes any one or more of the following: yawning status, number of yawning, duration of yawning, and frequency of yawning.
- the detecting the distraction state of the driver based on the video stream includes:
- the index used to characterize the driver's distracted state includes any one or more of the following: the degree of deviation of the face orientation and the degree of deviation of the line of sight;
- a result of detecting the driver's distraction state is determined according to a parameter value of an index for characterizing the driver's distraction state.
- the parameter value of the face orientation deviation degree includes any one or more of the following: the number of turns, the duration of the turn, and the frequency of the turn; and / or,
- the parameter value of the degree of sight line deviation includes any one or more of the following: the sight line direction deviation angle, the sight line direction deviation duration, and the sight line direction deviation frequency.
- the detecting the face orientation and / or the line of sight direction of the driver image in the video stream includes:
- face orientation detection and/or line-of-sight detection is performed according to the key points of the face.
- performing face orientation detection according to the key points of the face to obtain the face orientation information includes:
- the predetermined distraction action includes any one or more of the following: smoking action, drinking action, eating action, calling action, and entertaining action.
- detecting the driver's predetermined distraction based on the video stream includes:
- the method further includes:
- a result of detecting a driver's predetermined distraction action is determined according to a parameter value of the index for characterizing a driver's distraction degree.
- the parameter value of the driver's degree of distraction includes any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action.
- the method further includes:
- if the result of the predetermined distraction action detection is that the driver performs a predetermined distraction action, the detected distraction action is prompted.
- the method further includes:
- a control operation corresponding to a result of the driver state detection is performed.
- the performing a control operation corresponding to a result of the driver state detection includes at least one of the following:
- the driving mode is switched to an automatic driving mode.
- the method further includes:
- the at least part of the results include: abnormal driving state information determined according to driver state detection.
- the method further includes:
- the method further includes:
- the data set is acquired by the mobile terminal device from a cloud server and sent to the vehicle when receiving the data set download request.
- the method further includes:
- execution of the received operation instruction is refused.
- the method further includes:
- a data set is established according to the registered face image.
- the obtaining a feature matching result of a face portion of at least one image in the video stream with at least one pre-stored face image in a data set includes:
- a face portion of at least one image in the video stream is uploaded to the cloud server, and a feature matching result sent by the cloud server is received.
- a vehicle-mounted intelligent system including:
- a video acquisition unit for controlling a camera component provided on the vehicle to collect a video stream of the driver of the vehicle;
- a result obtaining unit configured to obtain a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set, wherein the data set stores a pre-stored face image of at least one registered driver;
- An operation unit is configured to control the vehicle to execute an operation instruction received by the vehicle if the feature matching result indicates that the feature matching is successful.
- a driving management method includes:
- the method further includes:
- the method further includes:
- a data set is established according to the registered face image.
- obtaining the feature matching result between the face image and at least one pre-stored face image in the data set includes:
- Feature matching is performed on the face image and at least one pre-stored face image in the data set to obtain the feature matching result.
- obtaining the feature matching result between the face image and at least one pre-stored face image in the data set includes:
- a feature matching result of the face image and at least one pre-stored face image in a data set is obtained from the vehicle.
- the method further includes:
- the at least part of the results include: abnormal driving state information determined according to driver state detection.
- the method further includes: performing a control operation corresponding to a result of the driver state detection.
- the performing a control operation corresponding to a result of the driver state detection includes:
- the driving mode is switched to an automatic driving mode.
- the method further includes:
- the method further includes:
- the performing data statistics based on the abnormal driving state information includes:
- the performing vehicle management based on the abnormal driving state information includes:
- the performing driver management based on the abnormal driving state information includes:
- an electronic device including:
- An image receiving unit configured to receive a face image to be identified sent by a vehicle
- a matching result obtaining unit configured to obtain a feature matching result between the face image and at least one pre-stored face image in a data set, wherein the data set stores at least one pre-stored face image of a registered driver;
- An instruction sending unit is configured to: if the feature matching result indicates that the feature matching is successful, send an instruction to the vehicle to allow control of the vehicle.
- a driving management system including: a vehicle and / or a cloud server;
- the vehicle is used to execute the driving management method according to any one of the above;
- the cloud server is configured to execute the driving management method according to any one of the foregoing.
- the system further includes: a mobile terminal device, configured to acquire the data set from the cloud server and send it to the vehicle when receiving a data set download request.
- an electronic device including: a memory for storing executable instructions;
- a processor configured to communicate with the memory to execute the executable instructions to complete the driving management method according to any one of the above.
- a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the driving management method according to any one of the foregoing.
- a computer storage medium for storing computer-readable instructions, and when the instructions are executed, the driving management method according to any one of the foregoing is implemented.
- a video stream of a driver of the vehicle is collected by controlling a camera component provided on the vehicle; a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set is acquired; and if the feature matching result indicates that the feature matching is successful, the vehicle is controlled to execute the operation instruction received by the vehicle. This reduces the dependence of driver recognition on the network and enables feature matching without a network connection, further improving the safety of the vehicle.
- FIG. 1 is a flowchart of a driving management method according to some embodiments of the present application.
- FIG. 2 is a flowchart of driver fatigue state detection based on a video stream in some embodiments of the present application.
- FIG. 3 is a flowchart of detecting a driver's distraction state based on a video stream in some embodiments of the present application.
- FIG. 4 is a flowchart of detecting a predetermined distraction action of a driver based on a video stream in some embodiments of the present application.
- FIG. 5 is a flowchart of a driver state detection method according to some embodiments of the present application.
- FIG. 6 is a flowchart of an application example of a driving management method according to some embodiments of the present application.
- FIG. 7 is a schematic structural diagram of a vehicle-mounted intelligent system according to some embodiments of the present application.
- FIG. 8 is a flowchart of a driving management method according to another embodiment of the present application.
- FIG. 9 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
- FIG. 10 is a flowchart of using a driving management system according to some embodiments of the present application.
- FIG. 11 is a flowchart of using a driving management system according to another embodiment of the present application.
- FIG. 12 is a schematic structural diagram of an application example of an electronic device according to some embodiments of the present application.
- the embodiments of the present application can be applied to electronic devices such as a terminal device, a computer system, and a server, and can be operated with many other general or special-purpose computing system environments or configurations.
- Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing environments including any of these systems, among others.
- Electronic devices such as a terminal device, a computer system, and a server can be described in the general context of computer system executable instructions (such as program modules) executed by a computer system.
- program modules may include routines, programs, target programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types.
- the computer system / server can be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on a local or remote computing system storage medium including a storage device.
- FIG. 1 is a flowchart of a driving management method according to some embodiments of the present application.
- the execution subject of the driving management method in this embodiment may be a vehicle-end device.
- the execution subject may be an in-vehicle intelligent system or other devices with similar functions.
- the method in this embodiment includes:
- the camera component is set at a position inside the vehicle from which the driving position can be photographed.
- the position of the camera component can be fixed or adjustable; if adjustable, the position of the camera component can be adjusted for different drivers, and if fixed, the lens direction of the camera component can be adjusted for different drivers.
- the operation 110 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a video acquisition unit 71 executed by a processor.
- a pre-stored face image of at least one registered driver is stored in the data set, that is, a face image corresponding to the registered driver is stored in the data set as the pre-stored face image.
- the face part in the image can be obtained through face detection (for example, face detection based on a neural network), and feature matching can be performed between the face part and the pre-stored face images in the data set. For example, a convolutional neural network can be used to obtain the features of the face part and the features of each pre-stored face image separately; feature matching is then performed to identify the pre-stored face image corresponding to the face part, thereby identifying the driver in the collected image.
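- As a minimal sketch of this matching step (assuming each face part and pre-stored face image has already been converted by a convolutional neural network into a fixed-length feature vector; the threshold value and driver identifiers are illustrative assumptions, not values from the application):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(face_feature, dataset_features, threshold=0.6):
    """Compare the extracted face feature against every pre-stored face
    feature in the data set; return (driver_id, score) for the best match,
    or (None, score) when no similarity reaches the threshold."""
    best_id, best_score = None, -1.0
    for driver_id, stored_feature in dataset_features.items():
        score = cosine_similarity(face_feature, stored_feature)
        if score > best_score:
            best_id, best_score = driver_id, score
    if best_score >= threshold:
        return best_id, best_score   # feature matching successful
    return None, best_score          # feature matching unsuccessful
```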
- the operation 120 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a result acquisition unit 72 executed by the processor.
- the feature matching result includes two cases: successful feature matching and unsuccessful feature matching.
- If the feature matching is successful, it indicates that the driver of the vehicle is a registered driver who may control the vehicle. At this time, the vehicle is controlled to execute the received operation instructions (operation instructions issued by the driver).
- the operation 130 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by an operation unit 73 executed by the processor.
- a video stream of the driver of the vehicle is collected by controlling a camera component provided on the vehicle; a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set is acquired; and if the feature matching result indicates that the feature matching is successful, the vehicle is controlled to execute the operation instructions received by the vehicle. This reduces the reliance of driver recognition on the network, enables feature matching even without a network, and further improves the security of the vehicle.
- the driving management method further includes:
- the data set is usually stored in a cloud server.
- Since face matching needs to be implemented on the vehicle side even when there is no network, the data set can be downloaded from the cloud server while the network is available and saved on the vehicle side. Then, even without a network connection to the cloud server, face matching can still be performed on the vehicle side, and it is convenient for the vehicle side to manage the data set.
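- A minimal sketch of this download-and-cache behavior (the URL, file path, and JSON data-set format are illustrative assumptions; the application does not specify a storage format):

```python
import json
import os
import urllib.request

CACHE_PATH = "/var/vehicle/face_dataset.json"  # hypothetical vehicle-side path

def load_face_dataset(cloud_url):
    """Prefer a fresh copy of the data set from the cloud server; fall back
    to the locally cached copy when there is no network."""
    try:
        with urllib.request.urlopen(cloud_url, timeout=5) as resp:
            dataset = json.load(resp)
        with open(CACHE_PATH, "w") as f:   # refresh the vehicle-side cache
            json.dump(dataset, f)
        return dataset
    except OSError:
        # No network: face matching can still proceed from the cached copy.
        if os.path.exists(CACHE_PATH):
            with open(CACHE_PATH) as f:
                return json.load(f)
        raise RuntimeError("no cached data set and no network connection")
```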
- the driving management method further includes:
- if the feature matching is successful, the identity information of the driver of the vehicle is obtained according to the successfully matched pre-stored face image;
- When the feature matching is successful, it means that the driver is a registered driver; the corresponding identity information can be obtained from the data set, and the image and the identity information can be sent to the cloud server.
- In this way, real-time tracking of the driver can be established (for example, when and where a certain driver drives a certain vehicle). Since the image is obtained from the video stream, the image can be uploaded to the cloud server in real time whenever the network is available, so as to realize analysis, statistics, and/or management of the driver's driving state.
- the driving management method may further include:
- if the feature matching is successful, the identity information of the driver of the vehicle is obtained according to the successfully matched pre-stored face image;
- since the face matching process is based on the face part in the image, when sending the image to the cloud server, only the face part obtained by image segmentation may be sent, which helps reduce the amount of data transmitted from the vehicle side.
- After the cloud server receives the cropped face part and the identity information, it can store the face part in the data set as a new face image of the driver, either in addition to or replacing the existing face image, to serve as the basis for subsequent face recognition.
- the driving management method may further include: acquiring a living body detection result of the acquired image;
- Operation 130 may include:
- according to the feature matching result and the living body detection result, the vehicle is controlled to execute the operation instruction received by the vehicle.
- the living body detection is used to determine whether the image is from a real person (or a living person), and the identity verification of the driver can be made more accurate through the living body detection.
- This embodiment does not limit the specific method of living body detection. For example, it can be implemented by three-dimensional depth analysis of the image, facial optical-flow analysis, Fourier spectrum analysis, analysis of edge or reflection anti-spoofing cues, comprehensive analysis of multiple image frames in the video stream, and other methods, which will not be repeated here.
- the pre-stored face image in the data set is correspondingly provided with driving authority
- the driving management method may further include: if the feature matching result indicates that the feature matching is successful, obtaining the driving authority corresponding to the pre-stored face image of the successful feature matching;
- Operation 130 may include controlling the vehicle to execute an operation instruction received by the vehicle within the authority range.
- In this way, the safety of the vehicle can be improved, and a driver with higher permissions is guaranteed greater control rights, which can improve the user experience.
- the setting of different permissions can be distinguished by limiting the operating time and / or the operating range. For example, some drivers can drive only during the day or at a specific time, while other drivers can drive all day. Or, some drivers may use the in-car entertainment equipment while driving the vehicle, while other drivers may only drive.
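- A minimal sketch of such permission-scoped execution (the authority fields and operation names are illustrative assumptions; the application only states that permissions may limit the operating time and/or the operating range):

```python
from dataclasses import dataclass, field
from datetime import datetime, time
from typing import Optional

@dataclass
class DrivingAuthority:
    """Hypothetical authority record attached to a pre-stored face image."""
    allowed_start: time = time(0, 0)   # earliest permitted operating time
    allowed_end: time = time(23, 59)   # latest permitted operating time
    allowed_ops: set = field(default_factory=lambda: {"drive"})

def is_instruction_allowed(authority: DrivingAuthority, operation: str,
                           now: Optional[datetime] = None) -> bool:
    """Execute an instruction only when it falls inside the driver's
    permitted operating time and operating range."""
    now = now or datetime.now()
    in_window = authority.allowed_start <= now.time() <= authority.allowed_end
    return in_window and operation in authority.allowed_ops

# Example: a daytime-only driver may drive but not use in-car entertainment.
daytime = DrivingAuthority(time(6, 0), time(20, 0), {"drive"})
print(is_instruction_allowed(daytime, "entertainment"))  # False
```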
- the driving management method further includes:
- an early warning prompt for an abnormal driving state and/or intelligent driving control is performed.
- the results of driver state detection may be output.
- intelligent driving control of the vehicle may be performed according to the result of the driver state detection.
- the result of the driver state detection may be output, and at the same time, intelligent driving control of the vehicle may be performed according to the result of the driver state detection.
- the results of driver state detection may be output locally and / or the results of driver state detection may be output remotely.
- Outputting the result of the driver state detection locally means outputting the result through the driver state detection device or the driver monitoring system, or outputting the result to the central control system in the vehicle, so that the vehicle performs intelligent driving control based on the result of the driver state detection.
- Outputting the result of the driver state detection remotely means, for example, sending the result to a cloud server or a management node, so that the cloud server or management node collects, analyzes, and/or manages the results of driver state detection, or remotely controls the vehicle based on the result of the driver state detection.
- the early warning prompt for an abnormal driving state and/or the intelligent driving control may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by an output module and/or an intelligent driving control module run by the processor.
- the foregoing operations may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a driver state detection unit operated by the processor.
- the driver state detection may include, but is not limited to, any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction motion detection, and driver gesture detection.
- the driver state detection results accordingly include, but are not limited to, any one or more of the following: the driver fatigue state detection result, the driver distraction state detection result, the driver predetermined distraction action detection result, and the driver gesture detection result.
- the predetermined distraction action may be any distraction action that may distract the driver ’s attention, such as: smoking action, drinking action, eating action, phone call action, entertainment action, and the like.
- Eating actions include actions such as eating fruit or snacks; entertainment actions include actions such as sending messages, playing games, or singing with the aid of an electronic device; electronic devices include, for example, mobile phones, handheld computers, and game consoles.
- Driver state detection can thus be performed on a driver image and the result of the driver state detection output, facilitating real-time detection of the driver's driving state while driving; when the driver's driving state is poor, corresponding measures can be taken in time to ensure safe driving and avoid road traffic accidents.
- FIG. 2 is a flowchart of detecting driver fatigue state based on a video stream in some embodiments of the present application.
- the embodiment shown in FIG. 2 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
- a method for detecting driver fatigue status based on a video stream may include:
- the at least partial region of the human face may include at least one of a driver's face eye region, a driver's face mouth region, and an entire region of the driver's face.
- the state information of at least a part of the face may include any one or more of the following: eye opening and closing state information, and mouth opening and closing state information.
- the above eye opening and closing state information may be used to perform closed-eye detection of the driver, for example, detecting whether the driver's eyes are half-closed ("half" indicating a state of incompletely closed eyes, such as squinting while dozing), whether the eyes are closed, the number of eye closures, and the amplitude of eye closure.
- the eye opening and closing state information may be information obtained by normalizing the height of the eyes opened.
- the mouth opening and closing state information can be used to perform yawn detection of the driver, for example, detecting whether the driver yawns, the number of yawns, and the like.
- the mouth opening and closing state information may be information obtained by normalizing the height of the mouth opening.
- face keypoint detection may be performed on the driver image, and eye keypoints in the detected face keypoints may be directly used for calculation, so as to obtain eye opening and closing state information according to the calculation result.
- an eye key point (for example, the coordinate information of the eye key point in the driver image) among the face key points may first be used to locate the eyes in the driver image to obtain an eye image; the upper eyelid line and the lower eyelid line are obtained from this eye image, and the eye opening and closing state information is obtained by calculating the interval between the upper eyelid line and the lower eyelid line.
- the mouth key points in the face key points can be directly used for calculation, so as to obtain the mouth opening and closing state information according to the calculation results.
- the mouth key point (for example, the coordinate information of the mouth key point in the driver image) among the face key points may first be used to locate the mouth in the driver image to obtain a mouth image; the upper lip line and the lower lip line are obtained from the mouth image, and the mouth opening and closing state information is obtained by calculating the interval between the upper lip line and the lower lip line.
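- A minimal sketch of computing a normalized opening value from the eyelid or lip lines (contour points are assumed to be (x, y) arrays; the classification thresholds are illustrative assumptions):

```python
import numpy as np

def normalized_opening(upper_line, lower_line, region_width):
    """Mean vertical interval between an upper and a lower contour line
    (eyelids or lips), normalized by the region width so the value is
    comparable across face sizes and camera distances."""
    gap = np.mean(np.abs(upper_line[:, 1] - lower_line[:, 1]))
    return gap / region_width

def eye_state(upper_eyelid, lower_eyelid, eye_width,
              closed_thresh=0.10, half_thresh=0.20):
    """Classify the eye as 'closed', 'half-closed', or 'open'."""
    opening = normalized_opening(upper_eyelid, lower_eyelid, eye_width)
    if opening < closed_thresh:
        return "closed"
    if opening < half_thresh:
        return "half-closed"   # incompletely closed, e.g. squinting
    return "open"
```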
- the indicators used to characterize the fatigue state of the driver may include, but are not limited to, any one or more of the following: the degree of eyes closed, the degree of yawning.
- the parameter value of the degree of closed eyes may include, but is not limited to, any one or more of the following: the number of eye closures, eye-closure frequency, eye-closure duration, eye-closure amplitude, the number of half-closed eyes, and half-closed-eye frequency; and/or, the yawning parameter values may include, but are not limited to, any one or more of the following: yawn status, yawn count, yawn duration, and yawn frequency.
- the result of the driver fatigue state detection may include: no fatigue state and fatigue driving state are detected.
- the result of the driver fatigue state detection may also be a degree of fatigue driving, where the degree of fatigue driving may include a normal driving level (also referred to as a non-fatigue driving level) and a fatigue driving level.
- the fatigue driving level may be one level, or may be divided into multiple different levels.
- the above-mentioned fatigue driving level may be divided into: a prompt fatigue driving level (also referred to as a mild fatigue driving level) and a warning fatigue driving level (also referred to as a severe fatigue driving level).
- the degree of fatigue driving can be divided into more levels, such as: mild fatigue driving level, moderate fatigue driving level, and severe fatigue driving level. This embodiment does not limit the different levels included in the degree of fatigue driving.
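- A minimal sketch of mapping such parameter values onto a fatigue driving level (the thresholds are illustrative assumptions; the application leaves the concrete values open):

```python
def fatigue_level(closed_eye_duration_s, yawn_count, window_s=60.0):
    """Grade fatigue from the eye-closure duration and yawn count
    observed over a sliding time window."""
    closed_ratio = closed_eye_duration_s / window_s
    if closed_ratio > 0.3 or yawn_count >= 3:
        return "severe fatigue driving level"
    if closed_ratio > 0.1 or yawn_count >= 1:
        return "mild fatigue driving level"
    return "normal driving level"
```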
- FIG. 3 is a flowchart of detecting a distracted state of a driver based on a video stream in some embodiments of the present application.
- the embodiment shown in FIG. 3 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
- a method for detecting driver distraction based on a video stream may include:
- the above-mentioned face orientation information may be used to determine whether the driver's face direction is normal, for example, determining whether the driver's side face is facing forward or whether he is turning back.
- the face orientation information may be an angle between the front of the driver's face and the front of the vehicle being driven by the driver.
- the above-mentioned line-of-sight direction information may be used to determine whether the line-of-sight direction of the driver is normal, for example, determining whether the driver is looking ahead, etc., and the line-of-sight direction information may be used to determine whether the line of sight of the driver has deviated.
- the line of sight direction information may be an angle between the line of sight of the driver and the front of the vehicle being driven by the driver.
- the index used to characterize the driver's distracted state may include, but is not limited to, any one or more of the following: the degree of deviation of the face orientation, and the degree of deviation of the line of sight.
- the parameter value of the degree of deviation of the face orientation may include, but is not limited to, any one or more of the following: the number of turns, the duration of turning, and the frequency of turning; and/or, the parameter value of the degree of line-of-sight deviation may include, but is not limited to, any one or more of the following: the line-of-sight direction deviation angle, the line-of-sight direction deviation duration, and the line-of-sight direction deviation frequency.
- the above degree of line-of-sight deviation may include, for example, at least one of whether the line of sight is deviated and whether the line of sight is severely deviated; and the above degree of deviation of the face orientation (also referred to as the degree of turning the face or turning the head) may include, for example, at least one of whether the head is turned, whether the turn is brief, and whether the turn is prolonged.
- For example, if it is determined that the face orientation information is greater than a first orientation, and this state of being greater than the first orientation persists for N1 frames (for example, 9 or 10 frames), it is determined that the driver has made a prolonged large-angle turn; a prolonged large-angle turn can be recorded, together with the duration of the turn. If it is determined that the face orientation information is not greater than the first orientation but greater than a second orientation, and this state persists for N1 frames, it is determined that the driver has made a prolonged small-angle turn; a small-angle turn can be recorded, together with the duration of the turn.
- Similarly, if the angle between the line-of-sight direction information and the front of the vehicle is greater than a first included angle, and this state persists for N2 frames (for example, 8 or 9 frames), it is determined that the driver has experienced a severe line-of-sight deviation; a severe line-of-sight deviation can be recorded, together with its duration. If the angle is not greater than the first included angle but greater than a second included angle, and this state persists for N2 frames, it is determined that the driver has experienced a line-of-sight deviation; a line-of-sight deviation can be recorded, together with its duration.
- the values of the first orientation, the second orientation, the first included angle, the second included angle, N1, and N2 may be set according to actual conditions, and the value of the values is not limited in this embodiment.
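- A minimal sketch of the N1/N2 persistence rule described above (the threshold and frame counts in the example are illustrative assumptions):

```python
def persistent_events(angles, angle_thresh, min_frames):
    """Count deviation events: an event is recorded only when the per-frame
    angle stays above angle_thresh for at least min_frames consecutive
    frames. Returns (event_count, event_lengths_in_frames)."""
    events, run = [], 0
    for angle in angles:
        if angle > angle_thresh:
            run += 1
        else:
            if run >= min_frames:
                events.append(run)
            run = 0
    if run >= min_frames:
        events.append(run)
    return len(events), events

# Example: per-frame head-turn angles, 30-degree threshold, N1 = 9 frames.
yaw = [5] * 10 + [40] * 12 + [5] * 5 + [35] * 4
print(persistent_events(yaw, 30, 9))  # (1, [12]) - only the 12-frame run counts
```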
- the result of the driver distraction state detection may include, for example: attention focused (the driver is not distracted) and attention distracted; or the result of the driver distraction state detection may be a driver distraction level, which may include, for example: attention focused (the driver is not distracted), attention slightly distracted, attention moderately distracted, and attention severely distracted.
- the level of driver distraction can be determined by a preset condition that is satisfied by a parameter value of an index used to characterize a driver's distracted state.
- For example: if neither the line-of-sight direction deviation angle nor the face orientation deviation angle reaches the first preset angle, the driver distraction level is attention focused; if the line-of-sight direction deviation angle and the face orientation deviation angle are greater than or equal to the first preset angle and the duration is greater than the first preset duration but not longer than the second preset duration, the level is attention slightly distracted; if the line-of-sight direction deviation angle or the face orientation deviation angle is greater than or equal to the first preset angle and the duration is greater than the second preset duration but not longer than the third preset duration, the level is attention moderately distracted; if the line-of-sight direction deviation angle or the face orientation deviation angle is greater than or equal to the first preset angle and the duration is longer than the third preset duration, the level is attention severely distracted.
- the first preset duration is shorter than the second preset duration, and the second preset duration is shorter than the third preset duration.
- In this embodiment, the parameter values of the indices characterizing the driver distraction state are determined by detecting the face orientation and/or line-of-sight direction of driver images, and the result of the driver distraction state detection is determined from them, so as to judge whether the driver is driving attentively. By quantifying the degree of driving concentration into at least one of the line-of-sight deviation index and the head-turning index, the driver's attentive driving state can be measured in a timely and objective manner.
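- A minimal sketch of grading one deviation episode by angle and duration (all preset angles and durations are illustrative assumptions):

```python
def gaze_distraction_level(deviation_angle_deg, duration_s,
                           angle_thresh=15.0, t1=2.0, t2=5.0, t3=10.0):
    """Apply the angle-plus-duration rules above: below the preset angle
    or the first preset duration the driver counts as attention focused;
    longer durations escalate the distraction level."""
    if deviation_angle_deg < angle_thresh or duration_s <= t1:
        return "attention focused"
    if duration_s <= t2:
        return "attention slightly distracted"
    if duration_s <= t3:
        return "attention moderately distracted"
    return "attention severely distracted"
```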
- operation 302 of detecting the face orientation and / or the line of sight direction of the driver image in the video stream may include:
- Face orientation and / or line of sight detection is performed based on key points of the face.
- facial keypoints usually include head pose feature information
- performing face orientation detection based on the face key points to obtain the face orientation information includes: obtaining feature information of the head pose based on the face key points, and determining the face orientation (also called head pose) information from the feature information of the head pose. The face orientation information here can indicate, for example, the direction and angle of rotation of the face, where the direction of rotation can be turning left, turning right, lowering the head, and/or raising the head.
- Face orientation (head pose) can be expressed as (yaw, pitch), where yaw represents the horizontal deflection angle (yaw angle) and pitch represents the vertical deflection angle (pitch angle) of the head in normalized spherical coordinates (the camera coordinate system in which the camera is located).
- If the horizontal deflection angle and/or the vertical deflection angle is greater than a preset angle threshold for longer than a preset time threshold, it may be determined that the result of the driver distraction state detection is inattentive.
- a corresponding neural network may be utilized to obtain face orientation information of at least one driver image.
- the detected face key points may be input to a first neural network, which extracts the feature information of the head pose based on the received face key points and inputs it to a second neural network; the second neural network performs head pose estimation based on the feature information of the head pose and obtains the face orientation information.
- By using neural networks that are already mature and offer good real-time performance for extracting head pose feature information and for estimating face orientation, the face orientation information corresponding to at least one image frame (that is, at least one driver image) of the video captured by the camera can be detected accurately and in a timely manner, which helps improve the accuracy of determining the driver's degree of attention.
- performing line-of-sight detection according to the face key points to obtain the line-of-sight direction information includes: determining the pupil edge position according to the eye image located by the eye key points among the face key points, calculating the pupil center position according to the pupil edge position, and calculating the line-of-sight direction information based on the pupil center position and the eye center position. For example, a vector from the eye center position to the pupil center position in the eye image can be calculated and used as the line-of-sight direction information.
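- A minimal sketch of this vector calculation (the key-point coordinates are made-up pixel values; a deployed system would obtain them from the pupil-edge and eye key points):

```python
import numpy as np

def gaze_vector(pupil_edge_pts, eye_corner_pts):
    """Line-of-sight vector inside the eye image: pupil center (mean of
    the pupil edge points) minus eye center (mean of the eye key points)."""
    pupil_center = np.mean(pupil_edge_pts, axis=0)
    eye_center = np.mean(eye_corner_pts, axis=0)
    return pupil_center - eye_center

pupil = np.array([[30, 19], [34, 15], [38, 19], [34, 23]], dtype=float)
eye = np.array([[20, 20], [50, 20]], dtype=float)
print(gaze_vector(pupil, eye))  # [-1. -1.]: gaze shifted left and slightly up
```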
- the direction of the line of sight can be used to determine whether the driver is focusing on driving.
- the line-of-sight direction can be expressed as (yaw, pitch), where yaw represents the horizontal deflection angle (yaw angle) and pitch represents the vertical deflection angle (pitch angle) of the line of sight in normalized spherical coordinates (the camera coordinate system in which the camera is located).
- If the horizontal deflection angle and/or the vertical deflection angle is greater than a preset angle threshold for longer than a preset time threshold, it may be determined that the result of the driver distraction state detection is inattentive.
- determining the pupil edge position according to the eye image located by the eye key points among the face key points can be achieved as follows: pupil edge detection is performed on the eye region image via a third neural network, and the pupil edge position is obtained based on the information output by the third neural network.
- FIG. 4 is a flowchart of detecting a predetermined distracted motion of a driver based on a video stream in some embodiments of the present application.
- the embodiment shown in FIG. 4 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
- a method for detecting a driver's predetermined distraction based on a video stream may include:
- predetermined distraction action detection is performed on the driver by detecting a target object corresponding to the predetermined distraction action and determining, according to the detection frame of the detected target object, whether the predetermined distraction action occurs, thereby determining whether the driver is distracted.
- the operations 402 to 404 may include: performing face detection on the driver image via a fourth neural network to obtain a face detection frame, and extracting feature information of the face detection frame; the fourth neural network then determines whether a smoking action occurs based on the feature information of the face detection frame.
- the above operations 402 to 404 may include: detecting, via a fifth neural network, preset target objects corresponding to the eating, drinking, calling, and entertainment actions in the driver image to obtain detection frames of the preset target objects, where the preset target objects may include: hands, mouth, eyes, and target objects; the target objects may include, but are not limited to, any one or more of the following: containers, food, and electronic devices; and the detection result of the predetermined distraction action is determined according to the detection frames of the preset target objects.
- the detection result of the predetermined distraction action may include one of the following: no eating action / drinking action / calling action / entertainment action, eating action, drinking action, calling action, or entertainment action.
- determining the detection result of the predetermined distraction action according to the detection frames of the preset target objects may include: determining whether a detection frame of the hand, a detection frame of the mouth, a detection frame of the eye, and a detection frame of a target object are detected, and determining the detection result of the predetermined distraction action according to whether the detection frame of the hand overlaps the detection frame of the target object, the type of the target object, and whether the distance between the detection frame of the target object and the detection frame of the mouth or the detection frame of the eye satisfies preset conditions.
- If the detection frame of the hand overlaps the detection frame of the target object, the type of the target object is a container or food, and the detection frame of the target object overlaps the detection frame of the mouth, it is determined that an eating or drinking action occurs; and/or, if the detection frame of the hand overlaps the detection frame of the target object, the type of the target object is an electronic device, and the minimum distance between the detection frame of the target object and the detection frame of the mouth is less than a first preset distance, or the minimum distance between the detection frame of the target object and the detection frame of the eye is less than a second preset distance, it is determined that an entertainment action or a calling action occurs.
- If the detection frame of the hand, the detection frame of the mouth, and the detection frame of any target object are not detected simultaneously, and the detection frame of the hand, the detection frame of the eye, and the detection frame of any target object are not detected simultaneously, the detection result of the distraction action is determined to be that no eating, drinking, calling, or entertainment action is detected; and/or, if the detection frame of the hand does not overlap the detection frame of the target object, the detection result of the distraction action is determined to be that no eating, drinking, calling, or entertainment action is detected; and/or, if the type of the target object is a container or food and there is no overlap between the detection frame of the target object and the detection frame of the mouth, and/or the type of the target object is an electronic device and the minimum distance between the detection frame of the target object and the detection frame of the mouth is not less than the first preset distance, or the minimum distance between the detection frame of the target object and the detection frame of the eye is not less than the second preset distance, it is likewise determined that no eating, drinking, calling, or entertainment action is detected. A minimal sketch of this overlap-and-distance logic follows.
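- The sketch below assumes axis-aligned detection frames given as (x1, y1, x2, y2) tuples; the distance thresholds are illustrative assumptions:

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x1, y1, x2, y2) boxes."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def min_box_distance(a, b):
    """Minimum gap between two boxes (0 when they overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def classify_action(hand, mouth, eye, obj, obj_type,
                    d_mouth=30.0, d_eye=60.0):
    """Apply the overlap/distance rules above; boxes may be None when the
    corresponding detection frame was not detected."""
    if hand is None or obj is None or not boxes_overlap(hand, obj):
        return "no action detected"
    if obj_type in ("container", "food"):
        if mouth is not None and boxes_overlap(obj, mouth):
            return "eating or drinking action"
    elif obj_type == "electronic device":
        if mouth is not None and min_box_distance(obj, mouth) < d_mouth:
            return "calling action"
        if eye is not None and min_box_distance(obj, eye) < d_eye:
            return "entertainment action"
    return "no action detected"
```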
- the method may further include: if the result of the driver's distraction state detection is that a predetermined distraction action is detected, prompting the detected predetermined distraction action, for example: when a smoking action is detected, prompting that smoking is detected; when a drinking action is detected, prompting that drinking is detected; when a calling action is detected, prompting that a call is detected.
- the operation of prompting the detected predetermined distraction action may be executed by the processor calling a corresponding instruction stored in the memory, or may be performed by a prompt unit run by the processor.
- the index used to characterize the degree of driver distraction may include, but is not limited to, any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action; for example: the number, duration, and frequency of smoking actions; the number, duration, and frequency of drinking actions; the number, duration, and frequency of phone calls; and so on.
- the result of the driver's predetermined distraction action detection may include: no predetermined distraction action detected, or the particular predetermined distraction action detected.
- the result of the driver's predetermined distraction action detection may also be a distraction level; for example, the distraction level may be divided into: an undistracted level (also referred to as a focused driving level), a prompt distracted driving level (also referred to as a mild distracted driving level), and a warning distracted driving level (also referred to as a severe distracted driving level).
- the level of distraction can also be divided into more levels, such as: undistracted driving level, mildly distracted driving level, moderately distracted driving level, and severely distracted driving level.
- the distraction level of at least one of the embodiments may also be divided according to other situations, and is not limited to the above-mentioned level division.
- the distraction level may be determined by a preset condition satisfied by the parameter value of the index used to characterize the driver's degree of distraction. For example: if no predetermined distraction action is detected, the distraction level is the undistracted level (also referred to as a focused driving level); if it is detected that the duration of the predetermined distraction action is less than a first preset duration and the frequency is less than a first preset frequency, the distraction level is a mild distracted driving level; if it is detected that the duration of the predetermined distraction action is greater than the first preset duration and/or the frequency is greater than the first preset frequency, the distraction level is a severe distracted driving level.
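- As a minimal sketch of this level division (the first preset duration and first preset frequency below are assumed example values, not values given in this application):

```python
def distraction_level(action_detected, duration_s, freq_per_min,
                      first_duration=3.0, first_frequency=2.0):
    # Maps the detection result within a period of time to a level;
    # the threshold values are illustrative assumptions.
    if not action_detected:
        return "undistracted (focused driving) level"
    if duration_s > first_duration or freq_per_min > first_frequency:
        return "warning (severely) distracted driving level"
    return "mild distracted driving level"
```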
- the driver state detection method may further include: outputting distraction prompt information according to a result of the driver's distracted state detection and / or a result of the driver's predetermined distracted motion detection.
- the output distraction prompt information is used to remind the driver to concentrate on driving.
- the foregoing operation of outputting the distraction prompt information according to the result of the driver's distraction state detection and/or the result of the driver's predetermined distraction action detection may be executed by the processor calling a corresponding instruction stored in the memory, or may be executed by a prompt unit run by the processor.
- FIG. 5 is a flowchart of a driver state detection method according to some embodiments of the present application.
- the embodiment shown in FIG. 5 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
- the driver state detection method in this embodiment includes:
- each driver state level corresponds to a preset condition; the results of driver fatigue state detection, driver distraction state detection, and driver predetermined distraction action detection can be judged against these preset conditions in real time.
- the driver state level corresponding to the satisfied preset condition may be determined as the result of the driver state detection.
- the driver status level may include, for example, a normal driving status (also referred to as a focused driving level), a prompt driving status (a poor driving status), and a warning driving status (a very poor driving status).
- the foregoing embodiment shown in FIG. 5 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by an output module run by the processor.
- the preset conditions corresponding to a normal driving state may include:
- Condition 1: the result of the driver fatigue state detection is: no fatigue state detected, or a non-fatigue driving level;
- Condition 2: the result of the driver distraction state detection is: the driver's attention is focused;
- Condition 3: the result of the driver's predetermined distraction action detection is: no predetermined distraction action detected, or an undistracted level.
- when the above conditions are all satisfied, the driving state level is a normal driving state (also referred to as a focused driving level).
- the preset conditions corresponding to the prompt driving state may include:
- Condition 11: the result of the driver fatigue state detection is: a prompt fatigue driving level (also referred to as a mild fatigue driving level);
- Condition 33: the result of the driver's predetermined distraction action detection is: a prompt distracted driving level (also referred to as a mild distracted driving level).
- when any of the above conditions is satisfied, the driving state level is a prompt driving state (the driving state is poor).
- the preset conditions corresponding to the warning driving state may include:
- the result of the driver fatigue state detection is: a warning fatigue driving level (also referred to as a severe fatigue driving level);
- the result of the driver's predetermined distraction action detection is: a warning distracted driving level (also referred to as a severe distracted driving level).
- when any of the above conditions is satisfied, the driving state level is a warning driving state (the driving state is very poor).
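- The mapping from the three detection results to a driver state level can be sketched as follows; the level labels and the priority of the checks are assumptions consistent with the preset conditions above:

```python
def driving_state_level(fatigue, distraction, action):
    # Each argument is the level produced by the corresponding detector:
    # "none", "prompt", or "warning" (labels assumed for this sketch).
    if fatigue == "warning" or action == "warning":
        return "warning driving state (very poor)"
    if fatigue == "prompt" or action == "prompt":
        return "prompt driving state (poor)"
    if fatigue == "none" and distraction == "focused" and action == "none":
        return "normal driving state (focused driving level)"
    return "prompt driving state (poor)"  # conservative default (assumption)
```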
- the driver state detection method may further include:
- a control operation corresponding to the result of the driver state detection is performed.
- the execution of the control operation corresponding to the result of the driver state detection may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a control unit executed by the processor.
- performing a control operation corresponding to the result of the driver state detection may include at least one of the following (a combined sketch follows this list):
- if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, output prompt/alarm information corresponding to the predetermined prompt/alarm condition, for example: alert the driver by means of sound (such as voice or ringing), light (such as lights on or flashing), or vibration, in order to draw the driver's attention, prompt the driver to return attention to driving, or encourage the driver to rest, so as to achieve safe driving and avoid road traffic accidents; and/or,
- if the determined result of the driver state detection satisfies a predetermined driving mode switching condition, for example, a preset condition corresponding to the warning driving state (the driving state is very poor) is satisfied, or the driving state level is a warning distracted driving level (also referred to as a severe distracted driving level), switch the driving mode to the automatic driving mode to achieve safe driving and avoid road traffic accidents; at the same time, the driver may also be reminded by sound (such as voice or ringing), light (such as lights on or flashing), or vibration, in order to prompt the driver to return attention to driving or encourage the driver to rest; and/or,
- if the determined result of the driver state detection satisfies a predetermined information sending condition, send predetermined information to a predetermined contact or establish a communication connection with the predetermined contact; for example, when the driver makes a certain preset action, it indicates that the driver is in a dangerous state or needs assistance.
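- A combined sketch of this control dispatch is given below; the result fields and the three platform hooks are hypothetical names, since this application does not define a concrete API:

```python
def perform_control(result, notify, switch_mode, send_message):
    # result: driver state detection outcome (a dict in this sketch);
    # notify / switch_mode / send_message are assumed vehicle-platform hooks.
    if result.get("prompt_alarm_condition_met"):
        notify(sound=True, light=True, vibration=True)
    if result.get("mode_switch_condition_met"):
        switch_mode("automatic")  # switch to the automatic driving mode
        notify(sound=True, light=True, vibration=True)
    if result.get("info_sending_condition_met"):
        send_message(contact="preset emergency contact",
                     info="driver may be in a dangerous state")
```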
- the driver state detection method may further include: sending at least part of a result of the driver state detection to a cloud server.
- At least part of the results include: abnormal driving state information determined according to driver state detection.
- sending part or all of the results obtained from driver state detection to a cloud server can back up the abnormal driving state information. Since a normal driving state does not need to be recorded, this embodiment only sends the abnormal driving state information to the cloud server; when the obtained driver state detection results include both normal driving state information and abnormal driving state information, part of the results is transmitted, that is, only the abnormal driving state information is sent to the cloud server; when all the results are abnormal driving state information, all the abnormal driving state information is transmitted to the cloud server.
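- A one-line filter captures the rule that only abnormal driving state information is uploaded; the `is_abnormal` flag is an assumed field name for this sketch:

```python
def results_to_upload(detection_results):
    # Keep only the abnormal driving state information for the cloud backup.
    return [r for r in detection_results if r.get("is_abnormal")]
```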
- the driver state detection method may further include: storing the image or video segment in the video stream corresponding to the abnormal driving state information; and/or, sending the image or video segment in the video stream corresponding to the abnormal driving state information to the cloud server.
- the image or video segment corresponding to the abnormal driving state information is saved locally on the vehicle side to realize evidence preservation.
- images or video segments corresponding to abnormal driving state information can be uploaded to the cloud server for backup; when the information is needed, it can be downloaded from the cloud server to the vehicle for viewing, or downloaded from the cloud server to other clients for viewing.
- the driving management method further includes: when the vehicle and the mobile terminal device are in a communication connection state, sending a data set download request to the mobile terminal device;
- the data set is obtained by the mobile device from the cloud server and sent to the vehicle when the data set download request is received.
- the mobile terminal device may be a mobile phone, a PAD, or a terminal device on another vehicle.
- when the mobile terminal device receives the data set download request, it sends the data set download request to the cloud server, obtains the data set, and sends the data set to the vehicle.
- since the mobile terminal device can use a mobile network (such as 2G, 3G, or 4G), this avoids the problem that a vehicle restricted by the network cannot download the data set from the cloud server to perform face matching.
- the driving management method further includes: if the feature matching result indicates that the feature matching is unsuccessful, refusing to execute the received operation instruction.
- the unsuccessful feature matching indicates that the driver has not been registered. At this time, in order to protect the rights of the registered driver, the vehicle will refuse to execute the driver's operation instruction.
- the driving management method further includes:
- the driver registration request includes a registered face image of the driver;
- when a driver registration request sent by a driver is received by the vehicle, the registered face image of the driver is saved;
- a data set is established on the vehicle side based on the registered face image, so that face matching can be performed on the vehicle side without downloading the data set from a cloud server.
- FIG. 6 is a flowchart of an application example of a driving management method according to some embodiments of the present application.
- the execution subject of the driving management method in this embodiment may be a vehicle-end device.
- the execution subject may be an in-vehicle intelligent system or another device with similar functions; the filtered face image and the driver ID information are associated with the corresponding driver authority information and stored in a data set;
- the vehicle client obtains the driver image, and the driver image is subjected to face detection, quality screening, and living body recognition in order.
- the filtered to-be-recognized face image is matched against all the face images in the data set; the face features used for matching can be obtained through neural network extraction.
- the authority information corresponding to the face image to be identified is determined, and the vehicle action is controlled based on the authority information.
- the vehicle client performs feature extraction on the to-be-recognized image and the face images in the data set to obtain the corresponding facial features, performs matching based on the facial features, and performs corresponding operations based on the matching result.
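- The matching step can be sketched with cosine similarity over the extracted features; the similarity measure and threshold are assumed choices, since this application only states that matching is based on facial features:

```python
import numpy as np

def match_face(query_feature, dataset_features, threshold=0.6):
    # query_feature: feature vector of the to-be-recognized face;
    # dataset_features: {driver_id: feature vector} built from the data set.
    q = query_feature / np.linalg.norm(query_feature)
    best_id, best_sim = None, -1.0
    for driver_id, feature in dataset_features.items():
        f = feature / np.linalg.norm(feature)
        sim = float(np.dot(q, f))
        if sim > best_sim:
            best_id, best_sim = driver_id, sim
    # Below the threshold, the feature matching is considered unsuccessful.
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)
```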
- operation 120 may include: when the vehicle and the cloud server are in a communication connection state, uploading the face portion of at least one image in the video stream to the cloud server, and receiving the feature matching result sent by the cloud server.
- feature matching is implemented in a cloud server.
- the vehicle uploads the face portion of at least one image in the video stream to the cloud server, and the cloud server performs feature matching between the face portion and the pre-stored face images in the data set to obtain the feature matching result.
- the vehicle obtains the feature matching result from the cloud server, which reduces the amount of data transmission between the vehicle and the cloud server, and reduces network overhead.
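- On the vehicle side, uploading only the cropped face portion might look like the following sketch; the endpoint URL and response format are hypothetical assumptions:

```python
import requests

def cloud_match(face_crop_jpeg: bytes,
                url="https://cloud.example.com/api/face-match"):
    # Upload only the cropped face portion rather than the full frame,
    # keeping the payload small as described above.
    resp = requests.post(url,
                         files={"face": ("face.jpg", face_crop_jpeg,
                                         "image/jpeg")},
                         timeout=5)
    resp.raise_for_status()
    return resp.json()  # e.g. {"matched": true, "driver_id": "..."} (assumed)
```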
- the foregoing program may be stored in a computer-readable storage medium.
- when the program is executed, the steps of the foregoing method embodiment are performed.
- the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
- FIG. 7 is a schematic structural diagram of a vehicle-mounted intelligent system according to some embodiments of the present application.
- the in-vehicle intelligent system of this embodiment can be used to implement the foregoing driving management method embodiments of the present application.
- the vehicle-mounted intelligent system of this embodiment includes:
- the video acquisition unit 71 is configured to control a camera component provided on the vehicle to collect a video stream of a driver of the vehicle.
- the result obtaining unit 72 is configured to obtain a feature matching result of a face part of at least one image in the video stream and at least one pre-stored face image in the data set.
- a pre-stored face image of at least one registered driver is stored in the data set.
- An operation unit 73 is configured to control the vehicle to execute an operation instruction received by the vehicle if the feature matching result indicates that the feature matching is successful.
- in this embodiment, the video stream of the driver of the vehicle is collected by controlling the camera component provided on the vehicle; the feature matching result between the face portion of at least one image in the video stream and at least one pre-stored face image in the data set is acquired; and, if the feature matching result indicates that the feature matching is successful, the vehicle is controlled to execute the operation instruction received by the vehicle. This reduces the reliance of driver recognition on the network, enables feature matching without the network, and further improves the security of the vehicle.
- the vehicle-mounted intelligent system further includes:
- a first data downloading unit configured to send a data set download request to the cloud server when the vehicle and the cloud server are in a communication connection state
- the data storage unit is used for receiving and storing the data set sent by the cloud server.
- the vehicle-mounted intelligent system further includes:
- the first cloud storage unit is configured to: if the feature matching result indicates that the feature matching is successful, obtain the identity information of the driver of the vehicle according to the pre-stored face image of the successful feature matching; and send the image and the identity information to the cloud server.
- the vehicle-mounted intelligent system may further include:
- the second cloud storage unit is configured to: if the feature matching result indicates that the feature matching is successful, obtain the identity information of the driver of the vehicle according to the pre-stored face image of the successful feature matching; intercept the face part in the image; and send the intercepted face part and the identity information to the cloud server.
- the in-vehicle intelligent system may further include: a living body detection unit for obtaining a living body detection result of the acquired image;
- the operation unit 73 is configured to control the vehicle to execute the operation instruction received by the vehicle according to the feature matching result and the living body detection result.
- the pre-stored face image in the data set is correspondingly provided with driving authority
- An authority obtaining unit is configured to obtain the driving authority corresponding to the pre-stored face image of the successful feature matching if the feature matching result indicates that the feature matching is successful;
- the operation unit 73 is further configured to control the vehicle to execute an operation instruction received by the vehicle within the authority range.
- the vehicle-mounted intelligent system further includes:
- Status detection unit for detecting driver status based on a video stream
- An output unit for providing an early warning prompt for an abnormal driving state according to the result of the driver state detection and / or,
- the intelligent driving control unit is configured to perform intelligent driving control according to the result of the driver state detection.
- the result of the driver state detection may be output.
- intelligent driving control of the vehicle may be performed according to the result of the driver state detection.
- the result of the driver state detection may be output, and at the same time, intelligent driving control of the vehicle may be performed according to the result of the driver state detection.
- the driver state detection includes any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction motion detection, and driver gesture detection.
- when the state detection unit performs driver fatigue state detection based on the video stream, the state detection unit is configured to:
- detect at least part of a face region of at least one image in the video stream to obtain state information of at least part of the face, where the state information of at least part of the face includes any one or more of the following: eye open/closed state information and mouth open/closed state information; and obtain, according to the state information of at least part of the face within a period of time, a parameter value of an index used to characterize the driver fatigue state;
- the result of the driver fatigue state detection is determined according to a parameter value of an index for characterizing the driver fatigue state.
- the index used to characterize the driver fatigue state includes any one or more of the following: the degree of eye closure and the degree of yawning.
- the parameter value of the degree of eye closure includes any one or more of the following: the number of eye closures, eye closure frequency, eye closure duration, eye closure amplitude, the number of half eye closures, and half eye closure frequency; and/or,
- the parameter value of the degree of yawning includes any one or more of the following: yawning state, the number of yawns, yawn duration, and yawn frequency (a sketch of computing such parameter values follows).
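- The sketch below computes these parameter values from per-frame eye/mouth state; the aspect-ratio thresholds are common heuristics assumed here, since this application only lists which parameter values are used:

```python
def fatigue_parameters(frames, fps, ear_closed=0.2, mar_yawn=0.6):
    # frames: per-frame (eye_aspect_ratio, mouth_aspect_ratio) pairs
    # over the observed period of time; fps: frames per second.
    closed = [ear < ear_closed for ear, _ in frames]
    yawning = [mar > mar_yawn for _, mar in frames]
    seconds = len(frames) / fps

    def episode_count(flags):
        # Number of contiguous True runs (one run = one episode).
        count, prev = 0, False
        for flag in flags:
            if flag and not prev:
                count += 1
            prev = flag
        return count

    return {
        "eye_closure_count": episode_count(closed),
        "eye_closure_duration_s": sum(closed) / fps,
        "eye_closure_frequency_per_min": episode_count(closed) / seconds * 60,
        "yawn_count": episode_count(yawning),
        "yawn_frequency_per_min": episode_count(yawning) / seconds * 60,
    }
```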
- when the state detection unit performs driver distraction state detection based on the video stream, the state detection unit is configured to:
- perform face orientation and/or line-of-sight direction detection on the driver image in the video stream to obtain face orientation information and/or line-of-sight direction information, and determine, according to the face orientation information and/or line-of-sight direction information within a period of time, a parameter value of an index used to characterize the driver distraction state; the index used to characterize the driver's distraction state includes any one or more of the following: the degree of face orientation deviation and the degree of line-of-sight deviation;
- the result of detecting the driver's distraction state is determined according to a parameter value of an index for characterizing the driver's distraction state.
- the parameter value of the degree of face orientation deviation includes any one or more of the following: the number of head turns, head turn duration, and head turn frequency; and/or,
- the parameter values of the degree of line of sight deviation include any one or more of the following: the angle of line of sight deviation, the length of time of line of sight deviation, and the frequency of line of sight deviation.
- when the state detection unit performs face orientation and/or line-of-sight direction detection on the driver image in the video stream, the state detection unit is configured to: detect face key points of the driver image in the video stream;
- face orientation and/or line-of-sight direction detection is then performed according to the face key points.
- when the state detection unit performs face orientation detection according to the face key points, the state detection unit is configured to: obtain feature information of the head posture according to the face key points;
- Face orientation information is determined based on the feature information of the head posture.
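- Head posture can be estimated from face key points with a standard perspective-n-point solve; the sketch below uses OpenCV and a generic 3D landmark model, both of which are assumptions outside this application:

```python
import cv2
import numpy as np

# Generic 3D reference positions (in mm) for six face key points:
# nose tip, chin, left/right eye corner, left/right mouth corner.
MODEL_3D = np.array([
    [0.0, 0.0, 0.0], [0.0, -63.6, -12.5],
    [-43.3, 32.7, -26.0], [43.3, 32.7, -26.0],
    [-28.9, -28.9, -24.1], [28.9, -28.9, -24.1]], dtype=np.float64)

def head_pose(landmarks_2d, frame_w, frame_h):
    # landmarks_2d: 6x2 array of the matching detected key points.
    f = float(frame_w)  # rough focal-length assumption
    camera = np.array([[f, 0, frame_w / 2.0],
                       [0, f, frame_h / 2.0],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_3D,
                               np.asarray(landmarks_2d, dtype=np.float64),
                               camera, np.zeros(4))
    rot, _ = cv2.Rodrigues(rvec)
    # Euler angles from the rotation matrix; which angle is treated as
    # the face-orientation "yaw" depends on the landmark convention used.
    pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    yaw = np.degrees(np.arctan2(-rot[2, 0],
                                np.sqrt(rot[2, 1] ** 2 + rot[2, 2] ** 2)))
    return yaw, pitch
```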
- the predetermined distraction action includes any one or more of the following: smoking action, drinking action, eating action, calling action, and entertaining action.
- when the state detection unit performs predetermined distraction action detection of the driver based on the video stream, the state detection unit is configured to: perform target object detection corresponding to the predetermined distraction action on at least one image in the video stream to obtain a detection frame of the target object; and determine, according to the detection frame of the target object, whether the predetermined distraction action occurs.
- the state detection unit is further configured to:
- if a predetermined distraction action occurs, obtain, according to a determination result of whether the predetermined distraction action occurs within a period of time, a parameter value of an index used to characterize the degree of driver distraction;
- the result of the driver's predetermined distracted motion detection is determined according to a parameter value of an index used to characterize the degree of distraction.
- the parameter value of the index of the degree of distraction includes any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action.
- the vehicle-mounted intelligent system further includes:
- the prompting unit is configured to prompt the detected distracted motion if the result of the driver's predetermined distracted motion detection is that a predetermined distracted motion is detected.
- the vehicle-mounted intelligent system further includes:
- the control unit is configured to perform a control operation corresponding to a result of the driver state detection.
- the control unit is configured to: if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, output prompt/alarm information corresponding to the predetermined prompt/alarm condition; and/or, if the determined result of the driver state detection satisfies a predetermined information sending condition, send predetermined information to a predetermined contact or establish a communication connection with the predetermined contact; and/or, if the determined result of the driver state detection satisfies a predetermined driving mode switching condition, switch the driving mode to an automatic driving mode.
- the vehicle-mounted intelligent system further includes:
- the result sending unit is configured to send at least a part of the result of the driver state detection to the cloud server.
- At least part of the results include: abnormal driving state information determined according to driver state detection.
- the vehicle-mounted intelligent system further includes: a video storage unit, configured to: store the image or video segment in the video stream corresponding to the abnormal driving state information; and/or, send the image or video segment in the video stream corresponding to the abnormal driving state information to the cloud server.
- the vehicle-mounted intelligent system further includes:
- the second data downloading unit is configured to send a data set download request to the mobile terminal device when the vehicle and the mobile terminal device are in a communication connection state; receive and store the data set sent by the mobile terminal device.
- the data set is acquired by the mobile terminal device from the cloud server and sent to the vehicle when the data set download request is received.
- the operation unit 73 is further configured to refuse to execute the received operation instruction if the feature matching result indicates that the feature matching is unsuccessful.
- the operation unit 73 is further configured to: issue prompt registration information; receive a driver registration request according to the prompt registration information, where the driver registration request includes a registered face image of the driver; and establish a data set according to the registered face image.
- the result obtaining unit 72 is configured to, when the vehicle-end device and the cloud server are in a communication connection state, upload the face portion of at least one image in the video stream to the cloud server, and receive the feature matching result sent by the cloud server.
- FIG. 8 is a flowchart of a driving management method according to another embodiment of the present application.
- the execution subject of the driving management method in this embodiment may be a cloud server.
- the execution subject may be an electronic device or other device with similar functions.
- the method in this embodiment includes:
- the face image to be identified is collected by a vehicle, and a face image is obtained from an image in the captured video through face detection.
- the process of obtaining a face image based on the image in the video may include: face detection, face quality screening, and living body recognition. Through these processes, it can be ensured that the obtained face image to be recognized is a good-quality face image of a real driver in the vehicle, which ensures the effect of subsequent feature matching.
- the operation 810 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by an image receiving unit 91 executed by a processor.
- a pre-stored face image of at least one registered driver is stored in the data set; optionally, the cloud server may directly obtain a feature matching result from the vehicle. At this time, the feature matching process is implemented on the vehicle side.
- a feature matching result between the face image and at least one pre-stored face image in the data set is obtained from the vehicle.
- the operation 820 may be executed by the processor calling a corresponding instruction stored in the memory, or may be executed by the matching result obtaining unit 92 executed by the processor.
- the operation 830 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by an instruction sending unit 93 executed by the processor.
- this reduces the reliance of driver recognition on the network, enables feature matching without the network, and further improves the security of the vehicle.
- the driving management method further includes:
- the data set is usually stored in a cloud server.
- to implement face matching on the vehicle side even when there is no network, the data set can be downloaded from the cloud server while the network is available and saved on the vehicle side; in that case, even if there is no network and the vehicle cannot communicate with the cloud server, face matching can still be achieved on the vehicle side, and it is convenient for the vehicle side to manage the data set (a sketch follows).
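- A sketch of this download-and-cache behavior; the fetch callable and file format are assumptions:

```python
import json
import os

def ensure_dataset(local_path, fetch_from_cloud):
    # Refresh the cached data set while the network is available;
    # fall back to the local copy when the cloud is unreachable.
    try:
        data = fetch_from_cloud()  # assumed callable returning the data set
        with open(local_path, "w") as f:
            json.dump(data, f)
    except Exception:  # any fetch failure, e.g. no network
        if not os.path.exists(local_path):
            raise  # neither network nor a cached data set is available
    with open(local_path) as f:
        return json.load(f)
```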
- the driving management method further includes:
- receiving a driver registration request sent by a vehicle or a mobile terminal device, where the driver registration request includes a registered face image of the driver;
- to identify whether a driver is registered, it is necessary to first store the registered face image corresponding to the registered driver;
- a data set is established for the registered face images, and the registered face images of multiple drivers are saved by the cloud server, ensuring data security.
- operation 820 may include:
- Feature matching is performed on the face image and at least one pre-stored face image in the data set to obtain a feature matching result.
- feature matching is implemented in a cloud server.
- the vehicle uploads the face portion of at least one image in the video stream to the cloud server, and the cloud server performs feature matching between the face portion and the pre-stored face images in the data set to obtain the feature matching result.
- the vehicle obtains the feature matching result from the cloud server, which reduces the amount of data transmission between the vehicle and the cloud server, and reduces network overhead.
- the driving management method further includes:
- At least part of the results include: abnormal driving state information determined according to driver state detection.
- sending part or all of the results obtained from driver state detection to the cloud server can back up the abnormal driving state information. Since a normal driving state does not need to be recorded, this embodiment only sends the abnormal driving state information to the cloud server; when the obtained driver state detection results include both normal driving state information and abnormal driving state information, part of the results is transmitted, that is, only the abnormal driving state information is transmitted to the cloud server; when all the results of the driver state detection are abnormal driving state information, all the abnormal driving state information is transmitted to the cloud server.
- the driving management method further includes: performing a control operation corresponding to a result of the driver state detection.
- if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, for example, a preset condition corresponding to the prompt driving state (the driving state is poor) is satisfied, or the driving state level is a prompt driving state, output prompt/alarm information corresponding to the predetermined prompt/alarm condition, for example, by sound (such as voice or ringing), light (such as lights on or flashing), or vibration; and/or,
- if the determined result of the driver state detection satisfies a predetermined driving mode switching condition, for example, a preset condition corresponding to the warning driving state (the driving state is very poor) is satisfied, or the driving state level is a warning distracted driving level (also referred to as a severe distracted driving level), switch the driving mode to the automatic driving mode to achieve safe driving and avoid road traffic accidents; at the same time, the driver may also be reminded by sound (such as voice or ringing), light (such as lights on or flashing), or vibration, in order to prompt the driver to return attention to driving or encourage the driver to rest; and/or,
- if the determined result of the driver state detection satisfies a predetermined information sending condition, send predetermined information (such as alarm information, prompt information, or a dialed number) to a predetermined contact (for example: an alarm telephone, the telephone of the nearest contact, or a set emergency contact telephone), or establish a communication connection (such as a video call, voice call, or telephone call) with the predetermined contact directly through the in-vehicle device, to protect the personal and/or property safety of the driver.
- the driving management method further includes:
- an image or video segment corresponding to the abnormal driving state information may be uploaded to a cloud server for backup; when the information is needed, it may be downloaded from the cloud server to the vehicle side for viewing, or downloaded from the cloud server to other clients for viewing.
- the driving management method further includes:
- based on the abnormal driving state information, at least one of the following operations can be performed: data statistics, vehicle management, and driver management.
- the cloud server can receive abnormal driving state information from multiple vehicles, thereby implementing big-data-based statistics, management of vehicles and drivers, and better services for vehicles and drivers.
- performing data statistics based on abnormal driving state information includes:
- the received image or video segment corresponding to the abnormal driving state information is counted, and the image or video segment is classified according to different abnormal driving states to determine the statistical situation of each abnormal driving state.
- the classification and statistics of the different abnormal driving states can be used to obtain, based on big data, the abnormal driving states frequently encountered by drivers; this can provide more reference data for vehicle developers, so that vehicles can be equipped with settings or devices that respond more suitably to abnormal driving states, providing the driver with a more comfortable driving environment.
- vehicle management based on abnormal driving state information includes:
- the received image or video segment corresponding to the abnormal driving state information is counted, and the image or video segment is classified according to different vehicles to determine the abnormal driving statistics of each vehicle.
- the abnormal driving state information of all drivers corresponding to a vehicle can be processed; for example, when a problem occurs in a certain vehicle, liability determination can be achieved by viewing all the abnormal driving state information corresponding to that vehicle.
- performing driver management based on abnormal driving state information includes:
- the received image or video segment corresponding to the abnormal driving state information is processed based on the abnormal driving state information, so that the image or video segment is classified according to different drivers, and the abnormal driving statistics of each driver are determined.
- each driver's driving habits and frequently occurring problems can be obtained.
- each driver can be provided with personalized services; while achieving the goal of safe driving, this does not interfere with drivers who have good driving habits. For example, if the statistics of abnormal driving states show that a certain driver often yawns while driving, prompt information at a higher volume can be provided for that driver.
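- Grouping the received records per driver is enough to surface such habits; the record field names below are assumptions for illustration:

```python
from collections import Counter, defaultdict

def driver_statistics(records):
    # records: dicts with assumed 'driver_id' and 'abnormal_state' fields.
    stats = defaultdict(Counter)
    for record in records:
        stats[record["driver_id"]][record["abnormal_state"]] += 1
    return stats

# e.g. if driver_statistics(records)["driver_A"]["yawning"] is high,
# prompt information at a higher volume can be configured for that driver.
```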
- the foregoing program may be stored in a computer-readable storage medium.
- when the program is executed, the steps of the foregoing method embodiment are performed.
- the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
- FIG. 9 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
- the electronic device in this embodiment may be used to implement the foregoing driving management method embodiments of the present application.
- the electronic device of this embodiment includes:
- the image receiving unit 91 is configured to receive a face image to be identified sent by a vehicle.
- the matching result obtaining unit 92 is configured to obtain a feature matching result between the face image and at least one pre-stored face image in the data set.
- a pre-stored face image of at least one registered driver is stored in the data set.
- a feature matching result between the face image and at least one pre-stored face image in the data set is obtained from the vehicle.
- the instruction sending unit 93 is configured to send an instruction for controlling the vehicle to the vehicle if the feature matching result indicates that the feature matching is successful.
- the electronic device further includes:
- the first data sending unit is configured to receive a data set download request sent by a vehicle.
- where a pre-stored face image of at least one registered driver is stored in the data set; and to send the data set to the vehicle.
- the electronic device further includes:
- a registration request receiving unit configured to receive a driver registration request sent by a vehicle or a mobile terminal device, where the driver registration request includes a registered face image of the driver;
- the matching result obtaining unit 92 is configured to perform feature matching on a face image and at least one pre-stored face image in a data set to obtain a feature matching result.
- the electronic device further includes:
- the detection result receiving unit is configured to receive at least part of the results of the driver state detection sent by the vehicle, perform an early warning prompt for an abnormal driving state, and / or send an instruction to the vehicle to perform intelligent driving control.
- At least part of the results include: abnormal driving state information determined according to driver state detection.
- the electronic device further includes:
- the execution control unit is configured to execute a control operation corresponding to a result of the driver state detection.
- the execution control unit is configured to: if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, output prompt/alarm information corresponding to the predetermined prompt/alarm condition; and/or, if the determined result of the driver state detection satisfies a predetermined information sending condition, send predetermined information to a predetermined contact or establish a communication connection with the predetermined contact; and/or, if the determined result of the driver state detection satisfies a predetermined driving mode switching condition, switch the driving mode to an automatic driving mode.
- the electronic device further includes:
- the video receiving unit is configured to receive an image or a video segment corresponding to the abnormal driving state information.
- the electronic device further includes:
- the abnormality processing unit is configured to perform at least one of the following operations based on abnormal driving state information: data statistics, vehicle management, and driver management.
- when the abnormality processing unit performs data statistics based on the abnormal driving state information, it is configured to perform statistics, based on the abnormal driving state information, on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified according to different abnormal driving states, and the statistics of each abnormal driving state are determined.
- when the abnormality processing unit performs vehicle management based on the abnormal driving state information, it is configured to perform statistics, based on the abnormal driving state information, on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified according to different vehicles, and the abnormal driving statistics of each vehicle are determined.
- when the abnormality processing unit performs driver management based on the abnormal driving state information, it is configured to process, based on the abnormal driving state information, the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified according to different drivers, and the abnormal driving statistics of each driver are determined.
- a driving management system including: a vehicle and / or a cloud server;
- the vehicle is used to execute any one of the driving management methods in the embodiments shown in Figs. 1-6;
- the cloud server is configured to execute any driving management method in the embodiment shown in FIG. 8.
- the driving management system further includes: a mobile terminal device, configured to:
- receive a driver registration request, where the driver registration request includes a registered face image of the driver; and send the driver registration request to the cloud server.
- FIG. 10 is a flowchart of using a driving management system according to some embodiments of the present application.
- the registration process in this application example is implemented on a mobile phone (mobile terminal device); the filtered face image and the driver ID information (identity information) are uploaded to a cloud server, and the cloud server stores the face image, the driver ID information, and the user permission information corresponding to the face image in the data set.
- the vehicle client downloads the data set for matching; the vehicle client obtains the driver image, the driver image is subjected to face detection, quality screening, and living body recognition in order, and the filtered to-be-recognized face image is matched against all the face images in the data set;
- matching is performed based on facial features, which can be obtained through neural network extraction; the authority information corresponding to the to-be-recognized face image is determined based on the comparison result, and the vehicle action is controlled based on the authority information.
- FIG. 11 is a flowchart of using a driving management system according to another embodiment of the present application.
- the registration process in this application example is implemented on a mobile phone (mobile terminal device); the filtered face image and the driver ID information (identity information) are uploaded to a cloud server, and the cloud server stores the face image, the driver ID information, and the user permission information corresponding to the face image in the data set.
- when permission matching is required, the to-be-recognized face image uploaded by the vehicle client is received and matched against the face images in the data set; matching is performed based on face features, which can be obtained through neural network extraction.
- the authority information corresponding to the face image to be identified is determined, and vehicle actions are controlled based on the authority information.
- the vehicle client obtains the driver image, and then the driver image is subjected to face detection, quality screening, and living body recognition in order to obtain the face image to be identified.
- an electronic device including: a memory for storing executable instructions;
- a processor for communicating with the memory to execute executable instructions to complete the driving management method of any one of the above embodiments.
- FIG. 12 is a schematic structural diagram of an application example of an electronic device according to some embodiments of the present application.
- the electronic device includes one or more processors, a communication unit, and the like.
- the one or more processors are, for example, one or more central processing units (CPUs) 1201 and/or one or more acceleration units 1213.
- the acceleration unit may include, but is not limited to, GPU, FPGA, other types of special-purpose processors, etc.
- the processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 1202 or executable instructions loaded from the storage portion 1208 into a random access memory (RAM) 1203.
- the communication unit 1212 may include, but is not limited to, a network card.
- the network card may include, but is not limited to, an IB (Infiniband) network card.
- the processor may communicate with the read-only memory 1202 and / or the random access memory 1203 to execute executable instructions.
- the processor is connected to the communication unit 1212 through the bus 1204 and communicates with other target devices via the communication unit 1212, thereby completing operations corresponding to any of the methods provided in the embodiments of the present application, for example: controlling a camera component provided on a vehicle to collect a video stream of a driver of the vehicle; obtaining a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set; and, if the feature matching result indicates that the feature matching is successful, controlling the vehicle to execute an operation instruction received by the vehicle.
- the RAM 1203 can store various programs and data required for the operation of the device.
- the CPU 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204.
- the ROM 1202 is an optional module.
- the RAM 1203 stores executable instructions, or writes executable instructions to the ROM 1202 at run time, and the executable instructions cause the central processing unit 1201 to perform operations corresponding to any of the foregoing methods in this application.
- An input / output (I / O) interface 1205 is also connected to the bus 1204.
- the communication unit 1212 may be provided in an integrated manner, or may be provided with multiple sub-modules (for example, multiple IB network cards) and connected on a bus link.
- the following components are connected to the I/O interface 1205: an input portion 1206 including a keyboard, a mouse, and the like; an output portion 1207 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 1208 including a hard disk and the like; and a communication section 1209 including a network interface card such as a LAN card or a modem.
- the communication section 1209 performs communication processing via a network such as the Internet.
- a drive 1210 is also connected to the I/O interface 1205 as needed.
- a removable medium 1211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 1210 as needed, so that a computer program read therefrom is installed into the storage section 1208 as needed.
- FIG. 12 is only an optional implementation manner.
- the number and types of components in FIG. 12 may be selected, deleted, added or replaced according to actual needs.
- Different function components can also be implemented in separate settings or integrated settings.
- the acceleration unit 1213 and the CPU 1201 can be set separately or the acceleration unit 1213 can be integrated on the CPU 1201.
- the communication unit can be set separately, or can be integrated on the CPU 1201 or on the acceleration unit 1213, and so on.
- the process described above with reference to the flowchart may be implemented as a computer software program.
- embodiments of the present application include a computer program product including a computer program tangibly embodied on a machine-readable medium, the computer program including program code for performing the method shown in the flowchart; the program code may include instructions corresponding to the steps of the driving management method provided by any embodiment of the present application.
- the computer program may be downloaded and installed from a network through a communication section, and / or installed from a removable medium.
- the computer program is executed by the CPU 1201, the functions defined in the method of the present application are executed.
- a computer storage medium for storing a computer-readable instruction, and when the instruction is executed, the operation of the driving management method of any one of the foregoing embodiments is performed.
- the methods and devices, systems, and devices of this application may be implemented in many ways.
- the methods and devices, systems, and devices of the present application can be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware.
- the above order of the steps of the method is for illustration only, and the steps of the method of the present application are not limited to the order described above, unless otherwise specifically stated.
- the present application can also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing the method according to the present application.
- the present application also covers a recording medium storing a program for executing the method according to the present application.
Claims (93)
- 1. A driving management method, comprising: controlling a camera component provided on a vehicle to collect a video stream of a driver of the vehicle; obtaining a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set, wherein a pre-stored face image of at least one registered driver is stored in the data set; and, if the feature matching result indicates that the feature matching is successful, controlling the vehicle to execute an operation instruction received by the vehicle.
- 2. The method according to claim 1, further comprising: when the vehicle and a cloud server are in a communication connection state, sending a data set download request to the cloud server; and receiving and storing the data set sent by the cloud server.
- 3. The method according to claim 1 or 2, further comprising: if the feature matching result indicates that the feature matching is successful, obtaining identity information of the driver of the vehicle according to the pre-stored face image of the successful feature matching; and sending the image and the identity information to the cloud server.
- 4. The method according to claim 1 or 2, further comprising: if the feature matching result indicates that the feature matching is successful, obtaining identity information of the driver of the vehicle according to the pre-stored face image of the successful feature matching; intercepting the face part in the image; and sending the intercepted face part and the identity information to the cloud server.
- 5. The method according to any one of claims 1-4, further comprising: obtaining a living body detection result of the collected image; wherein controlling the vehicle to execute the operation instruction received by the vehicle according to the feature matching result comprises: controlling the vehicle to execute the operation instruction received by the vehicle according to the feature matching result and the living body detection result.
- 6. The method according to claim 5, wherein the pre-stored face images in the data set are further correspondingly provided with driving authority; the method further comprises: if the feature matching result indicates that the feature matching is successful, obtaining the driving authority corresponding to the pre-stored face image of the successful feature matching; and the controlling the vehicle to execute the operation instruction received by the vehicle comprises: controlling the vehicle to execute an operation instruction received by the vehicle that is within the authority range.
- 7. The method according to any one of claims 1-6, further comprising: performing driver state detection based on the video stream; and performing an early warning prompt of an abnormal driving state and/or performing intelligent driving control according to the result of the driver state detection.
- 8. The method according to claim 7, wherein the driver state detection comprises any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction action detection, and driver gesture detection.
- 9. The method according to claim 8, wherein performing driver fatigue state detection based on the video stream comprises: detecting at least part of a face region of at least one image in the video stream to obtain state information of at least part of the face, the state information of at least part of the face comprising any one or more of the following: eye open/closed state information and mouth open/closed state information; obtaining, according to the state information of at least part of the face within a period of time, a parameter value of an index used to characterize the driver fatigue state; and determining the result of the driver fatigue state detection according to the parameter value of the index used to characterize the driver fatigue state.
- 10. The method according to claim 9, wherein the index used to characterize the driver fatigue state comprises any one or more of the following: the degree of eye closure and the degree of yawning.
- 11. The method according to claim 10, wherein the parameter value of the degree of eye closure comprises any one or more of the following: the number of eye closures, eye closure frequency, eye closure duration, eye closure amplitude, the number of half eye closures, and half eye closure frequency; and/or, the parameter value of the degree of yawning comprises any one or more of the following: yawning state, the number of yawns, yawn duration, and yawn frequency.
- 12. The method according to any one of claims 8-11, wherein performing driver distraction state detection based on the video stream comprises: performing face orientation and/or line-of-sight direction detection on the driver image in the video stream to obtain face orientation information and/or line-of-sight direction information; determining, according to the face orientation information and/or line-of-sight direction information within a period of time, a parameter value of an index used to characterize the driver distraction state, the index used to characterize the driver distraction state comprising any one or more of the following: the degree of face orientation deviation and the degree of line-of-sight deviation; and determining the result of the driver distraction state detection according to the parameter value of the index used to characterize the driver distraction state.
- 13. The method according to claim 12, wherein the parameter value of the degree of face orientation deviation comprises any one or more of the following: the number of head turns, head turn duration, and head turn frequency; and/or, the parameter value of the degree of line-of-sight deviation comprises any one or more of the following: line-of-sight direction deviation angle, line-of-sight direction deviation duration, and line-of-sight direction deviation frequency.
- 14. The method according to claim 12 or 13, wherein performing face orientation and/or line-of-sight direction detection on the driver image in the video stream comprises: detecting face key points of the driver image in the video stream; and performing face orientation and/or line-of-sight direction detection according to the face key points.
- 15. The method according to claim 14, wherein performing face orientation detection according to the face key points to obtain face orientation information comprises: obtaining feature information of the head posture according to the face key points; and determining the face orientation information according to the feature information of the head posture.
- 16. The method according to any one of claims 8-15, wherein the predetermined distraction action comprises any one or more of the following: a smoking action, a drinking action, an eating action, a calling action, and an entertainment action.
- 17. The method according to claim 16, wherein performing driver predetermined distraction action detection based on the video stream comprises: performing target object detection corresponding to the predetermined distraction action on at least one image in the video stream to obtain a detection frame of a target object; and determining whether the predetermined distraction action occurs according to the detection frame of the target object.
- 18. The method according to claim 17, further comprising: if a predetermined distraction action occurs, obtaining, according to a determination result of whether the predetermined distraction action occurs within a period of time, a parameter value of an index used to characterize the degree of driver distraction; and determining the result of the driver predetermined distraction action detection according to the parameter value of the index used to characterize the degree of driver distraction.
- 19. The method according to claim 18, wherein the parameter value of the index of the degree of driver distraction comprises any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action.
- 20. The method according to any one of claims 16-19, further comprising: if the result of the driver predetermined distraction action detection is that a predetermined distraction action is detected, prompting the detected distraction action.
- 21. The method according to any one of claims 7-20, further comprising: performing a control operation corresponding to the result of the driver state detection.
- 22. The method according to claim 21, wherein performing a control operation corresponding to the result of the driver state detection comprises at least one of the following: if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, outputting prompt/alarm information corresponding to the predetermined prompt/alarm condition; and/or, if the determined result of the driver state detection satisfies a predetermined information sending condition, sending predetermined information to a predetermined contact or establishing a communication connection with the predetermined contact; and/or, if the determined result of the driver state detection satisfies a predetermined driving mode switching condition, switching the driving mode to an automatic driving mode.
- 23. The method according to any one of claims 7-22, further comprising: sending at least part of the result of the driver state detection to the cloud server.
- 24. The method according to claim 23, wherein the at least part of the result comprises: abnormal driving state information determined according to the driver state detection.
- 25. The method according to claim 24, further comprising: storing the image or video segment in the video stream corresponding to the abnormal driving state information; and/or, sending the image or video segment in the video stream corresponding to the abnormal driving state information to the cloud server.
- 26. The method according to claim 1, further comprising: when the vehicle and a mobile terminal device are in a communication connection state, sending a data set download request to the mobile terminal device; and receiving and storing the data set sent by the mobile terminal device.
- 27. The method according to claim 26, wherein the data set is obtained from a cloud server and sent to the vehicle by the mobile terminal device upon receiving the data set download request.
- 28. The method according to any one of claims 1-27, further comprising: if the feature matching result indicates that the feature matching is unsuccessful, refusing to execute the received operation instruction.
- 29. The method according to claim 28, further comprising: issuing prompt registration information; receiving a driver registration request according to the prompt registration information, the driver registration request comprising a registered face image of the driver; and establishing a data set according to the registered face image.
- 30. The method according to any one of claims 1-29, wherein obtaining the feature matching result between the face portion of at least one image in the video stream and at least one pre-stored face image in the data set comprises: when the vehicle and a cloud server are in a communication connection state, uploading the face portion of at least one image in the video stream to the cloud server, and receiving the feature matching result sent by the cloud server.
- 31. A vehicle-mounted intelligent system, comprising: a video acquisition unit configured to control a camera component provided on a vehicle to collect a video stream of a driver of the vehicle; a result obtaining unit configured to obtain a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set, wherein a pre-stored face image of at least one registered driver is stored in the data set; and an operation unit configured to, if the feature matching result indicates that the feature matching is successful, control the vehicle to execute an operation instruction received by the vehicle.
- 32. The system according to claim 31, further comprising: a first data downloading unit configured to send a data set download request to the cloud server when the vehicle and the cloud server are in a communication connection state; and a data saving unit configured to receive and store the data set sent by the cloud server.
- 33. The system according to claim 31 or 32, further comprising: a first cloud storage unit configured to, if the feature matching result indicates that the feature matching is successful, obtain identity information of the driver of the vehicle according to the pre-stored face image of the successful feature matching, and send the image and the identity information to the cloud server.
- 34. The system according to claim 31 or 32, further comprising: a second cloud storage unit configured to, if the feature matching result indicates that the feature matching is successful, obtain identity information of the driver of the vehicle according to the pre-stored face image of the successful feature matching, intercept the face part in the image, and send the intercepted face part and the identity information to the cloud server.
- 35. The system according to any one of claims 31-34, further comprising: a living body detection unit configured to obtain a living body detection result of the collected image; wherein the operation unit is configured to control the vehicle to execute the operation instruction received by the vehicle according to the feature matching result and the living body detection result.
- 36. The system according to claim 35, wherein the pre-stored face images in the data set are further correspondingly provided with driving authority; the system further comprises: an authority obtaining unit configured to, if the feature matching result indicates that the feature matching is successful, obtain the driving authority corresponding to the pre-stored face image of the successful feature matching; and the operation unit is further configured to control the vehicle to execute an operation instruction received by the vehicle that is within the authority range.
- 37. The system according to any one of claims 31-36, further comprising: a state detection unit configured to perform driver state detection based on the video stream; an output unit configured to perform an early warning prompt of an abnormal driving state according to the result of the driver state detection; and/or, an intelligent driving control unit configured to perform intelligent driving control according to the result of the driver state detection.
- 38. The system according to claim 37, wherein the driver state detection comprises any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction action detection, and driver gesture detection.
- 39. The system according to claim 38, wherein when performing driver fatigue state detection based on the video stream, the state detection unit is configured to: detect at least part of a face region of at least one image in the video stream to obtain state information of at least part of the face, the state information of at least part of the face comprising any one or more of the following: eye open/closed state information and mouth open/closed state information; obtain, according to the state information of at least part of the face within a period of time, a parameter value of an index used to characterize the driver fatigue state; and determine the result of the driver fatigue state detection according to the parameter value of the index used to characterize the driver fatigue state.
- 40. The system according to claim 39, wherein the index used to characterize the driver fatigue state comprises any one or more of the following: the degree of eye closure and the degree of yawning.
- 41. The system according to claim 40, wherein the parameter value of the degree of eye closure comprises any one or more of the following: the number of eye closures, eye closure frequency, eye closure duration, eye closure amplitude, the number of half eye closures, and half eye closure frequency; and/or, the parameter value of the degree of yawning comprises any one or more of the following: yawning state, the number of yawns, yawn duration, and yawn frequency.
- 42. The system according to any one of claims 38-41, wherein when performing driver distraction state detection based on the video stream, the state detection unit is configured to: perform face orientation and/or line-of-sight direction detection on the driver image in the video stream to obtain face orientation information and/or line-of-sight direction information; determine, according to the face orientation information and/or line-of-sight direction information within a period of time, a parameter value of an index used to characterize the driver distraction state, the index comprising any one or more of the following: the degree of face orientation deviation and the degree of line-of-sight deviation; and determine the result of the driver distraction state detection according to the parameter value of the index used to characterize the driver distraction state.
- 43. The system according to claim 42, wherein the parameter value of the degree of face orientation deviation comprises any one or more of the following: the number of head turns, head turn duration, and head turn frequency; and/or, the parameter value of the degree of line-of-sight deviation comprises any one or more of the following: line-of-sight direction deviation angle, line-of-sight direction deviation duration, and line-of-sight direction deviation frequency.
- 44. The system according to claim 42 or 43, wherein when performing face orientation and/or line-of-sight direction detection on the driver image in the video stream, the state detection unit is configured to: detect face key points of the driver image in the video stream; and perform face orientation and/or line-of-sight direction detection according to the face key points.
- 45. The system according to claim 44, wherein when performing face orientation detection according to the face key points, the state detection unit is configured to: obtain feature information of the head posture according to the face key points; and determine the face orientation information according to the feature information of the head posture.
- 46. The system according to any one of claims 38-45, wherein the predetermined distraction action comprises any one or more of the following: a smoking action, a drinking action, an eating action, a calling action, and an entertainment action.
- 47. The system according to claim 46, wherein when performing driver predetermined distraction action detection based on the video stream, the state detection unit is configured to: perform target object detection corresponding to the predetermined distraction action on at least one image in the video stream to obtain a detection frame of a target object; and determine whether the predetermined distraction action occurs according to the detection frame of the target object.
- 48. The system according to claim 47, wherein the state detection unit is further configured to: if a predetermined distraction action occurs, obtain, according to a determination result of whether the predetermined distraction action occurs within a period of time, a parameter value of an index used to characterize the degree of driver distraction; and determine the result of the driver predetermined distraction action detection according to the parameter value of the index used to characterize the degree of driver distraction.
- 49. The system according to claim 48, wherein the parameter value of the index of the degree of driver distraction comprises any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action.
- 50. The system according to any one of claims 46-49, further comprising: a prompt unit configured to, if the result of the driver predetermined distraction action detection is that a predetermined distraction action is detected, prompt the detected distraction action.
- 51. The system according to any one of claims 37-50, further comprising: a control unit configured to perform a control operation corresponding to the result of the driver state detection.
- 52. The system according to claim 51, wherein the control unit is configured to: if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, output prompt/alarm information corresponding to the predetermined prompt/alarm condition; and/or, if the determined result of the driver state detection satisfies a predetermined information sending condition, send predetermined information to a predetermined contact or establish a communication connection with the predetermined contact; and/or, if the determined result of the driver state detection satisfies a predetermined driving mode switching condition, switch the driving mode to an automatic driving mode.
- 53. The system according to any one of claims 37-52, further comprising: a result sending unit configured to send at least part of the result of the driver state detection to the cloud server.
- 54. The system according to claim 53, wherein the at least part of the result comprises: abnormal driving state information determined according to the driver state detection.
- 55. The system according to claim 54, further comprising: a video storage unit configured to: store the image or video segment in the video stream corresponding to the abnormal driving state information; and/or, send the image or video segment in the video stream corresponding to the abnormal driving state information to the cloud server.
- 56. The system according to claim 31, further comprising: a second data downloading unit configured to send a data set download request to the mobile terminal device when the vehicle and the mobile terminal device are in a communication connection state, and to receive and store the data set sent by the mobile terminal device.
- 57. The system according to claim 56, wherein the data set is obtained from a cloud server and sent to the vehicle by the mobile terminal device upon receiving the data set download request.
- 58. The system according to any one of claims 31-57, wherein the operation unit is further configured to refuse to execute the received operation instruction if the feature matching result indicates that the feature matching is unsuccessful.
- 59. The system according to claim 58, wherein the operation unit is further configured to: issue prompt registration information; receive a driver registration request according to the prompt registration information, the driver registration request comprising a registered face image of the driver; and establish a data set according to the registered face image.
- 60. The system according to any one of claims 31-59, wherein the result obtaining unit is configured to, when the vehicle and a cloud server are in a communication connection state, upload the face portion of at least one image in the video stream to the cloud server, and receive the feature matching result sent by the cloud server.
- 61. A driving management method, comprising: receiving a to-be-recognized face image sent by a vehicle; obtaining a feature matching result between the face image and at least one pre-stored face image in a data set, wherein a pre-stored face image of at least one registered driver is stored in the data set; and, if the feature matching result indicates that the feature matching is successful, sending to the vehicle an instruction allowing the vehicle to be controlled.
- 62. The method according to claim 61, further comprising: receiving a data set download request sent by the vehicle, wherein a pre-stored face image of at least one registered driver is stored in the data set; and sending the data set to the vehicle.
- 63. The method according to claim 61 or 62, further comprising: receiving a driver registration request sent by a vehicle or a mobile terminal device, the driver registration request comprising a registered face image of the driver; and establishing a data set according to the registered face image.
- 64. The method according to any one of claims 61-63, wherein obtaining the feature matching result between the face image and at least one pre-stored face image in the data set comprises: performing feature matching between the face image and at least one pre-stored face image in the data set to obtain the feature matching result.
- 65. The method according to any one of claims 61-64, wherein obtaining the feature matching result between the face image and at least one pre-stored face image in the data set comprises: obtaining, from the vehicle, the feature matching result between the face image and at least one pre-stored face image in the data set.
- 66. The method according to any one of claims 61-65, further comprising: receiving at least part of the result of driver state detection sent by the vehicle, and performing an early warning prompt of an abnormal driving state and/or sending to the vehicle an instruction to perform intelligent driving control.
- 67. The method according to claim 66, wherein the at least part of the result comprises: abnormal driving state information determined according to the driver state detection.
- 68. The method according to claim 66 or 67, further comprising: performing a control operation corresponding to the result of the driver state detection.
- 69. The method according to claim 68, wherein performing a control operation corresponding to the result of the driver state detection comprises: if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, outputting prompt/alarm information corresponding to the predetermined prompt/alarm condition; and/or, if the determined result of the driver state detection satisfies a predetermined information sending condition, sending predetermined information to a predetermined contact or establishing a communication connection with the predetermined contact; and/or, if the determined result of the driver state detection satisfies a predetermined driving mode switching condition, switching the driving mode to an automatic driving mode.
- 70. The method according to any one of claims 67-69, further comprising: receiving an image or video segment corresponding to the abnormal driving state information.
- 71. The method according to claim 70, further comprising: performing at least one of the following operations based on the abnormal driving state information: data statistics, vehicle management, and driver management.
- 72. The method according to claim 71, wherein performing data statistics based on the abnormal driving state information comprises: performing statistics, based on the abnormal driving state information, on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified according to different abnormal driving states, and the statistics of each abnormal driving state are determined.
- 73. The method according to claim 71 or 72, wherein performing vehicle management based on the abnormal driving state information comprises: performing statistics, based on the abnormal driving state information, on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified according to different vehicles, and the abnormal driving statistics of each vehicle are determined.
- 74. The method according to any one of claims 71-73, wherein performing driver management based on the abnormal driving state information comprises: processing, based on the abnormal driving state information, the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified according to different drivers, and the abnormal driving statistics of each driver are determined.
- 75. An electronic device, comprising: an image receiving unit configured to receive a to-be-recognized face image sent by a vehicle; a matching result obtaining unit configured to obtain a feature matching result between the face image and at least one pre-stored face image in a data set, wherein a pre-stored face image of at least one registered driver is stored in the data set; and an instruction sending unit configured to, if the feature matching result indicates that the feature matching is successful, send to the vehicle an instruction allowing the vehicle to be controlled.
- 76. The electronic device according to claim 75, further comprising: a first data sending unit configured to receive a data set download request sent by the vehicle, wherein a pre-stored face image of at least one registered driver is stored in the data set, and to send the data set to the vehicle.
- 77. The electronic device according to claim 75 or 76, further comprising: a registration request receiving unit configured to receive a driver registration request sent by a vehicle or a mobile terminal device, the driver registration request comprising a registered face image of the driver, and to establish a data set according to the registered face image.
- 78. The electronic device according to any one of claims 75-77, wherein the matching result obtaining unit is configured to perform feature matching between the face image and at least one pre-stored face image in the data set to obtain the feature matching result.
- 79. The electronic device according to any one of claims 75-78, wherein the matching result obtaining unit is configured to obtain, from the vehicle, the feature matching result between the face image and at least one pre-stored face image in the data set.
- 80. The electronic device according to any one of claims 75-79, further comprising: a detection result receiving unit configured to receive at least part of the result of driver state detection sent by the vehicle, and to perform an early warning prompt of an abnormal driving state and/or send to the vehicle an instruction to perform intelligent driving control.
- 81. The electronic device according to claim 80, wherein the at least part of the result comprises: abnormal driving state information determined according to the driver state detection.
- 82. The electronic device according to claim 80 or 81, further comprising: an execution control unit configured to perform a control operation corresponding to the result of the driver state detection.
- 83. The electronic device according to claim 82, wherein the execution control unit is configured to: if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, output prompt/alarm information corresponding to the predetermined prompt/alarm condition; and/or, if the determined result of the driver state detection satisfies a predetermined information sending condition, send predetermined information to a predetermined contact or establish a communication connection with the predetermined contact; and/or, if the determined result of the driver state detection satisfies a predetermined driving mode switching condition, switch the driving mode to an automatic driving mode.
- 84. The electronic device according to any one of claims 81-83, further comprising: a video receiving unit configured to receive an image or video segment corresponding to the abnormal driving state information.
- 85. The electronic device according to claim 84, further comprising: an abnormality processing unit configured to perform at least one of the following operations based on the abnormal driving state information: data statistics, vehicle management, and driver management.
- 86. The electronic device according to claim 85, wherein when performing data statistics based on the abnormal driving state information, the abnormality processing unit is configured to perform statistics, based on the abnormal driving state information, on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified according to different abnormal driving states, and the statistics of each abnormal driving state are determined.
- 87. The electronic device according to claim 85 or 86, wherein when performing vehicle management based on the abnormal driving state information, the abnormality processing unit is configured to perform statistics, based on the abnormal driving state information, on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified according to different vehicles, and the abnormal driving statistics of each vehicle are determined.
- 88. The electronic device according to any one of claims 85-87, wherein when performing driver management based on the abnormal driving state information, the abnormality processing unit is configured to process, based on the abnormal driving state information, the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified according to different drivers, and the abnormal driving statistics of each driver are determined.
- 89. A driving management system, comprising: a vehicle and/or a cloud server; wherein the vehicle is configured to execute the driving management method according to any one of claims 1-30, and the cloud server is configured to execute the driving management method according to any one of claims 61-74.
- 90. The system according to claim 89, further comprising: a mobile terminal device configured to: receive a driver registration request, the driver registration request comprising a registered face image of the driver; and send the driver registration request to the cloud server.
- 91. An electronic device, comprising: a memory configured to store executable instructions; and a processor configured to communicate with the memory to execute the executable instructions so as to complete the driving management method according to any one of claims 1-30 or the driving management method according to any one of claims 61-74.
- 92. A computer program comprising computer-readable code, wherein when the computer-readable code runs on an electronic device, a processor in the electronic device executes the driving management method according to any one of claims 1-30 or the driving management method according to any one of claims 61-74.
- 93. A computer storage medium configured to store computer-readable instructions, wherein when the instructions are executed, the driving management method according to any one of claims 1-30 or the driving management method according to any one of claims 61-74 is implemented.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MYPI2019007079A MY197453A (en) | 2018-06-04 | 2018-09-14 | Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium |
SG11201911404QA SG11201911404QA (en) | 2018-06-04 | 2018-09-14 | Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium |
JP2019565001A JP6932208B2 (ja) | 2018-06-04 | 2018-09-14 | 運転管理方法及びシステム、車載スマートシステム、電子機器並びに媒体 |
EP18919400.4A EP3617935A4 (en) | 2018-06-04 | 2018-09-14 | DRIVING MANAGEMENT METHOD AND SYSTEM, ON-BOARD INTELLIGENT SYSTEM, ELECTRONIC DEVICE AND MEDIUM |
KR1020207012402A KR102305914B1 (ko) | 2018-06-04 | 2018-09-14 | 운전 관리 방법 및 시스템, 차량 탑재 지능형 시스템, 전자 기기, 매체 |
US16/224,389 US10915769B2 (en) | 2018-06-04 | 2018-12-18 | Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810565711.1A CN109002757A (zh) | 2018-06-04 | 2018-06-04 | 驾驶管理方法和系统、车载智能系统、电子设备、介质 |
CN201810565711.1 | 2018-06-04 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/224,389 Continuation US10915769B2 (en) | 2018-06-04 | 2018-12-18 | Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019232972A1 (zh) | 2019-12-12 |
Family
ID=64574253
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/105790 WO2019232972A1 (zh) | 2018-06-04 | 2018-09-14 | 驾驶管理方法和系统、车载智能系统、电子设备、介质 |
Country Status (7)
Country | Link |
---|---|
EP (1) | EP3617935A4 (zh) |
JP (1) | JP6932208B2 (zh) |
KR (1) | KR102305914B1 (zh) |
CN (1) | CN109002757A (zh) |
MY (1) | MY197453A (zh) |
SG (1) | SG11201911404QA (zh) |
WO (1) | WO2019232972A1 (zh) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111497782A (zh) * | 2019-01-31 | 2020-08-07 | 西安铁路信号有限责任公司 | 一种基于生物特征识别的车辆保安系统及方法 |
CN109977771A (zh) * | 2019-02-22 | 2019-07-05 | 杭州飞步科技有限公司 | 司机身份的验证方法、装置、设备及计算机可读存储介质 |
CN110119708A (zh) * | 2019-05-10 | 2019-08-13 | 万雪莉 | 一种基于台灯的用户状态调整方法和装置 |
CN112069863B (zh) * | 2019-06-11 | 2022-08-19 | 荣耀终端有限公司 | 一种面部特征的有效性判定方法及电子设备 |
CN112218270A (zh) * | 2019-07-09 | 2021-01-12 | 奥迪股份公司 | 车辆用户的呼入或呼出处理系统及相应的方法和介质 |
CN110390285A (zh) * | 2019-07-16 | 2019-10-29 | 广州小鹏汽车科技有限公司 | 驾驶员分神检测方法、系统及车辆 |
CN110550043B (zh) * | 2019-09-05 | 2022-07-22 | 上海博泰悦臻网络技术服务有限公司 | 危险行为的警示方法、系统、计算机存储介质及车载终端 |
CN110737688B (zh) * | 2019-09-30 | 2023-04-07 | 上海商汤临港智能科技有限公司 | 驾驶数据分析方法、装置、电子设备和计算机存储介质 |
CN110758324A (zh) * | 2019-10-23 | 2020-02-07 | 上海能塔智能科技有限公司 | 试驾控制方法及系统、车载智能设备、车辆、存储介质 |
CN110780934B (zh) * | 2019-10-23 | 2024-03-12 | 深圳市商汤科技有限公司 | 车载图像处理系统的部署方法和装置 |
CN110816473B (zh) * | 2019-11-29 | 2022-08-16 | 福智易车联网(宁波)有限公司 | 一种车辆控制方法、车辆控制系统及存储介质 |
WO2021212504A1 (zh) * | 2020-04-24 | 2021-10-28 | 上海商汤临港智能科技有限公司 | 车辆和车舱域控制器 |
CN111483471B (zh) * | 2020-04-26 | 2021-11-30 | 湘潭牵引机车厂有限公司 | 车辆控制方法、装置及车载控制器 |
CN113696897B (zh) * | 2020-05-07 | 2023-06-23 | 沃尔沃汽车公司 | 驾驶员分神预警方法和驾驶员分神预警系统 |
CN111951637B (zh) * | 2020-07-19 | 2022-05-03 | 西北工业大学 | 一种任务情景相关联的无人机飞行员视觉注意力分配模式提取方法 |
CN112037380B (zh) * | 2020-09-03 | 2022-06-24 | 上海商汤临港智能科技有限公司 | 车辆控制方法及装置、电子设备、存储介质和车辆 |
CN112861677A (zh) * | 2021-01-28 | 2021-05-28 | 上海商汤临港智能科技有限公司 | 轨交驾驶员的动作检测方法及装置、设备、介质及工具 |
CN114132329B (zh) * | 2021-12-10 | 2024-04-12 | 智己汽车科技有限公司 | 一种驾驶员注意力保持方法及系统 |
CN114312669B (zh) * | 2022-02-15 | 2022-08-05 | 远峰科技股份有限公司 | 一种基于人脸识别的智能座舱显示系统 |
CN114895983B (zh) * | 2022-05-12 | 2024-07-26 | 合肥杰发科技有限公司 | Dms的启动方法及相关设备 |
KR102510733B1 (ko) * | 2022-08-10 | 2023-03-16 | 주식회사 에이모 | 영상에서 학습 대상 이미지 프레임을 선별하는 방법 및 장치 |
CN116912808B (zh) * | 2023-09-14 | 2023-12-01 | 四川公路桥梁建设集团有限公司 | 架桥机控制方法、电子设备和计算机可读介质 |
CN118248174B (zh) * | 2024-05-21 | 2024-07-30 | 吉林大学 | 一种驾驶人视频通话的识别与预警方法及系统 |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2546415B2 (ja) * | 1990-07-09 | 1996-10-23 | トヨタ自動車株式会社 | 車両運転者監視装置 |
JP4603264B2 (ja) * | 2002-02-19 | 2010-12-22 | ボルボ テクノロジー コーポレイション | 運転者注意負荷の監視と管理とを行なうシステムおよび方法 |
EP2204118B1 (en) * | 2002-10-15 | 2014-07-23 | Volvo Technology Corporation | Method for interpreting a drivers head and eye activity |
JP2004284460A (ja) * | 2003-03-20 | 2004-10-14 | Aisin Seiki Co Ltd | 車両盗難防止システム |
JP4564320B2 (ja) * | 2004-09-29 | 2010-10-20 | アイシン精機株式会社 | ドライバモニタシステム |
JP2011192031A (ja) * | 2010-03-15 | 2011-09-29 | Denso It Laboratory Inc | 制御装置及び運転安全性保護方法 |
CN101844548B (zh) | 2010-03-30 | 2012-06-27 | 奇瑞汽车股份有限公司 | 一种车辆自动控制方法和系统 |
CN102975690A (zh) * | 2011-09-02 | 2013-03-20 | 上海博泰悦臻电子设备制造有限公司 | 汽车锁定系统及方法 |
JP6150258B2 (ja) * | 2014-01-15 | 2017-06-21 | みこらった株式会社 | 自動運転車 |
CN104408878B (zh) * | 2014-11-05 | 2017-01-25 | 唐郁文 | 一种车队疲劳驾驶预警监控系统及方法 |
CN104732251B (zh) * | 2015-04-23 | 2017-12-22 | 郑州畅想高科股份有限公司 | 一种基于视频的机车司机驾驶状态检测方法 |
JP6447379B2 (ja) * | 2015-06-15 | 2019-01-09 | トヨタ自動車株式会社 | 認証装置、認証システムおよび認証方法 |
CN105469035A (zh) * | 2015-11-17 | 2016-04-06 | 中国科学院重庆绿色智能技术研究院 | 基于双目视频分析的驾驶员不良驾驶行为检测系统 |
JP6641916B2 (ja) * | 2015-11-20 | 2020-02-05 | オムロン株式会社 | 自動運転支援装置、自動運転支援システム、自動運転支援方法および自動運転支援プログラム |
CN105654753A (zh) * | 2016-01-08 | 2016-06-08 | 北京乐驾科技有限公司 | 一种智能车载安全驾驶辅助方法及系统 |
FR3048544B1 (fr) * | 2016-03-01 | 2021-04-02 | Valeo Comfort & Driving Assistance | Dispositif et methode de surveillance d'un conducteur d'un vehicule automobile |
CN108369766A (zh) * | 2016-05-10 | 2018-08-03 | 深圳市赛亿科技开发有限公司 | 一种基于人脸识别的车载疲劳预警系统及预警方法 |
JP6790483B2 (ja) * | 2016-06-16 | 2020-11-25 | 日産自動車株式会社 | 認証方法及び認証装置 |
CN106335469B (zh) * | 2016-09-04 | 2019-11-26 | 深圳市云智易联科技有限公司 | 车载认证方法、系统、车载装置、移动终端及服务器 |
CN106338944B (zh) * | 2016-09-29 | 2019-02-15 | 山东华旗新能源科技有限公司 | 施工升降机安全智能控制系统 |
EP3535646A4 (en) * | 2016-11-07 | 2020-08-12 | Nauto, Inc. | SYSTEM AND METHOD FOR DETERMINING DRIVER DISTRACTION |
CN107832748B (zh) * | 2017-04-18 | 2020-02-14 | 黄海虹 | 一种共享汽车驾驶员更换系统及方法 |
CN107244306A (zh) * | 2017-07-27 | 2017-10-13 | 深圳小爱智能科技有限公司 | 一种启动汽车的装置 |
CN107657236A (zh) * | 2017-09-29 | 2018-02-02 | 厦门知晓物联技术服务有限公司 | 汽车安全驾驶预警方法及车载预警系统 |
CN107891746A (zh) * | 2017-10-19 | 李娟 | 基于汽车驾驶防疲劳的系统 |
CN207433445U (zh) * | 2017-10-31 | 2018-06-01 | 安徽江淮汽车集团股份有限公司 | 一种车辆管理系统 |
CN107953854A (zh) * | 2017-11-10 | 2018-04-24 | 惠州市德赛西威汽车电子股份有限公司 | 一种基于人脸识别的智能车载辅助系统及方法 |
2018
- 2018-06-04 CN CN201810565711.1A patent/CN109002757A/zh active Pending
- 2018-09-14 MY MYPI2019007079A patent/MY197453A/en unknown
- 2018-09-14 JP JP2019565001A patent/JP6932208B2/ja active Active
- 2018-09-14 EP EP18919400.4A patent/EP3617935A4/en not_active Withdrawn
- 2018-09-14 SG SG11201911404QA patent/SG11201911404QA/en unknown
- 2018-09-14 WO PCT/CN2018/105790 patent/WO2019232972A1/zh unknown
- 2018-09-14 KR KR1020207012402A patent/KR102305914B1/ko active IP Right Grant
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050110610A1 (en) * | 2003-09-05 | 2005-05-26 | Bazakos Michael E. | System and method for gate access control |
US20110091079A1 (en) * | 2009-10-21 | 2011-04-21 | Automotive Research & Testing Center | Facial image recognition system for a driver of a vehicle |
CN104169993A (zh) * | 2012-03-14 | 2014-11-26 | 株式会社电装 | 驾驶辅助装置及驾驶辅助方法 |
CN107578025A (zh) * | 2017-09-15 | 2018-01-12 | 赵立峰 | 一种驾驶员识别方法及系统 |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111476185A (zh) * | 2020-04-13 | 2020-07-31 | 罗翌源 | 一种驾驶者注意力监测方法、装置以及系统 |
CN111476185B (zh) * | 2020-04-13 | 2023-10-10 | 罗跃宸 | 一种驾驶者注意力监测方法、装置以及系统 |
CN113744498B (zh) * | 2020-05-29 | 2023-10-27 | 杭州海康汽车软件有限公司 | 驾驶员注意力监测的系统和方法 |
CN113744498A (zh) * | 2020-05-29 | 2021-12-03 | 杭州海康汽车软件有限公司 | 驾驶员注意力监测的系统和方法 |
CN112288286A (zh) * | 2020-10-30 | 2021-01-29 | 上海仙塔智能科技有限公司 | 安全派单方法、安全派单系统及可读存储介质 |
CN112660157A (zh) * | 2020-12-11 | 2021-04-16 | 重庆邮电大学 | 一种多功能无障碍车远程监控与辅助驾驶系统 |
CN112699807A (zh) * | 2020-12-31 | 2021-04-23 | 车主邦(北京)科技有限公司 | 一种驾驶员状态信息监控方法和装置 |
CN115147785A (zh) * | 2021-03-29 | 2022-10-04 | 东风汽车集团股份有限公司 | 一种车辆识别方法、装置、电子设备和存储介质 |
WO2022222174A1 (zh) * | 2021-04-21 | 2022-10-27 | 彭泳 | 基于视频图像分析的危货监管系统及危货监管方法 |
CN113104046A (zh) * | 2021-04-28 | 2021-07-13 | 中国第一汽车股份有限公司 | 一种基于云服务器的开门预警方法及装置 |
CN113191286B (zh) * | 2021-05-08 | 2023-04-25 | 重庆紫光华山智安科技有限公司 | 图像数据质量检测调优方法、系统、设备及介质 |
CN113191286A (zh) * | 2021-05-08 | 2021-07-30 | 重庆紫光华山智安科技有限公司 | 图像数据质量检测调优方法、系统、设备及介质 |
CN113285998A (zh) * | 2021-05-20 | 2021-08-20 | 江西北斗应用科技有限公司 | 驾驶员管理终端和无人航空器监管系统 |
CN113581209A (zh) * | 2021-08-04 | 2021-11-02 | 东风柳州汽车有限公司 | 驾驶辅助模式切换方法、装置、设备及存储介质 |
CN113581209B (zh) * | 2021-08-04 | 2023-06-20 | 东风柳州汽车有限公司 | 驾驶辅助模式切换方法、装置、设备及存储介质 |
CN114475623A (zh) * | 2021-12-28 | 2022-05-13 | 阿波罗智联(北京)科技有限公司 | 车辆的控制方法、装置、电子设备及存储介质 |
CN114368395A (zh) * | 2022-01-21 | 2022-04-19 | 华录智达科技股份有限公司 | 一种基于公交数字化转型的人工智能公交驾驶安全管理系统 |
CN115214505A (zh) * | 2022-06-29 | 2022-10-21 | 重庆长安汽车股份有限公司 | 车辆座舱音效的控制方法、装置、车辆及存储介质 |
CN115214505B (zh) * | 2022-06-29 | 2024-04-26 | 重庆长安汽车股份有限公司 | 车辆座舱音效的控制方法、装置、车辆及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
EP3617935A4 (en) | 2020-07-08 |
KR102305914B1 (ko) | 2021-09-28 |
MY197453A (en) | 2023-06-19 |
KR20200063193A (ko) | 2020-06-04 |
JP2020525334A (ja) | 2020-08-27 |
SG11201911404QA (en) | 2020-01-30 |
EP3617935A1 (en) | 2020-03-04 |
JP6932208B2 (ja) | 2021-09-08 |
CN109002757A (zh) | 2018-12-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019232972A1 (zh) | | 驾驶管理方法和系统、车载智能系统、电子设备、介质 |
US10915769B2 (en) | | Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium |
US10970571B2 (en) | | Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium |
WO2019232973A1 (zh) | | 车辆控制方法和系统、车载智能系统、电子设备、介质 |
CN109937152B (zh) | | 驾驶状态监测方法和装置、驾驶员监控系统、车辆 |
CN111079476B (zh) | | 驾驶状态分析方法和装置、驾驶员监控系统、车辆 |
JP7146959B2 (ja) | | 運転状態検出方法及び装置、運転者監視システム並びに車両 |
CN106965675B (zh) | | 一种货车集群智能安全作业系统 |
US11783600B2 (en) | | Adaptive monitoring of a vehicle using a camera |
CN113901866A (zh) | | 一种机器视觉的疲劳驾驶预警方法 |
US20240051465A1 (en) | | Adaptive monitoring of a vehicle using a camera |
P Mathai | | A New Proposal for Smartphone-Based Drowsiness Detection and Warning System for Automotive Drivers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | ENP | Entry into the national phase | Ref document number: 2019565001; Country of ref document: JP; Kind code of ref document: A |
 | ENP | Entry into the national phase | Ref document number: 2018919400; Country of ref document: EP; Effective date: 20191128 |
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18919400; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 20207012402; Country of ref document: KR; Kind code of ref document: A |
 | NENP | Non-entry into the national phase | Ref country code: DE |