CN113705333A - Method and device for screening facial images of driver, electronic equipment and commercial vehicle


Info

Publication number
CN113705333A
Authority
CN
China
Prior art keywords
driver
face
image
face image
factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110795728.8A
Other languages
Chinese (zh)
Inventor
刘迎午
李峰荣
Current Assignee
Shenzhen Yuwei Information & Technology Development Co ltd
Original Assignee
Shenzhen Yuwei Information & Technology Development Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Yuwei Information & Technology Development Co ltd filed Critical Shenzhen Yuwei Information & Technology Development Co ltd
Priority to CN202110795728.8A
Publication of CN113705333A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and a device for screening facial images of a driver, an electronic device, and a commercial vehicle, and relates to the field of commercial vehicles. A driver facial image screening method for a driver monitoring system includes: performing face recognition authentication and storing the driver's identity authentication information; continuously capturing driver facial images from the video acquired by the driver monitoring system; calculating a quality factor for each driver facial image; screening the facial images according to the quality factor to select a frontal image of the driver; and packaging the identity authentication information and the driver frontal image and reporting them to a monitoring center. According to the technical scheme of the embodiments of the application, calculating a quality factor for each driver facial image quantifies the quality of driver face snapshots, and the final driver frontal image is selected from the snapshots according to the quality factor.

Description

Method and device for screening facial images of driver, electronic equipment and commercial vehicle
Technical Field
The application relates to the field of commercial vehicles, in particular to a method and a device for screening facial images of a driver, electronic equipment and a commercial vehicle.
Background
An intelligent vehicle-mounted terminal of a commercial vehicle equipped with a DMS monitoring system generally needs to identify the driver within a period of time after the driver gets on the vehicle, to judge whether the driver is a registered, legitimate driver, to improve the management system for driver duty verification and check-in, and to reduce the rate of production safety accidents during vehicle use. When the driver's identity is judged by the face recognition algorithm, a frontal image of the driver needs to be captured and reported to the monitoring center, which facilitates log filing, investigation, and evidence collection for driver identity images.
In the existing technical scheme, after the vehicle is started and the vehicle speed first reaches a preset speed, a face recognition process is launched to verify the driver's identity, and a frontal image of the driver is captured at the same time. During capture, only the result of the face recognition algorithm is used to judge whether a driver's face can be detected in the video image; the face pose at capture time is not evaluated. As a result, a large number of deflected, incomplete, and motion-blurred facial images accumulate among the frontal images captured over long-term use of the DMS device.
Disclosure of Invention
The application provides a method and a device for screening driver facial images, an electronic device, and a commercial vehicle. Combined with the face pose detection algorithm of a DMS monitoring system, the scheme quantifies the quality requirements for driver face snapshots and screens out the best-quality frontal image of the driver from the snapshots by means of a quality factor of the driver facial image.
According to an aspect of the present application, there is provided a driver facial image screening method for a driver monitoring system, including: performing face recognition authentication and storing the driver's identity authentication information; continuously capturing driver facial images based on the video acquired by the driver monitoring system; calculating a quality factor of each driver facial image; screening the facial images according to the quality factor to select a frontal image of the driver; and packaging the identity authentication information and the driver frontal image and reporting them to a monitoring center.
According to some embodiments, performing face recognition authentication includes: when the vehicle speed first reaches a preset speed after the vehicle is started, performing face ID recognition authentication on the driver through a face recognition algorithm, and storing the identity authentication information after the driver is confirmed to be a registered, legitimate driver.
According to some embodiments, if face ID recognition does not detect the driver, the driver's facial image is not collected and an alarm is sent to the monitoring center; if face ID recognition detects the driver and confirms a registered, legitimate driver, collection of the driver's facial images begins; and if face ID recognition detects the driver but finds an unregistered, illegitimate driver, collection of the driver's facial images begins and an alarm is sent to the monitoring center.
According to some embodiments, continuously capturing driver facial images based on the video acquired by the driver monitoring system includes: continuously capturing driver face video from the driver monitoring system's video for a preset duration; and continuously extracting video frame images from that video as the driver facial images captured by the driver monitoring system.
According to some embodiments, calculating a quality factor of the driver face image comprises: obtaining a motion blur degree quantization factor; acquiring an attitude angle quantization factor; obtaining an area quantization factor; obtaining a distance quantization factor; calculating a weighted average sum of the motion blur degree quantization factor, the attitude angle quantization factor, the area quantization factor and the distance quantization factor; and acquiring the quality factor according to the calculation result of the weighted average sum.
According to some embodiments, obtaining the motion blur degree quantization factor includes: calculating the motion blur degree of the driver facial image from its standard deviation; and normalizing the motion blur degree calculation result to obtain the motion blur degree quantization factor.
According to some embodiments, the obtaining a pose angle quantization factor comprises: acquiring a first posture angle, a second posture angle and a third posture angle of the face of the driver; and calculating the square sum of the first attitude angle, the second attitude angle and the third attitude angle to obtain the attitude angle quantization factor.
According to some embodiments, obtaining the area quantization factor includes: analyzing the coordinates of the driver's face key points through a face pose detection algorithm; forming the maximum circumscribed rectangle enveloping the key point coordinates and calculating its area; and normalizing the area calculation result to obtain the area quantization factor.
According to some embodiments, obtaining the distance quantization factor includes: acquiring the center coordinates of the driver facial image; acquiring the center coordinates of the maximum circumscribed rectangle enveloping the driver's face key points; calculating the distance between the two sets of center coordinates; and normalizing the distance calculation result to obtain the distance quantization factor.
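As an illustrative sketch (not part of the patent disclosure), the distance quantization factor described above might be computed as follows. The function name, the rectangle representation, and the normalization divisor (half the image diagonal) are assumptions; the description only states that the distance between the two centers is normalized.

```python
import numpy as np

def distance_quant_factor(img_w, img_h, rect):
    """Sketch of the distance quantization factor.

    rect = (x_min, y_min, f_wid, f_hei): the maximum circumscribed
    rectangle enveloping the face key points.  Dividing by half the
    image diagonal is an assumed normalization; the patent does not
    specify the divisor.
    """
    img_cx, img_cy = img_w / 2.0, img_h / 2.0
    x_min, y_min, f_wid, f_hei = rect
    rect_cx = x_min + f_wid / 2.0
    rect_cy = y_min + f_hei / 2.0
    # Euclidean distance between image center and face-rectangle center
    dist = np.hypot(rect_cx - img_cx, rect_cy - img_cy)
    max_dist = np.hypot(img_w / 2.0, img_h / 2.0)  # half-diagonal (assumed)
    return dist / max_dist  # 0.0 means a perfectly centered face
```

A perfectly centered face yields 0.0, so a smaller factor corresponds to a better-positioned face, consistent with the other quantization factors.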
According to some embodiments, obtaining the quality factor from the calculation result of the weighted average sum includes: obtaining the quality factor f_Quality according to a formula (presented as a figure in the original publication), where f_Quality is an integer and f_Reciprocal is the reciprocal of the weighted average sum.
According to some embodiments, selecting a driver frontal image includes: selecting the driver facial image corresponding to the largest quality factor as the driver frontal image.
According to an aspect of the present application, there is provided a driver face image screening apparatus for a driver monitoring system, including: the image acquisition module is used for acquiring a face image of a driver; the identification verification module is used for carrying out face ID identification on the driver and verifying the identity authentication information of the driver; the calculating module is used for calculating the quality factor of the facial image of the driver and selecting a front image of the driver from the facial image of the driver according to the quality factor; the storage module is used for storing the facial image of the driver, the identity authentication information and data generated in the quality factor calculation process; and the sending/receiving module is used for sending the identity authentication information and the front image of the driver to a monitoring center and receiving an instruction of the monitoring center.
According to an aspect of the present application, there is provided an electronic device including: one or more processors; and a storage means for storing one or more programs, which, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
According to an aspect of the present application, there is provided a commercial vehicle including the driver face image screening apparatus as described above or the electronic device as described above.
According to example embodiments, by calculating a quality factor for each driver facial image, quantizing the quality of the driver face snapshots, and selecting the best-quality facial image according to the quality factor, the technical scheme of the application significantly reduces the proportion of deflected, incomplete, and motion-blurred facial images accumulated over long-term use of the DMS device.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description are only some embodiments of the present application.
Fig. 1 shows an interaction diagram of a DMS monitoring system according to an exemplary embodiment of the present application.
Fig. 2 shows a flowchart of a method for screening a driver's facial image of a DMS monitoring system according to an exemplary embodiment of the present application.
Fig. 3A illustrates a flowchart of acquiring a motion blur degree quantization factor of an image of a face of a driver according to an exemplary embodiment of the present application.
Fig. 3B illustrates a flowchart of obtaining a driver face image pose angle quantization factor according to an example embodiment of the present application.
Fig. 3C shows a flowchart of acquiring a driver face image area quantization factor according to an example embodiment of the present application.
Fig. 3D illustrates a flowchart of acquiring a distance quantization factor of a face image of a driver according to an example embodiment of the present application.
Fig. 3E shows a flowchart for obtaining the quality factor of the driver's face image according to an example embodiment of the present application.
Fig. 4A shows a driver face image captured by the on-board DMS monitoring system according to an exemplary embodiment of the present application.
Fig. 4B shows another driver face image acquired by the on-vehicle DMS monitoring system according to an exemplary embodiment of the present application.
Fig. 4C shows another driver face image acquired by the on-vehicle DMS monitoring system according to an exemplary embodiment of the present application.
Fig. 5 shows a block diagram of a DMS monitoring system driver facial image screening apparatus according to an exemplary embodiment of the present application.
Fig. 6 shows a block diagram of an electronic device according to an example embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the disclosure can be practiced without one or more of the specific details, or with other means, components, materials, devices, or operations. In such cases, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The purpose of installing a DMS monitoring system in a commercial vehicle is to verify the driver's identity. In logistics transportation in particular, a truck driver needs to drive for long periods, which demands a high level of driving ability. During long-distance driving, a truck driver may hand the truck over to someone who lacks that ability, or an emergency may interfere with normal driving, bringing unnecessary losses to the driver and to commercial vehicle enterprises.
A DMS monitoring system is also installed to prevent accidents caused by fatigued driving: through non-invasive facial feature acquisition, it judges the driver's working state, such as distraction and drowsiness, and issues reminders.
The application provides a method and a device for screening driver facial images, an electronic device, and a commercial vehicle, which quantify the quality requirements for driver face snapshots and, by means of the quality factor of the driver facial image, calculate and screen out from the snapshots a good-quality driver frontal image that meets the requirements of the DMS monitoring system.
A good driver frontal image is clear, contains the complete face, has a moderate ratio of face area to the total area of the video frame, shows a correct pose, and places the driver's face near the middle of the video image.
The technical scheme of the application, combined with the face pose detection algorithm of the vehicle-mounted DMS monitoring system, calculates quality quantization factors for each captured driver facial image. The quantized face pose angles constrain the requirement of a correct pose; the quantized motion blur degree constrains the requirement of a clear image; the quantized distance between the face center and the video image center constrains the requirement that the face lie near the middle of the video image; and the quantized face area constrains the requirements that the face be complete and that its area occupy a moderate proportion of the total image area.
A weighted average sum of these quantization factors is computed to obtain a comprehensive quality factor for the driver facial image, so that the overall quality of each snapshot is judged by a single quality factor. Finally, the image with the best quality among the driver facial images in the video sequence is selected as the result image according to the quality factor.
Technical solutions according to embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Description of terms:
the Driver Monitoring System (DMS) monitors irregular behaviors or dangerous behaviors of a Driver in the driving process, reminds the Driver to keep a good driving state and reduces the occurrence rate of traffic accidents.
YUV, a color coding method, is often used in various video processing components.
The standard deviation of an image reflects how widely the pixel values are dispersed about the image mean; in this scheme, a larger standard deviation indicates a better-quality image.
The attitude angle roll, also known as the roll angle, is the rotation about the Z-axis of the coordinate system.
The attitude angle pitch, also known as the pitch angle, is the rotation about the X-axis of the coordinate system.
The attitude angle yaw, also known as the yaw angle, is the rotation about the Y-axis of the coordinate system.
Fig. 1 shows an interaction diagram of a DMS monitoring system according to an exemplary embodiment of the present application.
As shown in fig. 1, the DMS monitoring system includes a vehicle-mounted terminal 101, a monitoring center server 102, and a network 103.
It should be understood that the number of terminals, server devices and networks in fig. 1 is merely illustrative. There may be any number of terminals, server devices and networks, as is practical.
The network 103 may be a medium for providing an internet communication link between the in-vehicle terminal 101 and the monitoring center server 102, and may include various connection types such as a mobile communication link and the like.
The vehicle-mounted terminal 101 comprises a vehicle-mounted DMS monitoring system, can identify the identity of a driver, snap a facial image of the driver, and supervise the driving behavior and state of the driver in real time.
Generally, the vehicle-mounted terminal may be an electronic device with a display screen, and includes one or more processors and a storage device, and may interact with the monitoring center server 102 through the network 103 to report the driver identity information and the driver front image in real time, and receive an instruction from the monitoring center.
Alternatively, the vehicle-mounted terminal 101 may be applied to various commercial vehicles, such as passenger cars, freight cars, dangerous goods cars, buses, new energy vehicles, school buses, commercial concrete vehicles, muck vehicles, taxis, and the like.
The monitoring center server 102 performs data interaction with the vehicle-mounted terminal 101, stores data reported by the vehicle-mounted terminal 101, is used for filing, surveying and evidence-obtaining of the driver identity image log, and adopts a corresponding strategy according to the condition reported by the vehicle-mounted terminal.
For example, when the on-board DMS monitoring system of the commercial vehicle detects that the driver is an unregistered illegal driver, it sends an alarm message to the monitoring center, or the monitoring center locks the vehicle remotely.
Fig. 2 shows a flowchart of a method for screening a driver's facial image of a DMS monitoring system according to an exemplary embodiment of the present application.
As shown in fig. 2, in S201, the onboard DMS monitoring system performs face recognition authentication on the driver and stores driver identification authentication information.
According to some embodiments, the face recognition authentication is performed on the driver when the vehicle speed reaches a preset speed for the first time after the vehicle is started.
For example, the preset speed may be set to 10 km/h; once it is reached, the driver has started working and entered a normal driving state, so the on-board DMS monitoring system can capture a genuine frontal image of the driver.
Generally, the face recognition authentication process starts the face recognition algorithm of the vehicle-mounted DMS to perform face ID recognition on the driver, then verifies the driver's identity authentication information by matching the face ID within the vehicle-mounted DMS.
According to some embodiments, if the face ID identification can detect the driver and verify that the driver is a registered legitimate driver, the driver identity authentication information is saved.
If face ID recognition does not detect the driver, no driver facial image is collected and alarm information is sent to the monitoring center.
If face ID recognition detects the driver and the verification passes, the driver is confirmed to be a registered, legitimate driver, and the vehicle-mounted DMS monitoring system begins collecting driver facial images.
If face ID recognition detects the driver but the verification fails, the driver is an unregistered, illegitimate driver; the vehicle-mounted DMS monitoring system begins collecting driver facial images and sends alarm information to the monitoring center.
In S203, the driver face image is continuously acquired based on the video acquired by the on-vehicle DMS monitoring system.
According to some embodiments, driver face video is continuously captured from the video acquired by the on-board DMS monitoring system for a preset duration; for example, the preset duration may be set to 15 seconds.
Further, video frame images are continuously extracted from the driver face video as the driver facial images captured by the vehicle-mounted DMS monitoring system, and an image index value is generated for each.
At S205, a quality factor of each of the driver face images is calculated.
According to some embodiments, the quality factor measures the quality of a driver facial image: the larger the value, the better the image quality.
According to some embodiments, the quality factor is obtained from the calculation result of a weighted average sum of the motion blur degree quantization factor, the attitude angle quantization factor, the area quantization factor, and the distance quantization factor.
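As an illustrative sketch (not part of the patent disclosure), the combination of the four factors into a single integer quality factor might look as follows. The equal weights, the division guard, and the use of round() to obtain an integer are all assumptions, since the exact formula appears only as a figure in the original publication; the description states only that f_Quality is an integer derived from f_Reciprocal, the reciprocal of the weighted average sum.

```python
def quality_factor(f_blur_n, s_rpy, f_area_n, f_dist_n,
                   weights=(0.25, 0.25, 0.25, 0.25)):
    """Sketch: integer quality factor from four quantization factors.

    Each input factor is assumed to be smaller for a better image, so
    the reciprocal of the weighted sum grows as quality improves.  The
    weights and the rounding scheme are assumptions.
    """
    w1, w2, w3, w4 = weights
    weighted_sum = (w1 * f_blur_n + w2 * s_rpy
                    + w3 * f_area_n + w4 * f_dist_n)
    f_reciprocal = 1.0 / max(weighted_sum, 1e-6)  # guard against /0
    return int(round(f_reciprocal))
```

Under these assumptions, an image with small blur, a near-frontal pose, and a well-centered face receives a larger integer quality factor than a poor snapshot.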
In S207, the best-quality driver facial image is screened out as the driver frontal image according to the quality factor and reported together with the identity authentication information.
Generally, the driver facial image corresponding to the largest quality factor is selected, via its image index value, as the final driver frontal image.
If more than one image shares the largest quality factor, one of them is selected at random.
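The selection step above, including the random tie-break among equal maximum quality factors, can be sketched as follows (function and variable names are illustrative, not from the patent):

```python
import random

def select_front_image(quality_factors):
    """Return the index of the face image with the largest quality
    factor.  If several images tie for the maximum, one of them is
    picked at random, as the description suggests.
    """
    best = max(quality_factors)
    candidates = [i for i, q in enumerate(quality_factors) if q == best]
    return random.choice(candidates)
```

The returned index plays the role of the image index value used to retrieve the final driver frontal image.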
Further, the driver frontal image and the identity authentication information from S201 are encapsulated into a network data packet, which is reported to the monitoring center as the identity recognition result.
Fig. 3A illustrates a flowchart of acquiring a motion blur degree quantization factor of an image of a face of a driver according to an exemplary embodiment of the present application.
As shown in fig. 3A, at S301, a driver face image is acquired for calculating a driver face image motion blur degree.
According to some embodiments, the camera of the vehicle-mounted DMS monitoring system continuously provides driver face video for a preset duration, from which video frame images are continuously extracted as the driver facial images captured by the system.
Optionally, the camera of the vehicle-mounted DMS monitoring system is a 720P high-definition camera, and the resulting driver facial image is a 1280 × 720 YUV image.
At S303, the motion blur degree of each of the driver face images is calculated.
Generally, when the driver's face is moving, a video frame extracted from the input of the DMS camera shows some degree of blur. The motion blur degree represents how blurred the driver facial image is: the larger the value, the more blurred the image.
The motion blur degree f_Blur is calculated from the standard deviation of the driver facial image.
In S305, the motion blur degree calculation result is normalized to obtain the motion blur degree quantization factor.
The motion blur degree quantization factor f_BlurN is given by
f_BlurN = f_Blur / 127.
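As an illustrative sketch (not part of the patent disclosure), the normalization above might be implemented as follows. Taking f_Blur to be the standard deviation of the Y (luma) plane itself is an assumption; the description states only that the blur degree is calculated from the standard deviation and then divided by 127, which is roughly the largest possible standard deviation of 8-bit pixel values.

```python
import numpy as np

def motion_blur_quant_factor(y_plane):
    """Sketch of the motion blur degree quantization factor.

    y_plane: 2-D uint8 array, e.g. the Y plane of a 1280x720 YUV frame.
    Assumes f_blur equals the pixel standard deviation; the patent does
    not give the exact relationship.
    """
    f_blur = float(np.std(y_plane.astype(np.float64)))
    return f_blur / 127.0  # f_blur_n, roughly in [0, 1]
```

Note that under this assumption a uniform (flat) image has factor 0.0, while a maximally contrasted image approaches 1.0.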
Fig. 3B illustrates a flowchart of obtaining a driver face image pose angle quantization factor according to an example embodiment of the present application.
As shown in fig. 3B, at S401, a driver face image is acquired for calculating a driver face image attitude angle.
The driver facial images are acquired in the same manner as described above for Fig. 3A (720P camera, 1280 × 720 YUV frames).
At S403, a first posture angle, a second posture angle, and a third posture angle of each of the driver face images are acquired.
According to some embodiments, the driver facial image is processed by inference with the face pose detection algorithm of the vehicle-mounted DMS monitoring system, which outputs the pose angles of the face in three orientations: the first attitude angle roll, the second attitude angle pitch, and the third attitude angle yaw.
The three attitude angles represent the magnitude of face deflection, each with a value range of [-1.0, 1.0].
For example, when the face is level and facing straight ahead, all three attitude angles are approximately 0.0; when the face is raised, turned to the left, or tilted so the top of the head leans left, the corresponding attitude angle is positive and increases with the deflection angle.
Conversely, when the face is turned to the right or tilted so the top of the head leans right, the corresponding attitude angle is negative and decreases as the deflection angle increases.
At S405, the sum of squares of the three attitude angles is calculated to obtain the attitude angle quantization factor.
The attitude angle quantization factor s_RPY is given by
s_RPY = roll² + pitch² + yaw².
According to some embodiments, the attitude angle quantization factor represents how far the face pose deviates from a correct frontal pose; it increases as the absolute values of the three deflection angles increase.
For example, when the driver's face is level and facing straight ahead, the attitude angle quantization factor is approximately 0.0.
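The sum-of-squares formula above reduces to a one-line computation; this sketch (names are illustrative) simply restates it:

```python
def pose_angle_quant_factor(roll, pitch, yaw):
    """Attitude angle quantization factor: sum of squares of the three
    attitude angles, each in [-1.0, 1.0].  A level, frontal face gives
    a value near 0.0; the factor grows as the face deflects in any
    direction, regardless of sign.
    """
    return roll ** 2 + pitch ** 2 + yaw ** 2
```

Squaring makes the factor insensitive to the sign of each deflection, so leftward and rightward deflections of equal magnitude are penalized equally.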
Fig. 3C shows a flowchart of acquiring a driver face image area quantization factor according to an example embodiment of the present application.
As shown in fig. 3C, at S501, a driver face image is acquired for calculating an area enveloping the driver face region.
According to some embodiments, the vehicle-mounted DMS monitoring system camera continuously inputs a driver face image video according to a preset duration, and continuously extracts a video frame image as the driver face image captured by the vehicle-mounted DMS monitoring system.
Optionally, the camera of the vehicle-mounted DMS monitoring system is a 720P high-definition camera, and the acquired driver face image is a 1280 × 720 YUV image.
At S503, the face key point coordinates in each of the driver face images are acquired.
According to some embodiments, the face image of the driver is subjected to reasoning calculation through a face posture detection algorithm of the vehicle-mounted DMS monitoring system, and the key point coordinates are analyzed.
Since the inference result of the face posture detection algorithm of the vehicle-mounted DMS monitoring system does not include a face bounding box, the key point coordinates are used to form a bounding box that envelops the face region.
At S505, the area of the maximum circumscribed rectangle formed from the keypoint coordinates is calculated.
Generally, the maximum circumscribed rectangle formed by all the key point coordinates is the bounding box that envelops the face region in the driver face image, and its area is the area of that bounding box.
The maximum circumscribed rectangle is represented as a vector [x_min, y_min, f_Wid, f_Hei], where x_min and y_min are the coordinates of the top-left vertex of the rectangle, and f_Wid and f_Hei are its width and height, respectively.
The area f_Area of the maximum circumscribed rectangle is calculated as f_Area = f_Wid * f_Hei.
The larger the maximum circumscribed rectangle area, the closer the driver's face is to the camera of the vehicle-mounted DMS monitoring system.
Further, when the driver's face is very close to the camera, parts of the face may fall outside the video image, resulting in an incomplete face in the video frame.
In S507, normalization processing is performed on the maximum circumscribed rectangle area calculation result to obtain an area quantization factor.
The area quantization factor f_AreaN is calculated as f_AreaN = f_Area / I_W / I_H.
Here, I_W and I_H are the width and height of the driver face image, respectively, which are obtained automatically when the driver face image is acquired.
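The computation of S505 and S507 can be sketched as follows; the function name and the keypoint format are assumptions, and the default frame size matches the 1280 × 720 image mentioned above:

```python
def area_quantization_factor(keypoints, img_w=1280, img_h=720):
    """Normalized area of the axis-aligned bounding rectangle of the
    face key points.

    keypoints: iterable of (x, y) pixel coordinates.
    Returns f_Area / I_W / I_H, i.e. the fraction of the frame
    covered by the face bounding box.
    """
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    x_min, y_min = min(xs), min(ys)
    f_wid = max(xs) - x_min  # width of the maximum circumscribed rectangle
    f_hei = max(ys) - y_min  # height of the maximum circumscribed rectangle
    f_area = f_wid * f_hei
    return f_area / img_w / img_h
```

The result lies in [0, 1]: a face filling the whole frame gives 1.0, a distant face gives a value near 0.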
Fig. 3D illustrates a flowchart of acquiring a distance quantization factor of a face image of a driver according to an example embodiment of the present application.
As shown in fig. 3D, at S601, a driver face image is acquired for calculating a distance from a center position of the driver face image to a center position of a maximum circumscribed rectangle enveloping the driver face area.
According to some embodiments, the vehicle-mounted DMS monitoring system camera continuously inputs a driver face image video according to a preset duration, and continuously extracts a video frame image as the driver face image captured by the vehicle-mounted DMS monitoring system.
Optionally, the camera of the vehicle-mounted DMS monitoring system is a 720P high-definition camera, and the acquired driver face image is a 1280 × 720 YUV image.
In S603, the center position coordinates of each of the driver face images and the maximum circumscribed rectangle center position coordinates are acquired.
Generally, the center position coordinates of the driver face image are calculated with the coordinate origin at the top-left vertex of the image; the abscissa of the image center is denoted I_CX and the ordinate I_CY.
The center position coordinates of the maximum circumscribed rectangle, i.e., of the bounding box enveloping the face region, are calculated from the rectangle vector [x_min, y_min, f_Wid, f_Hei], where x_min and y_min are the coordinates of the rectangle's top-left vertex and f_Wid and f_Hei are its width and height, respectively.
The abscissa f_Cx of the center of the maximum circumscribed rectangle is calculated as f_Cx = x_min + f_Wid / 2, and the ordinate f_Cy as f_Cy = y_min + f_Hei / 2.
At S605, the distance between the center position coordinates of the driver face image and the center position coordinates of the maximum circumscribed rectangle is calculated.
The distance in the abscissa direction is f_DistX = f_Cx - I_CX, and the distance in the ordinate direction is f_DistY = f_Cy - I_CY.
According to some embodiments, the distance between the center of the driver face image and the center of the maximum circumscribed rectangle represents how far the driver's face deviates from the center of the video frame; the closer the face is to the center, the smaller the distance.
In S607, normalization processing is performed on the distance calculation result of the center position coordinate of the driver face image and the center position coordinate of the maximum circumscribed rectangle, so as to obtain a distance quantization factor.
The squared distance in the abscissa direction between the driver face image center and the maximum circumscribed rectangle center is f_DistX^2 = (f_Cx - I_CX)^2, and the squared distance in the ordinate direction is f_DistY^2 = (f_Cy - I_CY)^2.
The distance quantization factor f_DistN is calculated as f_DistN = f_DistX^2 / (I_CX * I_CX) + f_DistY^2 / (I_CY * I_CY).
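The distance quantization factor of S605 and S607 can be sketched as follows, under the assumption that the rectangle is given as the vector [x_min, y_min, f_Wid, f_Hei] and the image center is (I_CX, I_CY) = (width/2, height/2):

```python
def distance_quantization_factor(rect, img_w=1280, img_h=720):
    """Normalized squared distance between the image center and the
    center of the face bounding rectangle.

    rect: (x_min, y_min, f_wid, f_hei) as in the description.
    """
    x_min, y_min, f_wid, f_hei = rect
    i_cx, i_cy = img_w / 2, img_h / 2  # image center I_CX, I_CY
    f_cx = x_min + f_wid / 2           # rectangle center abscissa f_Cx
    f_cy = y_min + f_hei / 2           # rectangle center ordinate f_Cy
    dist_x2 = (f_cx - i_cx) ** 2
    dist_y2 = (f_cy - i_cy) ** 2
    return dist_x2 / (i_cx * i_cx) + dist_y2 / (i_cy * i_cy)
```

A face centered in the frame yields 0.0; the factor grows as the face drifts toward the image edges.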
Fig. 3E shows a flowchart for obtaining the quality factor of the driver's face image according to an example embodiment of the present application.
As shown in fig. 3E, at S701, a motion blur degree quantization factor, a posture angle quantization factor, an area quantization factor, and a distance quantization factor are acquired for each driver face image.
That is, the motion blur quantization factor f_BlurN, the attitude angle quantization factor s_RPY, the area quantization factor f_AreaN, and the distance quantization factor f_DistN are obtained.
At S703, a weighted average sum of the quantization factors is calculated.
The weighted average sum f_Quotient is calculated as f_Quotient = f_BlurN * 0.2 + s_RPY * 0.1 + f_DistN * 0.2 + f_AreaN * 0.1.
In S705, a quality factor of the driver face image is calculated from the weighted average sum.
In general, the reciprocal f_Reciprocal of the weighted average sum is first calculated as f_Reciprocal = 1 / f_Quotient, and the value of f_Reciprocal is limited to 1 byte.
The quality factor f_Quality is then obtained from f_Reciprocal by a formula that appears only as an image in the original publication; f_Quality is an integer.
According to some embodiments, the figure of merit is an integer ranging from 0 to 255, and a larger value of the figure of merit indicates a better quality of the driver face image.
For example, applying the above quality factor calculation to the driver face images shown in FIG. 4A, FIG. 4B, and FIG. 4C yields f_Quality4A = 9, f_Quality4B = 18, and f_Quality4C = 55, respectively.
Since f_Quality4C > f_Quality4B > f_Quality4A, the FIG. 4C image has the largest quality factor and the FIG. 4A image the smallest, indicating that the driver face image quality in FIG. 4C is better than in FIG. 4A and FIG. 4B, making it suitable for reporting to the monitoring center as the driver front image.
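Putting the factors together, a minimal sketch of the quality factor and front-image selection might look like this; the weights follow the formula above, while mapping the reciprocal to an integer in [0, 255] is an assumption, since the final formula is published only as an image:

```python
def quality_factor(f_blur_n, s_rpy, f_area_n, f_dist_n):
    """Weighted sum of the four quantization factors, inverted and
    clamped to one byte.

    The weights (0.2, 0.1, 0.2, 0.1) follow the description; truncating
    the reciprocal to an integer in [0, 255] is an assumption.
    """
    f_quotient = (f_blur_n * 0.2 + s_rpy * 0.1
                  + f_dist_n * 0.2 + f_area_n * 0.1)
    # A smaller weighted sum means fewer defects, so a larger reciprocal.
    f_reciprocal = 1.0 / f_quotient if f_quotient > 0 else 255.0
    return max(0, min(255, int(f_reciprocal)))

def select_front_image(candidates):
    """Pick the candidate with the largest quality factor.

    candidates: iterable of (image, f_blur_n, s_rpy, f_area_n, f_dist_n).
    """
    return max(candidates, key=lambda c: quality_factor(*c[1:]))[0]
```

A sharp, frontal, centered face drives all four factors toward zero, so its quality factor saturates at 255, while a blurred or off-center face scores near 0.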
Fig. 5 shows a block diagram of a DMS monitoring system driver facial image screening apparatus according to an exemplary embodiment of the present application.
As shown in fig. 5, the apparatus for screening a facial image of a driver includes an image capturing module 801, an identification verification module 803, a calculation module 805, a storage module 807, and a transmission/reception module 809.
The image acquisition module 801 is used for acquiring facial images of a vehicle driver.
According to some embodiments, the image acquisition module continuously acquires a driver face image video according to a preset time length through a vehicle-mounted DMS monitoring system camera, and continuously extracts a video frame image as the captured driver face image.
The identification verification module 803 is configured to perform face ID identification on the driver and verify the identity authentication information of the driver.
Generally, the identification verification module performs face ID identification on the driver through the face identification algorithm of the vehicle-mounted DMS, and matches the face ID against the driver identity information stored in the storage module to determine whether the driver is a registered legal driver.
The calculation module 805 is configured to calculate a quality factor of the driver face image, and select a driver front image from the driver face image according to the quality factor.
According to some embodiments, the calculation module calculates, based on the driver face image acquired by the image acquisition module, a motion blur degree, an attitude angle of a face orientation, a maximum circumscribed rectangular area of a face area, and a distance between a center of the driver face image and the maximum circumscribed rectangular center, respectively, by a face attitude detection algorithm of the on-vehicle DMS monitoring system, and performs normalization processing to obtain a motion blur degree quantization factor, an attitude angle quantization factor, an area quantization factor, and a distance quantization factor.
Further, a quality factor of the face image of the driver is calculated and acquired according to each quantization factor, and a final front image of the driver is selected from the candidate face images of the driver according to the quality factor.
The storage module 807 is used to store the driver face image, the identification information, and the data generated during the quality factor calculation process.
According to some embodiments, the storage module stores registered driver authentication information in advance for comparison with the driver authentication information acquired by the identification verification module.
The sending/receiving module 809 is configured to send the driver identity authentication information and the driver front image to a monitoring center, and receive an instruction of the monitoring center.
According to some embodiments, the sending/receiving module packages the driver identity authentication information and the driver front image into a data packet, and sends the data packet to the monitoring center through a network.
The network comprises, for example, a mobile communication network.
Fig. 6 shows a block diagram of an electronic device according to an example embodiment of the present application.
As shown in fig. 6, the electronic device 900 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 6, electronic device 900 is embodied in the form of a general purpose computing device. Components of electronic device 900 may include, but are not limited to: at least one processing unit 910, at least one storage unit 920, a bus 930 connecting different system components (including the storage unit 920 and the processing unit 910), a display unit 940, and the like. Where the storage unit stores program code that may be executed by the processing unit 910 to cause the processing unit 910 to perform the methods according to various exemplary embodiments of the present application described herein. For example, processing unit 910 may perform a method as shown in fig. 2.
The storage unit 920 may include a readable medium in the form of a volatile storage unit, such as a random access memory unit (RAM)9201 and/or a cache memory unit 9202, and may further include a read only memory unit (ROM) 9203.
Storage unit 920 may also include a program/utility 9204 having a set (at least one) of program modules 9205, such program modules 9205 including but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 930 can be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 900 may also communicate with one or more external devices 900' (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 900 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 950. Also, the electronic device 900 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 960. The network adapter 960 may communicate with other modules of the electronic device 900 via the bus 930. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. The technical solution according to the embodiment of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiment of the present application.
The software product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform the functions described above.
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus as described in the embodiments, or may be located, with corresponding changes, in one or more devices different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
According to some embodiments of the application, based on the quality factor of the driver face image, the driver front image with the best quality is selected from a plurality of snapshot images of the driver in the video image sequence as the result image, which significantly reduces the proportion of deflected, distorted, and motion-blurred face images among the snapshots accumulated during long-term use of the DMS device.
The embodiments of the present application have been described in detail above; the description is intended only to help in understanding the method and core idea of the present application. A person skilled in the art may, following the idea of the present application, make changes to the embodiments and their applications within the scope of the present application. In view of the above, the description should not be construed as limiting the application.

Claims (14)

1. A driver face image screening method for a driver monitoring system, characterized by comprising:
carrying out face identification authentication and storing identity authentication information of a driver;
continuously acquiring facial images of a driver based on a video acquired by a driver monitoring system;
calculating a quality factor of the driver face image;
screening the facial images of the driver according to the quality factor to select a front image of the driver;
and packaging and reporting the identity authentication information and the driver front image to a monitoring center.
2. The method of claim 1, wherein the performing face recognition authentication comprises:
and when the vehicle speed reaches the preset speed for the first time after the vehicle is started, carrying out face ID identification authentication on the driver through a face identification algorithm, and storing the identity authentication information after the driver is confirmed to be a registered legal driver.
3. The method of claim 2, comprising:
if the driver is not detected by the face ID identification, the facial image of the driver is not collected and an alarm is given to the monitoring center;
if the driver is detected by the face ID identification and is confirmed to be a registered legal driver, starting to acquire a face image of the driver;
and if the driver is detected by the face ID identification and is confirmed to be an unregistered illegal driver, starting to acquire a face image of the driver and giving an alarm to the monitoring center.
4. The method of claim 1, wherein continuously capturing driver facial images based on video acquired by a driver monitoring system comprises:
continuously acquiring a driver face image video according to the video of the driver monitoring system and preset time;
and continuously extracting video frame images from the driver face image video to be used as the driver face image captured by the driver monitoring system.
5. The method of claim 1, wherein calculating a quality factor of the driver facial image comprises:
obtaining a motion blur degree quantization factor;
acquiring an attitude angle quantization factor;
obtaining an area quantization factor;
obtaining a distance quantization factor;
calculating a weighted average sum of the motion blur degree quantization factor, the attitude angle quantization factor, the area quantization factor and the distance quantization factor;
and acquiring the quality factor according to the calculation result of the weighted average sum.
6. The method of claim 5, wherein obtaining the motion blur quantization factor comprises:
calculating the motion fuzziness of the face image of the driver through a standard deviation;
and carrying out normalization processing on the calculation result of the motion fuzziness to obtain the quantization factor of the motion fuzziness.
7. The method of claim 5, wherein obtaining the pose angle quantization factor comprises:
acquiring a first posture angle, a second posture angle and a third posture angle of the face of the driver;
and calculating the square sum of the first attitude angle, the second attitude angle and the third attitude angle to obtain the attitude angle quantization factor.
8. The method of claim 5, wherein obtaining the area quantization factor comprises:
analyzing the coordinates of key points of the face of the driver through a human face posture detection algorithm;
forming a maximum circumscribed rectangle enveloped according to the key point coordinates, and calculating the area of the maximum circumscribed rectangle;
and carrying out normalization processing on the calculation result of the maximum circumscribed rectangle area to obtain the area quantization factor.
9. The method of claim 5, wherein obtaining the distance quantization factor comprises:
acquiring the center position coordinates of the face image of the driver;
acquiring the center position coordinates of the maximum circumscribed rectangle forming the envelope according to the key point coordinates of the face of the driver;
calculating the distance between the center position coordinates of the driver face image and the center position coordinates of the maximum circumscribed rectangle;
and carrying out normalization processing on the calculation result of the distance to obtain the distance quantization factor.
10. The method of claim 5, wherein obtaining the quality factor based on the weighted average sum comprises:
according to the formula
Figure FDA0003162662900000031
Obtaining a quality factor fQuality,fQualityIs an integer of fReciprocalIs the inverse of the weighted average sum.
11. The method of claim 1, wherein selecting a driver frontal image comprises:
and selecting the face image of the driver corresponding to the maximum quality factor as the front image of the driver.
12. A driver face image screening apparatus for a driver monitoring system, comprising:
the image acquisition module is used for acquiring a face image of a driver;
the identification verification module is used for carrying out face ID identification on the driver and verifying the identity authentication information of the driver;
the calculating module is used for calculating the quality factor of the facial image of the driver and selecting a front image of the driver from the facial image of the driver according to the quality factor;
the storage module is used for storing the facial image of the driver, the identity authentication information and data generated in the quality factor calculation process;
and the sending/receiving module is used for sending the identity authentication information and the front image of the driver to a monitoring center and receiving an instruction of the monitoring center.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.
14. A commercial vehicle characterized by comprising the driver's face image screening device according to claim 12 or the electronic apparatus according to claim 13.
CN202110795728.8A 2021-07-14 2021-07-14 Method and device for screening facial images of driver, electronic equipment and commercial vehicle Pending CN113705333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110795728.8A CN113705333A (en) 2021-07-14 2021-07-14 Method and device for screening facial images of driver, electronic equipment and commercial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110795728.8A CN113705333A (en) 2021-07-14 2021-07-14 Method and device for screening facial images of driver, electronic equipment and commercial vehicle

Publications (1)

Publication Number Publication Date
CN113705333A true CN113705333A (en) 2021-11-26

Family

ID=78648819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110795728.8A Pending CN113705333A (en) 2021-07-14 2021-07-14 Method and device for screening facial images of driver, electronic equipment and commercial vehicle

Country Status (1)

Country Link
CN (1) CN113705333A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114694284A (en) * 2022-03-24 2022-07-01 北京金和网络股份有限公司 Special vehicle driver identity verification method and device
WO2023216626A1 (en) * 2022-05-12 2023-11-16 合肥杰发科技有限公司 Dms starting method, and related device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107004128A (en) * 2017-02-16 2017-08-01 深圳市锐明技术股份有限公司 A kind of driver identity recognition methods and device
CN108540579A (en) * 2018-05-25 2018-09-14 西安艾润物联网技术服务有限责任公司 Driver identity on-line monitoring method, device and storage medium
CN109361656A (en) * 2018-09-21 2019-02-19 杨虎 A method of protection public transport passenger safety
US20190279009A1 (en) * 2018-03-12 2019-09-12 Microsoft Technology Licensing, Llc Systems and methods for monitoring driver state
CN111444787A (en) * 2020-03-12 2020-07-24 江西赣鄱云新型智慧城市技术研究有限公司 Fully intelligent facial expression recognition method and system with gender constraint
CN112084942A (en) * 2018-03-09 2020-12-15 西安艾润物联网技术服务有限责任公司 Method and device for online monitoring of driver identity in vehicle and storage medium
US20210004581A1 (en) * 2019-07-05 2021-01-07 Servall Data Systems Inc. Apparatus, system and method for authenticating identification documents

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114694284A (en) * 2022-03-24 2022-07-01 北京金和网络股份有限公司 Special vehicle driver identity verification method and device
CN114694284B (en) * 2022-03-24 2024-03-22 北京金和网络股份有限公司 Special vehicle driver identity verification method and device
WO2023216626A1 (en) * 2022-05-12 2023-11-16 合肥杰发科技有限公司 Dms starting method, and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination