CN109299641A - Adaptive image-processing algorithm for train dispatcher fatigue monitoring - Google Patents


Info

Publication number
CN109299641A
Authority
CN
China
Prior art keywords
face
image
region
eye
human eye
Prior art date
Legal status
Granted
Application number
CN201810354996.4A
Other languages
Chinese (zh)
Other versions
CN109299641B (English)
Inventor
杨奎
彭其渊
张晓梅
胡雨欣
Current Assignee
Southwest Jiaotong University
China Railway Corp
Original Assignee
Southwest Jiaotong University
China Railway Corp
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University and China Railway Corp
Priority to CN201810354996.4A
Publication of CN109299641A
Application granted
Publication of CN109299641B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/18 Devices for psychotechnics, testing reaction times or evaluating the psychological state, for vehicle drivers or machine operators
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Ophthalmology & Optometry (AREA)
  • Psychiatry (AREA)
  • Pathology (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Child & Adolescent Psychology (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Developmental Disabilities (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an adaptive image-processing algorithm for train dispatcher fatigue monitoring, belonging to the technical field of biometric pattern recognition. An adaptive face-detection algorithm uses the detection result of the previous frame to optimise the detection parameters of the current frame, shrinking the detection range as far as possible, reducing the number of detection passes and improving detection efficiency. Based on the face-eye relationship and the positional relationship between the two eyes, adaptive fast eye detection and an intelligent inference algorithm further narrow the eye-detection range, while eye positions are effectively inferred and the data are checked, improving data accuracy and completeness. Interval face recognition and a fast frame-skipping algorithm assess the image quality of the following period from the detection results of consecutive frames and apply differentiated frame skipping, improving processing efficiency. The later processing stages are thus adaptively adjusted according to the data acquired so far, which improves both image-processing quality and efficiency.

Description

Adaptive image-processing algorithm for train dispatcher fatigue monitoring
Technical field
The invention belongs to the technical field of biometric pattern recognition and involves theory and techniques from image processing, pattern recognition, computer vision and human physiology; it relates in particular to an adaptive image-processing algorithm for train dispatcher fatigue monitoring.
Background art
Face detection, face recognition and eye detection determine the face and eye regions in an image from facial and ocular features and identify the person a face belongs to; they draw on theory and techniques from image processing, pattern recognition, computer vision and human physiology.
OpenCV (Open Source Computer Vision Library) is a cross-platform computer vision library initiated and co-developed by Intel. It consists of a collection of C functions and a small number of C++ classes and implements many general-purpose algorithms in image processing and computer vision. OpenCV runs on Linux, Windows and Mac OS, provides language interfaces such as Python, Ruby and MATLAB, is cross-platform, lightweight and efficient, depends on no other external libraries, and is free and open source, making it an ideal tool for secondary development in image processing, pattern recognition and computer vision.
OpenCV provides numerous basic DLL libraries covering image processing, pattern recognition and computer vision, but a significant drawback is that it offers almost no GUI support, so it is difficult to build applications on it directly.
EmguCV is a cross-platform .NET wrapper for OpenCV that allows OpenCV functions to be called directly from .NET languages; it connects C# and OpenCV well and thereby makes up for OpenCV's shortcomings on the GUI side.
Face detection and eye detection belong to the field of object detection, and cascade AdaBoost is the object-detection algorithm supported by OpenCV and most widely applied: an AdaBoost cascade classifier is trained on the Haar features of samples, and detection is performed by calling the Haar detection function. The core idea of AdaBoost is to train different weak classifiers on the same training set and adaptively boost this set of weak classifiers into a strong classifier, the weighted combination converging and stabilising over iterations.
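The boosting idea in the preceding paragraph can be reduced to a runnable toy, sketched here in Python for brevity (the patent's implementation is C#/EmguCV). Decision stumps over 1-D feature values stand in for Haar-feature weak classifiers; everything below is illustrative and is not OpenCV's cascade implementation, which additionally arranges stages of such classifiers.

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """xs: feature values, ys: labels in {-1, +1}. Returns weighted stumps."""
    n = len(xs)
    w = [1.0 / n] * n                      # uniform initial sample weights
    stumps = []
    for _ in range(rounds):
        best = None
        # pick the stump (threshold, polarity) with least weighted error
        for thr in sorted(set(xs)):
            for sign in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if sign * (1 if x >= thr else -1) != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # stump weight
        stumps.append((alpha, thr, sign))
        # re-weight: emphasise the samples this stump got wrong
        w = [wi * math.exp(-alpha * y * sign * (1 if x >= thr else -1))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return stumps

def predict(stumps, x):
    score = sum(a * s * (1 if x >= t else -1) for a, t, s in stumps)
    return 1 if score >= 0 else -1

xs = [1, 2, 3, 8, 9, 10]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print([predict(model, x) for x in xs])  # [-1, -1, -1, 1, 1, 1]
```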
Image detection and recognition technology is developing rapidly, and face detection and face recognition are widely used across industries. The contactless PERCLOS method, based on the degree of eye closure, is widely recognised in the field and has begun to be applied to fatigue monitoring of drivers and pilots with good results. Meanwhile, EmguCV provides convenient system interfaces that realise the basic functions of face detection and face recognition and meet basic needs.
The train dispatcher's working environment is distinctly open: the area of jurisdiction is wide, the information to be integrated is extensive, and there are many devices and systems, the technical equipment typically being arranged in multiple rows and columns. Unlike workers in other industries, whose sight is mainly focused straight ahead, a train dispatcher focuses on different zones in different periods according to the needs of the work, and may look up, look down, or glance to either side, so the visual focus has a markedly dispersed character.
According to the characteristics of the dispatching scenario, human physiology and the needs of fatigue monitoring, the performance requirements of a train dispatcher fatigue-monitoring system fall mainly into five aspects: non-invasiveness, concurrency, continuity, efficiency and accuracy. Image processing is the most time-consuming link in fatigue monitoring and determines the data-processing efficiency and performance of the system; parallel fatigue monitoring of train dispatchers places higher requirements on image-processing efficiency, which the prior art can no longer meet.
Summary of the invention
To solve the above problems in the prior art, the object of the present invention is to provide an adaptive image-processing algorithm for train dispatcher fatigue monitoring that realises the functions of the individual image-processing modules while meeting the system's performance requirements in terms of quality and speed: through adaptive detection, the later processing stages are adjusted according to the data acquired so far, improving image-processing quality and efficiency to the greatest extent while realising the image-processing functions.
The technical scheme adopted by the invention is as follows: an adaptive image-processing algorithm for train dispatcher fatigue monitoring, developed on the VS2010 platform by calling EmguCV from C# for secondary development, mainly comprising the following:
(1) Face detection and eye detection
FaceHaar is obtained by loading the face classifier haarcascade_frontalface_alt2.xml, and EyeHaar by loading the eye classifier haarcascade_mcs_righteye.xml; calling the DetectMultiScale function then gives:
Faces = FaceHaar.DetectMultiScale(Image1, SF1, MinNH1, MinSize1, MaxSize1) (2-1)
Eyes = EyeHaar.DetectMultiScale(Image2, SF2, MinNH2, MinSize2, MaxSize2) (2-2)
Wherein, DetectMultiScale is the multi-scale detection method of the CascadeClassifier class, which returns the set of regions of the specific target object in the input image;
Image1 and Image2 are the image objects for face detection and eye detection respectively, of type Image<Gray, byte>;
SF1 and SF2 are the scale factors of face detection and eye detection respectively;
MinNH1 and MinNH2 are the minimum numbers of neighbouring rectangles required to constitute a face-detection or eye-detection target respectively;
MinSize1 and MaxSize1 are the minimum and maximum sizes of the rectangular region returned by face detection;
MinSize2 and MaxSize2 are the minimum and maximum sizes of the rectangular region returned by eye detection;
(2) Face recognition
Face recognition is realised by calling the Recognize method of EmguCV's EigenObjectRecognizer class. Based on the face regions obtained by face detection, the identity of the target person is discriminated from facial features: the recognition process traverses the face regions detected in the current frame until it finds the one belonging to the target person, and then proceeds to eye detection and eyelid-distance calculation. The key steps are:
recognizer = new EigenObjectRecognizer(Images, Labels, DistanceThreshold, termCrit) (2-3)
Name = recognizer.Recognize(result).Label (2-4)
Images is the array of face-recognition training images, of type Image<Gray, byte>;
Labels is the array of identity labels corresponding to the training images, of type string;
DistanceThreshold is the feature-distance threshold;
termCrit is the face-recognition training criterion, of type MCvTermCriteria;
Name is the identity label returned by recognition and is an element of Labels.
Further, the face detection uses a fast adaptive face-detection algorithm based on inter-frame constraints. Within the face-detection region, the search window starts at size MinSize1 and scans in sequence; if no face is detected the window is enlarged by a factor of SF1, and the cycle repeats until a face is detected or the window size reaches MaxSize1. Let i be the frame index of image processing, PR_i the image rectangle of frame i, DR_i the face-detection target region of frame i, and FR_i the face rectangle detected in frame i; then:
MinSize1_i ≤ FR_i.Size ≤ MaxSize1_i (2-6)
Let the face-detection target region of the next frame be DR_{i+1}, with window sizes MinSize1_{i+1} and MaxSize1_{i+1}, and let f_1, f_2 and f_3 denote the adaptive functional relationships between DR_{i+1}, MinSize1_{i+1}, MaxSize1_{i+1} and FR_i:
DR_{i+1} = f_1(FR_i), 1 ≤ i ≤ M-1, i ∈ N (2-7)
MinSize1_{i+1} = f_2(FR_i), 1 ≤ i ≤ M-1, i ∈ N (2-8)
MaxSize1_{i+1} = f_3(FR_i), 1 ≤ i ≤ M-1, i ∈ N (2-9)
Wherein, M is the number of image frames of the current video file.
Further, let λ be the search-region expansion coefficient; the face-detection target region DR_{i+1} of frame i+1 has location parameters X and Y and size parameters Width and Height, and the adaptive function f_1 expands FR_i by λ. Letting α and β denote the scaling of MinSize1_{i+1} and MaxSize1_{i+1} relative to the size of FR_i, the functions f_2 and f_3 are expressed by formulas (2-11) and (2-12) respectively.
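The formulas for f_1, f_2 and f_3 appear only as images in the original and are not reproduced here; assuming the plain reading of λ, α and β, the adaptive update can be sketched as follows (Python for brevity rather than the patent's C#/EmguCV; λ = 0.4 follows the preferred value stated later, while the α and β values are illustrative assumptions):

```python
from typing import NamedTuple

class Rect(NamedTuple):
    x: int
    y: int
    width: int
    height: int

def next_detection_params(fr: Rect, lam=0.4, alpha=0.85, beta=1.15):
    """Derive DR_{i+1}, MinSize1_{i+1} and MaxSize1_{i+1} from face FR_i:
    expand FR_i by lam on every side, and scale FR_i by alpha/beta for the
    minimum/maximum search-window sizes."""
    dr = Rect(round(fr.x - lam * fr.width),
              round(fr.y - lam * fr.height),
              round((1 + 2 * lam) * fr.width),
              round((1 + 2 * lam) * fr.height))
    min_size = (round(alpha * fr.width), round(alpha * fr.height))
    max_size = (round(beta * fr.width), round(beta * fr.height))
    return dr, min_size, max_size

fr = Rect(100, 80, 120, 120)          # FR_i detected in frame i
dr, min_size, max_size = next_detection_params(fr)
print(dr)        # Rect(x=52, y=32, width=216, height=216)
print(min_size)  # (102, 102)
print(max_size)  # (138, 138)
```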
Further, during actual detection DR_{i+1} may exceed PR_{i+1}, the image rectangle of the next frame; in that case the face-detection target region must be corrected to a feasible DR_{i+1} by taking the intersection of DR_{i+1} and PR_{i+1} as the face-detection target region of frame i+1, i.e. DR_{i+1} = DR_{i+1} ∩ PR_{i+1}.
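The correction step is a plain rectangle intersection, sketched here in Python with rectangles as (x, y, width, height) tuples (an illustration, not the patent's code):

```python
def intersect(a, b):
    """Return a ∩ b; width/height are clamped to 0 when there is no overlap."""
    x1 = max(a[0], b[0])
    y1 = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    return (x1, y1, max(0, x2 - x1), max(0, y2 - y1))

pr = (0, 0, 640, 480)        # image rectangle PR_{i+1}
dr = (-20, 400, 200, 200)    # predicted region spilling outside the image
print(intersect(dr, pr))     # (0, 400, 180, 80)
```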
Further, the eye detection uses an adaptive fast eye-detection algorithm. Let ER_i be the eye-detection target region in frame i of the video file, determined adaptively from FR_i and the "three courts, five eyes" rule of facial proportions; ER_i has location parameters X and Y and size parameters Width and Height, and its adaptive functional relationship with FR_i is determined accordingly.
The minimum search window MinSize2_i and maximum search window MaxSize2_i for eye detection are then determined adaptively from the eye-detection region ER_i.
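The adaptive formulas deriving ER_i from FR_i appear only as images in the original; as a rough, assumption-laden sketch of the "three courts, five eyes" idea (Python for brevity; the fractions below are illustrative proportions, not the patent's exact coefficients):

```python
def eye_search_region(fx, fy, fw, fh):
    """Derive an eye search region (x, y, w, h) from the face rectangle
    FR_i = (fx, fy, fw, fh), assuming the eyes sit in a horizontal band
    around the upper third of the face, away from its outer margins."""
    return (fx + fw // 8,    # trim the outer margins of the face
            fy + fh // 4,    # eyes lie below the top "court"
            fw * 3 // 4,     # central three quarters of the width
            fh // 4)         # a band one quarter of the face high

print(eye_search_region(100, 80, 120, 120))  # (115, 110, 90, 30)
```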
Further, under particular circumstances eye positions are inferred and the data are checked by an adaptive intelligent algorithm, specifically as follows: let LER_i and RER_i be the left-eye and right-eye image regions detected within ER_i; the reference information for the adaptive intelligent inference and verification of the eyes in frame q comprises LER_p, RER_p, FR_p and FR_q, where p is the largest frame index before frame q at which complete eye and face information was detected, p ≤ q-1.
Let ERN_q be the number of eyes directly detected within the range of ER_q. The content of the intelligent inference and verification for frame q differs with the value of ERN_q, in the following three scenarios:
(1) If ERN_q ≥ 2, the detected eye regions are verified one by one against LER_p, RER_p, FR_p and FR_q; after rejecting the surplus regions the best two are kept, and the left-eye region LER_q and right-eye region RER_q are determined from their relative positions.
(2) If ERN_q = 1, the eye region is checked against LER_p, RER_p, FR_p and FR_q to determine whether the directly detected region is the left eye LER_q or the right eye RER_q; after the check, the other eye region is inferred from this one within the range of ER_q.
(3) If ERN_q = 0, the left-eye region LER_q and right-eye region RER_q are inferred directly within the range of ER_q from LER_p, RER_p, FR_p and FR_q.
Further, the eye regions LER'_q and RER'_q inferred adaptively from LER_p, RER_p, FR_p and FR_q are each obtained from this reference information scaled by s_{q,p}, the scale factor of the eye regions in frame q relative to frame p:
s_{q,p} = (FR_q.Width / FR_p.Width + FR_q.Height / FR_p.Height) / 2 (2-18).
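Formula (2-18) is given above, but the per-eye inference formula itself is not reproduced in this text; mapping each previous eye rectangle through the face-to-face offset scaled by s_{q,p} is therefore an assumption in the following sketch (Python for brevity; rectangles are (x, y, width, height) tuples):

```python
def scale_factor(fr_q, fr_p):
    # formula (2-18): mean of the width and height ratios of the faces
    return (fr_q[2] / fr_p[2] + fr_q[3] / fr_p[3]) / 2

def infer_eye(eye_p, fr_p, fr_q):
    """Assumed form of the inference: keep the eye's offset from the face
    origin, scaled by s_{q,p}, and scale its size the same way."""
    s = scale_factor(fr_q, fr_p)
    return (round(fr_q[0] + s * (eye_p[0] - fr_p[0])),
            round(fr_q[1] + s * (eye_p[1] - fr_p[1])),
            round(s * eye_p[2]),
            round(s * eye_p[3]))

fr_p = (100, 80, 120, 120)      # face in reference frame p
fr_q = (110, 90, 132, 132)      # face moved and grew by 10% in frame q
ler_p = (120, 115, 30, 18)      # left-eye rectangle in frame p
print(scale_factor(fr_q, fr_p))
print(infer_eye(ler_p, fr_p, fr_q))
```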
Further, interval face recognition checks the identity of the target person according to changes in the face location and size parameters: when a trigger condition is met, the Recognize method of EmguCV's EigenObjectRecognizer class is called to determine the identity corresponding to the face region. The trigger conditions requiring face recognition after a face is detected in frame i+1 are:
(1) No face region was detected in the frame preceding the current frame, i.e. DR_{i+1} = PR_i, indicating that the detected face region belongs to a person newly entering the image;
(2) The face rectangle FR_{i+1} detected in frame i+1 fails to satisfy simultaneously the stability inequalities parameterised by ω and σ, where 0 < ω ≤ 0.4, 0 < σ ≤ 0.15, FR_i is the face rectangle detected in frame i, and FR_{i+1} is the face rectangle detected in frame i+1.
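The inequalities of trigger condition (2) appear only as an image in the original; a plain reading, with ω bounding the relative position change and σ the relative size change of the face rectangle, can be sketched as follows (hypothetical, in Python for brevity):

```python
def needs_recognition(fr_i, fr_j, omega=0.4, sigma=0.15):
    """True when FR_{i+1} (= fr_j) has moved or resized too much relative
    to FR_i, so interval face recognition must be triggered."""
    xi, yi, wi, hi = fr_i
    xj, yj, wj, hj = fr_j
    stable = (abs(xj - xi) <= omega * wi and
              abs(yj - yi) <= omega * hi and
              abs(wj - wi) <= sigma * wi and
              abs(hj - hi) <= sigma * hi)
    return not stable

print(needs_recognition((100, 80, 120, 120), (104, 82, 124, 124)))  # False
print(needs_recognition((100, 80, 120, 120), (200, 80, 120, 120)))  # True
```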
Further, when the target person leaves the camera's field of view, face detection can be frame-skipped during image processing. The trigger condition for frame-skipped face detection is that K consecutive frames fail to detect a face region, with the parameter K taking a value in the range [5, 25].
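The frame-skipping trigger can be sketched directly (Python for brevity; the detection sequence below is illustrative):

```python
def skip_triggered(detections, k=5):
    """detections: per-frame booleans (face found?). True once K consecutive
    frames fail to detect a face, with K in [5, 25]."""
    misses = 0
    for found in detections:
        misses = 0 if found else misses + 1
        if misses >= k:
            return True
    return False

print(skip_triggered([True] + [False] * 5))                      # True
print(skip_triggered([True, False, False, True, False, False]))  # False
```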
The invention has the following beneficial effects:
1. Face detection and eye detection based on EmguCV are robust: faces and eye regions are still detected accurately under slight posture shifts and partial occlusion of the target;
2. The fast adaptive face-detection algorithm adjusts the detection region and parameters according to the face locations and sizes detected in consecutive frames, improving detection speed to the greatest extent while preserving detection accuracy;
3. Within the eye search region determined from the face region, adaptive fast eye detection and the intelligent inference algorithm robustly find both eye regions in different situations; the eyelid distance of both eyes is obtained after processing the binocular image regions, and completing the missing eye regions in the image improves the accuracy and robustness of eye detection;
4. Differentiated interval face recognition solves the identity-check problem after face detection, improving image-processing efficiency to the greatest extent while keeping the processing target accurate;
5. The fast frame-skipping algorithm skips face detection while the target person is outside the camera's field of view, effectively improving overall image-processing efficiency.
Description of the drawings
Fig. 1 is a schematic diagram of the relationship between the image rectangle, the face-detection target region and the face rectangle in the adaptive processing algorithm provided by the invention;
Fig. 2 shows the change of the face rectangle as the target moves along the X-axis;
Fig. 3 shows the change of the face rectangle as the target moves along the Z-axis;
Fig. 4 shows the change of the face rectangle as the target moves along both the X-axis and the Z-axis;
Fig. 5 shows the change of the face region as the target moves back and forth along the Y-axis;
Fig. 6 illustrates how the same size and displacement appear differently in the image at different distances;
Fig. 7 compares the overall face-detection time under the different modes;
Fig. 8 shows the "three courts, five eyes" proportions of a face image;
Fig. 9 illustrates the triggering, continuous triggering and recovery to normal processing of frame-skipped face detection.
1: FR_i, the face rectangle detected in the image; 2: DR_i, the face-detection target region; 3: PR_i, the image rectangle; O: video-capture device; 4: FF mode; 5: AF mode; 6: FA mode; 7: AA mode; 8: BS mode; 9: ear; 10: eyes; 11: nose; 12: mouth.
Specific embodiment
The present invention is further elaborated below with reference to the accompanying drawings and specific embodiments.
The present invention provides an adaptive image-processing algorithm for train dispatcher fatigue monitoring, developed on the VS2010 platform by calling EmguCV from C# for secondary development, mainly comprising the following:
(1) Face detection and eye detection
FaceHaar, the face-detection instance of the AdaBoost cascade classifier CascadeClassifier, is obtained by loading the face classifier haarcascade_frontalface_alt2.xml; EyeHaar, the eye-detection instance of CascadeClassifier, is obtained by loading the eye classifier haarcascade_mcs_righteye.xml. Calling the DetectMultiScale function then gives:
Faces = FaceHaar.DetectMultiScale(Image1, SF1, MinNH1, MinSize1, MaxSize1) (2-1)
Eyes = EyeHaar.DetectMultiScale(Image2, SF2, MinNH2, MinSize2, MaxSize2) (2-2)
Wherein, Faces is the array of face rectangles returned by face detection, of type Rectangle[]; each element holds the location and size of one face;
Eyes is the array of eye rectangles returned by eye detection, of type Rectangle[]; each element holds the location and size of one eye;
DetectMultiScale is the multi-scale detection method of the CascadeClassifier class, which returns the set of regions of the specific target object in the input image;
Image1 and Image2 are the image objects for face detection and eye detection respectively, of type Image<Gray, byte>. Because the eyes are geometrically contained in the face region, eye detection is carried out within the face region obtained by face detection, i.e. Image2 is the image region corresponding to an element of Faces; if no face can be detected, eye detection is skipped;
SF1 and SF2 are the scale factors of face detection and eye detection respectively, i.e. the ratio between the search-window sizes of two successive scans; the default value 1.1 means each search window is enlarged by 10%, and the value can be set as needed;
MinNH1 and MinNH2 are min_neighbors, the minimum numbers of neighbouring rectangles constituting a face-detection or eye-detection target respectively; candidate targets composed of fewer small rectangles than min_neighbors are all excluded. The default value is 3; a value of 0 makes the function return all candidate rectangles without merging, a setting commonly used for user-defined combination of detection results;
MinSize1 and MaxSize1 are the minimum and maximum sizes of the face rectangle, of type Size, which jointly limit the range of the face region;
MinSize2 and MaxSize2 are the minimum and maximum sizes of the eye rectangle, of type Size, which jointly limit the range of the eye region;
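The DetectMultiScale calls above belong to EmguCV's CascadeClassifier API. To make the window semantics concrete (start at MinSize, enlarge by the scale factor until MaxSize), here is a self-contained toy loop, in Python rather than the patent's C#, with a stand-in classifier; everything in it is illustrative and is not the EmguCV implementation.

```python
def detect_multiscale(score_window, image_size, sf, min_size, max_size, step=8):
    """Slide a square window over image_size = (W, H), growing it by sf each
    pass; collect the windows the stand-in classifier accepts."""
    hits = []
    size = min_size
    while size <= max_size and size <= min(image_size):
        for y in range(0, image_size[1] - size + 1, step):
            for x in range(0, image_size[0] - size + 1, step):
                if score_window(x, y, size):
                    hits.append((x, y, size, size))
        size = int(size * sf) + 1  # ensure the window actually grows
    return hits

# stand-in "classifier": accept windows containing the point (60, 60)
def inside(x, y, s):
    return x <= 60 < x + s and y <= 60 < y + s

faces = detect_multiscale(inside, (128, 128), sf=1.1, min_size=24, max_size=48)
print(len(faces) > 0, faces[0])
```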
(2) Face recognition
Face recognition is realised by calling the Recognize method of EmguCV's EigenObjectRecognizer class. Based on the face regions obtained by face detection, the identity of the target person is discriminated from facial features: the recognition process traverses the face regions detected in the current frame until it finds the one belonging to the target person, and then proceeds to eye detection and eyelid-distance calculation. The key steps are:
recognizer = new EigenObjectRecognizer(Images, Labels, DistanceThreshold, termCrit) (2-3)
Name = recognizer.Recognize(result).Label (2-4)
EigenObjectRecognizer uses PCA-based target labelling;
recognizer is an instance of the EigenObjectRecognizer class, whose Recognize method returns the identification of the specific object;
Images is the array of face-recognition training images, of type Image<Gray, byte>; all images have the same size and are normalised by histogram, and Images is obtained by prior manual training;
Labels is the array of identity labels corresponding to the training images, of type string; its elements are in one-to-one correspondence with the training images and are specified when Images is trained;
DistanceThreshold is the feature-distance threshold: the larger the value, the harder recognition becomes but the higher the recognition precision;
termCrit is the face-recognition training criterion, of type MCvTermCriteria;
Name is the identity label returned by recognition and is an element of Labels.
Because the target person's workplace is open, the person may briefly leave the video-capture area during work, and other people may appear in it. For any given frame of the captured video, face detection may therefore yield three kinds of result: (1) no face region; (2) one face region, belonging either to the target person or to someone else; (3) several face regions, which may or may not include the target person.
Face recognition discriminates the identity of the target person from facial features, based on the face regions obtained by detection. The recognition process traverses the face regions detected in the current frame until the one belonging to the target person is found, and then proceeds to eye detection and eyelid-distance calculation.
Face recognition ensures that the eyelid-spacing data obtained by image processing belong to the specific target person, so that the accuracy of the data tracking the development of that person's fatigue is not affected.
Once the basic face-detection function is realised, the rate and precision requirements of fatigue monitoring must be further satisfied. The face detection therefore uses the fast adaptive face-detection algorithm based on inter-frame constraints: according to the face locations and sizes detected in consecutive frames, the detection region and parameters of face detection are adjusted adaptively, improving detection speed to the greatest extent while preserving detection accuracy.
Within the face-detection region, the search window starts at size MinSize1 and scans in sequence; if no face is detected the window is enlarged by a factor of SF1, and the cycle repeats until a face is detected or the window size reaches MaxSize1. Let i be the frame index of image processing, PR_i the image rectangle, DR_i the face-detection target region and FR_i the detected face rectangle; the relationship between PR_i, DR_i and FR_i is shown in Fig. 1, and:
MinSize1_i ≤ FR_i.Size ≤ MaxSize1_i (2-6)
The face video is captured at 25 frames/s, so consecutive frames are 0.04 s apart; the face location and size parameters change gradually, and this gradual change is finely recorded by the frame sequence. The detection result of a single frame directly reflects the face location and size, while the results over consecutive frames further contain the trend of their change, providing an effective reference for detecting the next frame.
Using the face location and size parameters detected in consecutive frames, the location and size parameters of the next frame's detection region DR_{i+1}, together with MinSize1_{i+1} and MaxSize1_{i+1}, are determined adaptively: by locating the detection region accurately, the size of DR_{i+1} and MaxSize1_{i+1} are minimised and MinSize1_{i+1} is maximised, shrinking the detection region and reducing the number of detection passes as far as possible and thereby raising the face-detection rate.
Let the face-detection target region of the next frame be DR_{i+1}, with window sizes MinSize1_{i+1} and MaxSize1_{i+1}. When no face is detected in frame i, DR_{i+1}, MinSize1_{i+1} and MaxSize1_{i+1} take their initial default values; when a face is detected in frame i, the detection parameters of frame i+1 are determined adaptively from FR_i. Let f_1, f_2 and f_3 denote the adaptive functional relationships between DR_{i+1}, MinSize1_{i+1}, MaxSize1_{i+1} and FR_i:
DRi+1 = f1(FRi), 1 ≤ i ≤ M-1, i ∈ N (2-7)
MinSize1i+1 = f2(FRi), 1 ≤ i ≤ M-1, i ∈ N (2-8)
MaxSize1i+1 = f3(FRi), 1 ≤ i ≤ M-1, i ∈ N (2-9)
Wherein, M is the number of image frames of current video file.
Let λ be the search-region expansion coefficient; the face detection target region DRi+1 of frame i+1 then has position parameters X and Y and size parameters Width and Height. The adaptive function f1 is expressed by the following formula:
During actual detection, DRi+1 may extend beyond PRi+1, where PRi+1 is the image rectangle of the next frame. In that case the face detection target region DRi+1 must be corrected to a feasible DR′i+1 as the situation requires, i.e. the intersection of DRi+1 and PRi+1 is taken as the face detection target region of frame i+1: DR′i+1 = DRi+1 ∩ PRi+1.
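Formula (2-10) for f1 appears only as an image in the source, so the following Python sketch (not the patent's C#/EmguCV code) implements the rule as described: expand the detected face rectangle FRi outward by λ times the face width on each side, then intersect with the image rectangle PRi+1. The `Rect` type and function name are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int; y: int; w: int; h: int

def next_detection_region(fr: Rect, frame: Rect, lam: float = 0.4) -> Rect:
    """Expand FR_i by lam * face width on every side (the described 40%
    outward expansion), then clamp to the image rectangle PR_{i+1} so the
    region stays feasible (DR' = DR ∩ PR)."""
    margin = int(lam * fr.w)
    x0 = max(frame.x, fr.x - margin)
    y0 = max(frame.y, fr.y - margin)
    x1 = min(frame.x + frame.w, fr.x + fr.w + margin)
    y1 = min(frame.y + frame.h, fr.y + fr.h + margin)
    return Rect(x0, y0, x1 - x0, y1 - y0)
```

With a 100×120 face at (100, 100) in a 640×480 frame, the margin is 40 px and the region stays fully inside the frame; a face near the top-left corner gets clipped at the frame border.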
Preferably, λ = 0.4 in the above formula; the detailed analysis is as follows:
Due to work requirements, the target may be displaced left/right (X axis), forward/backward (Y axis) or up/down (Z axis); displacement may occur in any one of the three directions, in two of them, or in all three. The position of the video capture device remains fixed during acquisition, so displacement of the target causes corresponding changes in the position or size of the captured face. Movement along the X and Z axes changes the face position, and the corresponding face region may appear anywhere within the positive and negative maximum displacement along these two directions, as shown in Figures 2, 3 and 4 respectively.
Movement of the target back and forth along the Y axis affects the position and size of the face region in the image simultaneously: as the target moves forward, the face is closer to the camera and the corresponding face image region is larger; conversely, the farther the face is from the camera, the smaller the face image. The Y-axis back-and-forth movement of the target is illustrated in Figure 5: if the distance from the target's face to the camera changes by a factor of N, the length and width of the detected face image rectangle change by a factor of 1/N.
Under normal circumstances a person moves at roughly 1 m/s; the change of position over 1 s is recorded across 25 consecutive frames, so the maximum actual average displacement of the face between two adjacent frames is about 4 cm. For the same target the actual face size remains constant: the farther it is from the camera, the smaller the detected face region, and the smaller the image-plane change produced by the same actual displacement; conversely, the larger the detected face region, the larger the image-plane change produced by the same displacement. As shown in Figure 6, with OE = 2·OA, regions of equal physical size at ABCD and EFGH appear in the image as A1B1C1D1 and E1F1G1H1 respectively, and displacements of equal physical scale appear in the image as A1A2 and E1E2 respectively, where A1B1 = 2·E1F1 and A1A2 = 2·E1E2.
The actual size and displacement of the target's face are objective quantities, while the size and displacement of the face region in the image scale up or down in proportion; determining the next frame's face detection region from the face position and size detected in the current frame is therefore markedly adaptive and highly efficient. Constrained by the height and width of the target's work surface, the distance between the target and the video capture device is never less than 40 cm, so Y-axis back-and-forth movement changes the detected face region size between adjacent frames by no more than 10%. The average face size is about 11 cm × 18 cm, so the face displacement along the X and Z axes between adjacent frames is normally below 40% of the face width. Considering all three directions of displacement together, expanding the current frame's face region outward by 40% of the face width in each of the four directions normally suffices as the next frame's face detection region.
Analysis of the train dispatcher's Y-axis position and back-and-forth movement speed shows that the size of the same face in adjacent frames essentially changes by no more than 10%. Hence FRi, besides determining DRi+1 adaptively, also provides a reference for the size of FRi+1: the minimum search window MinSize1i+1 and maximum search window MaxSize1i+1 describe the lower and upper bounds on the size of FRi+1 during face detection. Adaptively maximizing MinSize1i+1 and minimizing MaxSize1i+1 from FRi shrinks the feasible size range of FRi+1 as far as possible and effectively improves detection speed.
Let α and β denote the scaling factors of MinSize1i+1 and MaxSize1i+1 relative to the size of FRi; the functions f2 and f3 can then be expressed by formulas (2-11) and (2-12) respectively:
While taking the amplitude of face-size variation between adjacent frames into account, the closer α and β both approach 1, the faster the face detection of frame i+1. Allowing a 5% margin on top of the 10% scaling, the preferred values are 0.85 and 1.15 respectively.
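Formulas (2-11)/(2-12) are images in the source; per the description they scale the current face size by α and β to obtain the next frame's search windows. A minimal sketch under that reading (function name and tuple representation are illustrative):

```python
def search_windows(face_size, alpha=0.85, beta=1.15):
    """Next-frame Haar search windows scaled from the detected face size:
    MinSize1_{i+1} = alpha * FR_i.Size, MaxSize1_{i+1} = beta * FR_i.Size
    (preferred alpha = 0.85, beta = 1.15 per the text)."""
    w, h = face_size
    return (round(alpha * w), round(alpha * h)), (round(beta * w), round(beta * h))
```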
A face detection timing experiment under different modes was carried out: a sample video was selected at random for a face detection test, 650 frames of video were divided into 13 groups, and each group of 50 consecutive frames underwent image capture, preprocessing and face detection. The face detection times under the different modes are shown in Figure 7.
Here AA means DRi+1, MinSize1i+1 and MaxSize1i+1 are all determined adaptively from FRi; AF means MinSize1i+1 and MaxSize1i+1 are determined adaptively from FRi while DRi+1 covers the full range (DRi+1 = PRi+1); FF means MinSize1i+1 and MaxSize1i+1 are fixed and DRi+1 covers the full range; FA means MinSize1i+1 and MaxSize1i+1 are fixed while DRi+1 is determined adaptively from FRi; BS denotes the base processing mode, performing only frame capture and preprocessing.
The average face detection times of the AA, AF, FF, FA and BS modes differ greatly; the mean processing times over 50 frames are 317 ms, 435 ms, 724 ms, 405 ms and 217 ms respectively. The overall face detection rate of the AA mode is more than twice that of the FF mode; after subtracting base processing such as image capture, image preprocessing and analysis (the BS-mode workload), the face detection part alone is more than five times faster under AA than under FF. Evidently, adaptive face detection improves efficiency significantly.
In image processing, eye detection is the basic premise of the subsequent eye closure judgement; the detected face region is normally used as the eye detection range, improving the eye detection rate by reducing the search range. The spatial distribution of facial organs usually satisfies the "three courts and five eyes" proportion rule. The "three courts" refer to the length proportions of the face, dividing its length into three equal parts: hairline to brow ridge, brow ridge to the base of the nose, and base of the nose to the chin. The "five eyes" refer to the width proportions, dividing the face width from the left hairline to the right hairline into five eye-lengths, with the outer corners of the eyes located at the ends of the second and fourth eye-lengths, as shown in Figure 8. There, the ears (9) extend from the brows down to the nose tip; the eyes (10) lie at 1/2 of the face height; the base of the nose (11) lies halfway between the eyes and the chin, with a width equal to the spacing between the two eyes; and the mouth (12) lies one third of the way from the nose (11) to the chin.
Based on the "three courts and five eyes" spatial constraints of the face, the eye detection range can be further narrowed adaptively within the face; under different positions, postures and scales the eye detection region changes adaptively with the face region while still completely containing the eye regions. The eye detection uses an adaptive fast eye detection algorithm. Let ERi be the eye detection target region of frame i of the video file, determined adaptively from FRi and the "three courts and five eyes" rule; the position parameters of ERi are X and Y, its size parameters are Width and Height, and the adaptive functional relationship between ERi and FRi is as follows:
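The formula for ERi is likewise an image in the source, so the fractions below are an assumption, not the patent's exact values: since Figure 8 places the eyes at about 1/2 of the face height, a horizontal band around that height over the full face width safely contains both eyes.

```python
def eye_region(face):
    """Illustrative eye detection region ER_i derived from the face rect
    FR_i = (x, y, w, h) using the 'three courts and five eyes' proportions:
    a band from 30% to 60% of the face height, full face width. The 0.30
    fractions are assumed for illustration."""
    x, y, w, h = face
    return (x, y + round(0.30 * h), w, round(0.30 * h))
```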
Based on the size relationship between the eyes and the face, the eye detection search windows are verified with sample videos, and the eye detection minimum search window MinSize2i and maximum search window MaxSize2i are then determined adaptively from the eye detection region ERi, maximizing the detection rate while guaranteeing detection accuracy. The adaptive functional relationships between MinSize2i, MaxSize2i and ERi are given by the following formulas:
Under specific circumstances, eye position inference and data checking are carried out by an adaptive intelligent algorithm. The adaptive intelligent inference and checking algorithm for eye detection takes the physiological characteristics of the eyes and face as its theoretical basis, specifically:
(1) the stability of the relative position and scale relationship between the eyes and the face;
(2) the near-identical size of the two eyes;
(3) the synchronism of the position and scale changes of the face and the eyes;
(4) the synchronism of the closure degree of the two eyes over time.
These physiological rules of the eyes and face are contained in the captured video images and emerge through the detected face and eye information; the directly detected face and eye information (position and scale) is therefore the direct basis for the adaptive intelligent inference and checking of subsequent eye detection.
Specifically: let LERi and RERi be the left-eye and right-eye image regions detected within ERi. The reference information for the adaptive intelligent inference and checking of the eyes in frame q comprises LERp, RERp, FRp and FRq, where p is the largest frame index before frame q for which complete eye information and face information were detected, p ≤ q-1;
Let ERNp be the number of eyes directly detected within ERp. The intelligent inference and checking content for the eyes of frame q differs according to the value of ERNp, in the following three scenarios:
(1) If ERNp ≥ 2, the detected eye regions are checked one by one against LERp, RERp, FRp and FRq; surplus eye regions are rejected, the two best eye regions are retained, and the left-eye image region LERq and right-eye image region RERq are determined from their relative positions;
(2) If ERNp = 1, an eye-region check is performed using LERp, RERp, FRp and FRq to determine whether the directly detected eye region is the left-eye region LERq or the right-eye region RERq; after the check, the other eye region is inferred within ERq based on this eye region;
(3) If ERNp = 0, the left-eye region LERq and right-eye region RERq are inferred directly within ERq from LERp, RERp, FRp and FRq.
The adaptive intelligent inference and checking content differs across scenarios, but the underlying principle is essentially the same: with the valid face and binocular information of the previous reference frame, the eye-region parameters LERq′.X, LERq′.Y, LERq′.Width, LERq′.Height and RERq′.X, RERq′.Y, RERq′.Width, RERq′.Height are calculated; the eye regions are estimated within the face region of the current frame from these parameters, and the directly detected eye regions are checked and inferred accordingly.
It is specific as follows:
The eye regions LERq′ and RERq′ inferred adaptively from LERp, RERp, FRp and FRq are given by the following formulas:
where sq,p is the zoom factor of the eye regions in frame q relative to those in frame p:
sq,p = (FRq.Width/FRp.Width + FRq.Height/FRp.Height)/2 (2-18).
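The inference formulas (2-16)/(2-17) are images in the source, so the sketch below assumes the natural reading of the described principle: translate by the face position and scale the eye offsets and sizes by the zoom factor sq,p of formula (2-18). Rectangles are (x, y, w, h) tuples; the function name is illustrative.

```python
def infer_eye_region(eye_p, fr_p, fr_q):
    """Estimate the frame-q eye region from the frame-p eye region: the
    eye's offset inside the face and its size are scaled by s_{q,p}
    (formula (2-18)), then re-anchored at the frame-q face corner."""
    ex, ey, ew, eh = eye_p
    fx_p, fy_p, fw_p, fh_p = fr_p
    fx_q, fy_q, fw_q, fh_q = fr_q
    s = (fw_q / fw_p + fh_q / fh_p) / 2  # zoom factor s_{q,p}
    return (round(fx_q + s * (ex - fx_p)),
            round(fy_q + s * (ey - fy_p)),
            round(s * ew), round(s * eh))
```

For instance, if the face shrinks to half its previous size, the inferred eye region's offset and size are halved as well.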
Accordingly, after the eyes are detected within ERq, the detected eye regions are checked by comparing their position and scale relationships against LERq′ and RERq′, and the position of any eye region not directly detected is estimated; completing the eye regions in the image in this way improves the accuracy and robustness of eye detection.
An interval face recognition algorithm checks the identity of the target according to changes in the face position and size parameters. With a train dispatcher as the target, the openness of the train dispatching command site means that the faces of several train dispatchers, or of dispatchers other than the target, may appear in a frame. Face recognition technology can check personnel identity from facial features, determining the dispatcher on duty at the train dispatching desk among the detected face regions and thereby eliminating interference from other dispatchers. Running face recognition on the face regions of every frame would determine the target identity most accurately, but would dramatically increase the image processing workload and reduce the overall processing speed of the fatigue monitoring system.
The position and size of the face region change gradually across adjacent frames, with a corresponding upper bound on the rate of change. Once face recognition has determined the target identity, the identity can be confirmed from the face position and size changes for subsequent consecutive frames in which a face is detected. Face recognition to determine identity therefore need not run frame by frame during image processing; it suffices to run it whenever the identity cannot be confirmed from the face position and size changes.
The interval face recognition algorithm sets trigger conditions for face recognition; once a condition is triggered, the Recognize method of the EigenObjectRecognizer class in EmguCV is called to determine the personnel identity corresponding to the face region. After a face image is detected in frame i+1, the trigger conditions for face recognition include:
(1) the previous frame failed to detect a face region, i.e. DRi+1 = PRi, indicating that the detected face region belongs to a person newly entering the image range;
(2) the face rectangle FRi+1 detected in frame i+1 fails to satisfy all of the following simultaneously:
where 0 < ω ≤ 0.4 and 0 < σ ≤ 0.15; FRi is the face rectangle detected in frame i and FRi+1 the face rectangle detected in frame i+1. Formula (2-19), grounded in the adaptive face detection analysis under the inter-frame constraints above, is the condition under which a face region passes the identity check based on face position and size change. Within these value ranges, the smaller ω and σ are, the stricter the position/size-based identity check and the more frames require face recognition.
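Formula (2-19) itself is an image in the source; the check below is one plausible reading of the described condition (inter-frame position shift within ω of the face dimensions and size change within σ), with illustrative names and default values from the stated ranges.

```python
def needs_recognition(fr_i, fr_ip1, omega=0.4, sigma=0.15):
    """Interval face recognition trigger (assumed reading of (2-19)):
    identity carries over only when the position shift stays within
    omega * face width/height and the size change within sigma; otherwise
    face recognition must be re-run. Rectangles are (x, y, w, h)."""
    x0, y0, w0, h0 = fr_i
    x1, y1, w1, h1 = fr_ip1
    pos_ok = abs(x1 - x0) <= omega * w0 and abs(y1 - y0) <= omega * h0
    size_ok = abs(w1 - w0) <= sigma * w0 and abs(h1 - h0) <= sigma * h0
    return not (pos_ok and size_ok)  # True -> call Recognize again
```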
During routine dispatching command work, only the dispatcher on duty is present at the train dispatching desk most of the time. Interval face recognition solves the post-detection identity check in a differentiated way, improving image processing efficiency as far as possible while ensuring the accuracy of the processing target.
In addition, a train dispatcher may briefly leave the camera's field of view during dispatching command work, so that no face can be detected over a continuous period of frames. Under the adaptive face detection algorithm with inter-frame constraints, when no face is detected in the current frame the next frame's detection range expands to the whole image, greatly increasing the single-frame face detection time. When a train dispatcher leaves the field of view, the absence usually lasts for some time; face detection on the images captured during this period is wasted work, while the single-frame processing time is much longer than when the dispatcher is within the frame.
The train dispatcher video is recorded at 25 frames/s; skipping face detection while the dispatcher is out of the field of view can therefore effectively improve overall image processing efficiency. When the target leaves the field of view, frame-skipping of face detection is applied during image processing; its trigger condition is that K consecutive frames fail to detect a face region, with the parameter K in the range [5, 25].
When face-detection frame-skipping is triggered, the number of frames skipped in a row can be set as processing requires; a fixed integer multiple of 25 within [100, 250] may be used, corresponding to an actual span of 4-10 s, which does not affect the assessment of the train dispatcher's fatigue level. When frame-skipping is triggered several times in succession, the number of consecutively skipped frames can be increased step by step, but should not exceed 1000 frames. The triggering, repeated triggering and recovery process of face-detection frame-skipping is shown in Figure 9.
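The frame-skipping logic above can be sketched as a small controller. The doubling step on repeated triggers is an assumption (the text only says the skip count is "increased step by step" up to 1000 frames); K, the base skip of 125 frames (5 s at 25 fps) and all names are illustrative.

```python
class SkipController:
    """Face-detection frame-skipping sketch: after k consecutive misses,
    skip a run of frames (a multiple of 25 in [100, 250]); on repeated
    triggers the run grows (doubling assumed) but is capped at 1000
    frames; a detected face restores normal processing."""
    def __init__(self, k=10, base_skip=125, cap=1000):
        self.k, self.base_skip, self.cap = k, base_skip, cap
        self.misses = 0
        self.skip = base_skip

    def on_frame(self, face_found: bool) -> int:
        """Return how many upcoming frames to skip (0 = keep detecting)."""
        if face_found:
            self.misses = 0
            self.skip = self.base_skip  # recover to normal processing
            return 0
        self.misses += 1
        if self.misses >= self.k:
            self.misses = 0
            run = self.skip
            self.skip = min(self.cap, self.skip * 2)
            return run
        return 0
```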
The present invention is not limited to the optional embodiment above; under the inspiration of the present invention anyone may derive products of various other forms, but any technical solution that, despite variations in shape or structure, falls within the scope defined by the claims of the present invention remains within the protection scope of the present invention.

Claims (9)

1. A train dispatcher fatigue-monitoring adaptive image processing algorithm, characterized in that, based on the VS2010 development platform, EmguCV is called for secondary development, mainly using the C# language, comprising:
(1) Face detection and eye detection
FaceHaar is obtained by loading the face classifier haarcascade_frontalface_alt2.xml, and EyeHaar is obtained by loading the eye classifier haarcascade_mcs_righteye.xml; the DetectMultiScale function is called to obtain, respectively:
Faces = FaceHaar.DetectMultiScale(Image1, SF1, MinNH1, MinSize1, MaxSize1) (2-1)
Eyes = EyeHaar.DetectMultiScale(Image2, SF2, MinNH2, MinSize2, MaxSize2) (2-2)
wherein DetectMultiScale is the multi-scale detection method of the CascadeClassifier class, which obtains the set of regions of the specific target object in the input image;
Image1 and Image2 denote the image objects for face detection and eye detection respectively, of type Image<Gray, byte>;
SF1 and SF2 denote the zoom factors of face detection and eye detection respectively;
MinNH1 and MinNH2 denote the minimum number of adjacent rectangles constituting a face detection or eye detection target respectively;
MinSize1 and MaxSize1 denote the minimum and maximum size of the rectangle obtained by face detection respectively;
MinSize2 and MaxSize2 denote the minimum and maximum size of the rectangle obtained by eye detection respectively;
(2) Face recognition
Face recognition is realized by calling the Recognize method of the EigenObjectRecognizer class in EmguCV; based on the detected face region, the identity of the target object is discriminated from facial features. The face recognition process traverses the face regions obtained by face detection in the current frame until the face region belonging to the target object is found, after which subsequent eye detection and eyelid distance calculation are carried out; the formulas of the critical process are as follows:
Recognizer = new EigenObjectRecognizer(Images, Labels, DistanceThreshold, termCrit) (2-3)
Name = recognizer.Recognize(result).Label (2-4)
Images is the array of face recognition training images, of type Image<Gray, byte>;
Labels is the array of identity labels corresponding to the face recognition training images, of type string;
DistanceThreshold is the feature distance threshold;
TermCrit is the face recognition training criterion, of type MCvTermCriteria;
Name is the object identity label obtained by face recognition and is an element of Labels.
2. The train dispatcher fatigue-monitoring adaptive image processing algorithm according to claim 1, characterized in that the face detection uses a fast adaptive face detection algorithm based on inter-frame constraints; within the face detection region, the face detection search window performs sequential detection starting from size MinSize1, and if no face is detected the search window is enlarged by a factor of SF1, this loop repeating until a face is detected or the window size reaches MaxSize1; letting i be the frame index of image processing, PRi the image rectangle of frame i, DRi the face detection target region of frame i, and FRi the face rectangle detected in frame i, then:
MinSize1i≤FRi.Size≤MaxSize1i (2-6)
The face detection target region of the next frame is DRi+1, with window sizes MinSize1i+1 and MaxSize1i+1; let f1, f2 and f3 denote the adaptive functional relationships between DRi+1, MinSize1i+1, MaxSize1i+1 and FRi:
DRi+1=f1(FRi) 1≤i≤M-1,i∈N (2-7)
MinSize1i+1=f2(FRi) 1≤i≤M-1,i∈N (2-8)
MaxSize1i+1=f3(FRi) 1≤i≤M-1,i∈N (2-9)
Wherein, M is the number of image frames of current video file.
3. The train dispatcher fatigue-monitoring adaptive image processing algorithm according to claim 2, characterized in that, letting λ be the search-region expansion coefficient, the face detection target region DRi+1 of frame i+1 has position parameters X and Y and size parameters Width and Height; the adaptive function f1 is expressed by the following formula:
Letting α and β denote the scaling factors of MinSize1i+1 and MaxSize1i+1 relative to the size of FRi, the functions f2 and f3 can be expressed by formulas (2-11) and (2-12) respectively:
4. The train dispatcher fatigue-monitoring adaptive image processing algorithm according to claim 3, characterized in that, since DRi+1 may extend beyond PRi+1 during actual detection, where PRi+1 is the image rectangle of the next frame, the face detection target region DRi+1 must be corrected to a feasible DR′i+1 as the situation requires; the intersection of DRi+1 and PRi+1 is taken as the face detection target region of frame i+1, i.e. DR′i+1 = DRi+1 ∩ PRi+1.
5. The train dispatcher fatigue-monitoring adaptive image processing algorithm according to claim 1, characterized in that the eye detection uses an adaptive fast eye detection algorithm; let ERi be the eye detection target region of frame i of the video file, determined adaptively from FRi and the "three courts and five eyes" rule of the face; the position parameters of ERi are X and Y and its size parameters are Width and Height, and the adaptive functional relationship between ERi and FRi is as follows:
The eye detection minimum search window MinSize2i and maximum search window MaxSize2i are then determined adaptively from the eye detection region ERi, and the adaptive functional relationships between MinSize2i, MaxSize2i and ERi are given by the following formulas:
6. The train dispatcher fatigue-monitoring adaptive image processing algorithm according to claim 5, characterized in that eye position inference and data checking are carried out under specific circumstances by an adaptive intelligent algorithm, as follows: let LERi and RERi be the left-eye and right-eye image regions detected within ERi; the reference information for the adaptive intelligent inference and checking of the eyes in frame q comprises LERp, RERp, FRp and FRq, where p is the largest frame index before frame q for which complete eye information and face information were detected, p ≤ q-1;
Let ERNp be the number of eyes directly detected within ERp; the intelligent inference and checking content for the eyes of frame q differs according to the value of ERNp, in the following three scenarios:
(1) If ERNp ≥ 2, the detected eye regions are checked one by one against LERp, RERp, FRp and FRq; surplus eye regions are rejected, the two best eye regions are retained, and the left-eye image region LERq and right-eye image region RERq are determined from their relative positions;
(2) If ERNp = 1, an eye-region check is performed using LERp, RERp, FRp and FRq to determine whether the directly detected eye region is the left-eye region LERq or the right-eye region RERq; after the check, the other eye region is inferred within ERq based on this eye region;
(3) If ERNp = 0, the left-eye region LERq and right-eye region RERq are inferred directly within ERq from LERp, RERp, FRp and FRq.
7. The train dispatcher fatigue-monitoring adaptive image processing algorithm according to claim 6, characterized in that the eye regions LER′q and RER′q inferred adaptively from LERp, RERp, FRp and FRq are given by the following formulas:
where sq,p is the zoom factor of the eye regions in frame q relative to those in frame p:
sq,p = (FRq.Width/FRp.Width + FRq.Height/FRp.Height)/2 (2-18).
8. The train dispatcher fatigue-monitoring adaptive image processing algorithm according to claim 1, characterized in that an interval face recognition algorithm checks the identity of the target according to changes in the face position and size parameters; after a trigger condition is met, the Recognize method of the EigenObjectRecognizer class in EmguCV is called to determine the personnel identity corresponding to the face region; after a face image is detected in frame i+1, the trigger conditions for face recognition include:
(1) the previous frame failed to detect a face region, i.e. DRi+1 = PRi, indicating that the detected face region belongs to a person newly entering the image range;
(2) the face rectangle FRi+1 detected in frame i+1 fails to satisfy all of the following simultaneously:
where 0 < ω ≤ 0.4 and 0 < σ ≤ 0.15; FRi is the face rectangle detected in frame i and FRi+1 the face rectangle detected in frame i+1.
9. The train dispatcher fatigue-monitoring adaptive image processing algorithm according to claim 1, characterized in that when the target leaves the camera's field of view, frame-skipping of face detection is applied during image processing; the trigger condition of face-detection frame-skipping is that K consecutive frames fail to detect a face region, with the parameter K in the range [5, 25].
CN201810354996.4A 2018-04-19 2018-04-19 Train dispatcher fatigue monitoring image adaptive processing algorithm Active CN109299641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810354996.4A CN109299641B (en) 2018-04-19 2018-04-19 Train dispatcher fatigue monitoring image adaptive processing algorithm


Publications (2)

Publication Number Publication Date
CN109299641A true CN109299641A (en) 2019-02-01
CN109299641B CN109299641B (en) 2020-10-16


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111294524A (en) * 2020-02-24 2020-06-16 中移(杭州)信息技术有限公司 Video editing method and device, electronic equipment and storage medium
CN112733570A (en) * 2019-10-14 2021-04-30 北京眼神智能科技有限公司 Glasses detection method and device, electronic equipment and storage medium
CN113505674A (en) * 2021-06-30 2021-10-15 上海商汤临港智能科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN114821747A (en) * 2022-05-26 2022-07-29 深圳市科荣软件股份有限公司 Method and device for identifying abnormal state of construction site personnel

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593425A (en) * 2009-05-06 2009-12-02 深圳市汉华安道科技有限责任公司 A kind of fatigue driving monitoring method and system based on machine vision
CN101599207A (en) * 2009-05-06 2009-12-09 深圳市汉华安道科技有限责任公司 A kind of fatigue driving detection device and automobile
CN104408878A (en) * 2014-11-05 2015-03-11 唐郁文 Vehicle fleet fatigue driving early warning monitoring system and method
CN104866843A (en) * 2015-06-05 2015-08-26 中国人民解放军国防科学技术大学 Monitoring-video-oriented masked face detection method
CN107491769A (en) * 2017-09-11 2017-12-19 中国地质大学(武汉) Method for detecting fatigue driving and system based on AdaBoost algorithms


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IT屋: "EmguCV - Face recognition - training set using a Microsoft Access database", 《IT屋》 *
匿名者2: "OpenCV face recognition - the detectMultiScale function", 《博客园》 *


Also Published As

Publication number Publication date
CN109299641B (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN109299641A (en) A kind of train dispatcher's fatigue monitoring image adaptive Processing Algorithm
EP3767522A1 (en) Image recognition method and apparatus, and terminal and storage medium
US6879709B2 (en) System and method for automatically detecting neutral expressionless faces in digital images
WO2018188453A1 (en) Method for determining human face area, storage medium, and computer device
CN105868716B (en) A kind of face identification method based on facial geometric feature
CN109635727A (en) A kind of facial expression recognizing method and device
CN110443189A (en) Face character recognition methods based on multitask multi-tag study convolutional neural networks
CN109657583A (en) Face's critical point detection method, apparatus, computer equipment and storage medium
CN109325462B (en) Face recognition living body detection method and device based on iris
CN112784763A (en) Expression recognition method and system based on local and overall feature adaptive fusion
Linder et al. Real-time full-body human gender recognition in (RGB)-D data
CN109858375A (en) Living body faces detection method, terminal and computer readable storage medium
CN109389002A (en) Biopsy method and device
CN110084192A (en) Quick dynamic hand gesture recognition system and method based on target detection
CN109086659A (en) A kind of Human bodys' response method and apparatus based on multimode road Fusion Features
CN109977867A (en) A kind of infrared biopsy method based on machine learning multiple features fusion
CN109063626A (en) Dynamic human face recognition methods and device
CN109711309A (en) A kind of method whether automatic identification portrait picture closes one's eyes
CN111291773A (en) Feature identification method and device
CN105912126A (en) Method for adaptively adjusting gain, mapped to interface, of gesture movement
CN108174141A (en) A kind of method of video communication and a kind of mobile device
Bouhabba et al. Support vector machine for face emotion detection on real time basis
Yuan et al. Ear detection based on CenterNet
Yaseen et al. A Novel Approach Based on Multi-Level Bottleneck Attention Modules Using Self-Guided Dropblock for Person Re-Identification
Ray et al. Design and implementation of affective e-learning strategy based on facial emotion recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant