CN100462047C - Safe driving auxiliary device based on omnidirectional computer vision - Google Patents


Info

Publication number: CN100462047C
Authority: CN (China)
Application number: CNB2007100676334A
Other languages: Chinese (zh)
Other versions: CN101032405A (en)
Inventor: 汤一平
Original Assignee: 汤一平
Application filed by 汤一平
Publication of application: CN101032405A (en)
Application granted; publication of grant: CN100462047C (en)


Abstract

The safe driving auxiliary device based on omnidirectional computer vision comprises an omnidirectional vision sensor for acquiring panoramic video information inside and outside the vehicle, and a safe-driving auxiliary controller for detecting fatigued driving and warning against it. The omnidirectional vision sensor, installed to the right of the driver's seat and connected to the safe-driving auxiliary controller, detects the driver's facial state, eye state, mouth state, and the steering-wheel state, monitors the vehicle's heading, speed, and other states, and issues a warning when a fatigued-driving state is detected. The present invention detects the characteristic parameters of the fatigued-driving state comprehensively, and achieves high judgment accuracy and high measurement precision.

Description

Safe driving auxiliary device based on omnidirectional computer vision

(1) Technical Field

The invention belongs to the application of omnidirectional vision sensing, image recognition and understanding, and computer control to safe vehicle driving, and in particular relates to a safe driving auxiliary device based on omnidirectional computer vision.

(2) Background Art

Fig. 1 shows the driving model formed by the road environment, the driver, and the vehicle. Throughout the driving process the driver continuously performs perception, judgment, and action, while perception fatigue, judgment fatigue, and action fatigue act as interference factors that affect the driver's normal operation at any moment until the trip ends. Fatigue arising in any one of these stages, alone or in combination, carries the danger of driving failure.

Traffic information first passes through the driver's perception stage, in which the driver perceives the road environment through sensory organs such as the visual organs, auditory organs, and other senses. Under the influence of perception fatigue, misperception, weakened perception, or failure to perceive tends to occur. The driver resists misperception with anti-fatigue ability; if successful, the information passes from the perception stage to the judgment stage.

Likewise, because the behavior of the information-processing stage is affected by judgment fatigue, misjudgment can still occur. The organs of judgment are the central nervous system, including the cerebrum and cerebellum. Constrained by the fatigue factors acting on judgment behavior, the driver tends to misjudge, judge weakly, or fail to judge. The driver resists misjudgment with anti-fatigue ability; if successful, the process moves from the judgment stage to the action stage.

Because the behavior of the action stage is affected by action fatigue, the driver may also perform erroneous operations. If the resistance against erroneous operation succeeds, the driver can control the vehicle. Finally, the vehicle's running state and the road ahead feed back to the driver again.

Driving fatigue in the perception-judgment-action cycle described above is expressed by the fatigue state of each stage, and mainly manifests as the following 10 classes of symptoms: (1) incessant yawning and facial numbness; (2) the head feeling heavier and heavier, with repeated involuntary nodding (dozing) and difficulty keeping the head up; (3) relaxed muscles and drooping eyelids, even closed eyes; (4) blurred vision and red, dry eyes; (5) a narrowed field of view, with information repeatedly missed or misread; (6) slow movement and slow judgment; (7) inability to concentrate and declining thinking ability; (8) stiff movements and slow rhythm; (9) loss of the sense of direction, weaving left and right on the highway; (10) random changes of vehicle speed, with unstable travel speed.

Chinese invention patent (publication No. CN 1851498A) discloses a fatigue-driving detection technology; Chinese invention patent (publication No. CN 1830389A) discloses a device and method for monitoring the fatigue-driving state; Chinese utility model patent (patent No. 03218647.9) discloses a vehicle running-state recording and alarm analyzer; and Chinese utility model patent (patent No. ZL 200420072961.5) discloses a remote monitoring device for passenger-vehicle fatigue driving and overload transportation.

Many researchers abroad have also carried out research and product development on detecting the driver's fatigue state. Nikolaos P. Papanikolopoulos of the University of Minnesota developed an eye tracking and locating system in which a CCD camera placed in the vehicle monitors the driver's face. The head-position sensor developed by ASCI (Advanced Safety Concepts Inc.) measures the driver's head position: a capacitive sensor array is designed and installed on the driver's seat, each sensor outputs the distance from the driver's head to that sensor, and the trigonometric relations yield the head position in the X, Y, Z space; the head position can also be tracked in real time, and from the variation of head position over each time period the system can judge whether the driver is dozing. The steering-wheel monitor S.A.M. (steering attention monitor) developed by the U.S. company Electronic Safety Products is a sensor device that monitors abnormal steering-wheel motion and is applicable to various vehicles; the sensor does not alarm while the steering wheel moves normally, but if the steering wheel continues for 4 s without any operation, S.A.M. sounds an alarm until normal steering motion resumes. The DAS2000 Road Alert System developed by the U.S. Ellison Research Labs is a computer-controlled infrared monitor installed on the highway that warns the driver when the vehicle departs from the road centerline. In addition, some researchers install a camera at the front of the vehicle to measure the time and degree by which the vehicle departs from the white line, and warn the driver accordingly.

The above-mentioned Chinese patents lack overall analysis and synthesis across the three levels of fatigued driving, such as analyzing the driving-fatigue state and its detection on those three levels through a driving model of road environment, driver, and vehicle. Their means of detecting and acquiring driving fatigue are rather limited, so judgment can only proceed from a single symptom, causing problems such as low judgment accuracy. Their functions are rather limited, and the monitoring methods also have limitations; they lack measurement indices for objectively judging the degree of physiological fatigue. The foreign research results and products likewise address only some aspects of driving fatigue, and because the detection means differ, integrating these technologies would require combining various sensors.

(3) Summary of the Invention

In order to overcome the shortcomings of existing safe driving auxiliary devices, namely limited detection means, low judgment accuracy, and low measurement precision, the invention provides a safe driving auxiliary device based on omnidirectional computer vision that uses a single omnidirectional vision sensor to simultaneously detect the driver's facial state, eye state, mouth state, and steering-wheel state, and to monitor the vehicle's heading, speed, and other states; by comprehensively detecting the characteristic parameters of the fatigued-driving state, it achieves high judgment accuracy and improved measurement precision.

The technical solution adopted by the present invention to solve the technical problem is as follows:

A safe driving auxiliary device based on omnidirectional computer vision comprises an omnidirectional vision sensor for acquiring panoramic video information inside and outside the vehicle, and a safe-driving pilot controller for detecting various forms of fatigued driving and giving a warning when a fatigued-driving situation occurs. The omnidirectional vision sensor is installed to the right of the driver's seat inside the vehicle, and its output is connected to the safe-driving pilot controller. The omnidirectional vision sensor comprises a convex catadioptric mirror for reflecting objects in the field of view inside and outside the vehicle, a dark cone for preventing stray refraction and light saturation, a transparent cylinder, and a camera for capturing the image formed on the convex mirror surface. The convex catadioptric mirror is mounted at the top of the transparent cylinder facing downward, the dark cone is fixed at the bottom center of the catadioptric mirror, and the camera faces the catadioptric mirror upward. The safe-driving pilot controller comprises:

A detection-range division module, which divides the panoramic video information obtained from the omnidirectional vision sensor into perspective video images of the vehicle-front view, the driver's-seat view, and the steering-wheel view;

A face locating module, which locates the driver's face. Facial skin color obeys a two-dimensional Gaussian distribution in the CrCb chrominance space, and the probability density function of this skin-color model is expressed by formula (4),

f(x1, x2) = (1 / (2π|C|^(1/2))) exp{-(1/2)(X - μ)^T C^(-1) (X - μ)}    (4)

where μ = (Cr, Cb)^T = (156.560, 117.436)^T, the two components of this vector being the means of the color components Cr and Cb respectively, and C is the covariance matrix of Cr and Cb, expressed by formula (5),

C = [σ_rr²  σ_rb; σ_br  σ_bb²] = [160.130  12.143; 12.143  299.457]    (5)

where σ_rr² and σ_bb² are the variances of Cr and Cb respectively, and σ_rb and σ_br are their covariances;

According to the Gaussian skin-color model, the similarity between the color of every pixel in the face image and the skin color is calculated; the similarity formula is

P(Cr, Cb) = exp{-0.5 (X - μ)^T C^(-1) (X - μ)}    (6)

X = (Cr, Cb)^T    (7)

where X = (Cr, Cb)^T is the vector of a pixel in the Cr, Cb chrominance space, and C and μ take the same values as in formulas (4) and (5);

After the similarity values are calculated, they are normalized into gray values between 0 and 255 to obtain a gray-scale map of the driver's-seat-view image. The gray-scale map is binarized with a preset threshold, so that the skin-color region becomes pure white and the remainder becomes pure black. A horizontal projection of the image gray histogram gives the top and bottom extremes of the extracted region in the vertical direction, and a vertical projection gives the left and right extremes in the horizontal direction;

Let the face height be h and the width be w. According to the dimensional constraints of a human face, if the aspect-ratio condition 0.8 ≤ h/w ≤ 1.5 is satisfied, the region is confirmed as a face-location image;
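The face-location steps above can be sketched as follows, applying the skin-color model of formulas (4)-(7), normalization to 0..255, binarization, the two projections, and the aspect-ratio test. NumPy and separate Cr/Cb channel arrays are assumed as input; the binarization threshold of 0.4 is an illustrative value, since the text leaves the threshold as a preset parameter.

```python
import numpy as np

# Skin-color model parameters from formulas (4)-(5)
MU = np.array([156.560, 117.436])            # means of Cr, Cb
C = np.array([[160.130, 12.143],
              [12.143, 299.457]])            # covariance matrix of Cr, Cb
C_INV = np.linalg.inv(C)

def skin_similarity(cr, cb):
    """Formula (6): P(Cr,Cb) = exp{-0.5 (X-mu)^T C^-1 (X-mu)} per pixel."""
    d = np.stack([cr, cb], axis=-1) - MU     # X - mu for every pixel
    return np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, C_INV, d))

def locate_face(cr, cb, thresh=0.4):
    """Binarize the similarity map and bound the skin region by projections.
    Returns (top, bottom, left, right) or None if the h/w ratio test fails."""
    sim = skin_similarity(cr, cb)
    gray = (sim / sim.max() * 255).astype(np.uint8)   # normalize to 0..255
    mask = gray > int(thresh * 255)                   # skin -> white
    rows = mask.sum(axis=1)                           # horizontal projection
    cols = mask.sum(axis=0)                           # vertical projection
    ys, xs = np.nonzero(rows)[0], np.nonzero(cols)[0]
    if len(ys) == 0 or len(xs) == 0:
        return None
    top, bottom, left, right = ys[0], ys[-1], xs[0], xs[-1]
    h, w = bottom - top + 1, right - left + 1
    # Dimensional constraint of a face: 0.8 <= h/w <= 1.5
    if 0.8 <= h / w <= 1.5:
        return top, bottom, left, right
    return None
```

For example, a synthetic Cr/Cb image with a square skin-colored patch yields that patch's bounding box, while a region failing the 0.8 ≤ h/w ≤ 1.5 test yields None.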

A lip locating and yawn detecting module, which locates the driver's lips and detects yawning. The red pixels in the face-location image are projected horizontally and vertically to determine the mouth region: the longest distance between two adjacent troughs of the horizontal projection is the lip height, and the maximum distance between two adjacent troughs of the vertical projection is the lip width. Lip feature points are then defined for the closed and open lip states: the left and right mouth-corner points, the topmost and bottommost points of the upper-lip center, and the topmost and bottommost points of the lower-lip center. The ratio of the distance h_m, from the topmost point of the upper-lip center to the bottommost point of the lower-lip center, to the mouth width W_m is defined by the mouth model as the mouth-opening-degree parameter Doo_m, as shown in formula (32):

Doo_m = h_m / W_m    (32)

A yawn is judged to have occurred when a relatively large mouth-open state persists for some time, as shown in formula (33):

∑φ(Doo_m) ≥ β    (33)

where

φ(Doo_m) = 1 if Doo_m ≥ α, and 0 if Doo_m < α    (34)

The duration of one yawn is defined as the time from the start of the yawn to its end, expressed by formula (35):

T_y = t_2 - t_1    (35)

that is, the interval over which successive mouth-opening degrees stay greater than or equal to α. When a yawn is found, the number or duration of yawns within a period of time is accumulated by formula (36):

Total = Σ_{t}^{t+m} T_y    (36)

A fatigue-driving evaluation and alarm module, which applies a preset threshold to the number or duration of yawns within a period of time: when the recorded count or duration exceeds the set threshold, fatigued driving is judged and an alarm command is sent to the alarm device.
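The yawn-counting rule of formulas (33)-(36) can be sketched as below. The per-frame Doo_m values are assumed to come from the lip module; the thresholds α = 0.5 and β = 15 frames and the 25 fps frame rate are illustrative assumptions, not values fixed by the text.

```python
def detect_yawns(doo_m_seq, alpha=0.5, beta=15, fps=25):
    """Count yawns per formulas (33)-(36): phi(Doo_m)=1 when Doo_m >= alpha,
    and a yawn occurs when the open-mouth frames accumulate to >= beta.
    Returns (number of yawns, total yawn duration in seconds)."""
    yawns, total_time = 0, 0.0
    run = 0                      # consecutive frames with Doo_m >= alpha
    for doo in doo_m_seq:
        if doo >= alpha:         # formula (34)
            run += 1
        else:
            if run >= beta:      # formula (33): accumulated phi >= beta
                yawns += 1
                total_time += run / fps   # T_y, formula (35); Total, (36)
            run = 0
    if run >= beta:              # sequence ended mid-yawn
        yawns += 1
        total_time += run / fps
    return yawns, total_time
```

A run of 20 open-mouth frames thus counts as one yawn of 0.8 s at 25 fps, while a 10-frame opening stays below the β threshold and is ignored.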

As a preferred scheme, the safe-driving pilot controller further comprises:

An eye identification module, which calibrates the eye feature region and feature points from the left and right mouth-corner points of the lips and facial normalization parameters. Let the coordinates of the left mouth corner be (leftx, lefty) and those of the right mouth corner be (rightx, righty); according to the arrangement of the facial features, the mouth width is given by formula (31):

W_m = sqrt((rightx - leftx)² + (righty - lefty)²)    (31)

The height HighofEye of one eye is calculated by formula (37):

HighofEye = 0.31*W_m    (37)

The length LengthofEye of one eye is calculated by formula (38):

LengthofEye = 0.63*W_m    (38)

The starting coordinates (x_2, y_2) of the one-sided eye region are calculated by formulas (39), (40):

x_2 = rightx - 0.1*LengthofEye    (39)

y_2 = righty - 1.35*W_m    (40)

The eye boundaries are determined from projections of the black pixels in the image. In the vertical projection of the black pixels, the ordinate lists, for each column of the region, the total number of pixels judged to be black, with length N; the abscissa is the column index, with length M. Let the region size be M×N and the pixel value at each point be I_e(x, y); the projection functions of the black pixels in the vertical and horizontal directions are calculated by formulas (41), (42):

P_ey(x) = Σ_{y=1}^{N} I_e(x, y)    (41)

P_ex(y) = Σ_{x=1}^{M} I_e(x, y)    (42)

In the horizontal projection, the eye height HighofEye is estimated from the mouth width; searching upward from the bottom of the eye region, the distance W between each pair of consecutive troughs is examined, and when the first W close to the eye height HighofEye is met, the region between those two troughs is taken as the height range of the eyes;

In the vertical projection, the eye length LengthofEye is estimated from the mouth width; searching from the right side of the vertical projection, when the distance L between two consecutive troughs is close to LengthofEye, the region between those two troughs is taken as the length range of the eyes. A blink detection module computes the eye-opening degree using the eye-model definition of formula (43):

Doo_e = h_e / W_e    (43)

Formula (43) expresses the opening degree of the eye as the aspect ratio of the eye's bounding rectangle, where h_e is the opening height of the eye and W_e is the eye width, taken as the distance between the two eye-corner points;

A blink state is defined as the eye-opening degree in consecutive video frames staying below a set threshold for more than a certain number of frames, expressed by formula (44):

∑φ(Doo_e) ≥ β_e    (44)

where

φ(Doo_e) = 1 if Doo_e ≤ α_e, and 0 if Doo_e > α_e    (45)

Formula (45) means that one blink is considered to have occurred when the number of consecutive frames in which the eye-opening degree is less than or equal to α_e accumulates beyond β_e;

The blink duration is defined as the interval from eye closing to eye opening within one blink, expressed by formula (46):

T_b = t_2 - t_1    (46)

T_b represents the length of time during which successive opening degrees stay less than or equal to α_e;

The blink frequency is defined as the reciprocal of the interval between the two most recent blinks, expressed by formula (47):

f_h = 1 / (t_n - t_2)    (47)

In the fatigue-driving evaluation and alarm module, a blink-frequency threshold is set; when the recorded blink frequency exceeds this threshold, fatigued driving is judged and an alarm command is sent to the alarm device.
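The blink logic of formulas (44)-(47) can be sketched as follows, counting a blink when Doo_e stays at or below α_e for at least β_e consecutive frames and taking the frequency as the reciprocal of the interval between the two most recent blinks. The threshold values and frame rate are illustrative assumptions.

```python
def detect_blinks(doo_e_seq, alpha_e=0.25, beta_e=3, fps=25):
    """Formulas (44)-(47): a blink is counted when the eye-opening degree
    Doo_e stays <= alpha_e for at least beta_e consecutive frames.
    Returns (blink start times in seconds, blink frequency in Hz or None)."""
    blink_starts = []
    run, start = 0, 0
    for i, doo in enumerate(doo_e_seq):
        if doo <= alpha_e:           # closed-enough frame, formula (45)
            if run == 0:
                start = i
            run += 1
        else:
            if run >= beta_e:        # one blink, formula (44)
                blink_starts.append(start / fps)
            run = 0
    if run >= beta_e:                # sequence ended mid-blink
        blink_starts.append(start / fps)
    # Formula (47): frequency = reciprocal of the last two blinks' interval
    freq = None
    if len(blink_starts) >= 2:
        freq = 1.0 / (blink_starts[-1] - blink_starts[-2])
    return blink_starts, freq
```

Two 4-frame closures one second apart at 25 fps thus give a blink frequency of 1 Hz.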

As another preferred scheme, the safe-driving pilot controller further comprises:

A facial motion trajectory tracking module, which uses Kalman filtering to track the driver's facial movement. After obtaining the facial-region center points (x_t, y_t), (x_{t+1}, y_{t+1}), (x_{t+2}, y_{t+2}), ... from the series of facial state vectors x_t, x_{t+1}, x_{t+2}, ... according to formula (15), a trajectory line of the facial-region center point is drawn in time order;

The mean position of the facial-region center point over a period after the driver starts driving is adopted, calculated as shown in formula (48),

x̄ = (Σ_{i=t}^{t+n} x_i) / n,  ȳ = (Σ_{i=t}^{t+n} y_i) / n    (48)

A nod state is defined as the displacement of the facial center point in consecutive video frames exceeding a certain threshold for more than a certain number of frames, expressed by formula (49):

∑φ(Doo_h) ≥ β_h    (49)

where

φ(Doo_h) = 1 if Doo_h ≥ α_h, and 0 if Doo_h < α_h    (50)

Formula (49) means that one nod is judged to have occurred when the number of consecutive frames in which the nodding degree is greater than or equal to α_h accumulates beyond β_h, where

Doo_h = sqrt((x_n - x̄)² + (y_n - ȳ)²)    (51)

In formula (51), (x_n, y_n) is the driver's facial-region center position at frame n, and (x̄, ȳ) is the mean facial-region center position over a period after the driver starts driving;

In the fatigue-driving evaluation and alarm module, a nod-count threshold is set, together with a duration threshold for the head exceeding its normal driving position by a deviation of α_h; when the recorded nod count exceeds the nod-count threshold, or the duration for which the head deviates from its normal position by more than α_h exceeds the duration threshold, fatigued driving is judged and an alarm command is sent to the alarm device.
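The nod rule of formulas (48)-(51) can be sketched as below: the reference head position is the mean of the face-center track over an initial warm-up period, and a nod is counted when the displacement from that mean stays at or above α_h for at least β_h consecutive frames. The pixel threshold of 20 and the frame counts are illustrative assumptions.

```python
import math

def detect_nods(centers, warmup, alpha_h=20.0, beta_h=5):
    """Formulas (48)-(51): the reference position is the mean face-region
    center over the first `warmup` frames; a nod is counted when the
    displacement Doo_h from that mean stays >= alpha_h for at least beta_h
    consecutive frames. `centers` is a list of (x, y) points."""
    xs, ys = zip(*centers[:warmup])
    x_bar, y_bar = sum(xs) / warmup, sum(ys) / warmup   # formula (48)
    nods, run = 0, 0
    for (x, y) in centers[warmup:]:
        doo_h = math.hypot(x - x_bar, y - y_bar)        # formula (51)
        if doo_h >= alpha_h:                            # formula (50)
            run += 1
        else:
            if run >= beta_h:                           # formula (49)
                nods += 1
            run = 0
    if run >= beta_h:                                   # track ended mid-nod
        nods += 1
    return nods
```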

As yet another preferred scheme, the safe-driving pilot controller further comprises a vehicle-heading detection module, which, based on the vehicle-front-view image, takes the white line on the road or the green belt at the road edge as the datum line and detects from the video whether the vehicle's direction of advance deviates from it. A line parallel to the datum line is drawn through the vehicle's center point, and the module detects whether the vehicle's running trajectory deviates left or right from this parallel line. When a trajectory point exceeds the set deviation threshold, the current deviation state is checked: if the previous deviation state was positive and the currently detected state is negative or non-deviating, one deviation is considered to have occurred, and its time and degree are recorded;

In the fatigue-driving evaluation and alarm module, an offset alarm distance is set; when the current offset is greater than the offset alarm distance, fatigued or dangerous driving is judged and an alarm command is sent to the alarm device.
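The heading-deviation bookkeeping described above can be sketched as follows, using signed lateral offsets of trajectory points from the line parallel to the datum (the white line or green belt). The sign-change test follows the text, treating a positive deviation followed by a negative or non-deviating state as one departure; the threshold values are illustrative assumptions.

```python
def count_lane_departures(offsets, dev_thresh=0.3, alarm_dist=1.0):
    """Record one departure when a trajectory point beyond dev_thresh
    changes sign, or returns to center, relative to the previously observed
    out-of-threshold state; raise the alarm flag past alarm_dist."""
    departures = []          # (index, magnitude) of each recorded departure
    alarm = False
    prev_state = 0           # +1 / -1 / 0: last observed deviation state
    for i, off in enumerate(offsets):
        state = 0
        if off > dev_thresh:
            state = 1
        elif off < -dev_thresh:
            state = -1
        if state != 0:
            if prev_state != 0 and state != prev_state:
                departures.append((i, abs(off)))   # sign flipped: one weave
            prev_state = state
        elif prev_state != 0:
            departures.append((i, abs(off)))       # returned to center
            prev_state = 0
        if abs(off) > alarm_dist:
            alarm = True        # fatigued or dangerous driving suspected
    return departures, alarm
```

A weaving trajectory that swings right, back to center, left, then far right thus records several departures and raises the alarm once the offset exceeds the alarm distance.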

Further, the safe-driving pilot controller also comprises a steering-wheel state detection module, which, based on the steering-wheel-view image, detects whether the steering wheel is currently being operated whenever a bend, a lane change, a junction, or a vehicle entering the driving lane is detected on the road ahead; in the fatigue-driving evaluation and alarm module, if no steering-wheel operation is detected in such a situation, dangerous driving is judged and an alarm command is sent to the alarm device.

Further, the detection-range division module also comprises a perspective-view expansion unit, which, from a point P(i, j) on the perspective projection plane, finds the corresponding three-dimensional space point A(X, Y, Z), giving the transformation between the projection plane and three-dimensional space; the transformation is expressed by formula (60):

X = R*cos β - i*sin β

Y = R*sin β + i*cos β    (60)

Z = D*sin γ - j*cos γ

(R = D*cos γ + j*sin γ)

In the above formulas, D is the distance from the perspective projection plane to the hyperboloid focus O_m; the angle β is the angle of the projection of the incident ray on the XY plane; the angle γ is the angle between the incident ray and the horizontal plane through the hyperboloid focus; the i axis is the horizontal axis parallel to the XY plane, and the j axis is the vertical axis at right angles to the i axis and the O_m-G axis;

Substituting the point P(X, Y, Z) obtained from formula (60) into formulas (57) and (58) yields the point P(x, y) on the imaging plane corresponding to the perspective-plane point P(i, j):

x = X*f*(b² - c²) / ((b² + c²)*Z - 2*b*c*sqrt(X² + Y² + Z²))    (57)

y = Y*f*(b² - c²) / ((b² + c²)*Z - 2*b*c*sqrt(X² + Y² + Z²))    (58)

Further again, the catadioptric mirror is designed as a hyperbolic mirror; the optical system formed by the hyperbolic mirror can be expressed by the following five equations:

((X² + Y²)/a²) - (Z²/b²) = -1  (Z > 0)    (52)

c = sqrt(a² + b²)    (53)

β = tan⁻¹(Y/X)    (54)

α = tan⁻¹[((b² + c²)sin γ - 2bc) / ((b² + c²)cos γ)]    (55)

γ = tan⁻¹[f / sqrt(X² + Y²)]    (56)

In these formulas, X, Y, and Z are space coordinates; c denotes the focal parameter of the hyperbolic mirror, with 2c the distance between the two foci; a and b are the lengths of the real and imaginary axes of the hyperbolic mirror; β is the azimuth angle of the incident ray in the XY plane; α is the depression angle of the incident ray in the XZ plane; and f is the distance from the imaging plane to the virtual focus of the hyperbolic mirror.
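The angle relations (53)-(56) can be evaluated directly; the sketch below computes the azimuth β, the ray angle γ, and the depression angle α for a point, using atan2 for β to preserve the quadrant. The sample parameter values in the test are arbitrary illustrative numbers.

```python
import math

def mirror_angles(X, Y, a, b, f):
    """Evaluate the angle relations (53)-(56) of the hyperbolic-mirror
    optical system for a point (X, Y): azimuth beta and angle gamma of the
    incident ray, and the depression angle alpha."""
    c = math.sqrt(a * a + b * b)                      # formula (53)
    beta = math.atan2(Y, X)                           # formula (54)
    gamma = math.atan(f / math.hypot(X, Y))           # formula (56)
    num = (b * b + c * c) * math.sin(gamma) - 2 * b * c
    den = (b * b + c * c) * math.cos(gamma)
    alpha = math.atan(num / den)                      # formula (55)
    return beta, gamma, alpha
```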

The technical conception of the present invention is as follows: video and image-understanding techniques monitor the driver's fatigue by detecting the head state (incessant yawning; repeated involuntary nodding or dozing; difficulty keeping the head up), the eye state (drooping eyelids, even closed eyes), the steering-wheel state (slow movement and judgment; stiff movements, slow rhythm), the vehicle heading (loss of the sense of direction, weaving left and right on the highway), and the vehicle speed (random speed changes, unstable travel speed). By judging these various fatigue states and driving states together, an accurate result can be obtained. To detect the driver's head, face, eye, and steering-wheel states and monitor the vehicle's heading and speed simultaneously, the best mode is to obtain all of these states with a single sensor, and the omnidirectional vision sensor makes this possible.

The recently developed omnidirectional vision sensor, ODVS (OmniDirectional Vision Sensor), provides a new way to obtain panoramic images of a scene in real time. The characteristics of ODVS are a wide field of view (360 degrees): it compresses the information of a hemispherical field into one image, and the information content of one image is large; when capturing a scene image, the mounting position of the ODVS in the vehicle is freer; the ODVS needs no aiming while monitoring the environment; the algorithms for detecting and tracking moving targets within the monitored range are simpler; and real-time images of scenes inside and outside the vehicle can be obtained. The ODVS camera mainly consists of a CCD camera and a reflector facing the camera. The reflector reflects the image of a full horizontal circle onto the CCD camera, so that the environmental information of 360° in the horizontal direction is obtained in one image. This omnidirectional camera has outstanding advantages, and is a fast and reliable means of visual-information acquisition, especially under real-time panoramic processing requirements.

Therefore a safe driving auxiliary device is needed that uses an omnidirectional vision sensor to simultaneously detect the driver's head, eye, mouth, and steering-wheel states and to monitor the vehicle's heading and speed, warns the driver when a fatigued-driving phenomenon is detected, records the vehicle's running state, and implements functions such as light or voice alarms for rule-violating operation such as overloading, speeding, and fatigued driving, as well as audio recording, video recording, and playback of unusual incidents.

The beneficial effects of the present invention are mainly: 1) the above all-round fatigued-driving detection can effectively detect fatigue produced in any stage, alone or in combination, improving the reliability of safe driving; 2) judgment accuracy is high, improving measurement precision; 3) the video captured by the omnidirectional vision sensor can be used to record the vehicle's running state, giving light or voice alarms for rule-violating operation such as overloading, speeding, and fatigued driving, and providing recording, video, and playback of unusual incidents.

(4) Description of the Drawings

Fig. 1 is the schematic diagram of the safe auxiliary driving device of the present invention;

Fig. 2 shows the software module division of the safe auxiliary driving device of the present invention;

Fig. 3 is the face recognition flow chart for the driver's seat in the safe auxiliary driving device of the present invention;

Fig. 4 is the driving-fatigue decision flow chart in the safe auxiliary driving device of the present invention;

Fig. 5 is a sketch of detecting the eye-opening degree in the safe auxiliary driving device of the present invention;

Fig. 6 is a sketch of detecting the mouth opening width in the safe auxiliary driving device of the present invention;

Fig. 7 is a sketch of detecting the eye opening width in the safe auxiliary driving device of the present invention;

Fig. 8 is the flow chart of the face-location algorithm for the driver's seat in the safe auxiliary driving device based on omnidirectional computer vision;

Fig. 9 is the schematic diagram of catadioptric imaging in the omnidirectional vision sensor;

Figure 10 is the structure chart of the omnidirectional vision sensor;

Figure 11 is the perspective-imaging schematic diagram of the omnidirectional vision sensor;

Figure 12 is the general flow chart of safe-driving detection in the safe auxiliary driving device based on omnidirectional computer vision;

Figure 13 shows one manifestation of losing the sense of direction and weaving left and right on the highway.

(5) Specific Embodiments

The present invention is further described below with reference to the accompanying drawings.

Referring to Figs. 1 to 13, a safe auxiliary driving device based on omnidirectional computer vision first obtains, through the omnidirectional vision sensor, perspective views of the driver's-seat view, the steering-wheel view, and the road environment ahead of the vehicle. The driver's-seat perspective view is used for face detection, mouth detection, eye detection, and facial-trajectory detection of the driver, and image understanding and recognition judge whether the driver's perception and judgment states are fatigued. Understanding of the steering-wheel perspective view judges whether the driver's operating state shows phenomena such as slow movement, slow judgment, stiff movements, or slow rhythm. The perspective view of the vehicle front obtains the road environment and the vehicle's running trajectory, from which the device judges whether the vehicle has lost its sense of direction and is weaving left and right on the highway, and the time and degree of departing from the white line.

The principle of the omnidirectional vision sensor (ODVS) and of the perspective view is described below. In the optical part of the omnidirectional vision sensor, the main structure consists of a catadioptric mirror facing downward and a camera facing upward. Concretely, a camera unit consisting of a collecting lens and a CCD (or CMOS) is fixed at the bottom of a cylinder of transparent resin or glass; a deeply curved catadioptric mirror facing downward is fixed at the top of the cylinder; and between the catadioptric mirror and the collecting lens there is a dark cone whose diameter gradually decreases, fixed at the middle of the catadioptric mirror. The purpose of the dark cone is to prevent excess light from entering and causing saturation, and to prevent reflection phenomena produced by the cylinder wall inside the cylinder. Fig. 9 is the schematic diagram of the optical system of the omnidirectional imaging device of the present invention.

The operating principle of the omnibearing vision sensor is as follows: a ray of light directed at the focus of the hyperbolic mirror is reflected toward its virtual focus according to the mirror characteristic of the hyperboloid. The real scene is reflected by the hyperbolic mirror into the collecting lens and imaged there; a point P(x, y) on this imaging plane corresponds to a point A(X, Y, Z) in space.

In Fig. 9: 11 is the hyperbolic mirror; 12 is an incident ray; 13 is the focus Om(0, 0, c) of the hyperbolic mirror; 14 is the virtual focus of the hyperbolic mirror, i.e. the camera center Oc(0, 0, -c); 15 is the reflected ray; 16 is the imaging plane; 17 is the space coordinate A(X, Y, Z) of the real object; 18 is the space coordinate of the ray incident on the hyperboloid mirror surface; 19 is the point P(x, y) reflected onto the imaging plane.

The optical system formed by the hyperbolic mirror shown in Figure 10 can be described by the following five equations:

(X² + Y²)/a² - Z²/b² = -1   (Z > 0)    (52)

c = √(a² + b²)    (53)

β = tan⁻¹(Y/X)    (54)

α = tan⁻¹{[(b² + c²)sinγ - 2bc] / [(b² + c²)cosγ]}    (55)

γ = tan⁻¹[f / √(X² + Y²)]    (56)

In these formulas X, Y, Z are the space coordinates; c determines the foci of the hyperbolic mirror, 2c being the distance between the two foci; a and b are the lengths of the real and imaginary axes of the hyperbolic mirror respectively; β is the azimuth angle of the incident ray in the XY plane; α is the depression angle of the incident ray in the XZ plane; and f is the distance from the imaging plane to the virtual focus of the hyperbolic mirror.
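To make the mirror geometry concrete, the sketch below (ours, not part of the patent; function and parameter names are illustrative) evaluates formulas (53), (54), (56), and (55) for a space point, with (55) coded exactly as printed above:

```python
import math

def mirror_angles(X, Y, a, b, f):
    """Angles of the incident ray for a space point A(X, Y, Z).
    a, b: real/imaginary axis lengths of the hyperbolic mirror;
    f: distance from the imaging plane to the virtual focus."""
    c = math.sqrt(a * a + b * b)              # Eq. (53): focal distance
    beta = math.atan2(Y, X)                   # Eq. (54): azimuth in the XY plane
    gamma = math.atan(f / math.hypot(X, Y))   # Eq. (56)
    # Eq. (55), as printed in the patent text
    alpha = math.atan2((b * b + c * c) * math.sin(gamma) - 2 * b * c,
                       (b * b + c * c) * math.cos(gamma))
    return beta, gamma, alpha
```

For example, a point with X = Y gives an azimuth of 45 degrees regardless of the mirror parameters.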

The structure of the omnibearing vision sensor is shown in Fig. 9. The sensor covers a 360° visual range in the horizontal direction and a 90° visual range in the vertical direction. It can therefore be installed at the right front of the driver's seat in the vehicle or, as required, at the upper right of the driver's seat; the choice can be made according to the user's needs and the actual space inside the vehicle. The mounting height and in-vehicle position of the sensor are determined by its angle of depression and the situation of the driver's seat. The design of the mirror surface of the omnibearing vision sensor and its installation position must satisfy three conditions: 1) the driver's face can be captured clearly; 2) the driver's operation of the steering wheel can be captured; 3) the environment ahead of the running vehicle can be observed.

The principle of 360° omnidirectional imaging is as follows: a point A(x1, y1, z1) in space is specularly reflected by the catadioptric mirror 2 onto the lens 6, giving a projected point P1(x, y); the light passing through lens 6 becomes parallel light and is projected onto the CMOS imaging unit 5; the microprocessor 7 reads this annular image through the video interface and, by software, unwraps it into perspective views, obtaining the segmented perspective video images of the vehicle-front view, the driver's-seat view, and the steering-wheel view.

To understand the perspective view better, as shown in Fig. 10, we draw from the real focus O<sub>m</sub> of the hyperboloid a straight line Om-G of length D to the perspective projection origin G, and take the plane perpendicular to Om-G as the perspective projection plane. A ray from a point A(X, Y, Z) toward the focus Om has an intersection point P(X, Y, Z) with the perspective projection plane. Substituting this intersection point P(X, Y, Z) into formulas (57) and (58) readily yields the corresponding point P(x, y) on the imaging plane; by this relation every point on the perspective projection plane can thus be obtained.

x = Xf(b² - c²) / [(b² + c²)Z - 2bc√(X² + Y² + Z²)]    (57)

y = Yf(b² - c²) / [(b² + c²)Z - 2bc√(X² + Y² + Z²)]    (58)
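A minimal sketch (ours; names are illustrative, not from the patent) of the world-to-image projection of formulas (57) and (58):

```python
import math

def world_to_image(X, Y, Z, b, c, f):
    """Project space point A(X, Y, Z) to imaging-plane point P(x, y)
    via Eqs. (57)/(58); b, c, f as defined for the hyperbolic mirror."""
    denom = (b * b + c * c) * Z - 2.0 * b * c * math.sqrt(X * X + Y * Y + Z * Z)
    scale = f * (b * b - c * c) / denom   # common factor of Eqs. (57) and (58)
    return X * scale, Y * scale
```

Because x and y share the same denominator, a point on the XZ plane (Y = 0) maps to y = 0, and swapping X and Y swaps x and y.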

As shown in Figure 10, the optical axis of the hyperbolic mirror is the Z axis and the camera is set toward the positive direction of the Z axis; the imaging plane is the input image of the camera. We take the intersection g of the optical axis of the hyperbolic mirror with the imaging plane as the origin of the imaging plane, with coordinates x, y, the x and y axes being aligned with the sides of the sensitive chip in the camera; thus the X axis of the Om-XYZ coordinate system is parallel to the xy plane of the imaging-plane coordinate system.

The perspective projection plane is the plane perpendicular to the Om-G line. Take point G as the origin of a two-dimensional plane coordinate system (i, j), where the i axis is the horizontal axis parallel to the XY plane and the j axis is the vertical axis intersecting the i axis and the Om-G axis at right angles. The distance from the perspective projection plane to the focus Om of the hyperboloid is D; define the width of the perspective projection plane as W and its depth as H. Since the i axis is parallel to the XY plane and perpendicular to the Z axis, the resulting perspective projection plane is the XY plane (the horizontal plane) rotated about point G as the coordinate center by an angle, namely the angle between the Om-G line and the Z axis.

Here we take Om-G as the transformation center axis and point G as the transformation center point, and represent the transformation center axis by β (the azimuth of the incident ray in the XY plane), γ (the angle between the incident ray and the horizontal plane through the hyperboloid focus), and the distance D (from the perspective projection plane to the hyperboloid focus Om). The β angle, in the range 0°~360°, can be calculated by formula (54), and equally by formula (59):

β = tan⁻¹(Y/X) = tan⁻¹(y/x)    (59)

Here β is the angle of the projection of the incident ray on the XY plane, measured counterclockwise from the Z-axis reference (the origin of the polar coordinate system), in the range 0°~360° (this is the horizontal field of view of omnidirectional vision). The γ angle is the angle between the incident ray and the horizontal plane through the hyperboloid focus, as given by formula (56); it depends on the space coordinates and the position of the hyperboloid focus. If a horizontal plane is drawn through the hyperboloid focus, γ is the angle between that plane and the Om-G axis; a space point with Z above the hyperboloid focus is taken as [+], called the elevation angle, and one with Z below the focus as [-], called the depression angle. The γ range lies between -90° and +90°, with different γ ranges for different mirror designs (this is the vertical field of view of omnidirectional vision).

The distance D is determined by the straight-line distance between the perspective projection plane and the hyperboloid focus; in general, the larger D is, the smaller the scene appears, and the smaller D is, the larger the scene appears. The width W and depth H of the perspective projection plane are determined by need: when fixing W and H, first determine the horizontal-to-vertical ratio of the display window; since W and H are expressed in pixels, the pixel values of W and H in the computer must be determined.

From a coordinate point P(i, j) on the perspective projection plane we seek the corresponding three-dimensional space point A(X, Y, Z); this gives the conversion relation between the projection plane and three-dimensional space, expressed by formula (60):

X = R·cosβ - i·sinβ

Y = R·sinβ + i·cosβ    (60)

Z = D·sinγ - j·cosγ

(where R = D·cosγ + j·sinγ)

In the formula, D is the distance from the perspective projection plane to the hyperboloid focus Om; β is the angle of the projection of the incident ray on the XY plane; γ is the angle between the incident ray and the horizontal plane through the hyperboloid focus; the i axis is the horizontal axis parallel to the XY plane; the j axis is the vertical axis intersecting the i axis and the Om-G axis at right angles; the directions of the i and j axes are as shown in Fig. 11.

Substituting the point P(X, Y, Z) obtained with formula (60) into formulas (57) and (58) yields the point P(x, y) on the imaging plane corresponding to (i, j) on the perspective projection plane. In this way the omnidirectional perspective view can be obtained from the image information on the imaging plane; that is, a correspondence is established between the coordinate system on the imaging plane and the coordinate system on the perspective projection plane. With this correspondence, the image information of any given point obtained on the imaging plane can be displayed correctly at the corresponding position on the perspective projection plane.
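The chained mapping just described can be sketched as follows (our illustrative code, not the patent's implementation): formula (60) maps a perspective-plane point P(i, j) to a space point, which formulas (57)/(58) then project onto the imaging plane; doing this once per (i, j) yields a lookup table for fast unwarping.

```python
import math

def perspective_to_image(i, j, D, beta, gamma, b, c, f):
    """Perspective-plane point P(i, j) -> imaging-plane point p(x, y)."""
    R = D * math.cos(gamma) + j * math.sin(gamma)    # Eq. (60)
    X = R * math.cos(beta) - i * math.sin(beta)
    Y = R * math.sin(beta) + i * math.cos(beta)
    Z = D * math.sin(gamma) - j * math.cos(gamma)
    denom = (b * b + c * c) * Z - 2.0 * b * c * math.sqrt(X * X + Y * Y + Z * Z)
    scale = f * (b * b - c * c) / denom              # Eqs. (57)/(58)
    return X * scale, Y * scale

def build_lut(W, H, D, beta, gamma, b, c, f):
    """One imaging-plane coordinate per pixel of a W-by-H perspective window,
    with (i, j) = (0, 0) placed at the window center G."""
    return [[perspective_to_image(i - W // 2, j - H // 2, D, beta, gamma, b, c, f)
             for i in range(W)] for j in range(H)]
```

Unwrapping a frame is then a per-pixel table copy, which is what makes a software expansion module feasible in real time.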

The segmented perspective video images of the vehicle-front view, the driver's-seat view, and the steering-wheel view are divided in this patent, according to need, into two modes: long-range detection and close-range detection. In long-range detection, the distance from the perspective projection plane to the hyperboloid focus Om is placed beyond 10 m; the perspective view of the vehicle-front view, for example, uses the long-range detection mode. In close-range detection, that distance is placed within 1 m; the perspective views of the driver's-seat view and the steering-wheel view, for example, use the close-range detection mode. The unwrapping of the perspective views is carried out in the perspective-view expansion module of Fig. 2.

To obtain the above perspective views with different viewing angles and different requirements, a detection-range segmentation module is configured in the program, as shown in Fig. 2, in which the user can customize the size and orientation of perspective views from the image obtained by the omnibearing vision sensor. After a perspective view is customized, the program generates the corresponding perspective-view window, on which the detection content is defined. For example, on the driver's-seat perspective-view window the above face detection, mouth detection, eye detection, and facial-trajectory detection are defined, used for detecting the driver's fatigue driving; this detection is carried out mainly from the aspects of the driver's perception state and judgment state. On the steering-wheel perspective-view window the steering-wheel operation detection is defined; this detection is carried out mainly from the aspect of the driver's operating state. On the vehicle-front perspective-view window the road-environment detection and the vehicle-running-track detection are defined; this detection is carried out mainly from the aspects of road condition and running condition.

Driving perception states such as yawning, blinking, repeated nodding, and difficulty keeping the head raised are detected through face detection, mouth detection, eye detection, facial-movement-track detection, and image understanding. The mouth detection, eye detection, and facial-trajectory detection are built on the basis of face detection.

Therefore the first step is face detection; after the driver's-seat perspective view is defined, the driver's face must lie within the range of this perspective view. Face detection is carried out in the face locating module of Fig. 2; its processing flow is given in Fig. 3.

To meet the real-time requirement of the device, the present invention exploits the fact that facial skin color clusters well in color space: the YCrCb space is selected as the mapping space for skin-color distribution statistics, which bounds the skin-color distribution region well; a two-dimensional Gaussian distribution model is established by statistical identification, and a face detection method based on skin-color similarity and the shape characteristics of the human face is used. The maximum between-cluster variance thresholding method is adopted, and the face is located by projections of the image gray histogram.

Facial skin color obeys a two-dimensional Gaussian distribution in the CrCb chrominance space; the probability density function of this skin-color distribution model can be expressed by formula (4),

f(x1, x2) = [1 / (2π|C|^(1/2))]·exp{-(1/2)(X - μ)^T C⁻¹(X - μ)}    (4)

where μ = (Cr, Cb)^T = (156.560, 117.436)^T, the two values in this vector being the means of the color components Cr and Cb respectively, and C is the covariance matrix of Cr and Cb, expressed by formula (5),

C = | σ_rr²  σ_rb  | = | 160.130  12.143  |    (5)
    | σ_br   σ_bb² |   | 12.143   299.457 |

where σ_rr² and σ_bb² are the variances of Cr and Cb respectively, and σ_rb and σ_br are their covariances. According to the Gaussian distribution model of skin color, the similarity between the color of every pixel in the face image and the skin color is calculated. The similarity calculation formula is

P(Cr, Cb) = exp{-0.5(X - μ)^T C⁻¹(X - μ)}    (6)

X = (Cr, Cb)^T    (7)

where X = (Cr, Cb)^T is the vector of a pixel in the Cr, Cb chrominance space, and the values of C and μ are the same as in formulas (4) and (5) above.

After the similarity values are calculated, normalization is used to convert the similarity into gray values between 0 and 255, giving a gray-scale map of the detected color image; the gray-scale map intuitively reflects how similar the color of each pixel is to the skin color. First the maximum among the similarity values of all pixels is found, and the similarity values are normalized against it: the pixel with the maximum similarity value becomes pure white (gray value 255), and the other pixels are converted to corresponding gray levels according to their own similarity values.
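For illustration, a numpy sketch of the similarity map of formulas (4)-(7) with the normalization just described (our code; the inputs are assumed to be per-pixel Cr and Cb planes):

```python
import numpy as np

MU = np.array([156.560, 117.436])      # Cr, Cb means, Eq. (4)
C = np.array([[160.130, 12.143],
              [12.143, 299.457]])      # covariance matrix, Eq. (5)
C_INV = np.linalg.inv(C)

def skin_similarity_map(cr, cb):
    """P(Cr, Cb) = exp{-0.5 (X - mu)^T C^-1 (X - mu)} (Eq. (6)),
    normalized so the most skin-like pixel becomes gray value 255."""
    d = np.stack([cr - MU[0], cb - MU[1]], axis=-1)   # X - mu, per pixel
    m = np.einsum('...i,ij,...j->...', d, C_INV, d)   # squared Mahalanobis distance
    p = np.exp(-0.5 * m)
    return (255.0 * p / p.max()).astype(np.uint8)     # normalize to 0..255
```

A pixel exactly at the model mean maps to 255, and pixels far from skin color fall toward 0.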

To segment the skin-color region of the image, an appropriate threshold must be chosen to binarize the gray image. This patent adopts the maximum between-cluster variance thresholding method: the histogram is split into two groups at a certain threshold, and the threshold is decided when the variance between the two groups is maximal. Suppose the gray values of an image run over levels 1 to M, and the number of pixels with gray value i is n_i; the frequency of occurrence of each gray value is then expressed by formula (8),

P_i = n_i / Σ_{i=1}^{M} n_i    (8)

All pixels are then divided by K into two groups S0 = {1, ..., K} and S1 = {K+1, ..., M}; the probability of S0 can be calculated with formula (9),

η0 = Σ_{i=1}^{K} p_i = η(K)    (9)

The probability of S1 can be calculated with formula (10),

η1 = Σ_{i=K+1}^{M} p_i = 1 - η(K)    (10)

The mean value of S0 can be calculated with formula (11),

λ0 = (Σ_{i=1}^{K} i·p_i) / η0 = λ(K) / η(K)    (11)

The mean value of S1 can be calculated with formula (12),

λ1 = (Σ_{i=K+1}^{M} i·p_i) / η1 = [λ - λ(K)] / [1 - η(K)]    (12)

where λ = Σ_{i=1}^{M} i·p_i is the overall mean gray value of the image and λ(K) = Σ_{i=1}^{K} i·p_i is the accumulated gray mean over the pixels whose gray value does not exceed K. The between-group variance of S0 and S1 is defined by formula (13),

σ²(K) = η0(λ0 - λ)² + η1(λ1 - λ)² = [λ·η(K) - λ(K)]² / {η(K)[1 - η(K)]}    (13)

The threshold is obtained by finding the K between 1 and M that maximizes σ²(K). The face image is binarized with this threshold: the skin-color region becomes pure white and the remainder pure black. The horizontal projection of the image gray histogram then gives the extremes of the extracted region at the top and bottom in the vertical direction; the vertical projection gives the extremes at the left and right sides in the horizontal direction; these values are then used to locate the face. Let the face length be h and its width w; according to the dimensional constraints of the human face, a region satisfying the aspect-ratio condition 0.8 ≤ h/w ≤ 1.5 can be confirmed as a face-positioning image. As shown in Fig. 3, once the driver's face position is determined, the positions of the mouth and eyes can be determined from their geometric and color characteristics, to detect whether there are signs of fatigue driving.
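Formulas (8)-(13) amount to Otsu's maximum between-cluster variance method; a compact numpy sketch (ours, using 0-based gray levels 0-255 rather than the text's 1 to M):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the level K maximizing the between-group variance sigma^2(K)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()            # Eq. (8): gray-level frequencies
    i = np.arange(256, dtype=float)
    eta = np.cumsum(p)               # Eq. (9): eta(K)
    lam_k = np.cumsum(i * p)         # lambda(K), accumulated gray mean
    lam = lam_k[-1]                  # overall mean gray value
    denom = eta * (1.0 - eta)        # eta(K)[1 - eta(K)]
    denom[denom == 0.0] = np.nan     # undefined where one class is empty
    sigma2 = (lam * eta - lam_k) ** 2 / denom   # Eq. (13)
    return int(np.nanargmax(sigma2))
```

Binarizing with the returned K turns the skin-color region white and the rest black, as described above.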

Lip location and yawning detection are carried out in the mouth locating module of Fig. 2. The location and tracking of the driver's lips can be used to judge whether the driver is yawning, and at the same time it determines the accuracy of the next step, eye location. Because the facial features of the human face are regularly distributed, the rough positions of the other feature regions and feature points can be roughly calibrated from two basic points and normalized facial parameters; that is, as long as the two mouth corners are determined, the inclination angle of the head in the plane and the region of the eyes can be calculated.

The lips are segmented and located by exploiting the fact that lip color is red. The concrete method is: first, the lip region is segmented by horizontal and vertical projection of the red pixels in the driver's face image; then the lips are extracted by a method combining edge extraction with red-pixel extraction. The lip feature extraction flow is shown in Figs. 6 and 7.

The projections of the red pixels in the face image are used to determine the lip boundary. The ordinate of the vertical projection of red pixels is the total number of pixels judged red in an image column (of length N); the abscissa is the column number (of length M); it reflects the horizontal variation of red pixels in the image. Let the image size be M×N and the pixel value at each point be I(x, y); then the projection functions of the red pixels in the vertical and horizontal directions are expressed by formulas (28) and (29),

P_y(x) = Σ_{y=1}^{N} I(x, y)    (28)

P_x(y) = Σ_{x=1}^{M} I(x, y)    (29)

Because the ratio of the lip height HeightofLip to the head height HeightofHead is generally about 1:10, this relation can be used to obtain a suitable statistical variable RedThresh. In the course of the horizontal and vertical projection calculations, the projection value P_y(x) of each row (or column) at each coordinate point is examined; if the projection value is greater than 1/6 of the head height or width, the statistical variable RedThresh is automatically incremented by 1. When RedThresh exceeds 2·HeightofLip, the threshold Thresh is automatically increased and the projection is computed again, repeating until a proper threshold Thresh is selected.

The horizontal and vertical projections of the red pixels in the face image determine the mouth region: the longest distance between two adjacent troughs of the horizontal projection is the length of the lips, and the maximum distance between two adjacent troughs of the vertical projection is the width of the lips, thereby locating the mouth region.
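A sketch of the red-pixel test of formula (30) and the projections of formulas (28)/(29) (our code; the image is assumed to be an RGB array with rows as the vertical axis, so the axis convention may need swapping against the patent's x/y):

```python
import numpy as np

def red_mask(rgb, thresh):
    """Eq. (30): a pixel is red when R - B > Thresh and R - G > Thresh."""
    r = rgb[..., 0].astype(int)   # widen to int so subtraction cannot wrap
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r - b) > thresh) & ((r - g) > thresh)

def red_projections(mask):
    """Eqs. (28)/(29): column-wise and row-wise counts of red pixels."""
    p_y = mask.sum(axis=0)   # red pixels per column
    p_x = mask.sum(axis=1)   # red pixels per row
    return p_y, p_x
```

The troughs of these two projection curves bound the mouth box as described above.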

After the mouth region is segmented by the above lip segmentation algorithm, the lips are extracted from the mouth image, and then the mouth corners and the other lip feature points are located. Here, exploiting the redness of the lips, a method combining edge extraction with red-pixel extraction is adopted. The main steps of the algorithm are as follows. (1) Extraction of the lip edge: edge extraction and binarization are applied to the lips, extracting the clearly visible lip edges. (2) Extraction of red pixels: because the lips are red, the lip edges extracted from the edge region are more complete and the extraction noise is lower. A point is judged red as long as the color value of its red component is greater than those of its green and blue components. To make the judgment more reliable, a threshold Thresh is set: a point is judged red when the red component exceeds both the green and blue components by more than Thresh. The judgment formula is given by formula (30),

((R - B) > Thresh) && ((R - G) > Thresh)    (30)

(3) Extraction of the lips: after the lip edges are extracted, two statistical variables, "red hits" and "all hits", are set, where "red hits" is the number of red pixels in the lip region and "all hits" is the total number of red pixels plus edge points; both variables are updated as each pixel is processed. The threshold sobelThresh in the edge extraction operator is determined by a dynamic threshold method: sobelThresh is first given an initial value (the empirical value 10); if "all hits" is greater than half the total number of pixels in the lip region, sobelThresh is automatically increased by an integer K1; if "red hits" is greater than 1/4 of the total number of pixels in the lip region, Thresh is automatically increased in the program by an integer K2, and the lip extraction is computed again. In this way, when sobelThresh and Thresh reach suitable values, the lips are well extracted.

The location of the lips is the basis for locating the eyes and is also an important means of judging yawning. If the lips are located wrongly or with significant error, great difficulty or even failure will be brought to the judgment of yawning, to the location of the eye feature points, and to the whole locating process, so the accuracy of mouth location is crucial. The lip feature points are located on the basis of lip location; the flow of lip feature point location is shown in Fig. 8.

The algorithm of lip feature point location is further described: once the lip feature points are accurately obtained, the lip feature vector can be computed. The lip feature points this patent defines are: the left and right mouth-corner points, the uppermost and lowermost points of the upper-lip center, and the uppermost and lowermost points of the lower-lip center. This patent adopts an approximation method to locate the lip feature points accurately both when the lips are closed and when they are open. The specific algorithm is:

(1) Lips in the closed state

(a) Location of the left and right mouth-corner points

First two pointers are set in the middle of the lip region, one above and one below, moving left together along the edge lines of the lips. When the distance between the positions indicated by the two pointers becomes very small (or equal), or no more edge points are found on moving further left, that position is taken as the X coordinate of the left mouth corner, and the midpoint of the Y coordinates of the two pointer positions is the Y coordinate of the left mouth corner. The coordinates of the right mouth corner are obtained by a similar method.

(b) Location of the uppermost point of the upper-lip center and the lowermost point of the lower-lip center

After the coordinates of the right and left mouth corners are determined, with these two points as basic points, two pointers are moved from both sides toward the middle to determine the uppermost point of the upper-lip center and the lowermost point of the lower-lip center. Let W_m be the length of the lips, i.e. the distance between the two mouth corners; the horizontal distance from the left mouth corner to the feature point at the midpoint of the upper or lower lip's outer edge is fixed at (1/2)W_m, which gives the X coordinates of the uppermost point of the upper-lip center and the lowermost point of the lower-lip center. Starting from the left mouth corner and searching along the upper and lower lip edges, the Y coordinates of these points are determined whenever their X coordinates are reached.

(c) Location of the lowermost point of the upper-lip center and the uppermost point of the lower-lip center

When the lips are closed, the lowermost point of the upper-lip center, the uppermost point of the lower-lip center, and the left and right mouth-corner points lie on the same straight line; the midpoint of the coordinate values of the left and right mouth-corner points is taken as the coordinate value of the lowermost point of the upper-lip center and of the uppermost point of the lower-lip center.

(2) Lips in the open state

(a) Location of the left and right mouth-corner points

Similar to the locating method for closed lips, two pointers are again set, first in the middle of the upper lip, one above and one below, moving left along the edge lines of the upper lip. When the distance between the two pointer positions becomes very small (or equal), or no more edge points are found on moving further left, that position is taken as the X coordinate of the left mouth corner, and the midpoint of the Y coordinates of the two pointer positions is the Y coordinate of the left mouth corner. The right mouth-corner feature point is located in the same way.

(b) Location of the uppermost and lowermost points of the upper-lip center

After the coordinates of the right and left mouth corners are determined, with these two points as basic points, the pointers move over the upper lip, from both sides toward the middle, to determine the uppermost and lowermost points of the upper-lip center. The locating method is the same as for closed lips: let W_m be the length of the mouth; the horizontal distances from the mouth corner to the other feature points along the mouth edge are fixed, i.e. the horizontal distance from the left mouth corner to the feature point at the midpoint of the upper or lower lip's outer edge is (1/2)W_m, which gives the X coordinates of the uppermost and lowermost points of the upper-lip center. Starting from the left mouth corner and searching along the upper-lip edge, the Y coordinates are determined whenever the X coordinate of the corresponding feature point is reached.

(c) Location of the uppermost and lowermost points of the lower-lip center

Starting from the left mouth corner, the pointers move up and down over the lower lip, searching along the lower-lip edge; on reaching the X coordinate of the upper-lip center point, the Y coordinates of the lower-lip center feature points are determined.

Let the coordinates of the left mouth corner be (leftx, lefty) and those of the right mouth corner be (rightx, righty); according to the arrangement of the facial features, the empirical parameter formulas are as follows:

The mouth length can be expressed by formula (31),

W_m = sqrt((rightx - leftx)·(rightx - leftx) + (righty - lefty)·(righty - lefty))    (31)

With the above detection of the successive closed and open states of the lips, whether the driver keeps yawning can be judged; what must be detected here are the lip opening degree, the opening frequency, and the duration. First, the lip opening degree: in this patent it is the ratio of the distance h_m between the uppermost point of the upper-lip center and the lowermost point of the lower-lip center to the mouth length W_m (assuming W_m unchanged between the closed and open states, an assumption that has little influence on yawn detection). The parameter Doo_m describing the mouth opening degree, defined from the mouth model, can be expressed by formula (32),

Doo_m = h_m / W_m    (32)

Next is the detection of yawning. When yawning, the change of mouth shape differs from that of normal speech or breathing in a wide-open mouth state sustained for a certain time; a yawn is considered to occur when formula (33) holds,

Σφ(Doo_m) ≥ β    (33)

where φ(Doo_m) = 1 if Doo_m ≥ α, and φ(Doo_m) = 0 if Doo_m < α    (34)

Formula (34) means that a yawn is considered to have occurred when the number of consecutive video frames whose mouth opening degree is greater than or equal to α accumulates beyond β frames. From empirical analysis of several simulated yawning videos, the present invention takes α = 0.5 and β = 5 frames (about 0.5 second); for actual yawning video these values are adjusted according to the actual sampling rate.

The duration of yawning is also detected. Fig. 7 shows an ideal mouth-opening-degree change curve; the duration of one yawn is defined as the time from the start of the yawn to its end and can be expressed by formula (35),

T_y = t2 - t1    (35)

i.e. the interval over which the mouth opening degree stays at or above α; in practice the time interval is computed from the accumulated frame count. Once a yawn is found, the number or duration of yawns over a period of time is accumulated with formula (36),

Total = Σ_{t}^{t+m} T_y    (36)

The greater the number of yawns or the longer their duration, the higher the driver's degree of fatigue.
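The counting rule of formulas (32)-(34), with α = 0.5 and β = 5 frames, can be sketched as follows (our code; `openness` is assumed to be the per-frame sequence of Doo_m values):

```python
def count_yawns(openness, alpha=0.5, beta_frames=5):
    """Count yawns: phi(Doo_m) = 1 when Doo_m >= alpha (Eq. (34));
    a run of at least beta_frames consecutive open frames is one yawn."""
    yawns, run = 0, 0
    for doo in openness:
        if doo >= alpha:
            run += 1
            if run == beta_frames:   # Eq. (33) first satisfied for this run
                yawns += 1
        else:
            run = 0                  # mouth closed: the run is broken
    return yawns
```

Accumulating the lengths of the counted runs over a time window gives the Total of formula (36).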

On the basis of detecting the face and the facial features, parameters such as blink duration and blink frequency can be detected as the eye features for measuring whether the driver is fatigued. To detect blink duration and frequency the eyes must first be identified; the eye recognition module in Fig. 2 mainly identifies the human eyes from the face and its features. Because the facial features are regularly distributed, the rough positions of the eye feature region and feature points can be calibrated from the two basic points of the mouth corners and normalized facial parameters; that is, once the two mouth corners are determined, the region of the eyes can be calculated. Owing to the symmetry of the left and right eyes and the synchrony of blinking, only the blinking of the right eye is detected in this patent. First, using formula (31), the height HighofEye of the right eye is calculated with formula (37),

HighofEye = 0.31·W_m    (37)

The length LengthofEye of the right eye is calculated with formula (38),

LengthofEye = 0.63·W_m    (38)

The starting coordinates (x2, y2) of the right-eye region are calculated with formulas (39) and (40),

x2 = rightx - 0.1·LengthofEye    (39)

y2 = righty - 1.35·W_m    (40)
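Formulas (31) and (37)-(40) give a coarse right-eye box directly from the mouth corners; a small sketch (our code, names illustrative):

```python
import math

def right_eye_region(leftx, lefty, rightx, righty):
    """Coarse right-eye box from the mouth corners, Eqs. (31), (37)-(40).
    Returns (x2, y2, LengthofEye, HighofEye)."""
    w_m = math.hypot(rightx - leftx, righty - lefty)   # Eq. (31): mouth length
    high = 0.31 * w_m                                  # Eq. (37)
    length = 0.63 * w_m                                # Eq. (38)
    x2 = rightx - 0.1 * length                         # Eq. (39)
    y2 = righty - 1.35 * w_m                           # Eq. (40)
    return x2, y2, length, high
```

Because all four quantities scale with W_m, the box adapts to the apparent face size in the perspective view.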

After the range of the right eye is deduced, considering that the lengths and widths of faces differ and that the head may rotate horizontally and in depth to some degree, the region given by the empirical parameters may differ considerably from the actual situation; for example, the estimated eye region may include the eyebrow region and a region partly covered by hair. Therefore the horizontal and vertical projections of the black pixels must be used, with a dynamic threshold, to determine the accurate region of the eyes.

Because the eyelids and eyeballs of Asian subjects are naturally black, the projections of the black pixels in the image can be used to determine the eye boundary. In the black-pixel vertical projection, the ordinate is the total number of pixels in a column judged to be black (the column length is N) and the abscissa is the column index (of length M); the projection reflects the horizontal variation of the black pixels in the image. If the region size is M*N and the value of each pixel is I_e(x, y), the projection functions of the black pixels in the vertical and horizontal directions are calculated with formulas (41) and (42):

P_ey(x) = Σ_{y=1}^{N} I_e(x, y)    (41)

P_ex(y) = Σ_{x=1}^{M} I_e(x, y)    (42)

The black pixels are used to segment the eye region accurately. To eliminate the interference of shadows, let the color value of each pixel be ColorValue and set a threshold Thresh: when ColorValue < Thresh holds, the pixel is classified as a black pixel. The value of Thresh is critical to judging the eye region correctly; in most sample images the eye region is surrounded by shadows of varying depth, so if Thresh were a fixed value, the projections would not produce correct peaks and troughs for different images and the eye region could not be distinguished well. Determining an appropriate threshold is therefore essential.

In general the eye color is darker than the surrounding shadow. Taking the previously determined height and width of the eye region as the reference, let the right-eye region height be HighofArea and its width LengthofArea.

The projection value P_ey(x) of each column is examined: whenever it exceeds one third of the region height HighofArea, a counter Flag is incremented by 1. When Flag exceeds LengthofArea/4, the threshold Thresh is stepped by 5, Flag is cleared to 0, and the vertical projection is computed again; this repeats until a suitable Thresh is chosen. The horizontal projection of the region is computed in the same way.

Furthermore, when the horizontal projection is performed, the result may contain several peaks and troughs, because the originally estimated region may include the eyebrow region and hair-covered areas. Since the right-eye height HighofEye can be estimated from the mouth length, the distances W between every two consecutive troughs are examined from the bottom of the right-eye region upwards; the first W close to the right-eye height HighofEye delimits, between its two troughs, the height band of the right eye. Likewise, during the vertical projection, the right-eye length LengthofEye is estimated from the mouth length and the search starts from the right side of the projection; when the distance L between two consecutive troughs is close to the right-eye length LengthofEye, the region between those two troughs is taken as the length band of the right eye. The width band and the length band together locate the position of the right eye.
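
The trough-spacing search can be sketched as below; the text only says the spacing should be "close" to the estimated eye size, so the 25% tolerance is an assumed parameter, and the trough positions are taken as already found.

```python
# The trough-spacing search sketched in Python. The text only says the trough
# spacing should be "close" to the estimated eye size, so the 25% tolerance
# here is an assumed parameter; trough positions are taken as already found.

def find_band(troughs, expected, tol=0.25):
    """Scan consecutive trough pairs (sorted in scan order) and return the
    first pair whose spacing is within tol of `expected`, else None."""
    for a, b in zip(troughs, troughs[1:]):
        if abs((b - a) - expected) <= tol * expected:
            return a, b
    return None
```

Scanning the horizontal-projection troughs bottom-up with expected=HighofEye yields the eye's height band; scanning the vertical-projection troughs from the right with expected=LengthofEye yields its length band.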

Many driver-fatigue detection applications based on eye features adopt PERCLOS as the blink-detection parameter; PERCLOS is in fact the height of the iris portion not covered by the eyelid divided by the iris diameter. Because the iris diameter cannot be measured when the eyes are closed, whereas the two eye-corner points are comparatively easy to detect and their distance is essentially unaffected by whether the eyes are open or closed, this patent calculates the degree of eye opening differently from PERCLOS: it uses the ratio of the eye-opening extent to the distance between the two eye-corner points, rather than to the iris diameter, as the measure of eye openness. The eye-openness calculation corresponding to the eye model of Figure 5 is defined by formula (43):

Doo_e = h_e / W_e    (43)

Formula (43) expresses the degree of eye openness as the aspect ratio of the eye's bounding rectangle, where h_e is the eye-opening extent and W_e, the eye width, is taken as the distance between the two eye-corner points.

In this patent the blink state is defined as the state in which the number of consecutive video frames whose eye openness is at most a certain threshold accumulates beyond a certain frame count; it is expressed with formula (44):

∑φ(Doo_e) ≥ β_e    (44)

Wherein,

φ(Doo_e) = { 1, Doo_e ≤ α_e; 0, Doo_e > α_e }    (45)

Formula (45) means that one blink is deemed to have occurred when the number of consecutive video frames in which the eye openness is at most α_e accumulates beyond β_e frames. Based on empirical analysis of many simulated blink videos, this invention takes α_e = 0.2 and β_e = 4; in actual use the values must be adjusted to the sampling rate of the video;

The blink duration is detected further. With reference to the ideal eye-openness curve of Figure 7, the blink duration is defined as the interval from eye closure to reopening within one blink, expressed with formula (46):

T_b = t_2 - t_1    (46)

T_b represents the span of time during which the openness remains at most α_e. In video detection it is computed from the number of frames a blink lasts; the longer the blink duration, the higher the degree of fatigue;

The blink frequency is detected further. From the idealized eye-openness curve of Figure 7, the blink frequency is defined as the reciprocal of the interval between the two most recent blinks, expressed with formula (47):

f_b = 1 / (t_n - t_2)    (47)

The higher the blink frequency, the higher the indicated degree of fatigue; in video detection the interval is computed from the number of frames between two blinks. For a driver wearing glasses, and especially sunglasses, the image information of the eyes cannot be obtained, so whenever the eyes cannot be recognized the program does not attempt to judge eye-fatigue information from the video.
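
Formulas (44)-(47) can be sketched as frame-based computations, with α_e = 0.2 and β_e = 4 as in the text; the function names and the frame_rate parameter are illustrative assumptions, not the patent's interface.

```python
# Frame-based sketch of formulas (44)-(47), with alpha_e = 0.2 and beta_e = 4
# as in the text. The function names and the frame_rate parameter are
# illustrative assumptions, not the patent's interface.

def blink_events(doo_e_per_frame, alpha_e=0.2, beta_e=4):
    """Return (start_frame, length) of each run of >= beta_e consecutive
    frames with Doo_e <= alpha_e (formulas (44)-(45))."""
    events, start = [], None
    for i, doo in enumerate(doo_e_per_frame):
        if doo <= alpha_e:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= beta_e:
                events.append((start, i - start))
            start = None
    if start is not None and len(doo_e_per_frame) - start >= beta_e:
        events.append((start, len(doo_e_per_frame) - start))
    return events

def blink_duration_and_frequency(events, frame_rate=25.0):
    """T_b (formula (46)) of the last blink, in seconds, and f_b (formula
    (47)) as the reciprocal of the interval between the last two onsets."""
    t_b = events[-1][1] / frame_rate if events else 0.0
    f_b = frame_rate / (events[-1][0] - events[-2][0]) if len(events) >= 2 else 0.0
    return t_b, f_b
```
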

Further, on the basis of detecting the face center point, the trajectory of the face center can be obtained and used to judge the repeated nodding-off posture in which the driver struggles to keep the head raised; this judgment is realized in the facial-motion trajectory-tracking module of Figure 2. In this patent Kalman filtering is adopted to track the driver's facial movement: after the series of facial state vectors X_t, X_{t+1}, X_{t+2}, ... is obtained according to formula (15), with facial-region center points (x_t, y_t), (x_{t+1}, y_{t+1}), (x_{t+2}, y_{t+2}), ..., a trajectory line of the facial-region center can be drawn in time order.
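
The tracking step can be sketched as a standard constant-velocity Kalman filter over the face-center detections; the patent's own state vector comes from its formula (15), so the matrices below are generic textbook choices, not the patent's.

```python
# A constant-velocity Kalman tracker over face-center detections, as a sketch
# of the tracking step above. The patent's state vector comes from its own
# formula (15); the matrices here are generic textbook choices, not its.
import numpy as np

def kalman_track(points, q=1e-2, r=1.0):
    """points: sequence of (x, y) detections; returns filtered centers."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = 1.0                     # x' = x + vx, y' = y + vy
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                     # we observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([points[0][0], points[0][1], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in points:
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + K @ (np.asarray(z, float) - H @ x)      # update with detection
        P = (np.eye(4) - K @ H) @ P
        track.append((float(x[0]), float(x[1])))
    return track
```
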

A reference criterion is needed first; this invention uses the mean position of the facial-region center over a period after the driver starts driving, computed as shown in formula (48):

x̄ = Σ_{i=t}^{t+n} x_i / n,  ȳ = Σ_{i=t}^{t+n} y_i / n    (48)

In this patent the nodding state is defined as the state in which the number of video frames in which the forward displacement of the face-center position continuously exceeds a certain threshold accumulates beyond a certain frame count; it is expressed with formula (49):

∑φ(Doo_h) ≥ β_h    (49)

Wherein,

φ(Doo_h) = { 1, Doo_h ≥ α_h; 0, Doo_h < α_h }    (50)

Formula (49) means that one nod is deemed to have occurred when the number of consecutive video frames in which the nodding degree is at least α_h accumulates beyond β_h frames; the treatment is essentially the same as for blinks, and based on empirical analysis of many simulated nodding videos the values must be adjusted to the actual sampling rate;

Doo_h = sqrt((x_n - x̄)² + (y_n - ȳ)²)    (51)

In the formula, (x_n, y_n) is the center position of the driver's facial region at frame n, and (x̄, ȳ) is the mean center position of the facial region over a period after the driver starts driving.

The greater the number of nods, or the longer the head stays displaced from its normal driving position by more than the deviation value α_h, the higher the driver's degree of fatigue.
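
A sketch of formulas (48)-(51) follows; the calibration-window length and the values of α_h and β_h below are assumed parameters, not the patent's.

```python
# Sketch of formulas (48)-(51): nods counted from the face-center displacement
# against the mean position over an initial calibration window. The window
# length and the alpha_h / beta_h values are assumed parameters.
import math

def nod_count(centers, calib_n=50, alpha_h=15.0, beta_h=4):
    calib = centers[:calib_n]
    xbar = sum(p[0] for p in calib) / len(calib)   # formula (48)
    ybar = sum(p[1] for p in calib) / len(calib)
    nods = run = 0
    for x, y in centers[calib_n:]:
        doo_h = math.hypot(x - xbar, y - ybar)     # formula (51)
        if doo_h >= alpha_h:                       # formula (50)
            run += 1
            if run == beta_h:                      # formula (49): one nod
                nods += 1
        else:
            run = 0
    return nods
```
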

The image understanding and intelligent decisions above are all carried out on the video image of the driver's-seat perspective view.

Whether the vehicle is being driven fatigued is also judged from the road environment and the driving state; the vehicle-heading detection module in Figure 2 mainly realizes the following detection. First the vehicle's direction of travel is monitored, chiefly to detect whether the driver has lost the sense of direction and is weaving left and right on the highway. In this invention the environment ahead of the vehicle, obtained through the omnidirectional vision sensor, is examined; the detection references are the white line on the road or the green belt at the road's upper edge, and the video image is used to detect whether the vehicle's direction of advance deviates from that white line or green belt, as shown in Figure 15. A line parallel to the reference line (the white line or edge line) is drawn through the vehicle's center point, and whether the driver is weaving on the highway is judged from the left-right deviation (positive and negative) of the vehicle's running track from that parallel line. A deviation threshold is set; when a running-track point exceeds the threshold, the current deviation state of the vehicle is checked and compared with the previous one to decide whether weaving has occurred. If the previous state was a positive deviation and the present detected state is a negative deviation or no deviation, one deviation event is considered to have occurred, and its time and degree are recorded. The program judges from the time and degree of each deviation whether fatigue driving or dangerous driving is present.
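
The deviation bookkeeping described above can be sketched as a small state machine over the signed lateral offsets of the track points from the reference parallel line; the threshold value and the (frame, offset) event format are illustrative assumptions.

```python
# The deviation bookkeeping above as a small state machine over signed
# lateral offsets of track points from the reference parallel line. The
# threshold value and the (frame, offset) event format are assumptions.

def weaving_events(offsets, thresh=0.5):
    """Record an event whenever the vehicle leaves a deviation state, i.e.
    the previous state was a positive or negative deviation and the current
    state differs, as described in the text."""
    events, prev = [], 0        # prev in {-1: negative, 0: none, +1: positive}
    for i, d in enumerate(offsets):
        state = 0 if abs(d) <= thresh else (1 if d > 0 else -1)
        if prev != 0 and state != prev:   # left the previous deviation state
            events.append((i, d))         # record the time and degree
        prev = state
    return events
```
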

The state of the driver's operation of the steering wheel is also detected, mainly to reveal whether the driver's responses have become dulled: slow judgment, stiff movements, a slow rhythm. In this invention the detection of steering-wheel actions is chiefly carried out in the steering-wheel motion-detection module of Figure 2, which of course also uses some results of the vehicle-heading detection module. Although this is the detection of an operating state, as shown in Figure 1 it also reflects, as a follow-up action, a loss or abnormality of perception and judgment. This operation detection is closely related to the road conditions and the surrounding vehicles: if a curve, a lane change or a junction is detected on the road ahead (or a traffic sign is encountered), or a vehicle ahead enters the driving lane, and no steering-wheel action follows, the movement response can be considered abnormal. Such a situation is extremely dangerous and is the manifestation, in the driver's operation, of the fatigue symptoms described above. The detection of the steering-wheel state is thus closely tied to the monitoring of the vehicle's heading and reflects the driver's movement response to the road conditions.

Figure 12 is the general flow chart of safe-driving detection in the safe driving auxiliary device based on omnidirectional computer vision. After the driver starts the vehicle and power is connected, the omnidirectional vision sensor obtains the omnidirectional video image inside and outside the car, and the software unwraps it, according to the three detection areas defined by the user, into three perspective views: the driver's-seat view, the steering-wheel view and the road-environment view. First the road environment ahead is recognized; then, from the road-environment perspective view, it is detected whether the vehicle has departed from the driving line, and if a departure is found the driver is reminded through the output device and required to correct the travel direction. Next it is detected whether there is a sign board on the road, and any detected sign is announced to the driver through the output device; immediately afterwards it is judged whether the driver operates the steering wheel, and if, a while after starting to drive, no steering action at all is found, suspected driving fatigue is assumed; this detection uses the steering-wheel perspective view. Then, from the driver's-seat perspective view, the trajectories of the driver's mouth, eyes and face center are detected and, according to the human fatigue characteristics, it is judged whether the driver is driving fatigued. If a fatigue-driving situation or tendency is detected, the output device reminds the driver with safe-driving information; the reminder may use sound, light or a rousing odor, and for serious fatigue driving forced measures such as automatic emergency braking may be adopted.
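
The per-frame decision logic of this flow can be distilled into one runnable sketch; every key name and threshold below is a placeholder of ours, not the patent's interface.

```python
# The per-frame decision logic of the Figure 12 flow, distilled into one
# runnable function. Every key name and threshold below is a placeholder of
# ours, not the patent's interface.

def decide(frame):
    """frame: dict of per-frame detection results; returns alerts to issue."""
    alerts = []
    if frame.get("lane_deviation", 0.0) > 0.5:        # departed driving line
        alerts.append("correct travel direction")
    if frame.get("road_sign"):                        # sign board detected
        alerts.append("road sign ahead")
        if not frame.get("steering_action", True):    # no steering response
            alerts.append("no steering response")
    if frame.get("yawns", 0) >= 2 or frame.get("blink_rate", 0.0) > 0.5:
        alerts.append("fatigue driving")              # remind by sound/light
    return alerts
```
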

Claims (6)

1. A safe driving auxiliary device based on omnidirectional computer vision, characterized in that: the safe driving auxiliary device comprises an omnidirectional vision sensor for obtaining the omnidirectional video information inside and outside the car, and a safe-driving auxiliary controller for detecting various kinds of fatigue driving and giving a warning when a fatigue-driving situation occurs; the omnidirectional vision sensor is installed to the right of the driver's seat inside the car; the output of the omnidirectional vision sensor is connected to the safe-driving auxiliary controller; the omnidirectional vision sensor comprises an outward-convex catadioptric mirror for reflecting objects in the field of view inside and outside the car, a dark cone for preventing light refraction and light saturation, a transparent cylinder, and a camera for shooting the image formed on the convex mirror surface; the convex catadioptric mirror is located on top of the transparent cylinder with its surface facing downwards, the dark cone is fixed at the bottom center of the convex catadioptric mirror, and the camera faces upwards towards the convex catadioptric mirror;
The safe-driving auxiliary controller comprises:
A detection-range division module, used to divide the omnidirectional video information obtained from the omnidirectional vision sensor into video perspective images of the vehicle-front view, the driver's-seat view and the steering-wheel view;
A face locating module, used to locate the driver's face; the facial skin color obeys a two-dimensional Gaussian distribution in the CrCb chrominance space, and the probability density function of the skin-color distribution model is expressed by formula (4):
f(x_1, x_2) = (1 / (2π|C|^{1/2})) exp{-(1/2)(X - μ)^T C^{-1} (X - μ)}    (4)
Wherein μ = (Cr, Cb)^T = (156.560, 117.436)^T, the two values in this vector being the means of the color components Cr and Cb respectively, and C is the covariance matrix of Cr and Cb, expressed by formula (5):
C = [σ_rr²  σ_rb; σ_br  σ_bb²] = [160.130  12.143; 12.143  299.457]    (5)
Wherein σ_rr² and σ_bb² are respectively the variances of Cr and Cb, and σ_rb, σ_br are respectively their covariances; according to the Gaussian distribution model of the skin color, the similarity between the color of every pixel in the facial image and the skin color is calculated, the similarity calculation formula being:
P(Cr, Cb) = exp{-0.5(X - μ)^T C^{-1} (X - μ)}    (6)
X = (Cr, Cb)^T    (7)
Wherein X = (Cr, Cb)^T is the vector of a pixel in the Cr, Cb chrominance space, and the values of C and μ are the same as in formulas (4) and (5);
After the similarity values are calculated, a normalization converts the similarities into gray values between 0 and 255, giving the gray-scale map of the driver's-seat-view image; the gray-scale map is binarized with a preset threshold, the skin-color area becoming pure white and the remainder pure black; the horizontal projection of the image gray histogram yields the top and bottom extremes of the extracted region in the vertical direction, and the vertical projection yields the left and right extremes in the horizontal direction;
If the face length is h and the width w, then according to the dimensional constraints of the human face, if the aspect-ratio condition 0.8 ≤ h/w ≤ 1.5 is satisfied, the region is confirmed as a face-positioning image;
A lip locating and yawn detecting module, used to locate the driver's lips and detect yawning: the red pixels in the face-positioning image are projected horizontally and vertically to determine the mouth region; the longest distance between two adjacent troughs of the horizontal projection is the length of the lips, and the maximum distance between two adjacent troughs of the vertical projection is the width of the lips; the lip feature points when the lips are in the closed and open states are defined in turn as: the left and right mouth-corner points, the uppermost point of the upper-lip center, the lowermost point of the upper-lip center, the uppermost point of the lower-lip center, and the lowermost point of the lower-lip center; the ratio of the distance h_m, between the uppermost point of the upper-lip center and the lowermost point of the lower-lip center, to the mouth length W_m is defined according to the mouth model as the mouth-opening parameter Doo_m, as shown in formula (32):
Doo_m = h_m / W_m    (32)
A yawn is judged to occur when a large mouth-open state persists for a period of time, as shown in formula (33):
∑φ(Doo_m) ≥ β    (33)
Wherein, φ(Doo_m) = { 1, Doo_m ≥ α; 0, Doo_m < α }    (34)
The duration of one yawn is defined as the time from the start of the yawn to its end, expressed with formula (35):
T_y = t_2 - t_1    (35)
That is, the interval over which the mouth-opening degree remains at least α; each time a yawn is found, the number and duration of yawns within a period of time are accumulated with formula (36):
Total = Σ_{t}^{t+m} T_y    (36)
A fatigue-driving evaluation and alarm module, used to judge fatigue driving according to preset thresholds on the number or duration of yawns within a period of time: if the recorded number or duration exceeds the set threshold, fatigue driving is determined and an alarm command is sent to the alarm device.
2. The safe driving auxiliary device based on omnidirectional computer vision as claimed in claim 1, characterized in that the safe-driving auxiliary controller further comprises:
An eye recognition module, used to calibrate the eye feature region and feature points from the left and right mouth-corner points of the lips and the facial normalization parameter; if the coordinates of the left mouth corner are (leftx, lefty) and those of the right mouth corner (rightx, righty), then according to the arrangement of the facial features the mouth-length formula is expressed with formula (31):
W_m = sqrt((rightx-leftx)*(rightx-leftx)+(righty-lefty)*(righty-lefty))    (31)
The height HighofEye of one eye is calculated with formula (37):
HighofEye = 0.31 * W_m    (37)
The length LengthofEye of one eye is calculated with formula (38):
LengthofEye = 0.63 * W_m    (38)
The origin coordinates (x_2, y_2) of the one-sided eye region are calculated with formulas (39) and (40):
x_2 = rightx - 0.1 * LengthofEye    (39)
y_2 = righty - 1.35 * W_m    (40)
The projection of the black pixels in the image is used to determine the eye boundary: the ordinate of the black-pixel vertical projection is the total number of pixels in a column judged to be black, the column length being N; the abscissa is the column index, of length M; letting the region size be M*N and the value of each pixel be I_e(x, y), the projection functions of the black pixels in the vertical and horizontal directions are calculated with formulas (41) and (42):
P_ey(x) = Σ_{y=1}^{N} I_e(x, y)    (41)
P_ex(y) = Σ_{x=1}^{M} I_e(x, y)    (42)
During the horizontal projection, the eye height HighofEye is estimated from the mouth length, and the distances W between every two consecutive troughs are examined from the bottom of the eye region upwards; when the first W close to the right-eye height HighofEye is met, the region between those two troughs is taken as the height band of the eye;
During the vertical projection, the eye length LengthofEye is estimated from the mouth length and the search starts from the right side of the projection; when the distance L between two consecutive troughs is close to the right-eye length LengthofEye, the region between those two troughs is taken as the length band of the eye; a blink detection module, in which the eye-openness calculation adopts the eye-model definition of formula (43):
Doo_e = h_e / W_e    (43)
Formula (43) expresses the degree of eye openness as the aspect ratio of the eye's bounding rectangle, where h_e is the eye-opening extent and W_e, the eye width, is taken as the distance between the two eye-corner points;
The blink state is defined as the state in which the number of consecutive video frames in which the eye openness is at most a set threshold accumulates beyond a certain frame count, expressed with formula (44):
∑φ(Doo_e) ≥ β_e    (44)
Wherein, φ(Doo_e) = { 1, Doo_e ≤ α_e; 0, Doo_e > α_e }    (45)
Formula (45) means that one blink is deemed to have occurred when the number of consecutive video frames in which the eye openness is at most α_e accumulates beyond β_e frames;
The blink duration is defined as the interval from eye closure to reopening within one blink, expressed with formula (46):
T_b = t_2 - t_1    (46)
T_b represents the span of time during which the openness remains at most α_e;
The blink frequency is defined as the reciprocal of the interval between the two most recent blinks, expressed with formula (47):
f_b = 1 / (t_n - t_2)    (47);
In the fatigue-driving evaluation and alarm module a blink-frequency threshold is set; if the recorded blink frequency exceeds the threshold, fatigue driving is determined and an alarm command is sent to the alarm device.
3. The safe driving auxiliary device based on omnidirectional computer vision as claimed in claim 1 or 2, characterized in that the safe-driving auxiliary controller further comprises:
A vehicle-heading detection module, used, from the vehicle-front-view image, to take the white line on the road or the green belt at the road's upper edge as the reference line, to detect from the video image whether the vehicle's direction of advance deviates from the white line or green belt, to draw through the vehicle's center point a line parallel to the reference line, and to detect whether the vehicle's running track deviates left or right from that parallel line; when a running-track point exceeds the set deviation threshold, the current deviation state is checked, and if the previous deviation state was a positive deviation and the present detected state is a negative deviation or no deviation, one deviation event is considered to have occurred, and its time and degree are recorded;
In the fatigue-driving evaluation and alarm module an offset alarm distance is set; if the current offset distance exceeds the offset alarm distance, fatigue driving or dangerous driving is determined and an alarm command is sent to the alarm device.
4. The safe driving auxiliary device based on omnidirectional computer vision as claimed in claim 3, characterized in that the safe-driving auxiliary controller further comprises:
A steering-wheel state detection module, used, from the steering-wheel-view image, to detect whether there is a current steering-wheel operation whenever a curve, a lane change or a junction is detected on the road ahead, or a vehicle ahead enters the driving lane;
In the fatigue-driving evaluation and alarm module, if there is no current steering-wheel operation, dangerous driving is determined and an alarm command is sent to the alarm device.
5. The safe driving auxiliary device based on omnidirectional computer vision as claimed in claim 1 or 2, characterized in that the detection-range division module further comprises:
A perspective-view unwrapping unit, used to find the three-dimensional space point A(X, Y, Z) from the point P(i, j) on the perspective-projection plane, obtaining the transformation between the projection plane and three-dimensional space; the transformation is expressed with formula (60):
X = R*cosβ - i*sinβ    (60)
Y = R*sinβ + i*cosβ
Z = D*sinγ - j*cosγ
R = D*cosγ + j*sinγ
In the above formulas: D is the distance from the perspective-projection plane to the hyperboloid focus O_m; the angle β is the angle of the projection of the incident ray on the XY plane; the angle γ is the angle between the incident ray and the horizontal plane through the hyperboloid focus; the i axis is the transverse axis parallel to the XY plane; the j axis is the longitudinal axis perpendicular to both the i axis and the O_m-G axis; and G is the origin of the perspective projection;
Substituting the point A(X, Y, Z) obtained from formula (60) into formulas (57) and (58) yields the point P(x, y) on the imaging plane corresponding to the point P(i, j) on the perspective-projection plane:
x = X*f*(b² - c²) / ((b² + c²)*Z - 2*b*c*sqrt(X² + Y² + Z²))    (57)
y = Y*f*(b² - c²) / ((b² + c²)*Z - 2*b*c*sqrt(X² + Y² + Z²))    (58)
In the above formulas, c denotes the focus of the hyperbolic mirror, a and b are respectively the lengths of the real and imaginary axes of the hyperbolic mirror, and f denotes the distance from the imaging plane to the virtual focus of the hyperbolic mirror.
6. The safe driving auxiliary device based on omnidirectional computer vision as claimed in claim 1 or 2, characterized in that the catadioptric mirror is designed as a hyperbolic mirror; the optical system constituted by the hyperbolic mirror is represented by the following five equations:
((X² + Y²)/a²) - (Z²/b²) = -1, where Z > 0    (52)
c = sqrt(a² + b²)    (53)
β = tan⁻¹(Y/X)    (54)
α = tan⁻¹[((b² + c²)sinγ - 2bc) / ((b² + c²)cosγ)]    (55)
γ = tan⁻¹[f / sqrt(X² + Y²)]    (56)
In the formulas, X, Y, Z denote the space coordinates, c denotes the focus of the hyperbolic mirror, 2c denotes the distance between the two foci, a and b are respectively the lengths of the real and imaginary axes of the hyperbolic mirror, β denotes the angle of the incident ray on the XY plane, i.e. the azimuth, α denotes the angle of the incident ray on the XZ plane, i.e. the depression angle, γ is the angle between the incident ray and the horizontal plane through the hyperboloid focus, and f denotes the distance from the imaging plane to the virtual focus of the hyperbolic mirror.
CNB2007100676334A 2007-03-21 2007-03-21 Safe driving auxiliary device based on omnidirectional computer vision CN100462047C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007100676334A CN100462047C (en) 2007-03-21 2007-03-21 Safe driving auxiliary device based on omnidirectional computer vision


Publications (2)

Publication Number Publication Date
CN101032405A CN101032405A (en) 2007-09-12
CN100462047C true CN100462047C (en) 2009-02-18

Family

ID=38729287

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007100676334A CN100462047C (en) 2007-03-21 2007-03-21 Safe driving auxiliary device based on omnidirectional computer vision

Country Status (1)

Country Link
CN (1) CN100462047C (en)

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4372804B2 (en) * 2007-05-09 2009-11-25 トヨタ自動車株式会社 Image processing device
JP5023214B2 (en) * 2007-09-17 2012-09-12 ボルボ テクノロジー コーポレイション How to tell the deviation of car parameters
US8570176B2 (en) * 2008-05-28 2013-10-29 7352867 Canada Inc. Method and device for the detection of microsleep events
CN101732055B (en) * 2009-02-11 2012-04-18 北京智安邦科技有限公司 Method and system for testing fatigue of driver
JP5444898B2 (en) 2009-07-09 2014-03-19 アイシン精機株式会社 Status detection device, status detection method and program
US8489253B2 (en) * 2009-09-30 2013-07-16 Honda Motor Co., Ltd. Driver state assessment device
JP5593695B2 (en) * 2009-12-28 2014-09-24 ソニー株式会社 Image processing apparatus, image processing method, and program
CN102088471A (en) * 2010-03-16 2011-06-08 上海海事大学 Health and safety monitoring system for personnel on board based on wireless sensor network
CN102097003B (en) * 2010-12-31 2014-03-19 北京星河易达科技有限公司 Intelligent traffic safety system and terminal
CN102184388A (en) * 2011-05-16 2011-09-14 苏州两江科技有限公司 Face and vehicle adaptive rapid detection system and detection method
EP2564777B1 (en) * 2011-09-02 2017-06-07 Volvo Car Corporation Method for classification of eye closures
CN102413282B (en) * 2011-10-26 2015-02-18 惠州Tcl移动通信有限公司 Self-shooting guidance method and equipment
CN102723003B (en) * 2012-05-28 2015-01-07 华为终端有限公司 Reading state reminding method and mobile terminal
CN103198616B (en) * 2013-03-20 2015-10-28 重庆大学 Based on method for detecting fatigue driving and the system of the identification of driver's neck moving characteristic
TWI506564B (en) * 2013-05-29 2015-11-01
CN104464192B (en) * 2013-09-18 2017-02-08 武汉理工大学 Device and method for detection and early warning of unstable driving state of vehicle driver
KR101386823B1 (en) * 2013-10-29 2014-04-17 김재철 2 level drowsy driving prevention apparatus through motion, face, eye,and mouth recognition
KR102113767B1 (en) * 2013-11-28 2020-05-21 현대모비스 주식회사 Device for detecting the status of the driver and method thereof
US9971411B2 (en) * 2013-12-10 2018-05-15 Htc Corporation Method, interactive device, and computer readable medium storing corresponding instructions for recognizing user behavior without user touching on input portion of display screen
CN103617421A (en) * 2013-12-17 2014-03-05 上海电机学院 Fatigue detecting method and system based on comprehensive video feature analysis
CN103824421B (en) * 2014-03-26 2016-11-16 重庆长安汽车股份有限公司 Active safety fatigue driving detection caution system
KR101589427B1 (en) * 2014-04-04 2016-01-27 현대자동차 주식회사 Apparatus and method for controlling vehicle drive based on driver fatigue
CN103902986B (en) * 2014-04-17 2017-04-26 拓维信息系统股份有限公司 Method for achieving organ positioning in mobile game facial makeup with insect tentacle functions as references
CN104013414B (en) * 2014-04-30 2015-12-30 深圳佑驾创新科技有限公司 Driver fatigue state monitoring system based on a smart mobile phone
CN103983239B (en) * 2014-05-21 2016-02-10 南京航空航天大学 Distance measurement method based on wide lane lines
DE102014220759B4 (en) * 2014-10-14 2019-06-19 Audi Ag Monitoring a degree of attention of a driver of a vehicle
CN104408878B (en) * 2014-11-05 2017-01-25 唐郁文 Vehicle fleet fatigue driving early warning monitoring system and method
US9866163B2 (en) 2015-03-16 2018-01-09 Thunder Power New Energy Vehicle Development Company Limited Method for controlling operating speed and torque of electric motor
US9586618B2 (en) * 2015-03-16 2017-03-07 Thunder Power Hong Kong Ltd. Vehicle control system for controlling steering of vehicle
CN105589459B (en) * 2015-05-19 2019-07-12 中国人民解放军国防科学技术大学 Semi-autonomous remote control method for unmanned vehicles
CN105069431B (en) * 2015-08-07 2018-09-14 成都明图通科技有限公司 Face localization method and device
US10358143B2 (en) * 2015-09-01 2019-07-23 Ford Global Technologies, Llc Aberrant driver classification and reporting
CN105303830A (en) * 2015-09-15 2016-02-03 成都通甲优博科技有限责任公司 Driving behavior analysis system and analysis method
CN105469467A (en) * 2015-12-04 2016-04-06 北海创思电子科技产业有限公司 EDR (event data recorder) capable of monitoring fatigue driving
CN106904169A (en) * 2015-12-17 2017-06-30 北京奇虎科技有限公司 Traffic safety early warning method and device
CN105718872B (en) * 2016-01-15 2020-02-04 武汉光庭科技有限公司 Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle
CN205440259U (en) * 2016-03-25 2016-08-10 深圳市兼明科技有限公司 Safe driving device based on iris recognition
CN105788176B (en) * 2016-05-25 2018-01-26 厦门理工学院 Fatigue driving monitoring and reminding method and system
CN106228293A (en) * 2016-07-18 2016-12-14 重庆中科云丛科技有限公司 Teaching evaluation method and system
CN106249877A (en) * 2016-07-18 2016-12-21 广东欧珀移动通信有限公司 Control method and control device
CN106236046A (en) * 2016-09-05 2016-12-21 合肥飞鸟信息技术有限公司 Driver fatigue monitoring system
CN106571015A (en) * 2016-09-09 2017-04-19 武汉依迅电子信息技术有限公司 Driving behavior data collection method based on Internet
CN106448265A (en) * 2016-10-27 2017-02-22 广州微牌智能科技有限公司 Collecting method and device of driver's driving behavior data
WO2018094939A1 (en) * 2016-11-26 2018-05-31 华为技术有限公司 Information prompt method and terminal
CN106817364B (en) * 2016-12-29 2020-02-07 北京神州绿盟信息安全科技股份有限公司 Brute force cracking detection method and device
CN107161152A (en) * 2017-05-24 2017-09-15 成都志博科技有限公司 Driver's detecting system of lane shift monitoring
CN107358151A (en) * 2017-06-02 2017-11-17 广州视源电子科技股份有限公司 Eye movement detection method and device, and liveness detection method and system
CN107368777A (en) * 2017-06-02 2017-11-21 广州视源电子科技股份有限公司 Smile movement detection method and device, and liveness detection method and system
CN107229922A (en) * 2017-06-12 2017-10-03 西南科技大学 Fatigue driving monitoring method and device
CN108099915A (en) * 2017-12-25 2018-06-01 芜湖皖江知识产权运营中心有限公司 Fatigue driving recognition control system for intelligent vehicles
CN108162754A (en) * 2017-12-25 2018-06-15 芜湖皖江知识产权运营中心有限公司 Vehicle audio input control system for intelligent travel
CN108128241A (en) * 2017-12-25 2018-06-08 芜湖皖江知识产权运营中心有限公司 Fatigue driving recognition control system for intelligent vehicles
AU2018421183A1 (en) * 2018-04-25 2020-07-02 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for blink action recognition based on facial feature points
CN110766912A (en) * 2018-07-27 2020-02-07 长沙智能驾驶研究院有限公司 Driving early warning method, device and computer readable storage medium
CN109157219A (en) * 2018-08-08 2019-01-08 成都工业学院 Fatigue detecting processing system and fatigue detecting processing method
CN109447025A (en) * 2018-11-08 2019-03-08 北京旷视科技有限公司 Fatigue detection method, device, system and computer readable storage medium
CN109584507B (en) * 2018-11-12 2020-11-13 深圳佑驾创新科技有限公司 Driving behavior monitoring method, device, system, vehicle and storage medium
CN111291590A (en) * 2018-12-06 2020-06-16 广州汽车集团股份有限公司 Driver fatigue detection method, driver fatigue detection device, computer equipment and storage medium
CN109635737A (en) * 2018-12-12 2019-04-16 中国地质大学(武汉) Vehicle navigation positioning assistance method based on visual recognition of pavement marking lines
CN109615826B (en) * 2019-01-22 2020-08-28 宁波财经学院 Intelligent doze-proof and alarm system
CN110751011A (en) * 2019-05-23 2020-02-04 北京嘀嘀无限科技发展有限公司 Driving safety detection method, driving safety detection device and vehicle-mounted terminal
CN110427815B (en) * 2019-06-24 2020-07-10 特斯联(北京)科技有限公司 Video processing method and device for realizing interception of effective contents of entrance guard
CN110579360A (en) * 2019-10-22 2019-12-17 东北林业大学 Automobile control behavior parameter acquisition equipment and method
CN111880545A (en) * 2020-02-17 2020-11-03 李华兰 Automatic driving device, system, automatic driving decision processing method and device
CN111415524A (en) * 2020-03-31 2020-07-14 桂林电子科技大学 Intelligent processing method and system for fatigue driving

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1613425A (en) * 2004-09-15 2005-05-11 南京大学 Method and system for drivers' fatigue prealarming biological identification
US6927694B1 (en) * 2001-08-20 2005-08-09 Research Foundation Of The University Of Central Florida Algorithm for monitoring head/eye motion for driver alertness with one camera
CN1878297A (en) * 2005-06-07 2006-12-13 浙江工业大学 Omnibearing vision device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Segmentation and extraction of lips from 24-bit color face images. Lu Jixiang et al. Computer Engineering (计算机工程), Vol. 29, No. 2. 2003 *
Research on an integrated driving-safety assurance system based on machine vision. Wang Rongben et al. Journal of Shandong Jiaotong University (山东交通学院学报), Vol. 14, No. 2. 2006 *

Also Published As

Publication number Publication date
CN101032405A (en) 2007-09-12

Similar Documents

Publication Publication Date Title
US9665802B2 (en) Object-centric fine-grained image classification
CN105769120B (en) Fatigue driving detection method and device
Bila et al. Vehicles of the future: A survey of research on safety issues
US9251598B2 (en) Vision-based multi-camera factory monitoring with dynamic integrity scoring
CN104573646B (en) Front-of-vehicle pedestrian detection method and system based on laser radar and binocular camera
Choi et al. A general framework for tracking multiple people from a moving camera
Seshadri et al. Driver cell phone usage detection on strategic highway research program (shrp2) face view videos
US9547798B2 (en) Gaze tracking for a vehicle operator
Hsiao et al. Occlusion reasoning for object detection under arbitrary viewpoint
Cyganek et al. Hybrid computer vision system for drivers' eye recognition and fatigue monitoring
US9180887B2 (en) Driver identification based on face data
John et al. Traffic light recognition in varying illumination using deep learning and saliency map
Garcia et al. Sensor fusion methodology for vehicle detection
CN106908783B (en) Obstacle detection method based on multi-sensor information fusion
Gandhi et al. Pedestrian protection systems: Issues, survey, and challenges
Tawari et al. Continuous head movement estimator for driver assistance: Issues, algorithms, and on-road evaluations
Trivedi et al. Looking-in and looking-out of a vehicle: Computer-vision-based enhanced vehicle safety
CN106203274B (en) Real-time pedestrian detection system and method in video monitoring
Sivaraman et al. Vehicle detection by independent parts for urban driver assistance
Ji et al. Real time visual cues extraction for monitoring driver vigilance
US9552524B2 (en) System and method for detecting seat belt violations from front view vehicle images
Alefs et al. Road sign detection from edge orientation histograms
US9405982B2 (en) Driver gaze detection system
Guo et al. Pedestrian detection for intelligent transportation systems combining AdaBoost algorithm and support vector machine
EP1671216B1 (en) Moving object detection using low illumination depth capable computer vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
C14 Grant of patent or utility model
ASS Succession or assignment of patent right

Owner name: HAKIM INFORMATION TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: TANG YIPING

Effective date: 20110420

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20110420

Address after: 6/F, Building 18, No. 176 Tianmushan Road (West Lake Soyea Software Park), Hangzhou City, Zhejiang Province 310012

Patentee after: Hakim Information Technology Co., Ltd.

Address before: College of Information Engineering, Zhejiang University of Technology, Area 6, Zhaohui, Xiacheng District, Hangzhou City, Zhejiang Province 310014

Patentee before: Tang Yiping

COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 310014 COLLEGE OF INFORMATION ENGINEERING, ZHEJIANG UNIVERSITY OF TECHNOLOGY, AREA 6, ZHAOHUI, XIACHENG DISTRICT, HANGZHOU CITY, ZHEJIANG PROVINCE TO: 310012 6/F, BUILDING 18, NO. 176 (WESTLAKE SOYEA SOFTWARE PARK), TIANMUSHAN ROAD, HANGZHOU CITY, ZHEJIANG PROVINCE

CP03 Change of name, title or address

Address after: Room 1101, South Building, Handing International Building, No. 5 Yongfuqiao Road, Xiacheng District, Hangzhou City, Zhejiang Province

Patentee after: Hakim Unique Internet Co., Ltd.

Address before: 6/F, Building 18, No. 176 Tianmushan Road (West Lake Soyea Software Park), Hangzhou City, Zhejiang Province 310012

Patentee before: Hakim Information Technology Co., Ltd.