CN1497493A - Motion-based image segmentation apparatus for occupant tracking using a Hausdorff distance heuristic - Google Patents

Motion-based image segmentation apparatus for occupant tracking using a Hausdorff distance heuristic Download PDF

Info

Publication number
CN1497493A
CN1497493A CNA2003101003920A CN200310100392A
Authority
CN
China
Prior art keywords
image
template
occupant
split
current environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2003101003920A
Other languages
Chinese (zh)
Inventor
M. E. Farmer
Xunchang Chen
Li Wen
Chuan Zhou
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eaton Corp
Original Assignee
Eaton Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eaton Corp filed Critical Eaton Corp
Priority to CNA2003101003920A priority Critical patent/CN1497493A/en
Publication of CN1497493A publication Critical patent/CN1497493A/en
Pending legal-status Critical Current

Abstract

A segmentation system is disclosed that allows a segmented image of a vehicle occupant to be identified within an overall image (the "ambient image") of the area that includes the image of the occupant. The segmented image from a past sensor measurement can help determine a region of interest within the most recently captured ambient image. To further reduce processing time, the system can be configured to assume that the bottom of the segmented image does not move. Differences between the various ambient images captured by the sensor can be used to identify movement by the occupant, and thus the boundary of the segmented image. A template image is then fitted to the boundary of the segmented image over an entire range of predetermined angles. The validity of each fit within the range of angles can be evaluated. The template image can also be modified for future ambient images.

Description

Motion-based image segmentation apparatus for occupant tracking using a Hausdorff distance heuristic
Related applications
This continuation-in-part application claims the benefit of the following U.S. applications: Serial No. 09/870,151, filed May 30, 2001, entitled "A RULES-BASED OCCUPANT CLASSIFICATION SYSTEM FOR AIRBAG DEPLOYMENT"; Serial No. 09/901,805, filed July 10, 2001, entitled "IMAGE PROCESSING SYSTEM FOR DYNAMIC SUPPRESSION OF AIRBAGS USING MULTIPLE MODEL LIKELIHOODS TO INFER THREE DIMENSIONAL INFORMATION"; Serial No. 10/006,564, filed November 5, 2001, entitled "IMAGE PROCESSING SYSTEM FOR ESTIMATING THE ENERGY TRANSFER OF AN OCCUPANT INTO AN AIRBAG"; Serial No. 10/023,787, filed December 17, 2001, entitled "IMAGE SEGMENTATION SYSTEM AND METHOD"; and Serial No. 10/052,152, filed January 17, 2002, entitled "IMAGE PROCESSING SYSTEM FOR DETERMINING WHEN AN AIRBAG SHOULD BE DEPLOYED". The contents of these applications are incorporated herein by reference in their entirety.
Technical field
The present invention relates generally to systems and techniques for separating the "segmented image" of a moving person or object from the "ambient image" of the area that surrounds and includes that person or object. More particularly, the invention relates to techniques for separating the segmented image of a vehicle occupant from the ambient image of the area surrounding and including the occupant, so that appropriate airbag deployment decisions can be made.
Background of the invention
In many situations, it is desirable to separate the segmented image of a "target" person or object from an ambient image that includes the area surrounding that "target" person or object. Airbag deployment systems are a prominent example of such a situation. An airbag deployment system can make a variety of deployment decisions, and those decisions can relate in one way or another to characteristics obtainable from the occupant's segmented image. The type of occupant, the occupant's proximity to the airbag, the occupant's velocity and acceleration, the amount of energy the airbag must absorb in the impact between occupant and airbag, and other occupant characteristics can all be incorporated into the airbag deployment decision.
Significant obstacles exist in the prior art relating to image segmentation techniques. Existing image segmentation techniques are often unsuitable in high-speed environments, such as identifying the segmented image of an occupant in a vehicle that is braking or crashing. Existing image segmentation techniques do not use the occupant's motion to help identify the boundary between the occupant and the area surrounding the occupant. Rather than using motion-related characteristics to assist the segmentation process, prior art systems typically "fight" the occupant's motion by applying techniques best suited to slow-moving or even static situations.
Related to the challenge of motion is the challenge of timeliness. A standard video camera typically captures about 40 frames per second, and the sensors included in many practical airbag deployment configurations can capture sensor readings even faster than a standard camera. An airbag deployment system needs reliable real-time information on which to base deployment decisions. Rapidly captured images or other sensor data are of no help to the airbag deployment system if the occupant's segmented image cannot be identified before the next frame or sensor measurement is captured. An airbag deployment system can only be as fast as its slowest necessary processing step. An image segmentation technique that uses the occupant's motion to assist the segmentation process can, however, complete its work more quickly than techniques that fail to use motion as a distinguishing factor between the occupant and the surrounding area.
Prior art systems also typically fail to incorporate situational "intelligence" about the particular context into the segmentation process, and thus cannot focus on any particular area of the ambient image. A segmentation process designed specifically for airbag deployment processing can include situational "intelligence" unavailable to a general-purpose image segmentation process. For example, it may be desirable for the system to focus on a region of interest within the ambient image by using recent past segmented images, including past predictions of expected subsequent movement. Given the rapid capture of sensor measurements, the occupant's possible movement between two consecutive sensor measurements is limited. That limit is situation-dependent and closely tied to factors such as the time between the two sensor measurements.
Existing segmentation techniques also fail to incorporate useful assumptions about occupant motion within a vehicle. It may be desirable for an in-vehicle segmentation process to take into account the fact that an occupant normally pivots about his or her hips and moves little within the seat area. Such "intelligence" lets the system focus on the most important areas of the ambient image, saving valuable processing time.
The processing-time demands on existing segmentation systems are further aggravated by the failure of those systems to incorporate past data into current decisions. It would be desirable to track and predict occupant characteristics using techniques such as Kalman filters. It would also be desirable to apply a template to the ambient image, with the template adjusted at each sensor measurement. Using a reusable, modifiable template is a useful way to incorporate past data into current decisions and to reduce the need to regenerate images from scratch.
Summary of the invention
The present invention is an image segmentation system or method that can be used to produce a "segmented image" of an occupant or other "target" of interest from an "ambient image" that includes both the "target" and the vehicle environment surrounding the "target". The system can identify an approximate boundary of the segmented image by comparing the most recent ambient image (the "current ambient image") with an earlier ambient image (a "prior ambient image"). An adjustable template, derived from prior segmented images, can then be applied to the identified boundary of the segmented image to refine that boundary further.
In a preferred embodiment of the invention, only a portion of the ambient image is processed. By using information relating to prior segmented images, a "region of interest" can be determined within the current ambient image. In a preferred embodiment, the vehicle occupant is assumed to remain seated, so the area of the ambient image near the seat need not be processed. The bottom of the segmented image is thereby held fixed, allowing the system to ignore that portion of the ambient image. Many embodiments of the system will use an image thresholding heuristic to decide whether a particular ambient image can be relied upon. Too much motion may make an ambient image unreliable; too little motion may make it unnecessary.
A variety of different techniques can be used to fit and modify the template. In some embodiments, the template is rotated through a series of predetermined angles within an angular range. At each angle, various heuristics can be used to evaluate that particular "fit".
Various aspects of the present invention will become apparent to those skilled in the art from the following detailed description of the present embodiment and the accompanying drawings.
Description of drawings
Fig. 1 is a partial view showing an example of the surrounding environment for an image segmentation system;
Fig. 2 shows a high-level process flow illustrating an example of an image segmentation system that captures a segmented image from an ambient image and provides the segmented image to an airbag deployment system;
Fig. 3 is a flow chart showing an example of an image segmentation process incorporated into an airbag deployment process;
Fig. 4 is a flow chart showing an example of an image segmentation process;
Fig. 5 is an example of a histogram of pixel characteristics that can be used by an image segmentation system;
Fig. 6 is an example of a graph of a cumulative distribution function that can be used by an image segmentation system;
Fig. 7 is a block diagram showing an example of an image thresholding heuristic that can be incorporated into an image segmentation system;
Fig. 8a shows an example of a segmented image that can be subjected to template processing;
Fig. 8b shows an example of template processing;
Fig. 8c shows a segmented image undergoing template processing;
Fig. 8d shows an example of an ellipse that can be fitted to a segmented image;
Fig. 8e shows an example of an ellipse that has been fitted to a segmented image after template processing;
Fig. 8f shows an example of a new contour generated for use in future template processing;
Fig. 9 shows an example of an upper ellipse representing an occupant, and examples of some potentially important characteristics of that upper ellipse;
Fig. 10 shows examples of the upper ellipse in the left-leaning, right-leaning, and centered states;
Fig. 11 is a Markov chain diagram showing three states/modes (left-leaning, right-leaning, and centered) and the probabilities associated with transitions between those states;
Fig. 12 is a Markov chain diagram showing three states/modes (human, stationary, and crash) and the probabilities associated with transitions between those states;
Fig. 13 is a flow chart showing an example of the processing that can be performed by a shape tracker and predictor;
Fig. 14 is a flow chart showing an example of the processing that can be performed by a motion tracker and predictor.
Detailed description of the preferred embodiment
The present invention is an image segmentation system that can capture the "segmented image" of an occupant or other "target" object (collectively, the "occupant") from an "ambient image" that includes the "target" and the area surrounding the "target".
I. Partial view of the surrounding environment
Referring now to the drawings, Fig. 1 shows a partial view of the surrounding environment that can potentially apply to many different embodiments of an image segmentation system 16. If an occupant 18 is present, the occupant 18 may sit on a seat 20. In some embodiments, a video camera or any other sensor capable of rapidly capturing images (collectively, "camera" 22) can be attached to the roof liner 24, above the occupant 18 and closer to the front windshield than the occupant 18. The camera 22 can be placed at a slight downward angle toward the occupant 18 in order to capture changes in the angle of the occupant's 18 upper torso resulting from forward or backward movement in the seat 20. There are many potential locations for the camera 22, and they are well known in the art. Moreover, the system 16 can use a wide range of different cameras 22, including standard cameras that typically capture about 40 images per second. The system 16 can also use cameras with higher or lower capture rates.
In some embodiments, the camera 22 can contain or include an infrared or other light source operating on direct current to provide constant illumination in dark conditions. The system 16 can be designed for use in dark conditions such as night, fog, heavy rain, dense clouds, a solar eclipse, and any other environment darker than ordinary daylight conditions. The system 16 can also be used in brighter light conditions. The use of infrared illumination can hide the light source from the occupant 18. Alternative embodiments may use one or more of the following: a light source separate from the camera, a light source emitting non-infrared light, and a light source that emits light only intermittently using alternating current. The system 16 can incorporate a wide range of other lighting and camera 22 configurations. Moreover, different heuristics and thresholds can be applied by the system 16 depending on the lighting conditions. The system can thus use "intelligence" relating to the current environment of the occupant 18.
A computer, computer network, or any other computing device capable of implementing a heuristic, running a computer program, or being configured to house the image segmentation logic (collectively, the "computer system" 30) can be any computer or device capable of performing the segmentation process described below. The computer system 30 can be located virtually anywhere in or on a vehicle. Preferably, the computer system 30 is located near the camera 22 to avoid sending camera images through long wires. An airbag controller 32 is shown in the figure located in an instrument panel 34, although the system 16 can still function even if the airbag controller 32 is placed in a different location. Similarly, an airbag deployment system 36 is preferably located in the instrument panel 34 in front of the occupant 18 and the seat 20, although other locations can be used by the system 16. In some embodiments, the airbag controller 32 and the computer system 30 are the same device. The system 16 can be implemented flexibly, accommodating future changes in the design of vehicles and of the airbag deployment system 36.
II. High-level process flow for airbag deployment
Fig. 2 shows a high-level process flow illustrating an example of the image segmentation system 16 in the context of airbag deployment processing. The camera 22 can capture an ambient image 38 of a seat area 21 that includes both the occupant 18 and the seat area 21 surrounding the occupant. In the figure, the seat area 21 includes the entire occupant 18, although in many different situations and embodiments only a portion of the occupant 18 will be captured, particularly if the camera 22 is positioned so that the occupant's lower extremities are not visible.
The ambient image 38 can be sent to the computer 30. The computer 30 can separate the occupant's 18 segmented image 31 from the ambient image 38. The process by which the computer 30 performs image segmentation is described below. The segmented image 31 can then be analyzed to make the appropriate airbag deployment decisions; this process is also described below. For example, the segmented image 31 can be used to decide whether, at the moment of airbag deployment, the occupant 18 would be too close to the deploying airbag 36. The analysis of the segmented image 31 and its characteristics can be sent to the airbag controller 32 so that the airbag deployment system 36 can make the appropriate deployment decision using the obtained information relating to the occupant 18.
Fig. 3 shows a more detailed example of the process, from the capture of the ambient image 38 through the sending of the appropriate occupant data to the airbag controller 32. This process repeats continuously for as long as the occupant is in the vehicle. In a preferred embodiment, past data is incorporated into the analysis of current data, which is why a process flow arrow loops from the airbag controller 32 at the bottom of the figure back to the top of the figure.
New ambient images 38 are repeatedly captured by the camera 22 or other sensor. The most recently captured ambient image 38 can be referred to as the current ambient image. Older ambient images 38 can be referred to as prior ambient images 38 or past ambient images. After an ambient image 38 is captured by the camera 22, it can then be processed by an image segmentation subsystem (the "image segmentation process") 40. The image segmentation process is described in greater detail below. As shown in the figure, the segmentation process can incorporate past data relating to the characteristics of the occupant 18, either sent from the airbag controller 32 or stored in the computer system 30. The image segmentation process 40 does not, however, require this information as an input in order to function. In a preferred embodiment, past occupant characteristics and data are accessible to the image segmentation process 40 so that the system 16 can focus on a region of interest within the ambient image 38 and/or otherwise incorporate intelligence and situational context into the segmentation process 40.
The segmented image 31 is produced as the result of the image segmentation process 40. In different embodiments, the segmented image 31 can potentially take many different image forms with many different image characteristics. However, many occupant characteristics in the potential universe of occupant characteristics are not incorporated into airbag deployment decisions. The key characteristics for deployment purposes generally relate to position and motion. There is therefore no reason to subject the entire segmented image 31 to subsequent processing. In a preferred embodiment, an ellipse fitting subsystem 44 is used to fit an ellipse around the segmented image 31 so that subsequent processing by the system 16 can operate on the ellipse, an object from which all unimportant characteristics of the segmented image 31 have been removed. In other embodiments, the system 16 can be configured to use other geometric shapes or other representations to stand in for the occupant 18.
A tracking subsystem 46 can be used to track occupant characteristics such as position, velocity, acceleration, and other characteristics. In some embodiments, the tracking subsystem 46 can also "extrapolate forward" the occupant data, generating predictions of what those characteristics will be during the interval between two sensor measurements. In a preferred embodiment, the tracking and prediction subsystem 46 uses one or more Kalman filters to combine all past sensor measurements with the most recent sensor measurement in a probability-weighted manner. Kalman filters are described below.
The tracking subsystem 46 can include several different subsystems, each focused on a different subset of occupant characteristics. For example, the tracking subsystem 46 can include a shape tracker and predictor module 48 for tracking and predicting "shape" characteristics, and a motion tracker and predictor module 50 for tracking and predicting "motion" characteristics. The processes performed by these modules are discussed in detail below.
The information generated by the tracking subsystem 46 can then be sent to the airbag controller 32 so that the airbag deployment subsystem 36 can take the appropriate action. In some cases, deployment is suppressed because the occupant is, or will be, within a danger zone. In some embodiments, airbag deployment can be configured to occur at different strengths according to the amount of kinetic energy the airbag needs to absorb from the occupant 18. The tracking subsystem 46 can also be used to decide whether a crash has occurred, and whether such a crash warrants deploying the airbag.
III. Image segmentation heuristic
The flow chart of Fig. 4 shows an example of the image segmentation heuristic that can be implemented by the system 16. The system 16 is flexible and can incorporate many different variations of the process shown in the figure. Some embodiments may use fewer processing steps, while other embodiments will add processing steps. In a preferred embodiment, every ambient image captured by the camera 22 is subjected to the segmentation process shown in the figure.
A. The "region of interest" and the region-of-interest module
The region of interest within the ambient image 38 is determined at step 52. This process is not necessarily invoked in all embodiments of the system 16. However, given the time and resource constraints common to airbag deployment decisions and other applications of the system 16, it is preferable to focus on a particular area within the ambient image 38. Determination of the region of interest is performed by a region-of-interest module within the segmentation subsystem 40. In a preferred embodiment, the most recent prior position of the occupant (such as the most probable position of the prior segmented image 31 within the prior ambient image 38, or the most recent prediction of the position of the segmented image 31 in the prior ambient image 38) is used to determine the most probable position of the most recent ("current") segmented image 31 within the current ambient image 38. If the tracking subsystem 46 includes the ability to make future predictions, those predictions can provide the information needed to invoke the region-of-interest module. Position and motion data are preferably incorporated into the region-of-interest analysis. Occupant characteristics such as occupant type (adult, child, child seat, etc.) and potentially any other relevant occupant characteristics can also be incorporated into the analysis.
In a preferred embodiment, the tracking subsystem 46 takes the position and shape of the last computed segmented image 31 (typically represented by an ellipse) and projects it forward into the current image frame according to a state transition matrix. This process is discussed below. The current ellipse parameters can be multiplied by the state transition matrix, producing as output all of the new predicted values for the "current" time period.
In a preferred embodiment, the region of interest is defined as a rectangle oriented along the major axis of the ellipse produced by the ellipse fitting subsystem 44. In other embodiments, the system 16 can use different shapes and families of shapes. In a preferred embodiment, the top of the rectangle is preferably located a predetermined number of pixels above the top of the ellipse, while the base of the rectangle is set at "N" pixels below the midpoint or centroid of the ellipse. The pixels near the bottom of the image are the ones to be ignored, because those pixels generally exhibit little motion: the occupant 18 tends to pivot about his or her hips, which typically remain stationary on the seat. This assumption is especially valid when the occupant 18 wears a seat belt, but it remains useful even when no seat belt is used. Other embodiments can include different regions of interest, larger or smaller than the region of interest described above. By focusing on a smaller region of interest, processing time is reduced. Moreover, some unimportant motion effects, such as a waving hand or an object passing outside the window, can appropriately be ignored. In a preferred embodiment, only the region of interest is passed along for further processing (a sketch of the rectangle computation appears below), and any subsequent mention of the "ambient image" should be understood to mean the region of interest within the ambient image. In other embodiments, subsequent processing is not limited to the region of interest. After the region of interest has been determined at step 52, processing by the system 16 can proceed along two separate, parallel routes simultaneously. In other embodiments, these routes can be merged into a single sequential route rather than two simultaneous processes.
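To make the geometry concrete, the following is a minimal sketch, in Python, of the oriented rectangle described above. The half-width and margin values are hypothetical placeholders, since the patent specifies only a predetermined top margin and a base "N" pixels below the centroid.

```python
import numpy as np

def region_of_interest(cx, cy, semi_major, theta,
                       half_width=80.0, top_margin=10.0, n_below=20.0):
    # Unit vector along the ellipse major axis, and a perpendicular vector.
    u = np.array([np.cos(theta), np.sin(theta)])
    v = np.array([-np.sin(theta), np.cos(theta)])
    c = np.array([cx, cy])
    top = c + (semi_major + top_margin) * u   # above the top of the ellipse
    base = c - n_below * u                    # "N" pixels below the centroid
    # Four corners of the rectangle oriented along the major axis.
    return np.array([top + half_width * v, top - half_width * v,
                     base - half_width * v, base + half_width * v])
```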
B. " difference images " and image difference module
One image difference module 53 is used in carries out image difference trial method on the above-mentioned region-of-interest.Image difference module 53 produces " difference " image, the difference between this image representative current (as catching a recently) ambient image 38 and the previous environment image.Image difference trial method determines the pixel value difference between nearest ambient image 38 and the current environment image 38.System 16 uses the absolute value of this kind difference to determine which pixel has different values in current environment image 38, and thereby which pixel representative image in the object that moving or occupant's border.Static object, the most of zones as vehicle interior will be eliminated, because they can not change from an image to another image, what they produced is minimum absolute value.Image difference module 53 generates difference images effectively, and these difference images show the border at the edge of any object that is moving, because can aware the most significant motion at the edge of object just.
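A minimal sketch of this differencing step, assuming the frames arrive as 8-bit grayscale NumPy arrays (the camera interface itself is not specified by the patent):

```python
import numpy as np

def difference_image(current, prior):
    # Per-pixel absolute difference: static interior pixels fall near zero,
    # while the edges of a moving occupant produce large values.
    return np.abs(current.astype(np.int16) - prior.astype(np.int16)).astype(np.uint8)
```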
C. Low-pass filter module
In a preferred embodiment, a low-pass filter is applied to the difference image. The low-pass filter serves to reduce high-frequency noise and also to blur the difference image slightly, which widens the edges found in the difference image. As discussed below, this is important when the difference image is used as a mask in subsequent processing. In the figure, the low-pass filter and its function are incorporated into the image differencing module 53.
D. Saving the ambient image for future "difference images"
The current ambient image 38 is saved at step 54 so that it can serve as the prior ambient image 38 for the next ambient image 38 processed by the system 16. In other embodiments, a weighted combination of prior ambient images 38 can be generated and stored for the purpose of producing difference images.
E. Gradient image generation module
In a preferred embodiment, a gradient image generation module 56 uses the region of interest determined by the region-of-interest module 52 and generates a gradient image of that region of interest by performing a gradient image heuristic. The image gradient heuristic looks for areas of the target image where the image magnitude changes rapidly, such as the moving portions of the segmented image. A preferred method is to compute the gradients (derivatives) in the X and Y directions of the current ambient image 38, or preferably only of the region of interest within the current ambient image 38.
The Y-direction computation can be image(i, j) - image(i, j-N), where "i" represents the X-axis coordinate of a pixel and "j" represents the Y-axis coordinate of a pixel. "N" represents the offset over which the change in image magnitude is measured. The X-direction computation can be image(i, j) - image(i-N, j). The boundary determined in the gradient image can be used in subsequent processing such as the template update.
Equation 1: gradient image (Y direction) = image(i, j) - image(i, j-N)
Equation 2: gradient image (X direction) = image(i, j) - image(i-N, j)
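Equations 1 and 2 amount to finite differences taken over an offset of N pixels. A short sketch, assuming the image array is indexed [y, x]:

```python
import numpy as np

def gradient_images(img, n=1):
    # Finite differences per Equations 1 and 2; border pixels that lack a
    # neighbor at offset n are left at zero.
    img = img.astype(np.int16)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gy[n:, :] = img[n:, :] - img[:-n, :]   # Y direction: image(i, j) - image(i, j-N)
    gx[:, n:] = img[:, n:] - img[:, :-n]   # X direction: image(i, j) - image(i-N, j)
    return gx, gy
```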
F. Image difference thresholding module
An image difference thresholding module (or simply the "image thresholding module") 58 can be used to perform a thresholding heuristic on the "difference image" produced at step 53. The thresholding heuristic at step 58 is used to determine whether the current ambient image 38, or preferably the region of interest within the current ambient image 38, should undergo subsequent processing by the system 16. The thresholding heuristic at step 58 can also serve as a "mask" for the gradient image described below, in order to eliminate built-in edges such as door trim edges and other non-moving interior elements.
1. "Thresholding" the image
Generating a thresholded difference image can involve comparing the magnitude of the brightness differences in the "difference" image against a threshold that is either predetermined or, preferably, generated from the brightness data of the very ambient image 38 being processed. To "threshold" the difference image using the characteristics of the ambient image 38 itself, a histogram of pixel brightness values should first be generated.
a. Histogram
In a preferred embodiment, the threshold is computed by generating a histogram of the "difference" values. Fig. 5 shows an example of such a histogram 74.
Any ambient image 38 captured by the camera 22 can be divided into one or more pixels 78. In general, the greater the number of pixels 78 in the ambient image 38, the better the resolution of the image 38. In a preferred embodiment, the ambient image 38 should be at least about 400 pixels wide and at least about 300 pixels high. If the number of pixels 78 is too small, it may be difficult to separate a segmented image 31 from the ambient image 38. However, the number of pixels 78 depends on the make and model of the camera 22, and a camera 22 generally becomes more expensive as its number of pixels 78 increases. A standard camera can capture an image about 400 pixels wide and 300 pixels high. Such a camera can capture an ambient image 38 with sufficient detail while keeping a preferred embodiment of the invention relatively inexpensive, because a standard, non-customized camera 22 is used. A preferred embodiment will therefore use a total of approximately 120,000 (400 x 300) pixels 78, although the region of interest will generally include far fewer pixels 78.
Each pixel 78 can have one or more different pixel characteristics or attributes (collectively, "characteristics") 76, and the system 16 uses these characteristics to separate the segmented image 31 from the ambient image 38. A pixel 78 can have one or more pixel characteristics 76, each represented by one or more pixel values. One example of a pixel characteristic 76 is a luminosity measurement ("brightness"). In a preferred embodiment, the pixel characteristics 76 in the "difference" image represent the differences in brightness values between the current ambient image 38 and the prior ambient image 38. The brightness characteristic 76 can be measured, stored, and processed as a pixel value associated with a particular pixel 78. In a preferred embodiment, brightness can be represented as a digital pixel value between 0 (the darkest possible brightness) and 255 (the brightest possible brightness). Other pixel characteristics may include color, temperature, a weighted combination of two or more characteristics, or any other characteristic potentially useful for distinguishing the segmented image 31 from the ambient image 38. Other embodiments can use other characteristics to distinguish pixels and can build histograms of those characteristics.
The histogram 74 in the figure records the number of pixels 78 having a particular single pixel characteristic 76 or combination of pixel characteristics 76 (collectively, "characteristic"). The histogram 74 records the total number of pixels 78 having a particular pixel value for that characteristic. Thus the Y value at the far right of the figure represents the number of pixels 78 with a brightness value of 255 (the maximum possible difference in brightness values), while the Y value at the far left of the figure represents the number of pixels with a brightness value of 0 (no difference in brightness values).
b. Cumulative distribution function
The histogram of Fig. 5 can be used to generate a cumulative distribution function as shown in Fig. 6. The cumulative distribution curve 80 is a tool by which a "confidence factor" metric can be incorporated into the decision of whether a change in pixel brightness (or another characteristic) truly indicates a boundary between the segmented image 31 and the ambient image 38.
The cumulative distribution curve 80 supports the ability to select the top N% of pixels in terms of change in pixel value. The vertical axis can represent the cumulative probability 82 that the system 16 will not mistakenly classify any pixel 78 as representing a boundary pixel 78. The cumulative probability 82 can be the value 1 - N, where N is the top N% used to select motion pixels 78. For example, selecting the top 10% of pixels results in a probability of 0.9, where 0.9 is the probability that an ambient pixel is not knowingly misidentified as a segmented pixel. Absolute certainty (a probability of 1.0) could be achieved only by assuming that all 120,000 pixels are ambient pixels 78, i.e., that no pixel 78 represents the segmented image 31 of the occupant 18. Such certainty is of no help to the system 16, because it provides no starting point from which to begin building the shape of the occupant 18. Conversely, a substandard degree of accuracy, such as a value of 0 or near 0, fails to exclude enough pixels 78 from classification as boundary pixels 78. In a preferred embodiment, a probability of 0.85 is satisfactory, and the top 15% of pixels are selected. In other embodiments, probability values anywhere within the range from 0 to 1.0 can be used. In still other embodiments, certain lighting conditions will make it useful to divide the pixels into groups by image region, with different image regions having different values of "N".
In the image thresholding context above, probabilities such as 0.90, 0.80, or 0.70 are preferred because they generally indicate a high probability of accuracy while still providing a substantial base of pixels 78. In a preferred embodiment, a multi-threshold system 16 will have as many cumulative distribution functions 80 as it has image thresholds.
The system 16 can include the use of multiple difference images and multiple image thresholds, and these images and thresholds can be combined in many different ways. For example, threshold probabilities of 0.90, 0.70, and 0.50 can be used to generate three thresholded difference images, which can then be combined using a variety of different heuristics.
c. "Thresholding" the difference image
Fig. 7 is a block diagram showing an example of a single-image-threshold embodiment. An image threshold 84 allows the system 16 to select the top "N%" of possible boundary pixels by thresholding the pixel value of a particular pixel 78 against the threshold determined by the desired cumulative probability 82 shown in Fig. 6. In a preferred embodiment, thresholding the difference image produces a binary image. Pixels whose values are greater than or equal to the threshold are set to 1; the values of all other pixels are set to 0. In a preferred embodiment, this process produces a binary image in which the value of every pixel is either 1 or 0.
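A sketch of this histogram-driven thresholding, with keep_top=0.15 mirroring the preferred embodiment's selection of the top 15% of pixels (a cumulative probability of 0.85):

```python
import numpy as np

def threshold_difference(diff, keep_top=0.15):
    # Build the histogram and cumulative distribution of difference values,
    # find the value below which (1 - keep_top) of the pixels fall, then
    # binarize: pixels at or above the threshold become 1, all others 0.
    hist, _ = np.histogram(diff, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / diff.size
    threshold = np.searchsorted(cdf, 1.0 - keep_top)
    return (diff >= threshold).astype(np.uint8)
```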
Are 2. " difference images " worth carrying out subsequent treatment?
Return Fig. 4, the difference images of thresholding be used to determine these difference images and this difference images from ambient image 38, whether be worth carrying out, and worth system 16 relies on aftertreatment.If too much motion is arranged, prove that then to use these difference images with the form of aftertreatment be reliable inadequately in difference images.Too much motion may betide under at random the situation, as when an occupant 16 on the seat sweater from the beginning on cover following time.Such situation will produce a large amount of " motion ", but system 16 can't finally generate an ellipse garden, to send it to air spring control 32.If too many motion is arranged, then system 16 should or depend on by the nearest prediction of following the tracks of relevant these occupant's 18 current features that produced with prognoses system 46 in step 62, or preferably as described below, will predict the forward direction extrapolation recently.
If very few motion is arranged, the change that then had nothing substantial the ambient image 38 from last time, system 16 can depend on previous ellipse garden by preceding one time cycle of treatment produced in step 60 like this.Solve very few motion and/or cross the degree of accuracy that the problem of doing more physical exercises can improve system 16 greatly.Whether decision too much or very few motion taken place, can be in system 16 by described image threshold is done more physical exercises or a predetermined image threshold value of very few motion compares and implements with representing.
G. Gradient image cleanup module
A gradient image cleanup module 64 (or simply the image cleanup module) can be used to "clean" the gradient image derived from the gradient image generation module 56. The gradient image passed along by the gradient image generation module 56 (preferably limited to the initial region of interest) will generally include edges from the vehicle interior, such as the edges of door trim. These edges are irrelevant because they are not part of the occupant 18. The thresholded difference image can be used as a "mask" to remove the unwanted fixed elements from the image and retain only the edges that make up the segmented image 31, i.e., the pixels around which there is motion. This can help the system 16 distinguish motion pixels from background pixels, thereby increasing the precision of heuristics such as the template matching and template updating processes described below.
H. Template matching module
A template matching module 66 can be invoked by the system 16. The template matching module 66 performs a template fitting or template matching heuristic. As described below, in a preferred embodiment the template image is a prior segmented image 31. In other embodiments, the template image can be predetermined, although it is preferably adjustable as described below. One such heuristic is the Hausdorff distance heuristic. An example of the Hausdorff distance computation is given in Equation 3:
Equation 3: h(M, I) = max over m in M of ( min over i in I of ||m - i|| )
The variable "m" is a point in the template image and the variable "i" is a point in the difference image. The distance can be the distance (in pixels) from a point in one image to the nearest non-zero pixel in the other image. The system 16 can use different variations of the Hausdorff distance heuristic.
The template image can be rotated through a series of angles spanning the range through which the occupant 18 could plausibly rotate between two sensor measurements. This range is typically plus or minus 6 degrees, the worst-case value between two camera frames, occurring when the vehicle is braking hard from high speed and the occupant 18, restrained by the shoulder harness, pivots about the hips.
For each rotation angle, the Hausdorff distance heuristic can be invoked to compute the "distance" between the difference image and the rotated template image. Both the template image and the difference image are preferably binary images. The template position with the minimum Hausdorff distance corresponds to the rotation angle at which the template best aligns with the difference image.
If no minimum Hausdorff distance can be clearly distinguished, something is wrong with the originally captured ambient image 38. For example, the occupant 18 may have momentarily blocked the camera 22 with a hand. If the difference between the minimum Hausdorff distance and the next lowest Hausdorff distance is too small when compared against a predetermined threshold built into the system 16, the current ambient image 38 should be ignored by the system 16, and the tracking and prediction subsystem 46 should instead be used to extrapolate a future prediction of the segmented image 31.
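The angular search and the ambiguity test can be sketched as follows, reusing directed_hausdorff from above. The half-degree step and the min_gap rejection threshold are hypothetical values; the patent calls only for a predetermined threshold.

```python
import numpy as np

def best_rotation(template_pts, image_pts,
                  angles_deg=np.arange(-6.0, 6.5, 0.5), min_gap=1.0):
    scores = []
    for a in np.radians(angles_deg):
        # Rotate the template point set and score the fit at this angle.
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        scores.append(directed_hausdorff(template_pts @ rot.T, image_pts))
    scores = np.asarray(scores)
    order = np.argsort(scores)
    if scores[order[1]] - scores[order[0]] < min_gap:
        return None  # no clearly distinguishable minimum: ignore this image
    return float(angles_deg[order[0]])
```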
I. Template update module
If the matched template indicates that a suitable segmented image 31 can be produced, a template update module 68 can be invoked by the system 16 in order to enhance the template image for future use by the system 16. The template image is originally produced by sampling a template contour at equal angular intervals. This set of points can then be searched for in the new gradient image. The template is rotated to find the best-fitting angle in the new gradient image. For each control point, a line is generated perpendicular to the contour at the point of tangency. The template update heuristic adjusts the position of the point along that perpendicular line, seeking the best match for the line segment in the gradient image. In some embodiments, such a set of new positions can be stored in the computer 30 as a sequence of data points for later use as a template image. In other embodiments, a cubic spline fit is subsequently produced from the sequence of data points, and a new set of control points is generated along the contour at the equal angular intervals of the template. The spline curve serves as the new contour.
Fig. 8a shows an example of a template image, i.e., a prior segmented image 31. Fig. 8b shows the series of angles 86 through which the template image can be rotated. Fig. 8c shows the angular range being applied to an image. Fig. 8d is an example of an ellipse 88 that can be produced by the system 16. Fig. 8e is an example of an updated template in which an ellipse has been fitted to the occupant 18. Fig. 8f is an example of a newly generated contour for use as a future template.
J. Ellipse fitting module
Once the best-fitting template has been determined and revised, the system 16 can extract the corresponding ellipse parameters so that those parameters can be provided to the tracking and prediction subsystem 46.
An ellipse fitting module 70 can be used to fit an ellipse 88 to the match and template update that were produced. This function can also be performed by the ellipse fitting subsystem 44, separately from the image segmentation subsystem 40. In either case, the system 16 can incorporate many different ellipse fitting heuristics. One example of an ellipse fitting heuristic is the "direct least squares heuristic".
The direct least squares heuristic treats each non-zero pixel on the template as an (x, y) sample usable for a least squares fit. In a preferred embodiment, the bottom of the ellipse is assumed to be stationary; it is therefore preferably not part of the region of interest determined earlier. By using the bottom of the previous ellipse, the system 16 can ensure that the ellipse keeps the correct orientation, with the lower end of the ellipse on the seat. If this assumption about the occupant's motion is inaccurate, the resulting vertical motion will register as excessive motion, so the system 16 will discard the image and rely on forward extrapolation of the last prediction at step 62, as discussed above. To complete the ellipse, given that the bottom is not part of the region of interest, the bottom of the previous ellipse can be used, which helps orient the ellipse correctly with its lower end on the seat. The system 16 can use one of several different sample ellipses as the initial ellipse when the system 16 is first switched on.
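A sketch of a direct least squares conic fit in the style of Fitzgibbon et al., one form the named heuristic can take. The non-zero template pixels, together with points reused from the bottom of the previous ellipse, would supply the (x, y) samples; the function assumes enough well-spread samples that the scatter matrix is invertible.

```python
import numpy as np

def fit_ellipse_direct(x, y):
    # Fit a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0 by minimizing algebraic
    # error subject to 4ac - b^2 = 1, which forces an elliptical solution.
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                      # scatter matrix
    C = np.zeros((6, 6))             # constraint matrix for 4ac - b^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalized eigenproblem S a = lambda C a; exactly one eigenvalue is
    # positive, and its eigenvector holds the conic coefficients.
    w, v = np.linalg.eig(np.linalg.solve(S, C))
    k = np.argmax(w.real)
    return v[:, k].real              # (a, b, c, d, e, f)
```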
IV. The ellipse and occupant characteristics
In the airbag deployment embodiment of the system 16, the system 16 preferably uses an ellipse 88 to represent the occupant so that relevant occupant characteristics can be monitored. In other embodiments, other shapes can be used to represent the segmented image 31 of the occupant 18. In a preferred embodiment, the ellipse fitting subsystem is software within the computer 30, but in other embodiments the ellipse fitting subsystem may reside in a different computer or device.
In a preferred embodiment, the ellipse 88 used for occupant characteristic tracking and prediction can extend from the occupant's hips up to the head.
Fig. 9 shows the many variables that can be derived from the ellipse 88. These variables represent characteristics of the occupant's 18 segmented image 31 that are relevant to the airbag deployment system 36. The centroid 94 of the ellipse 88 can be determined by the system 16 in order to track the characteristics of the occupant 18; how to determine the centroid 94 of an ellipse 88 is known in the art. Other embodiments can use other points on the ellipse 88 to track occupant 18 characteristics relevant to airbag deployment 36 or other processing. Many occupant 18 characteristics can be derived from the ellipse 88.
The motion characteristics include the x coordinate of the centroid 94 ("distance") 98 and a forward tilt angle ("θ") 100. The shape measurements include the y coordinate of the centroid 94 ("height") 96, the length of the ellipse's major axis ("major") 90, and the length of the ellipse's minor axis ("minor") 92.
Rate-of-change information and other mathematical derivatives, such as velocity (first derivative) and acceleration (second derivative), are preferably obtained for all shape and motion measurements. In a preferred embodiment there are therefore nine shape characteristics (height, height', height'', major, major', major'', minor, minor', minor'') and six motion characteristics (distance, distance', distance'', θ, θ', θ''). A sideways tilt angle Φ is not shown because it is perpendicular to the image plane; as discussed in detail below, the sideways tilt angle is inferred rather than measured. The motion and shape characteristics are used to compute the volume of the occupant 18 and ultimately his or her mass, so that the kinetic energy of the occupant 18 can be determined. Other embodiments may include more or fewer occupant 18 characteristics.
Fig. 10 shows the sideways tilt angle ("Φ") 102. In a preferred embodiment of the invention, there are three shape states: leaning left toward the driver (left) 106, sitting upright (center) 104, and leaning right away from the driver (right) 108, with sideways tilt angles of -Φ, 0, and Φ respectively. In a preferred embodiment, Φ is set to a value between 15 and 40 degrees, depending on the nature of the vehicle in which the system is used. Other embodiments can include a different number of shape states and different ranges of the sideways tilt angle 102.
V. Markov probability chains
The system 16 can incorporate a multiple-model, probability-weighted implementation of multiple Kalman filters. In a preferred embodiment, the Kalman filters applied to the motion characteristics differ from the Kalman filters applied to the shape characteristics. Moreover, each individual shape characteristic preferably has a separate Kalman filter for each shape state supported by the system 16, and similarly, each individual motion characteristic preferably has a separate Kalman filter for each motion mode supported by the system 16. Certain predetermined probabilities are associated with the transition from one state to another or from one mode to another. These probabilities can be described using Markov chains. The system 16 is flexible and can support many different probabilities for many different modes and states. Users of the system 16 are free to set their own probability values in the Markov chains and in the variables described in detail below. This maximizes the flexibility of the system 16 across different embodiments and different operating environments.
Fig. 11 shows the three shape states used in a preferred embodiment of the invention. In a preferred embodiment, an occupant 18 is either leaning toward the driver ("left") 106, sitting centered ("center") 104, or leaning away from the driver ("right") 108. The probability that an occupant 18 in a particular state will end up in a particular state can be identified by lines originating at one shape state, with arrows pointing to the subsequent shape state. For example, the probability that an occupant in the center state remains in the center state, P_C-C, is represented by the arrow at 110. The probability of moving from center to left, P_C-L, is represented by arrow 114, and the probability of moving from center to right, P_C-R, is at 112. All probabilities originating from the initial center state must add up to 1.
Equation 4: P_C-C + P_C-L + P_C-R = 1.0
Similarly, all probabilities originating from any particular state must add up to 1.0.
The arrow at 118 represents the probability that a left-leaning occupant 18 will sit centered during the next time period (P_L-C). Similarly, the arrow at 120 represents the probability that a left-leaning occupant will lean right during the next time period (P_L-R), and the arrow at 116 represents the probability that a left-leaning occupant will remain leaning left (P_L-L). The sum of all possible probabilities originating from the initial left-leaning state must be 1.
Equation 5: P_L-C + P_L-L + P_L-R = 1.0
Finally, the arrow at 122 represents the probability that a right-leaning occupant will remain leaning right (P_R-R), the arrow at 124 represents the probability that a right-leaning occupant will enter the center state (P_R-C), and the arrow at 126 represents the probability that the occupant will lean left (P_R-L). The sum of all possible probabilities originating from the initial right-leaning state must be 1.
Equation 6: P_R-C + P_R-L + P_R-R = 1.0
In practice, an ordinary camera 22 captures 40 to 100 frames per second (a high-speed camera 22 captures 250 to 1000 frames per second). It is therefore essentially impossible for a left-leaning 106 occupant to become a right-leaning 108 occupant, or for a right-leaning 108 occupant to become a left-leaning 106 occupant, without first passing through the "center" state 104. Far more probably, a left-leaning 106 occupant first enters the center state 104 before becoming a right-leaning 108 occupant, and similarly, the realistic case is that a right-leaning 108 occupant first becomes a centered 104 occupant before becoming a left-leaning 106 occupant. Accordingly, P_L-R at 120 should be set to a very small number that approaches but does not equal zero, and P_R-L at 126 should likewise be set to a very small number that approaches but does not equal zero.
Fig. 12 shows a similar Markov chain representing the probabilities associated with the motion modes. A preferred embodiment of the system 16 uses three motion modes: a stationary mode 130, representing a human occupant 18 who is motionless, such as one who is asleep; a human mode 132, representing an occupant 18 behaving like a typical passenger in an automobile or other vehicle, certainly in motion, but not moving in an extreme way; and a crash mode 134, representing an occupant 18 in a vehicle that is crashing or in pre-crash braking.
The probability that an occupant 18 in a particular mode will end up in a particular state at the next time increment can be identified by lines originating at the current state, with arrows pointing to the new state. For example, the probability that an occupant in the stationary mode remains in the stationary mode, P_S-S, is represented by the arrow at 136. The probability of moving from the stationary mode to the human mode, P_S-H, is represented by the arrow at 138, and the probability of moving from stationary to crash, P_S-C, is at 140. All probabilities originating from the initial stationary mode 130 must add up to 1.
Equation 7: P_S-S + P_S-H + P_S-C = 1.0
Similarly, the probability of staying in the human mode is P_H-H at 142, the probability of moving from the human mode to the stationary mode is P_H-S at 144, and the probability of moving from the human mode to the crash mode is P_H-C at 146. All probabilities originating from the initial human mode must add up to 1.
Equation 8: P_H-H + P_H-C + P_H-S = 1.0
The probability of staying in the crash mode is P_C-C at 148, the probability of moving from the crash mode to the stationary mode is P_C-S at 150, and the probability of moving from the crash mode to the human mode is P_C-H at 152. All probabilities originating from the initial crash mode must add up to 1.
Equation 9: P_C-C + P_C-S + P_C-H = 1.0
In practice, once the crash mode has been entered, it is extremely unlikely, if not impossible, for the occupant 18 to leave the crash state at 134. In most cases, the crash at 134 will end the occupant's 18 travel. Accordingly, in a preferred embodiment, P_C-H and P_C-S are set to values approaching zero. It is desirable for the system 16 to allow some chance of leaving the crash state 134; otherwise, a transient "noise" condition in the system 16 or some other anomalous occurrence could leave the system 16 stuck in the crash state 134. Other embodiments can set any particular probability to an appropriate value between 0 and 1 and can use a different number of modes. The system 16 can incorporate a wide range of probability values, preferably customized to the particular embodiment and environment of the system 16.
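The constraints of Equations 4 through 9 can be collected into row-stochastic transition matrices. The numbers below are purely illustrative placeholders; the patent requires only that each row sum to 1, that direct left-right transitions be near zero, and that leaving the crash mode be nearly impossible.

```python
import numpy as np

SHAPE_T = np.array([         # rows/cols: left, center, right
    [0.890, 0.105, 0.005],
    [0.100, 0.800, 0.100],
    [0.005, 0.105, 0.890],
])
MOTION_T = np.array([        # rows/cols: stationary, human, crash
    [0.900, 0.095, 0.005],
    [0.095, 0.900, 0.005],
    [0.001, 0.001, 0.998],   # leaving crash is allowed but very unlikely
])
# Every row of outgoing probabilities must sum to 1 (Equations 4-9).
assert np.allclose(SHAPE_T.sum(axis=1), 1.0)
assert np.allclose(MOTION_T.sum(axis=1), 1.0)
```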
The transition probabilities that interrelates with different shape state and motor pattern is used for each feature and combinations of states produces a Kalman filter equation.Then, so that give the suitable weight of each Kalman filter, it is a result that the result of these wave filters is gathered by using described various probability.All described probability are preferably pre-defined by the user of system 16.
The Markov chain probabilities provide the means by which weights can be assigned to the various Kalman filters for each feature, each state, and each mode. The tracking and predicting subsystem 46 incorporates the Markov chain probabilities in the form of two subsystems: the shape tracker and predictor 48 and the motion tracker and predictor 50.
VI. Shape Tracker and Predictor
Figure 13 shows the detailed process flow diagram of shape tracker and fallout predictor 48.In a preferred embodiment of the invention, the major axis (" major axis ") 90 in ellipse garden 88, minor axis (" minor axis ") 92 in ellipse garden 88 and the y coordinate (" highly ") 96 of the centre of form 94 are followed the tracks of and predicted to shape tracker and fallout predictor 48.Each feature has a vector, the position of this this specific characteristic of vector description, speed and acceleration information.The major axis vector be [major axis, major axis ', major axis "], major axis ' represent the rate of change or the speed of major axis wherein, and major axis " is represented the second derivative (being the rate of change or the acceleration of major axis speed) of major axis.As a same reason, the minor axis vector be [minor axis, minor axis ', minor axis "], highly vector be [highly, highly ', highly "].Any other shape vector will have position, speed (rate of change) and acceleration (double derivative) component similarly.
The shape tracker and predictor 48 updates the shape prediction in step 200, updates the covariance and gain matrices in step 202, updates the shape estimate in step 204, and generates a combined shape estimate in step 206. These processes are described below. The loop from step 200 to step 206 runs continuously whenever the system 16 is active. On the first pass through the loop, there is no prediction to update in step 200 and there are no covariance or gain matrices to update in step 202, so the first pass skips directly to step 204. On subsequent passes, the first step of the shape tracking and predicting process 48 is updating the shape prediction in step 200. The shape tracker and predictor 48 also infers whether the occupant is leaning left, leaning right, or sitting centered. This information is used to determine whether the occupant is in a danger zone, as described in greater detail below.
A. Updating the Shape Prediction
An update of the shape prediction is performed in step 200. This process takes the previous shape estimate and extrapolates it into a future prediction using a transition matrix.
Equation 10: updated vector prediction = transition matrix × previous vector estimate
The transition matrix applies Newtonian mechanics to the previous vector estimate to project forward a prediction of where the occupant 18 will be, based on the occupant's 18 past position, velocity, and acceleration. The previous vector estimate is generated in step 204, as described below.
The following equation is applied to all shape variables and all shape states, where x is the shape variable, Δt is the time increment (weighting the velocity term), and ½Δt² weights the acceleration term.
Equation 11:

                              [ 1   Δt   ½Δt² ]   [ x  ]
  updated vector prediction = [ 0   1    Δt   ] × [ x′ ]
                              [ 0   0    1    ]   [ x″ ]
In a preferred embodiment of the invention, there are nine updated vector predictions produced in step 200, because the preferred embodiment has three shape states and three non-derived shape variables, and 3 × 3 = 9. The updated shape vector predictions are listed below; a brief sketch of the prediction step follows the list.
The updated major axis for the centered state
The updated major axis for the right-leaning state
The updated major axis for the left-leaning state
The updated minor axis for the centered state
The updated minor axis for the right-leaning state
The updated minor axis for the left-leaning state
The updated height for the centered state
The updated height for the right-leaning state
The updated height for the left-leaning state
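The prediction step itself is a single matrix multiply. The following Python sketch shows Equations 10 and 11 applied to one of the nine feature/state combinations; the frame interval and vector values are assumptions, not values from the patent:

    import numpy as np

    dt = 1.0 / 40.0   # assumed frame interval (40 frames per second)

    # State transition matrix of Equation 11 (Newtonian mechanics).
    F = np.array([
        [1.0, dt,  0.5 * dt**2],
        [0.0, 1.0, dt],
        [0.0, 0.0, 1.0],
    ])

    # Example feature vector [position, velocity, acceleration], e.g.
    # the major axis in the centered state; the numbers are made up.
    x_prev = np.array([120.0, 4.0, -0.5])

    x_pred = F @ x_prev   # Equation 10: updated vector prediction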
B. Updating the Covariance and Gain Matrices
After the shape predictions for all variables and all states have been updated in step 200, the shape prediction covariance matrix, the shape gain matrix, and the shape estimate covariance matrix must be updated in step 202. The shape prediction covariance accounts for error in the prediction process. As noted earlier, the gain represents the weight given to the most recent measurement and accounts for error in the measurement process. The shape estimate covariance accounts for error in the estimation process.
The prediction covariance is updated first. The equation used to update each shape prediction covariance matrix is as follows:
Equation 12: shape prediction covariance matrix = [state transition matrix × old estimate covariance matrix × transpose(state transition matrix)] + system noise
The state transition matrix is the matrix embodying Newtonian mechanics that is used above to update the shape prediction. The old estimate covariance matrix is derived from step 204 of the previous loop (on the first pass through the loop from 200 to 206, step 202 is skipped). Taking the transpose of a matrix simply means interchanging its rows and columns, as is known in the art; the transpose of the state transition matrix is therefore the state transition matrix with its rows written as columns and its columns written as rows. The system noise is a constant matrix used to take noise in the system into account. The constants in the system noise matrix are set by the user of the invention, and methods for selecting noise constants are known in the art.
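A sketch of Equation 12 in Python, reusing the transition matrix F from the sketch above; the previous estimate covariance and the system-noise constants are placeholders chosen for illustration:

    import numpy as np

    dt = 1.0 / 40.0
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])

    P_old = np.eye(3)          # previous estimate covariance (placeholder)
    Q = 1e-3 * np.eye(3)       # system noise matrix (user-chosen constants)

    # Equation 12: shape prediction covariance matrix.
    P_pred = F @ P_old @ F.T + Q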
The next matrix to be updated is the gain matrix. As discussed above, the gain represents the confidence weight that should be given to a new measurement. A gain of 1 indicates the most accurate of measurements, for which past estimates may be ignored. A gain of 0 indicates the least accurate of measurements, for which the most recent measurement is ignored and the user of the invention relies solely on past estimates. The role of the gain can be seen in the basic Kalman filter equation of Equation 13:
Equation 13: X(new estimate) = X(old prediction) + gain × [X(measured) − X(old prediction)]
The gain is not simply a single number, because there is a gain for each combination of shape variable and shape state. The general equation for updating the gain is Equation 14:
Equation 14: gain = shape prediction covariance matrix × transpose(measurement matrix) × inverse(residual covariance)
The shape prediction covariance matrix is computed as described above. The measurement matrix is simply a means of extracting the position component of a shape vector, while ignoring its velocity and acceleration components, for the purpose of determining the gain. The transpose of the measurement matrix is simply [1 0 0]. The position component of a shape variable is isolated because velocity and acceleration are actually derived components; only position is measured from the captured image. The gain concerns the weight that should be given to the actual measurement.
In the general Kalman filter form X(new estimate) = X(old prediction) + gain × [X(measured) − X(old prediction)], the residual represents the difference between the old prediction and the new measurement. There is a complete matrix of residual covariances. The inverse of the residual covariance matrix is used to update the gain matrix. Obtaining the inverse of a matrix is a simple linear algebra process that is well known in the art. The equation for the residual covariance matrix is Equation 15:
Equation 15: residual covariance = [measurement matrix × prediction covariance × transpose(measurement matrix)] + measurement noise
The measurement matrix is the simple matrix used to separate the position component of a shape vector from its velocity and acceleration components. The prediction covariance is computed as described above. The transpose of the measurement matrix is simply the one-row matrix [1 0 0], rather than a one-column matrix with the same component values. The measurement noise is a constant used to account for errors associated with the sensor 22 and the segmentation process 40.
The last matrix to be updated is the shape estimate covariance matrix, which represents the estimation error. Because the estimate is based on both the current measurement and past predictions, the estimation error is generally less severe than the prediction error. The equation for updating the shape estimate covariance matrix is Equation 16:
Equation 16: shape estimate covariance matrix = (identity matrix − gain matrix × measurement matrix) × shape prediction covariance matrix
The identity matrix is well known in the art; it contains only 1s along the diagonal extending from top-left to bottom-right and 0s in every other position. The gain matrix is computed as described above. The measurement matrix is also as described above, and is used to separate the position component of a shape vector from its velocity and acceleration components. The prediction covariance matrix is likewise computed as described above.
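Equations 14 through 16 chain together as shown below. This is a minimal sketch in the conventional Kalman form, continuing from the covariance sketch above; the measurement noise value is an assumption:

    import numpy as np

    dt = 1.0 / 40.0
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    P_pred = F @ np.eye(3) @ F.T + 1e-3 * np.eye(3)

    H = np.array([[1.0, 0.0, 0.0]])   # measurement matrix: extracts position
    R = np.array([[0.25]])            # measurement noise (assumed constant)

    S = H @ P_pred @ H.T + R                  # Equation 15: residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Equation 14: gain (3x1)
    P_est = (np.eye(3) - K @ H) @ P_pred      # Equation 16: estimate covariance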
C. Updating the Shape Estimate
The shape estimate update process is invoked in step 204. The first step of this process is to compute the residual.
Equation 17: residual = measurement − (measurement matrix × updated shape vector prediction)
The shape states themselves are then updated.
Equation 18: shape vector estimate = updated shape vector prediction + (gain × residual)
Broken out into individual equations, the result is as follows:
X_C(major axis at t) = X_C(predicted major axis) + gain × [X_C(measured major axis) − X_C(predicted major axis)]
X_L(major axis at t) = X_L(predicted major axis) + gain × [X_L(measured major axis) − X_L(predicted major axis)]
X_R(major axis at t) = X_R(predicted major axis) + gain × [X_R(measured major axis) − X_R(predicted major axis)]
X_C(minor axis at t) = X_C(predicted minor axis) + gain × [X_C(measured minor axis) − X_C(predicted minor axis)]
X_L(minor axis at t) = X_L(predicted minor axis) + gain × [X_L(measured minor axis) − X_L(predicted minor axis)]
X_R(minor axis at t) = X_R(predicted minor axis) + gain × [X_R(measured minor axis) − X_R(predicted minor axis)]
X_C(height at t) = X_C(predicted height) + gain × [X_C(measured height) − X_C(predicted height)]
X_L(height at t) = X_L(predicted height) + gain × [X_L(measured height) − X_L(predicted height)]
X_R(height at t) = X_R(predicted height) + gain × [X_R(measured height) − X_R(predicted height)]
In a preferred embodiment, C denotes the centered state, L denotes the state of leaning left toward the driver, and R denotes the state of leaning right, away from the driver. The letter t denotes a time increment: t+1 denotes the time increment immediately after t, and t−1 denotes the time increment immediately before t.
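A sketch of the step 204 update across all nine variable/state pairs (Equations 17 and 18). The gains, predictions, and measurements below are placeholders; in the actual process each pair carries its own gain from Equation 14:

    import numpy as np

    states = ["C", "L", "R"]                  # centered, left-leaning, right-leaning
    variables = ["major", "minor", "height"]
    H = np.array([[1.0, 0.0, 0.0]])           # measurement matrix: position only

    # Placeholder predictions [position, velocity, acceleration], per-pair
    # gains, and measured positions; real values come from steps 200-202.
    x_pred = {(v, s): np.array([100.0, 2.0, 0.1]) for v in variables for s in states}
    gain = {(v, s): np.array([0.6, 0.1, 0.01]) for v in variables for s in states}
    z = {"major": 103.0, "minor": 55.0, "height": 48.0}

    estimate = {}
    for v in variables:
        for s in states:
            residual = z[v] - (H @ x_pred[(v, s)]).item()                # Equation 17
            estimate[(v, s)] = x_pred[(v, s)] + gain[(v, s)] * residual  # Equation 18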
D. Generating the Combined Shape Estimate
The last step in the loop that repeats continuously from step 200 to step 206 is the combined estimate generation step at 206. The first part of this process assigns a probability to each shape vector estimate. The residual covariance is recomputed, using the same formula discussed above.
Equation 19: residual covariance matrix = [measurement matrix × prediction covariance matrix × transpose(measurement matrix)] + measurement noise
Next, the actual likelihood of each shape vector is computed. The system decides which state the occupant is in by comparing the predicted value for each state against the current measurement, which is the best recent evidence of what each shape variable actually is.
Equation 20: likelihood(C, R, or L) = e^(−(residual − offset)² / (2σ²))
In the preferred embodiment of the present system 16, there is no offset, because all offsets can be assumed to cancel each other out, and the processes of the system 16 can be treated as zero-mean Gaussian signals. σ represents the variance and is defined by the human developer during implementation of the invention. It is known in the art how to assign a useful value to σ by examining data.
The state with the highest likelihood determines the sideways tilt angle Φ. If the occupant 18 is in the centered state, the tilt angle is 0 degrees. If the occupant is leaning left, the tilt angle is −Φ. If the occupant is leaning right, the tilt angle is Φ. In a preferred embodiment of the invention, Φ and −Φ are predetermined based on the make and model of the vehicle using the system 16.
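A small Python sketch of Equation 20 and the tilt decision; σ, the residuals, and the ±Φ magnitude are all illustrative assumptions:

    import math

    sigma = 2.0                                              # assumed variance parameter
    residuals = {"center": 0.4, "left": 3.1, "right": 2.7}   # per-state residuals (invented)

    # Equation 20 with offset = 0 (offsets assumed to cancel).
    likelihood = {s: math.exp(-r**2 / (2 * sigma**2)) for s, r in residuals.items()}

    best = max(likelihood, key=likelihood.get)    # highest-likelihood state
    PHI = 12.0                                    # vehicle-specific angle (assumed)
    tilt = {"center": 0.0, "left": -PHI, "right": +PHI}[best]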
Next, the state probabilities are updated from the likelihoods produced above and the predetermined Markov probabilities discussed previously.
Equation 21: P_C = P_C-C + P_R-C + P_L-C
Equation 22: P_R = P_R-R + P_C-R + P_L-R
Equation 23: P_L = P_L-L + P_C-L + P_R-L
The equations for the updated probabilities are as follows, where μ represents the likelihood of a particular state, as calculated above.
Equation 24: probability of the left-leaning state =
1/[μ_L×(P_L-L+P_C-L+P_R-L) + μ_R×(P_R-R+P_C-R+P_L-R) + μ_C×(P_C-C+P_R-C+P_L-C)] × μ_L×(P_L-L+P_C-L+P_R-L)

Equation 25: probability of the right-leaning state =
1/[μ_L×(P_L-L+P_C-L+P_R-L) + μ_R×(P_R-R+P_C-R+P_L-R) + μ_C×(P_C-C+P_R-C+P_L-C)] × μ_R×(P_R-R+P_C-R+P_L-R)

Equation 26: probability of the centered state =
1/[μ_L×(P_L-L+P_C-L+P_R-L) + μ_R×(P_R-R+P_C-R+P_L-R) + μ_C×(P_C-C+P_R-C+P_L-C)] × μ_C×(P_C-C+P_R-C+P_L-C)
Using the probabilities above, the combined shape estimate is finally calculated by combining the individual shape vector estimates. As discussed above, in a preferred embodiment, P_R-L and P_L-R are set to 0.
Equation 27:
X = probability of the left-leaning state × X_left
  + probability of the right-leaning state × X_right
  + probability of the centered state × X_center
where X is any shape variable, including the derived velocity and acceleration values of a measurement.
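The probability update and blend of Equations 24 through 27 reduce to a normalization followed by a weighted sum. A hedged Python sketch, with invented likelihoods, Markov terms, and per-state estimates:

    # Likelihoods mu from Equation 20 and summed Markov transition terms
    # per state (Equations 21-23); every number here is illustrative.
    mu = {"L": 0.10, "R": 0.20, "C": 0.70}
    markov = {"L": 0.25, "R": 0.25, "C": 0.50}   # e.g. P_C-C + P_R-C + P_L-C for "C"

    norm = sum(mu[s] * markov[s] for s in mu)
    prob = {s: mu[s] * markov[s] / norm for s in mu}   # Equations 24-26

    # Equation 27: blend the per-state estimates of one shape variable X.
    X = {"L": 118.0, "R": 121.0, "C": 120.0}
    X_combined = sum(prob[s] * X[s] for s in X)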
The loop from 200 to 206 repeats continuously whenever the vehicle is in operation or an occupant 18 is in the seat 20. Because the process at 200 requires an estimate previously generated at 206, and the process at 202 requires existing covariance and gain matrices to update, the processes at 200 and 202 are never invoked on the first pass through the loop from 200 to 206.
VII. Motion Tracker and Predictor
The motion tracker and predictor 50 shown in Figure 14 functions much like the shape tracker and predictor 48 shown in Figure 13. The motion tracker and predictor simply tracks different features than the shape tracker. In a preferred embodiment of the invention, the x-coordinate 98 of the centroid 94 and the forward tilt angle θ 100, together with their corresponding velocities and accelerations (collectively the "motion variables" or "motion features"), are tracked and predicted. The x-coordinate 98 of the centroid 94 is used to determine the distance between the occupant 18 and a location in the automobile such as the instrument panel 34, the airbag deployment system 36, or some other location. In a preferred embodiment, the instrument panel 34 is used, because the airbag generally deploys from it.
The x-coordinate vector includes a position component (x), a velocity component (x′), and an acceleration component (x″). The θ vector similarly includes a position component (θ), a velocity component (θ′), and an acceleration component (θ″). Any other motion vector would likewise have position, velocity, and acceleration components.
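For concreteness, the two motion feature vectors can be laid out as below; this is a sketch with invented numbers, not values from the patent:

    import numpy as np

    # Motion features tracked by subsystem 50: the centroid x-coordinate 98
    # and the forward tilt angle theta 100, each carrying velocity and
    # acceleration components. The numbers are placeholders.
    x_vec = np.array([250.0, -3.0, 0.2])     # [x, x', x'']
    theta_vec = np.array([5.0, 0.1, 0.0])    # [theta, theta', theta'']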
The motion tracker and predictor subsystem 50 updates the motion prediction at 208, performs the step of updating the covariance and gain matrices at 210, updates the motion estimate at 212, and performs the step of generating a combined motion estimate at 214. The loop from 208 to 214 mirrors the loop from 200 to 206 in many respects. On the first pass through the motion tracker and predictor 50, there is no motion prediction to update at 208 and there are no covariance or gain matrices to update at 210; the first pass therefore begins at 212.
In accordance with the provisions of the patent statutes, the principles and mode of operation of the invention have been explained and illustrated in a preferred embodiment. It must be understood, however, that the invention may be practiced otherwise than as specifically explained and illustrated without departing from its spirit or scope.

Claims (31)

1. A method for segmenting a current segmented image (31) from a current ambient image (38) captured by a sensor (22), the image segmentation method comprising:
comparing (53) the current ambient image (38) with a prior ambient image (38);
identifying the boundary of the current segmented image (31) from the differences between the current ambient image (38) and the prior ambient image (38); and
fitting (66) a template to the identified boundary using a Hausdorff distance heuristic.
2. The method of claim 1, wherein the prior ambient image (38) is captured less than approximately 1/40 of a second before the current ambient image (38) is captured.
3. The method of claim 1, further comprising determining (52) a region of interest within the current ambient image (38).
4. The method of claim 3, further comprising ignoring the portions of the current ambient image (38) that are not within the region of interest (52).
5. The method of claim 3, wherein determining the region of interest (52) in the ambient image (38) comprises predicting the position of the current segmented image (31) from the prior segmented image (31).
6. The method of claim 5, wherein a Kalman filter (46) is used to predict the position of the current segmented image (31) from the prior segmented image (31).
7. The method of claim 3, wherein the region of interest (52) is a rectangle within the current ambient image (38).
8. The method of claim 3, wherein a bottom area of the prior segmented image (31) is ignored in the current ambient image (38).
9. The method of claim 1, wherein a plurality of pixels (78) in the current ambient image (38) are compared with a corresponding plurality of pixels (78) in the prior ambient image (38).
10. The method of claim 9, wherein each pixel (78) of the plurality of pixels (78) in the current ambient image (38) is compared with a corresponding pixel (78) of the plurality of pixels (78) in the prior ambient image (38).
11. The method of claim 1, further comprising applying a low-pass filter to the identified boundary.
12. The method of claim 1, further comprising performing an image gradient heuristic (56) to locate areas of change between the current ambient image (38) and the prior ambient image (38).
13. The method of claim 1, further comprising thresholding (82) the identified boundary.
14. The method of claim 1, further comprising selecting the prior segmented image (31) as the current segmented image (31).
15. The method of claim 1, further comprising invoking a gradient image cleanup heuristic (64).
16. The method of claim 1, wherein fitting (66) the template comprises rotating the template through a range of angles.
17. The method of claim 16, wherein the range of angles is approximately −6 degrees to +6 degrees.
18. The method of claim 16, wherein the angles within the range of angles are predetermined.
19. The method of claim 1, wherein the template is a binary image.
20. The method of claim 1, further comprising modifying (68) the template.
21. The method of claim 20, wherein modifying (68) the template comprises setting a cubic spline fit.
22. The method of claim 21, wherein modifying (68) the template comprises setting a new group of control points.
23. The method of claim 1, further comprising fitting (70) an ellipse (88) to the template.
24. The method of claim 23, wherein fitting (70) the ellipse (88) to the template comprises invoking a direct least-squares fitting heuristic.
25. The method of claim 24, wherein fitting (70) the ellipse (88) to the template comprises copying the lower portion of a prior ellipse (88).
26. A method for isolating a current segmented image (31) from a current ambient image (38), comprising:
identifying (52) a region of interest in the current ambient image (38) from a prior ambient image (38);
applying a low-pass filter to an image difference (53), the image difference (53) being determined by comparing the region of interest (52) in the current ambient image (38) with the corresponding region in the prior ambient image (38);
performing (56) an image gradient calculation to find areas of rapidly changing image amplitude in the current ambient image (38);
thresholding the image difference using a predetermined cumulative distribution function (82);
cleaning up the results of the image gradient calculation (56);
matching (66) a template image to the cleaned-up results using a Hausdorff distance heuristic; and
fitting (70) an ellipse (88) to the template image.
27. A segmentation system (16) for separating a segmented image (31) from an ambient image (38), comprising:
an ambient image (38) including a segmented image (31) and a region of interest;
a gradient image module including a gradient image (56), wherein the gradient image module generates the gradient image (56) within the region of interest (52); and
a template module including a template, a template match, and a Hausdorff heuristic, wherein the template module generates the template match from the template, the gradient image (56), and the Hausdorff heuristic.
28. The system (16) of claim 27, wherein the template module assumes that the segmented image (31) remains in a seated position.
29. The system (16) of claim 27, wherein the template module rotates the template.
30. The system (16) of claim 29, further comprising a range of angles including a plurality of predetermined angles, wherein the template module rotates the template through each of the plurality of predetermined angles.
31. The system (16) of claim 27, further comprising:
a product image, a binary image, and a non-binary image;
wherein the template is the binary image and the gradient image (56) is the non-binary image; and
wherein the product image is generated by multiplying the template and the gradient image (56).