GB2525587A - Monocular camera cognitive imaging system for a vehicle - Google Patents
Monocular camera cognitive imaging system for a vehicle
- Publication number
- GB2525587A GB2525587A GB1406697.1A GB201406697A GB2525587A GB 2525587 A GB2525587 A GB 2525587A GB 201406697 A GB201406697 A GB 201406697A GB 2525587 A GB2525587 A GB 2525587A
- Authority
- GB
- United Kingdom
- Prior art keywords
- frame
- environment
- sub
- vehicle
- search
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 230000001149 cognitive effect Effects 0.000 title claims abstract description 14
- 238000003384 imaging method Methods 0.000 title claims abstract description 14
- 238000000034 method Methods 0.000 claims abstract description 42
- 239000013598 vector Substances 0.000 claims abstract description 26
- 238000012706 support-vector machine Methods 0.000 claims description 16
- 238000004458 analytical method Methods 0.000 claims description 4
- 238000004891 communication Methods 0.000 claims description 2
- 238000003672 processing method Methods 0.000 claims description 2
- 230000002123 temporal effect Effects 0.000 claims 1
- 231100001261 hazardous Toxicity 0.000 abstract 1
- 239000011159 matrix material Substances 0.000 description 10
- 238000001514 detection method Methods 0.000 description 5
- 230000002902 bimodal effect Effects 0.000 description 4
- 230000001186 cumulative effect Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 238000009434 installation Methods 0.000 description 2
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000001747 exhibiting effect Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 230000000977 initiatory effect Effects 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000004513 sizing Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/97—Determining parameters from multiple pictures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
A cognitive imaging system and a method of cognitive imaging are disclosed which include a step of classifying the complexity of a background environment in a segment, sub-frame or region of interest (ROI) of a frame captured by a monocular camera installed in a vehicle (vessel or craft) in order to have a field of view in the normal direction of motion of the vehicle. The classification is added to a queue of predetermined length and updated by adding a new classification every time a sub-frame is scanned in a subsequent new captured frame during target hazard tracking. The sequence of classifications forms an environment analysing vector. The environment analysing vector is used to determine if the environment background for the sub-frame can be reused in a subsequent background subtraction to track the target in a subsequently captured frame. The resulting tracking data can be used to determine if a hazardous situation is imminent and to alert the vehicle driver or to control the vehicle to avoid the hazard.
Description
A MONOCULAR CAMERA COGNITIVE IMAGING SYSTEM FOR
A VEHICLE
Technical field
[001] The present invention concerns a system for cognitive imaging in a vehicle using a monocular camera to capture image data.
Prior Art
[002] The field of cognitive imaging generally requires one or a sequence of image frames to be captured and analysed to deduce useful information from the image. For example, in the field of vehicle control, an image of a forward field of view may be captured and analysed to determine if a hazard appears ahead of the vehicle. The resulting information may be used to steer the vehicle. Such systems are commonly stereoscopic in order to exploit parallax for ranging and sizing, and are therefore complex and expensive.
[003] Systems using a monocular camera to capture an image are relatively less complex and expensive and occupy less space within the vehicle. However, reliably distinguishing a foreground hazard from a background has presented significant problems to monocular cognitive imaging systems. Such systems generally use a variety of techniques to identify a foreground hazard such as another road vehicle, of which image subtraction is one. Where an environment or background is relatively unchanging it is possible to implement image subtraction as between two sequential image frames. In this case the relatively unchanging background image will be subtracted substantially completely leaving an image relating to a moving foreground hazard or target. For this process to be applied to a moving hazard it is necessary to analyse the complexity of the environment in two or more captured frames in order to determine if it can be subtracted with any confidence that the target will be all that is left.
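The paragraph above describes distinguishing a moving foreground target by subtracting two sequential frames when the background is relatively unchanging. The following is a minimal sketch of that general frame-differencing idea, not the patented method itself; it assumes BGR colour frames supplied as NumPy arrays, and the difference threshold is purely illustrative.

```python
import cv2
import numpy as np

def moving_foreground(prev_frame, curr_frame, diff_threshold=25):
    """Subtract two sequential frames; pixels that change by more than the
    threshold are kept as candidate moving foreground (a potential target)."""
    prev_grey = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_grey = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_grey, prev_grey)
    # A relatively unchanging background gives a near-zero difference and is
    # removed; only the moving foreground survives the threshold.
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    return mask
```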
Statement of Invention
[004] Accordingly there is provided a cognitive imaging system for a vehicle comprising: a monocular camera arranged to focus an image of a field of view onto an image sensor, said image sensor processing the image into image data, and a processor device responsive to machine readable code stored in a memory to be capable of driving the camera to capture a sequence of images to implement an image processing method in order to process the image data and identify a hazard in the field of view; wherein:
the processor is responsive to capture a time sequence of images from the camera; the processor is responsive to scan pixel image data of a first captured image in a sequence and to apply a series of filters to identify a sub-frame in which a vehicular hazard may exist and to distinguish the background environment from the foreground vehicle hazard; the processor is responsive to calculate a fast environment classification criterion for each sub-frame in the current frame; the processor is responsive to add each fast environment classification to a fast environment identification queue of a predetermined length to form an environment analysing vector; the processor responding to analyse the pixel image data of a subsequent captured image to add fast environment classification data to the fast environment identification queue; whereby a slow environment result is updated according to the fast environment analysing vector calculated from the preceding sequence of captured frames.
[005] The process of target recognition and tracking is divided into two loops, referred to as the fast environment loop and the slow environment loop. The fast environment loop addresses the case where the environment (background) is changing quickly or is otherwise unknown.
[006] The slow environment loop seeks to reduce the processing required to track a target first identified in the fast environment loop, by identifying and exploiting situations where the environment, and therefore the background, is changing slowly. In general the slow environment loop is a faster process and reduces the burden on hardware assets.
Brief Description of Drawings
[007] An embodiment of a cognitive imaging system according to the present invention will now be described, by way of example only, with reference to the accompanying illustrative figures, in which:
figure 1 is an isometric view of a user vehicle travelling on a road approaching a tunnel following a target vehicle;
figure 2 is a diagram of the field of view of a system camera from a system-equipped user vehicle;
figure 3 is a diagram of the bottom shadow of each target vehicle in figure 2;
figure 4a is a side elevation of the scene in figure 1;
figure 4b is a plan of the scene in figure 1;
figure 5 is a side elevation of the user vehicle illustrating aspects of the system installation;
figure 6 is a high level chart of the system operation;
figures 7A & 7B combine to form a detailed flowchart of the system operation.
Detailed Description
[008] Figure 1 illustrates a user vehicle 1 equipped with the system travelling on a road approaching a tunnel. As shown in figures 1, 4a and 4b a target vehicle T1 is leading the user vehicle 1 into the tunnel, while a second target vehicle T2 is exiting the tunnel towards the user vehicle 1. The target vehicles T1 and T2 form the foreground against a background environment which is moving relative to the user vehicle 1. As shown in figure 5, the user or camera vehicle is fitted with a system camera 2 mounted to have a field of view (FOV) in the forward direction. The camera is mounted to have a lens axis at a height h_camera above the ground. To calibrate the system it is necessary to load the system with the height of the vehicle h_car.
[009] The system may be interfaced with the user vehicle management systems in order to read the vehicle speed, indicator condition and steering angle.
[010] Referring to figure 7, at step S-0 the camera is installed in a host vehicle, preferably behind the windscreen as shown in figure 5, where it will not obscure the driver's view and has a field of view looking horizontally forwards through the windscreen. The camera forms part of a cognitive imaging system (CIS) shown diagrammatically in figure 8, including memory and a processor running machine readable code to implement an image recognition process and to intelligently output alerts to the vehicle driver via a vehicle management system (VMS) of the vehicle. Alerts may be via visual and/or audible means. The output means may be integral with the CIS or may interface to output via a VDU or loudspeaker integral with the vehicle.
[011] The CIS will preferably have an interface for communication with the output of the vehicle management sensors. The interface may be hardwired or wireless and may be via ports in the vehicle management system or direct to the sensors. Vehicle sensor data captured by the CIS will preferably include: vehicle speed, accelerometer data, steering angle, accelerator angle, brake sensor and indicator actuation. The interface may also be installed to be capable of driving the vehicle management system to actuate the vehicle brakes or other systems.
[012] When the CIS is first actuated it may go through a calibration procedure to install according to the particular height above ground, and other factors which are characteristic of the particular vehicle installation in order to determine the values of h_car and h_camera shown in figure 5.
[013] As part of the initiation process, whenever the CIS starts up, step S-0.1 initialises a frame counter to a fixed initial value such as 1.
[014] In normal operation the first process step, S-1, is to capture a first real time image frame (FR1) and subject it to the fast environment process loop (FEPL). The FEPL is so named because the environment or background is either completely new, due to the start-up condition, or changing fast, and is therefore unknown for the initial frame FR1.
[015] After capture the frame may be subject to image pre-processing steps such as re-sampling, noise reduction, contrast enhancement or scale space representation.
[016] In the first pass of the fast environment process loop FR1 is scanned at step S-2 to identify any region of interest (ROI). An ROI (sometimes known as a sub-frame) is any portion of FR1 which contains image data suspected of indicating a potential hazard. An example of a potential hazard is a vehicle moving relative to the host vehicle, and not part of the background, which could adopt a vector resulting in a collision. ROIs are identified by thresholding and boundary detection techniques.
[017] ROIs are identified in the captured image by inspecting the pixels along a search path which starts at the middle of the bottom of the frame and proceeds towards the top, and which progresses from the middle to each of the two sides, either sequentially or simultaneously. The process of defining an ROI is described in greater detail below.
[018] The target vehicle height up_car is determined according to the position where the vehicle bottom appears in the row of the search point in the image. Thus:
[019] up_car = irow + (mrow - irow) * (h_camera - h_car) / h_camera    (1)
[020] where mrow is the frame row in which the stationary point for the ROI lies, irow is the row in which the point at infinity lies, h_camera is the vertical height from the ground to the camera in the host vehicle and h_car is the a priori height of the target vehicle.
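A small helper reflecting equation (1) as reconstructed above; the function and argument names are illustrative rather than from the patent, and row indices are assumed to increase down the image.

```python
def target_extent_up_car(mrow, irow, h_camera, h_car):
    """Equation (1): up_car from the stationary-point row and the horizon row.

    mrow     - frame row of the stationary point (target vehicle bottom)
    irow     - frame row of the point at infinity (horizon)
    h_camera - camera lens height above the ground
    h_car    - a priori height of the target vehicle (same units as h_camera)
    """
    return irow + (mrow - irow) * (h_camera - h_car) / h_camera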
[021] Based on the stationary point (search_x, search_y), where every search point passes over the height (up_car) of the target vehicle which locates on this stationary point, the system constructs an ROI corresponding to this stationary point, that is the region of the frame suspected to contain a hazard object image, with the stationary point appearing in the bottom centre of the region. The ROI is then defined by the following corner coordinate points of the frame:
[022]
(search_x - up_car, search_y - up_car/2)    (search_x - up_car, search_y + up_car/2)
(search_x, search_y - up_car/2)             (search_x, search_y + up_car/2)    (2)
[023] At step S-2.1 the system determines if the count of ROIs exceeds zero, in other words whether ROIn > 0, where ROIn is the number of regions of interest. If so it goes to step S-3 and if not to step S-1 where the next real time image is captured. Each ROI is uniquely identifiable in a frame by a 'stationary point'. The stationary point is a coordinate, common to any frame, that is to say that in a sequence of frames the stationary point will not move from one frame to another. Preferably the stationary point is established in the middle of the bottom row of each ROI, from which the examination of the ROI ordinarily commences.
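A sketch of the ROI construction of equation (2) as reconstructed above: a square of side up_car whose bottom edge is centred on the stationary point. Coordinates are returned as (row, column) pairs; the helper name is illustrative.

```python
def roi_corners(search_x, search_y, up_car):
    """Corner points of the ROI of equation (2), as (row, column) pairs.
    search_x is the row and search_y the column of the stationary point."""
    half = up_car / 2
    return [
        (search_x - up_car, search_y - half),  # top-left
        (search_x - up_car, search_y + half),  # top-right
        (search_x, search_y - half),           # bottom-left
        (search_x, search_y + half),           # bottom-right
    ]
```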
[024] There may be no, one or several ROIs in FR1. In the case where there are no ROIs the system goes to step S-1 to capture a new ((n+1)th) frame. If there are any unexamined ROIs, at step S-3 a first one of the ROIs of FR1 is selected for examination.
[025] Examination comprises a number of technical tests, the first of which starts at step S-3.0.1 where the bottom shadow of the selected ROI is determined. The bottom shadow area of the suspected hazard vehicle is extracted based on the ROI at (2) using equation (3) below to generate a left hand bottom shadow and equation (4) for the right hand bottom shadow:

(search_x - up_car/4, search_y - up_car/4)    (search_x - up_car/4, search_y)
(search_x, search_y - up_car/4)               (search_x, search_y)    (3)

(search_x - up_car/4, search_y)    (search_x - up_car/4, search_y + up_car/4)
(search_x, search_y)               (search_x, search_y + up_car/4)    (4)

[026] The potential ground area to the left and right of the ROI is determined according to (5) and (6) below:

(search_x + 5, search_y - up_car/4)     (search_x + 5, search_y)
(search_x + 20, search_y - up_car/4)    (search_x + 20, search_y)    (5)

(search_x + 5, search_y)     (search_x + 5, search_y + up_car/4)
(search_x + 20, search_y)    (search_x + 20, search_y + up_car/4)    (6)

[027] When the ROI is produced by a vehicle which is far distant from the host vehicle, the bottom shadow will tend to resemble a horizontal line. This effect becomes progressively more pronounced the further the target vehicle is from the host vehicle, and the bottom shadow becomes progressively less like a horizontal line the closer the target is to the host vehicle. In this case the potential left and right bottom shadow areas update to (7) and (8) below:

(search_x - up_car/2, search_y - up_car/2)    (search_x - up_car/2, search_y)
(search_x, search_y - up_car/2)               (search_x, search_y)    (7)

(search_x - up_car/2, search_y)    (search_x - up_car/2, search_y + up_car/2)
(search_x, search_y)               (search_x, search_y + up_car/2)    (8)

and the potential left and right ground areas update to (9) and (10):

(search_x + 5, search_y - up_car/2)     (search_x + 5, search_y)
(search_x + 20, search_y - up_car/2)    (search_x + 20, search_y)    (9)

(search_x + 5, search_y)               (search_x + 5, search_y + up_car/2)
(search_x + 5 + up_car/2, search_y)    (search_x + 5 + up_car/2, search_y + up_car/2)    (10)

[028] The apparent depth or height of the bottom shadow is an indication of the distance of the target from the host vehicle. Thus at step S-3.1 the depth of the bottom shadow is compared to a threshold value; if the threshold value is exceeded, at step S-3.2 the target is deemed close enough to the host vehicle to be a hazard.
[029] The system calculates the mean grey level (Mean_S1) of the potential left bottom shadow area and the mean grey level (Mean_S2) of the potential right bottom shadow area, respectively. The system then calculates the mean grey level (Mean_G1) of the potential left ground area and the mean grey level (Mean_G2) of the potential right ground area. Each mean grey level is obtained by calculating the weighted average of the grey values of the corresponding region, where the weight is based on the distance of the potential hazard object from the camera.
[030] Detection of a hazard target is determined at step S-3.1 if:

Shadow_judgement = ((Mean_G1 - Mean_S1 - threshold_Shadow) > 0) || ((Mean_G2 - Mean_S2 - threshold_Shadow) > 0)    (11)

[031] That is, the shadow judgement is positive when the difference between the mean grey level of one of the left and right ground areas and the mean grey level of the corresponding left or right bottom shadow area of an ROI exceeds the threshold (threshold_Shadow).
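A sketch of the shadow judgement of equation (11). The patent specifies a distance-weighted average for the mean grey levels without giving the exact weights, so this illustration falls back to a plain mean; the function name and arguments are assumptions.

```python
import numpy as np

def shadow_judgement(shadow_left, shadow_right, ground_left, ground_right,
                     threshold_shadow):
    """Equation (11): pass if either ground area is brighter than its adjacent
    bottom-shadow area by more than threshold_shadow.
    Each argument is a grey-level image patch as a NumPy array."""
    mean_s1, mean_s2 = shadow_left.mean(), shadow_right.mean()
    mean_g1, mean_g2 = ground_left.mean(), ground_right.mean()
    return (mean_g1 - mean_s1 - threshold_shadow > 0) or \
           (mean_g2 - mean_s2 - threshold_shadow > 0)
```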
[032] If the threshold is not exceeded the process goes to step S-3.3 where the ROI is tagged as examined; if there are any remaining unexamined ROIs, step S-3.4 returns the process to step S-3, and if there are no remaining unexamined ROIs the process steps to S-1 to capture a new frame.
[033] If the threshold is exceeded the process steps to S-4 to implement a dynamic edge accumulative decision (AEDD). The AEDD comprises the steps of determining the vertical edge accumulative decision area for the left side as follows:

(search_x - up_car/2, search_y - up_car/2)    (search_x - up_car/2, search_y - up_car/4)
(search_x, search_y - up_car/2)               (search_x, search_y - up_car/4)    (12)

and similarly an accumulative edge decision area for the right side:

(search_x - up_car/2, search_y + up_car/4)    (search_x - up_car/2, search_y + up_car/2)
(search_x, search_y + up_car/4)               (search_x, search_y + up_car/2)    (13)

[034] The system seeks edges in each area using a Sobel operator and conducts a two-dimensional convolution to obtain the left and right vertical edge images Sobel_1 and Sobel_2 respectively. The left and right edge images Sobel_1 and Sobel_2 are binarized using a dynamic binarization threshold and accumulated downward. The accumulated values are obtained after accumulating the left and right vertical edges of the sub-frame. The maximum of the left accumulated values and the maximum of the right accumulated values are then compared with the pre-defined two stage dynamic bimodal cumulative threshold (Double Edge Threshold). If both the maximum left and right values exceed the Double Edge Threshold, the system allows the current dynamic edge accumulative decision to pass. Otherwise the possibility that a vehicle exists in the ROI of the current stationary point is set to zero; the search point then goes to the next stationary point along the search path, the sub-frame of interest is updated, and the process goes into the next dynamic bottom shadow decision state, returning to step S-3.
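A sketch of the vertical-edge accumulation step using OpenCV's Sobel operator. The left and right patches would be the areas of (12) and (13); the fixed thresholds stand in for the dynamic binarization and bimodal cumulative thresholds taken from the parameter table further below, and the function name is an assumption.

```python
import cv2
import numpy as np

def passes_double_edge_test(left_area, right_area, sobel_threshold,
                            double_edge_threshold):
    """Dynamic edge accumulative decision: binarize vertical Sobel edges,
    accumulate down each column, and require both sides to have a column
    whose accumulated count exceeds the cumulative threshold."""
    def max_column_sum(patch):
        edges = cv2.Sobel(patch, cv2.CV_64F, 1, 0, ksize=3)   # vertical edges
        binary = (np.abs(edges) > sobel_threshold).astype(np.uint8)
        return binary.sum(axis=0).max()                       # accumulate downward
    return (max_column_sum(left_area) > double_edge_threshold and
            max_column_sum(right_area) > double_edge_threshold)
```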
[035] If the double edge threshold is passed the process goes to step S-5 from decision step S-4.1 to apply a dynamic support vector machine (SVM) decision step; otherwise the ROI fails step S-4.1 and goes to step S-3.3. In the SVM step the captured and pre-processed sub-frames are regularized to the same size and normalized, preferably to 64*64 pixels. A gradient histogram is established for the ROI. According to the gradient histogram of the sub-frame, the system adopts block units with a size of 8*8 pixels, cell units with a size of 4*4 pixels, a smooth step size of 8 and a histogram bin number of 9. The normalized 64*64 ROI with a potential vehicle object is then transformed to a 1*1764 feature vector, with a feature vector dimension of 1764, in which each dimension value represents a statistically accumulated value along a certain direction of gradient in a specific sub-area.
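A sketch of the regularisation and HOG feature step using OpenCV. Note an assumption: the 1764-dimensional figure quoted above matches the common 64*64-window HOG configuration of 16*16 blocks, 8*8 cells, stride 8 and 9 bins (7*7*4*9 = 1764), so those parameters are used here rather than the 8*8 block and 4*4 cell sizes quoted in the text; the SVM handle is assumed to be a pre-trained cv2.ml SVM loaded elsewhere.

```python
import cv2
import numpy as np

# 64x64 window, 16x16 blocks, 8x8 stride, 8x8 cells, 9 bins -> 1764 features
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def classify_roi(roi_grey, svm):
    """Regularise the ROI to 64x64, extract the 1x1764 HOG feature vector and
    let a pre-trained SVM decide: 1 = vehicle present, 0 = not (equation (14))."""
    patch = cv2.resize(roi_grey, (64, 64))
    feature = hog.compute(patch).reshape(1, -1).astype(np.float32)
    _, prediction = svm.predict(feature)
    return int(prediction[0, 0])
```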
[036] Once the ROI is regularized and normalised it is compared with a database of templates of potential target road vehicles, such as cars, vans, lorries, coaches etc. The templates are sampled not only for a range of vehicles but for each vehicle against a range of backgrounds and conditions such as weather and/or daylight and/or night.
Comparison may be via adding the ROI to the negative template and examining the output result. If the result is zero or near zero the target vehicle in the ROl is deemed a match with the template. The output from the SVM is one of three SVM classifiers.
[037] In more detail, the steps to generate the classifiers are: obtain the histogram of oriented gradients (HOG) feature vector matrix from the block unit and cell unit gradient histograms after regularizing; construct an augmented feature matrix based on the feature vector matrix by combining the class samples; and then find a hyper classification plane that can distinguish positive templates from negative templates through the SVM.
[038] Identifying process: based on the three SVM classifiers corresponding to different scenarios, choose the SVM corresponding to the slow environment identifying result and match it with the currently identified slow environment.
[039] Then make the final decision at step S-5.1 by utilizing the following formula:

f(x) = 1 if (w.x) + b > 0; f(x) = 0 if (w.x) + b < 0    (14)

where 1 indicates that the SVM classifies the sub-frame of interest into the positive template space; a hazard target vehicle is present in the ROI and the system directly outputs the ROI.
Where the output is 0, the SVM step classifies the ROI into the negative template space.
In that case there is no hazard target vehicle present in the ROI and the process goes to step S-3 to examine a new unexamined ROI, or to step S-1 to capture a new frame where step S-3.4 finds no unexamined ROI.
[040] The step of tagging may be achieved in any suitable way, for example by setting a flag against the stationary point of the ROI.
[041] If the output of step S-5.1 is 1, that is to say positive, the process goes to step S-5.2 and records the stationary point to track the target vehicle in subsequent cycles. The stationary point of the ROI is associated with a recording of the background of the ROI.
[042] To record the ROI background the system may advantageously record only the backgrounds to the left (Background_L) and right (Background_R) of the target vehicle.
To do this the system obtains the grey level information of every pixel of the backgrounds in sub-regions to the left and right of the vehicle and within the ROI, where the left region may be defined by:

(search_x - up_car, search_y - up_car)        (search_x - up_car, search_y - up_car/2)
(search_x + up_car/8, search_y - up_car)      (search_x + up_car/8, search_y - up_car/2)    (15)

[043] and the right as:

(search_x - up_car, search_y + up_car/2)      (search_x - up_car, search_y + up_car)
(search_x + up_car/8, search_y + up_car/2)    (search_x + up_car/8, search_y + up_car)    (16)

[044] In each of Background_L and Background_R the grey level information of every pixel is determined on a row by row basis to construct a left and a right background grey level matrix.
Based on the background grey level matrix, the system calculates the mean value of each row. From the mean value of each row the system constructs a row mean vector, Mean_GBL for the left and Mean_GBR for the right:

Mean_GBL = [ Σ_{i=1..0.5*up_car} Cr_Background_L(1, i) / (0.5*up_car), ..., Σ_{i=1..0.5*up_car} Cr_Background_L(0.625*up_car, i) / (0.5*up_car) ]    (17)

Mean_GBR = [ Σ_{i=1..0.5*up_car} Cr_Background_R(1, i) / (0.5*up_car), ..., Σ_{i=1..0.5*up_car} Cr_Background_R(0.625*up_car, i) / (0.5*up_car) ]    (18)

[045] Then the system extracts the maximum and minimum values of the row mean vectors, which correspond to the left and right background sub-frames of the ROI respectively.
Based on the maximum and minimum values, the system constructs a feature matrix:

Mean_feature = [ Mean_GBL_Max    Mean_GBL_Min
                 Mean_GBR_Max    Mean_GBR_Min ]    (19)

[046] where Mean_GBL_Max and Mean_GBL_Min represent the maximum and minimum of the row mean vector in the left background sub-frame, and Mean_GBR_Max and Mean_GBR_Min represent the maximum and minimum of the row mean vector in the right background sub-frame.
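A sketch of the background feature extraction of equations (17) to (19), assuming the left and right background regions have already been cropped as grey-level NumPy arrays. A plain row mean is used and the exact summation limits of (17) and (18) are not reproduced; the function name is an assumption.

```python
import numpy as np

def background_feature_matrix(background_l, background_r):
    """Row-mean vectors Mean_GBL / Mean_GBR and the 2x2 feature matrix of
    their extremes, per equations (17)-(19)."""
    mean_gbl = background_l.mean(axis=1)   # mean grey level of each row, left
    mean_gbr = background_r.mean(axis=1)   # mean grey level of each row, right
    return np.array([[mean_gbl.max(), mean_gbl.min()],
                     [mean_gbr.max(), mean_gbr.min()]])
```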
[047] Then the system applies a space coordinate transformation to the feature matrix (19) and obtains a new profile feature matrix Mean_shadowfeature.
[048] Based on the profile feature matrix, the fast environment classification criterion for the current single frame is as follows.
The environment is "1, Sunny" when: Meanshadowfeature[l 1] >150 &&Meanshadowfeature [2,1]> 150 II Meanshadowfeature[l 2] >90 && Meanshadowfeature [2,2]>90 The environment is "4, Nightfall" when: Meanshadowfeature [1,1]cBO && MeanshadoMeature [2,1]c80; The environment is "2, Cloudy" when: Meanshadowfeajure [1,2]>50 && Meanshadoeature [2,2]>50; The environment is "3, Dusk" when: Meanshadowfeajure [1,2]c50 && MeanshadoMeature [2,2]c50; The environment is "0, Uphold" otherwise.
where && is the logical AND operation and || is the logical OR operation; Mean_shadowfeature[1,1] represents the entry in the 1st row and 1st column of Mean_shadowfeature; Mean_shadowfeature[2,1] the entry in the 2nd row and 1st column; Mean_shadowfeature[1,2] the entry in the 1st row and 2nd column; and Mean_shadowfeature[2,2] the entry in the 2nd row and 2nd column.
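A sketch of the single-frame classification criterion above, taking the profile feature matrix as a 2x2 NumPy array; the 1-based row/column indices in the text map to 0-based indices here, and the tests are applied in the order listed above.

```python
def classify_fast_environment(m):
    """Map the 2x2 profile feature matrix Mean_shadowfeature to the fast
    environment label: 1 Sunny, 2 Cloudy, 3 Dusk, 4 Nightfall, 0 Uphold."""
    if (m[0, 0] > 150 and m[1, 0] > 150) or (m[0, 1] > 90 and m[1, 1] > 90):
        return 1  # Sunny
    if m[0, 0] < 80 and m[1, 0] < 80:
        return 4  # Nightfall
    if m[0, 1] > 50 and m[1, 1] > 50:
        return 2  # Cloudy
    if m[0, 1] < 50 and m[1, 1] < 50:
        return 3  # Dusk
    return 0      # Uphold (keep the previous settings and results)
```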
[049] According to the above results, the system updates the parameters at step S-6.2 as follows, as a single frame dynamic decision system parameter update based on the fast environment detection result:

| Fast Environment Indication | Single Frame Fast Environment Identification Result | Dynamic Binarization Sobel Threshold for Global Gradient Image | Dynamic Distinguish Threshold for Shadow and Ground (threshold_shadow) | Two Stage Dynamic Bimodal Cumulative Threshold |
|---|---|---|---|---|
| 1 | Sunny | 100 | 50 | 3/4 vehicle height |
| 2 | Cloudy | 80 | 30 | 1/6 vehicle height |
| 3 | Dusk | 50 | 25 | 1/6 vehicle height |
| 4 | Nightfall | 30 | 20 | 1/6 vehicle height |
| 0 | Uphold | Uphold last settings and results | Uphold last settings and results | Uphold last settings and results |

[082] At step S-7 the system selects one of the recorded stationary points identifying an ROI which has passed each of steps S-3 to S-5. It is now desirable to determine if the background exhibits a high level of complexity or a low level of complexity (the background is simple). To determine the complexity of the background each ROI is scanned backwards at step S-7.1 as compared to step S-3; thus the search pattern is from top to bottom and from the edges towards the centre. At step S-7.2 the bottom shadow is determined from the backscan results and again compared to the bottom shadow threshold mentioned at step S-3 above.
[083] The system will uphold the current environment results if some ROI passes step S-3 after the backtracking search point has reached the start point, searching along the opposite direction of the search path. Step S-7.3 indicates that there are some (one or more) approximate false-alarm stationary points and the background environment is therefore determined to be complex at step S-7.4, so that the process can advance directly to step S-7.7.
[084] If the decision at step S-7.4 is that the environment is simple (S-7.5), the process advances to step S-7.6. In this case, if the result of the fast environment decision is "1", the current fast environment identification result requires rectification into "2". The system will re-identify the fast environment around the target and rectify the fast environment error at step S-7.6. The dynamic decision parameters for the next frame will be based on the rectified fast environment decision.
[085] At step S-7.7 a fast environment identification queue is formed with a length of 100. This is the fast environment identification queue for the purpose of slow environment identification (SEI). The SEI is the outer loop of the fast environment loop. Every output of the environment identification result is pushed into the queue. If the current queue already stores 100 statistic results, the first statistic that was pushed into the queue is squeezed out, keeping the 'first in, first out' rule. The push strategies are as follows (a sketch of the queue handling follows paragraph [086] below):
If no sub-frame of interest passes S3 (dynamic bottom shadow decision), S4 (dynamic edge accumulative decision) and S5 (dynamic support vector machine (SVM) decision), that is, the system cannot find an "object stationary point", "0" will be pushed into the queue.
If the fast environment decision of the current sub-frame is "1" and the system finds an approximate false-alarm stationary point during the backtracking search process, "1" will be pushed into the queue.
If the fast environment decision of the current sub-frame is "1" and the system cannot find an approximate false-alarm stationary point during the backtracking search process, "2" will be pushed into the queue.
If the fast environment decision of the current sub-frame is "2" or "3" or "4", the fast environment decision result of the current sub-frame will be pushed into the queue. If there is a sub-frame of interest that passes S3, S4 and S5, the rectified fast environment decision results "0", "1", "2", "3", "4" will be pushed into the queue sequentially.
Therefore, it forms an environment analysing vector with a range from 0 to 4.
[086] At step S-7.8 the system then utilizes the 100 statistic fast environment decision results to update the slow environment result according to the following strategies: if the number of "1" results is larger than 30, the slow environment will be set to "1" (the complicated condition); if the number of "3" and "4" results is larger than 30, the slow environment will be set to "3" (the weak light condition); in all other situations, the slow environment will be set to "2" (the normal condition).
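A sketch of the fast environment identification queue and the slow environment decision described in the two preceding paragraphs. A fixed-length deque enforces the first-in, first-out rule once 100 results are stored; the counting thresholds follow the text, and the names are illustrative.

```python
from collections import deque

fast_env_queue = deque(maxlen=100)   # oldest result is squeezed out once full

def push_fast_result(result):
    """Push one fast environment identification result (0-4) into the queue."""
    fast_env_queue.append(result)

def slow_environment():
    """Slow environment result from the last 100 fast results:
    1 = complicated, 3 = weak light, 2 = normal."""
    values = list(fast_env_queue)
    if values.count(1) > 30:
        return 1                      # complicated condition
    if values.count(3) + values.count(4) > 30:
        return 3                      # weak light condition
    return 2                          # normal condition
```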
[087] The process then goes to step S-8.0 where the fast environment identification is reset according to the slow environment result at a certain interval.
[088] If the slow environment result is "1", the fast environment identification will be set to "3, Nightfall". Consequently the fast environment related parameters, including the "Dynamic Binarization Sobel Threshold for Global Gradient Image", the "Dynamic Distinguish Threshold for Shadow and Ground" and the "Two Stage Dynamic Bimodal vertical direction Cumulative Threshold", will change for the next frame according to the reset of the fast environment from 1 to 3.
[089] If the slow environment identification result is "2" or "3", then the fast environment identification will be forced to "4, Dusk". Consequently the fast environment related parameters, including the "Dynamic Binarization Sobel Threshold for Global Gradient Image", the "Dynamic Distinguish Threshold for Shadow and Ground" and the "Two Stage Dynamic Bimodal vertical direction Cumulative Threshold", will change for the next frame according to the reset of the fast environment.
[090] When sub-frames (ROIs) pass S3, S4 and S5 in several consecutive frames, it can be concluded that a vehicle object exists in the sub-frame (ROI) of the corresponding stationary point. Each such sub-frame of interest will be marked as a sub-frame of object tracking. The detailed process is described below: taking the frame as a unit and the current frame as the base point, continuously obtain the frames which start from the current frame back through the previous several frames.
Then analyse the existence and location of those sub-frames of interest which pass S3, S4 and S5.
If the slow environment identification output is "1", the environment identification result is complex. If there is still a sub-frame of interest in the current and previous four frames that can pass the three stage dynamic decision, under the condition that the stationary points of these five frames of interest are close to each other, then it can be regarded that an object exists and the object is marked as the final object detection result.
If the slow environment identification output is "2" or "3", the object feature is weakened. Therefore, if there still exists a sub-frame of interest in the current and previous three image frames that can pass the three stage dynamic decision, under the condition that the stationary points of these sub-frames of interest are close to each other, it can be regarded that an object exists and the object is marked as the final object detection result.
[091] Referring again to step S-5.3, where each ROI has been scanned so that there are no ROIs in the Nth captured frame which have not been subject to at least step S-3, the system outputs to step S-5.3.1 where a frame counter is incremented from N to N+1. The process advances to step S-5.3.2 where the frame count is compared to a predetermined frame count number "X". In the example X may be five but might be set to a higher or lower value. This ensures that a sequence of five captured frames must be inspected, with the corresponding ROI exhibiting the presence of a potential hazard target, before target tracking is confirmed. When the frame count N equals the frame count number X, the system advances to step S-5.3.3 where the frame count is reset to 1. From step S-5.3.3 the system advances to step S-9.1 where the target data is output for subsequent processing. After step S-9.1 the method advances to step S-1 where a new frame is captured for examination.
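A sketch of the frame-count confirmation just described: a target is only confirmed and its data output once the corresponding ROI has survived the decision stages across X consecutive frames (X = 5 in the example). The function name and return convention are assumptions, not from the patent.

```python
def confirm_target(frame_count, roi_passed_all_stages, x=5):
    """Return (new_frame_count, confirmed). The counter advances while the ROI
    keeps passing S3-S5 and resets to 1 once X consecutive frames are seen."""
    if not roi_passed_all_stages:
        return 1, False               # sequence broken; start counting again
    if frame_count + 1 >= x:
        return 1, True                # X consecutive frames: target confirmed
    return frame_count + 1, False
```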
[092] If the frame count number is not reached at step S-5.3.2 the method advances directly to step S-1 to capture a new frame for examination.
[093] The cognitive imaging system, having confirmed the presence of a foreground object which may prove to be a hazard, is able to track the hazard with a high degree of confidence even against fast changing background environments. The tracking of any hazard target is achieved across several sequentially captured frames, which facilitates the step of calculating a vector for the target and hence a plot of the position of the target vehicle against time which can be compared to the current vector of the host vehicle. As a result the system is able to identify objects on a potential collision course with the user vehicle and issue a hazard warning to the driver by way of integral warning systems. Integral warning systems may, for example, include a screen, warning lights or an acoustic warning from a loudspeaker. Alternatively the warning may be transferred over the interface to drive the warning indicators of the vehicle.
A feature of the system is an ability to correlate the lateral movement of the vehicle with the operation of the vehicle indicators in order to warn the driver of changing lane without indicating or otherwise driving erratically.
Claims (22)
- Claims 1. A cognitive imaging system for a vehicle comprising: a monocular camera arranged to focus an image of a field of view onto an image sensor, said image sensor capable of processing the image into image data, and a processor device responsive to machine readable code stored in a memory to be capable of driving the camera to capture a sequence of image frames to implement an image processing method in order to process the image frame data and identify a hazard in the field of view; wherein: the processor is responsive to capture a time sequence of images from the camera; the processor is responsive to scan pixel image data of a first captured frame from a sequence of frames and to apply a series of filters to identify a sub-frame in which a vehicular hazard may exist and to distinguish the background environment from the foreground vehicle hazard; the processor is responsive to calculate an environment classification criterion for each sub-frame in the current frame; the processor is responsive to add each fast environment classification to a fast environment identification queue of a predetermined length to form an environment analysing vector; the processor responding to analyse the pixel image data of a subsequent captured image to add fast environment classification data to the fast environment identification queue; whereby a slow environment result is updated according to the fast environment analysing vector calculated from the preceding sequence of captured frames.
- 2. A system according to claim 1 wherein the filters comprise at least a bottom shadow filter, wherein the processor calculates a bottom shadow of a suspected hazard and compares the bottom shadow to a threshold value.
- 3. A system according to claim 2 wherein the filters include a dynamic edge filter.
- 4. A system according to claim 3 wherein the processor is responsive to the output of the dynamic edge filter to apply a support vector machine (SVM) filter, wherein each sub-frame is regularized, normalised and compared to a database of templates to generate one of a plurality of classifiers, of which classifiers at least one indicates a match with a template.
- 5. A system according to claim 4 wherein the processor is responsive to the sub-frame passing each filter to back-scan the ROI suspected to contain a hazard.
- 6. A system according to claim 5 wherein the processor is responsive to apply the bottom shadow filter to the output of the back-scanned ROI.
- 7. A system according to claim 5 or 6 wherein the processor responds to the output of the back-scan bottom shadow filter to determine if the background environment of the sub-frame is simple or complex.
- 8. A system according to claim 7 wherein the processor calibrates the simplicity or complexity of the background environment of a sub-frame to determine if the background can be reused to track the target hazard in the sub-frame in one or more subsequent captured frames.
- 9. A system according to any one of the preceding claims wherein the processor calculates a vector of an identified target hazard and a vector of the system and calculates a probability of the vectors coinciding to output a warning of the risk of a collision.
- 10. A system according to claim 9 wherein the system includes an interface for communication of data with an inboard vehicle management system whereby the system can capture data determined from any of the vehicle: I. steering angle sensor, II. accelerator/throttle angle sensor, III. direction indicator state, IV. brake operation sensor.
- 11. A system according to claim 10 wherein the system interface is capable of driving the inboard vehicle management system to actuate any of the vehicle: brakes, steering or accelerator.
- 12. A method of cognitive imaging in a cognitive imaging system comprising: capturing a sequence of frames of monocular images of a single field of view from a moving vehicle; scanning a temporal first of the frames to identify sub-frames (ROI) which are most likely to contain a foreground object constituting a potential hazard; scanning each sub-frame to calculate an environment classification for the sub-frame; adding each environment classification to an environment classification queue of a predetermined length to form an environment analysing vector; scanning a subsequent one of the sequence of captured frames to generate sub-frame environment classifications and adding the environment classifications to the environment classification queue to update the environment analysing vector; whereby a slow environment result is updated according to the fast environment analysing vector calculated from the preceding sequence of captured frames.
- 13. A method according to claim 12 wherein a first of the filters is a bottom shadow filter which calculates the bottom shadow of a sub-frame and compares the bottom shadow to a threshold value.
- 14. A method according to claim 12 or 13 wherein a one of the filters is a dynamic edge filter.
- 15. A method according to claim 12, 13 or 14 wherein a one of the filters is a support vector machine (SVM) filter wherein each sub-frame is regularised and normalised and compared to a database of templates to generate one of a plurality of classifiers, of which classifiers at least one indicates a match with a template.
- 16. A method according to any one of claims 12 to 15 in which any sub-frame which passes each of the filters is subject to a back-scan where the pixel data is recovered in reverse order from the sub-frame and the bottom shadow is recalculated from the back-scan data to be subject to the bottom shadow threshold test.
- 17. A method according to claim 16 wherein the output of the back-scan bottom shadow filter is used to determine if the background environment of the sub-frame is simple or complex.
- 18. A method according to claim 17, wherein the complexity classification of the sub-frame is used to determine if the background of the sub-frame can be used in the identification of target by sub-frame subtraction in a subsequent frame.
- 19. A method according to any one of claims 12 to 18 wherein the vehicle: I. steering angle, II. accelerator/throttle angle, III. direction indicator state, or IV. brake operation state is integrated with a hazard target vector determined by the method to generate a hazard warning if there is a significant risk of collision.
- 20. A method according to claim 19 wherein, in response to a calculated high risk of collision, one or more of the vehicle brakes, steering or accelerator are actuated to alleviate the risk of collision.
- 21. A road vehicle in combination with a system according to any one of claims 1 to 11.
- 22. Machine readable code packaged and encoded to be communicated for execution in a system to implement a method according to any one of claims 12 to 20.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1406697.1A GB2525587A (en) | 2014-04-14 | 2014-04-14 | Monocular camera cognitive imaging system for a vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1406697.1A GB2525587A (en) | 2014-04-14 | 2014-04-14 | Monocular camera cognitive imaging system for a vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201406697D0 GB201406697D0 (en) | 2014-05-28 |
GB2525587A true GB2525587A (en) | 2015-11-04 |
Family
ID=50844975
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1406697.1A Withdrawn GB2525587A (en) | 2014-04-14 | 2014-04-14 | Monocular camera cognitive imaging system for a vehicle |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2525587A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0671706A2 (en) * | 1994-03-09 | 1995-09-13 | Nippon Telegraph And Telephone Corporation | Method and apparatus for moving object extraction based on background subtraction |
US20070183661A1 (en) * | 2006-02-07 | 2007-08-09 | El-Maleh Khaled H | Multi-mode region-of-interest video object segmentation |
CN101303732A (en) * | 2008-04-11 | 2008-11-12 | 西安交通大学 | Method for apperceiving and alarming movable target based on vehicle-mounted monocular camera |
JP2011154634A (en) * | 2010-01-28 | 2011-08-11 | Toshiba Information Systems (Japan) Corp | Image processing apparatus, method and program |
CN102542571A (en) * | 2010-12-17 | 2012-07-04 | 中国移动通信集团广东有限公司 | Moving target detecting method and device |
US20130243322A1 (en) * | 2012-03-13 | 2013-09-19 | Korea University Research And Business Foundation | Image processing method |
CN102915545A (en) * | 2012-09-20 | 2013-02-06 | 华东师范大学 | OpenCV(open source computer vision library)-based video target tracking algorithm |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107040763A (en) * | 2017-05-02 | 2017-08-11 | 阜阳师范学院 | A kind of intelligent monitor system based on target following |
Also Published As
Publication number | Publication date |
---|---|
GB201406697D0 (en) | 2014-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5297078B2 (en) | Method for detecting moving object in blind spot of vehicle, and blind spot detection device | |
US9047518B2 (en) | Method for the detection and tracking of lane markings | |
US9626599B2 (en) | Reconfigurable clear path detection system | |
US10081308B2 (en) | Image-based vehicle detection and distance measuring method and apparatus | |
CN101633356B (en) | System and method for detecting pedestrians | |
KR101912914B1 (en) | Method and system for recognition of speed limit sign using front camera | |
CN107667378B (en) | Method and device for detecting and evaluating road surface reflections | |
JP5136504B2 (en) | Object identification device | |
JP4173902B2 (en) | Vehicle periphery monitoring device | |
US9165197B2 (en) | Vehicle surroundings monitoring apparatus | |
JP4528283B2 (en) | Vehicle periphery monitoring device | |
US20210312199A1 (en) | Apparatus, method, and computer program for identifying state of object, and controller | |
EP2741234B1 (en) | Object localization using vertical symmetry | |
JPWO2017098709A1 (en) | Image recognition apparatus and image recognition method | |
CN108629225B (en) | Vehicle detection method based on multiple sub-images and image significance analysis | |
KR20210097782A (en) | Indicator light detection method, apparatus, device and computer-readable recording medium | |
CN111967396A (en) | Processing method, device and equipment for obstacle detection and storage medium | |
JP4674179B2 (en) | Shadow recognition method and shadow boundary extraction method | |
CN111626170A (en) | Image identification method for railway slope rockfall invasion limit detection | |
US8229170B2 (en) | Method and system for detecting a signal structure from a moving video platform | |
Xiao et al. | Detection of drivers visual attention using smartphone | |
WO2017077261A1 (en) | A monocular camera cognitive imaging system for a vehicle | |
CN107506739B (en) | Night forward vehicle detection and distance measurement method | |
JP2004086417A (en) | Method and device for detecting pedestrian on zebra crossing | |
JPH11142168A (en) | Environment-recognizing apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | COOA | Change in applicant's name or ownership of the application | Owner name: QUANTUM VISION TECHNOLOGIES LIMITED; Free format text: FORMER OWNER: YUE ZHANG |
| | WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) | |