KR20160135482A - Apparatus and method for predicting movement of an on-road obstacle - Google Patents
Apparatus and method for predicting movement of an on-road obstacle
- Publication number
- KR20160135482A (application KR1020150068836A)
- Authority
- KR
- South Korea
- Prior art keywords
- obstacle
- behavior
- image
- vehicle
- unit
- Prior art date
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R21/00—Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
- B60R21/01—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
- B60R21/013—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over
- B60R21/0134—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to imminent contact with an obstacle, e.g. using radar systems
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- H04N13/02—
- H04N5/225—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- B60W2050/14—
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Human Computer Interaction (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a dynamic obstacle motion prediction apparatus and method for predicting the motion of a dynamic obstacle around a vehicle on the basis of its temporal and spatial motion pattern and controlling the driving of the vehicle according to the prediction. The apparatus includes: an image acquisition unit for acquiring an image of the exterior of the vehicle; an obstacle detection unit that detects the position and size of a dynamic obstacle in the acquired image; an obstacle behavior analysis unit that recognizes the behavior of the detected dynamic obstacle, extracts its motion pattern, and analyzes the obstacle's behavior by comparing the extracted pattern with the various pre-modeled motion patterns of dynamic obstacles; and a vehicle control unit for controlling the driving state of the vehicle according to the result of the behavior analysis.
Description
The present invention relates to an apparatus and method for predicting obstacle motion, and more particularly to an apparatus and method for predicting the motion of a dynamic obstacle around a vehicle on the basis of its temporal and spatial motion pattern.
Recently, systems that capture and monitor the surroundings of a vehicle have become increasingly common. As image processing technology has advanced, technologies have been developed not only to display the surrounding image to the driver but also to detect objects around the vehicle and determine the possibility of a collision.
In the past, only a simple camera image was provided to the driver, without any change of viewpoint. More recently, however, a technique has been developed that converts the images around the vehicle to a virtual viewpoint looking down on the ground from above the vehicle, so that during parking the driver can clearly see whether the vehicle is about to contact an object.
However, when objects near the vehicle are detected by applying a single object detection algorithm regardless of their distance from the vehicle or the camera viewpoint, there is the problem that objects around the vehicle cannot be detected reliably.
In addition, a conventional camera-based vehicle safety system recognizes a vehicle or a pedestrian and uses its position information in safety functions such as forward collision warning and lane change assistance. Because such a system merely detects the vehicle or pedestrian and issues a warning, it cannot completely prevent vehicle collisions.
In future autonomous and semi-autonomous vehicles, however, it will be necessary to judge the motion (behavior) of surrounding obstacles (pedestrians, vehicles, etc.) in advance.
SUMMARY OF THE INVENTION Accordingly, the present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide an apparatus and method for predicting the motion of a dynamic obstacle around a vehicle on the basis of its spatiotemporal movement pattern. That is, an object of the present invention is to provide a dynamic obstacle motion prediction apparatus and method that recognize surrounding obstacles in the image frames input from a plurality of cameras installed at different positions on a vehicle, predict the behavior of the recognized obstacles in advance from their temporal and spatial motion patterns, and control the driving of the vehicle accordingly.
According to an aspect of the present invention, there is provided an apparatus for predicting the motion of a dynamic obstacle, the apparatus including: an image acquisition unit for acquiring an image of the exterior of a vehicle; an obstacle detection unit for detecting a dynamic obstacle from the acquired image; an obstacle behavior analysis unit that recognizes the behavior of the dynamic obstacle detected by the obstacle detection unit, extracts the motion pattern of the recognized dynamic obstacle, and analyzes the obstacle's behavior by comparing the extracted motion pattern with the various pre-modeled motion patterns of dynamic obstacles; and a vehicle control unit for controlling the driving state of the vehicle according to the result of the behavior analysis.
The image obtaining unit includes at least one of a mono camera, a stereo camera, and an AVM camera.
The apparatus may further include a preprocessing unit that preprocesses the image frames of the image obtained from the image acquisition unit and provides them to the obstacle detection unit.
The preprocessing performed by the preprocessing unit includes at least one of image correction, color conversion, and edge component detection.
The obstacle detection unit extracts a feature vector from the image frame preprocessed by the preprocessing unit and detects dynamic obstacles such as vehicles and pedestrians using the extracted feature vector.
And a storage unit for classifying and storing various motion patterns for the modeled dynamic obstacle, respectively.
The obstacle behavior analysis unit may include: a feature vector extraction unit that extracts a spatio-temporal feature vector by accumulating N obstacle areas, from previous frames to the current frame, for the obstacle detected in the image frame provided by the obstacle detection unit; an obstacle behavior classifier that recognizes the behavior of the obstacle by comparing the extracted spatio-temporal feature vector with the learned spatio-temporal motion patterns for vehicles and pedestrians stored in the storage unit; and an analysis unit that analyzes the behavior of the obstacle recognized by the obstacle behavior classifier and generates warning information for warning the driver and control information for controlling the driving of the vehicle according to the analyzed behavior.
And an alarm processing unit for outputting a warning signal to the driver through the human-machine interface (HMI) according to the warning information generated by the analysis unit.
The feature vector extracted by the feature vector extraction unit is extracted using the HOG (Histogram of Oriented Gradients) and LBP (Local Binary Patterns) schemes.
The various motion patterns of the dynamic obstacle stored in the storage unit are modeled by classifying the behavior of the dynamic obstacle, in the case of a vehicle, into at least one of forward acceleration, forward deceleration, forward lane change, rearward approach, lateral approach, and drowsy driving behaviors, and, in the case of a pedestrian, into at least one of crossing the roadway, walking on the sidewalk, waiting to cross, and behavior that is difficult to predict; a plurality of image data sets is collected for each behavior, a spatio-temporal local feature for expressing and recognizing the behavior in three-dimensional space-time is extracted for each behavior in the data set, and the spatio-temporal motion pattern is learned and stored.
According to another aspect of the present invention, there is provided a method for predicting dynamic obstacle motion, comprising: acquiring an image of the exterior of a vehicle using at least one camera installed on the vehicle; detecting a dynamic obstacle from the acquired image; analyzing the behavior of the dynamic obstacle by recognizing the behavior of the detected dynamic obstacle, extracting the motion pattern of the recognized dynamic obstacle, and comparing the extracted motion pattern with the various pre-modeled motion patterns of dynamic obstacles; and controlling the driving state of the vehicle according to the result of the behavior analysis.
The image acquiring camera in the step of acquiring the image uses at least one of a mono camera, a stereo camera, and an AVM camera.
The method may further include preprocessing the image frames of the image obtained in the acquiring step.
The preprocessing performs at least one of image correction, color conversion, and edge component detection.
In the step of detecting the obstacle, the obstacle detection extracts a feature vector from the preprocessed image frame, and detects dynamic obstacles such as a vehicle and a pedestrian using the extracted feature vector.
And classifying and storing various motion patterns for the modeled dynamic obstacle, respectively.
Analyzing the behavior of the dynamic obstacle includes: extracting a spatio-temporal feature vector by accumulating N obstacle areas, from previous image frames to the current image frame, for the dynamic obstacle detected in the detecting step; recognizing the behavior of the obstacle by comparing the extracted spatio-temporal feature vector with the learned spatio-temporal motion patterns for vehicles and pedestrians stored in the storage unit; and analyzing the recognized behavior of the obstacle to generate warning information for warning the driver and control information for controlling the driving of the vehicle according to the analyzed behavior.
And outputting a warning signal to the driver through the human-machine interface (HMI) according to the warning information generated in the generating step.
The extraction of the feature vector is performed using HOG (Histogram of Oriented Gradients) and LBP (Local Binary Patterns).
The step of classifying and storing the various motion patterns of the modeled dynamic obstacle may include: classifying the behavior of the dynamic obstacle, in the case of a vehicle, into at least one of forward acceleration, forward deceleration, forward lane change, rearward approach, lateral approach, and drowsy driving behaviors, and, in the case of a pedestrian, into at least one of crossing the roadway, walking on the sidewalk, waiting to cross, and behavior that is difficult to predict; collecting a plurality of image data sets for each behavior; extracting, for each behavior in the collected data set, a spatio-temporal local feature for representing and recognizing the behavior in three-dimensional space-time; and learning the spatio-temporal motion pattern using the extracted features and then storing it.
According to the present invention, vehicle collision accidents can be effectively prevented by predicting the movement of dynamic obstacles around the vehicle on the basis of their temporal and spatial motion patterns and controlling the driving of the vehicle according to the prediction. That is, surrounding obstacles are recognized in the image frames input from a plurality of cameras installed at different positions on the vehicle, and the behavior of each obstacle is predicted in advance by recognizing its temporal and spatial motion pattern, so that a vehicle collision accident can be prevented before it occurs.
Because the invention recognizes obstacles and predicts their behavior from image frames collected by a plurality of cameras monitoring different positions around the vehicle, that is, its front, sides, and rear, and controls the driving of the vehicle accordingly, it can serve as a core technology for future unmanned (autonomous) and semi-autonomous vehicles.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a dynamic obstacle motion prediction apparatus according to the present invention.
FIG. 2 is a view for explaining the detailed configuration of the obstacle behavior recognition unit shown in FIG. 1 and its operation.
FIG. 3 is a flowchart showing the operation flow of a dynamic obstacle motion prediction method according to the present invention.
The advantages and features of the present invention, and the manner of achieving them, will become apparent from the embodiments described in detail below with reference to the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims. Like reference numbers refer to like elements throughout.
In the following description, detailed descriptions of known functions and configurations are omitted where they might obscure the subject matter of the present invention. The terms used below are defined in consideration of their functions in the embodiments of the present invention and may vary with the intention or custom of users and operators; their definitions should therefore be based on the contents of this specification as a whole.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, a preferred embodiment of a dynamic obstacle motion prediction apparatus and method according to the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram of a dynamic obstacle motion prediction apparatus according to the present invention.
Referring to FIG. 1, the dynamic obstacle motion prediction apparatus according to the present invention includes an image acquisition unit 10, a preprocessing unit 20, an obstacle detection unit 30, a storage unit 40, an obstacle behavior recognition unit 50, an obstacle behavior analysis unit 60, a warning processing unit 70, and a vehicle control unit 80.
The image acquisition unit 10 acquires images of the exterior of the vehicle through a plurality of cameras 10-1, 10-2, ..., 10-n installed at different positions on the vehicle; at least one of a mono camera, a stereo camera, and an AVM camera may be used.
The preprocessing unit 20 preprocesses the image frames of the images acquired by the image acquisition unit 10 and provides the preprocessed frames to the obstacle detection unit 30.
The preprocessing performed by the preprocessing unit 20 includes at least one of image correction, color conversion, and edge component detection.
The obstacle detection unit 30 detects the position and size of dynamic obstacles from the preprocessed image frames.
That is, the obstacle detection unit 30 extracts feature vectors from the preprocessed image frames and, using the extracted feature vectors, detects dynamic obstacles such as vehicles and pedestrians.
The storage unit 40 classifies and stores the various motion patterns of the pre-modeled dynamic obstacles.
The behavior classification model stored in the storage unit 40 is generated through obstacle behavior pattern learning; for this learning, the behaviors of obstacles are first classified.
In the case of vehicles, behavior patterns can be classified into forward acceleration, forward deceleration, forward lane change, rearward approach, lateral approach, and drowsy driving behaviors.
Pedestrian behaviors, on the other hand, can be categorized as crossing the roadway, walking on the sidewalk, waiting to cross, and behavior that is difficult to predict. A large number of image data sets is collected for each behavior, and for each behavior in the data set a spatio-temporal local feature that represents and recognizes the behavior in three-dimensional space-time is extracted; the spatio-temporal motion pattern is then learned from these features.
The specific configuration and operation of the obstacle behavior recognition unit 50 and the obstacle behavior analysis unit 60 will now be described with reference to FIG. 2.
FIG. 2 is a diagram for explaining the detailed configuration and operation of the obstacle behavior recognition unit 50 shown in FIG. 1.
As shown in FIG. 2, the obstacle behavior recognition unit 50 includes a feature vector extraction unit and an obstacle behavior classifier.
The feature vector extraction unit extracts a spatio-temporal feature vector by accumulating N obstacle areas, from previous frames to the current frame, for each obstacle detected in the image frames provided by the obstacle detection unit 30.
The extracted spatio-temporal feature vector is provided to the obstacle behavior classifier.
The obstacle behavior classifier recognizes the behavior of the obstacle by comparing the extracted spatio-temporal feature vector with the learned spatio-temporal motion patterns for vehicles and pedestrians stored in the storage unit 40.
When the behavior of the obstacle has been analyzed, the obstacle behavior analysis unit 60 generates warning information for warning the driver and control information for controlling the driving of the vehicle according to the analyzed behavior.
The warning processing unit 70 outputs a warning signal to the driver through a human-machine interface (HMI) according to the generated warning information.
The vehicle control unit 80, on the other hand, controls the driving state of the vehicle, such as its braking and speed, according to the obstacle behavior analysis result.
The dynamic obstacle motion prediction method according to the present invention, which corresponds to the motion prediction apparatus according to the present invention, will be described step by step with reference to FIG.
FIG. 3 is a flowchart showing an operation flow for a dynamic obstacle motion prediction method according to the present invention.
As shown in FIG. 3, first, the front, rear, and sides of the vehicle are photographed using a plurality of cameras installed on the vehicle to acquire an image for each direction (S301). The cameras may be installed at appropriate positions on the vehicle so as to capture the front, sides, and rear, and at least one of a mono camera, a stereo camera, and an AVM camera may be used. For example, a mono camera or a stereo camera can be installed near the rear-view mirror, cameras can be installed on the side mirrors and at the rear, or cameras can be installed in all directions as in an AVM system.
Next, the image frames of the images obtained from each camera are received and sequentially preprocessed (S302). The preprocessing performs operations such as image correction, color conversion, and edge component detection for efficient image processing; since these are known techniques, a detailed description is omitted.
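The preprocessing step (S302) can be sketched as follows. The patent names the operations (image correction, color conversion, edge component detection) but not specific algorithms, so the BT.601 grayscale weights and the Sobel edge operator below are illustrative assumptions, as is the `preprocess_frame` name:

```python
import numpy as np

def preprocess_frame(frame_rgb, threshold=0.25):
    """Color conversion plus edge-component detection for one RGB frame.

    The grayscale weights (ITU-R BT.601) and the Sobel operator are
    assumed choices; the patent does not name specific algorithms.
    """
    # Color conversion: weighted sum of the R, G, B channels.
    gray = frame_rgb @ np.array([0.299, 0.587, 0.114])

    # Edge component detection with 3x3 Sobel kernels.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    for i in range(3):
        for j in range(3):
            window = pad[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    magnitude = np.hypot(gx, gy)
    edges = magnitude > threshold * magnitude.max()
    return gray, edges
```

The gray image feeds the feature-vector extraction of the next step, while the binary edge map is one of the preprocessing outputs the text mentions.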
After the preprocessing, dynamic obstacles such as pedestrians and vehicles are detected in the preprocessed image frames (S303). The detection of dynamic obstacles uses a classifier: feature points, that is, feature vectors, are extracted using the HOG (Histogram of Oriented Gradients) and LBP (Local Binary Patterns) schemes and used to detect dynamic obstacles such as vehicles and pedestrians. Since HOG and LBP are well-known techniques, a detailed description is omitted.
That is, in step S303, dynamic obstacles such as pedestrians and vehicles are detected from the image frames preprocessed in step S302, and obstacle list information is extracted in the form of a pedestrian list and a vehicle list containing information such as the position and size of each dynamic obstacle.
Then, the behavior of each obstacle belonging to the obstacle list extracted in step S303 is recognized. Obstacle behavior recognition uses the behavior classification model generated through obstacle behavior pattern learning. Here, the behavior classification model can be stored in a database.
The behavior classification model stored in the database is generated through obstacle behavior pattern learning; the learning is performed per obstacle behavior, and for this purpose the behaviors of obstacles are first classified.
In the case of vehicles, behavior patterns can be classified by forward acceleration behavior, forward deceleration behavior, forward lane change behavior, rearward approach behavior, lateral approach behavior, and drowsiness driving behavior.
On the other hand, pedestrian behaviors can be categorized as crossing the roadway, walking on the sidewalk, waiting to cross, and behavior that is difficult to predict. A large number of image data sets is acquired for each behavior, and spatio-temporal movement patterns are learned by extracting, for each behavior in the data set, feature vectors that express and recognize the behavior in three-dimensional space-time.
Hereinafter, the behavior of the obstacle will be described in more detail.
As shown in FIG. 2, a spatio-temporal feature vector is extracted by accumulating N obstacle areas, from the (N-1)th previous frame to the current frame, for each obstacle detected in the dynamic obstacle detection image frame of step S303 (S304).
In step S305, the behavior of the obstacle is recognized by comparing the spatio-temporal feature vector extracted in step S304 with the learned spatio-temporal feature vectors, that is, the temporal and spatial motion patterns, for each vehicle and pedestrian stored in the database.
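The accumulation of N obstacle areas in step S304 might be organised as below; the window length N=8, the per-frame descriptor, and the class name are illustrative assumptions, since the patent fixes only the idea of stacking the last N obstacle regions into one vector:

```python
from collections import deque
import numpy as np

class SpatioTemporalAccumulator:
    """Accumulate per-frame obstacle descriptors over the last N frames.

    The concatenation of the N most recent obstacle-region descriptors
    forms the spatio-temporal feature vector compared against learned
    motion patterns (step S304). N=8 is an illustrative default.
    """
    def __init__(self, n_frames=8):
        self.window = deque(maxlen=n_frames)

    def push(self, frame_feature):
        # One descriptor (e.g. an LBP or HOG vector) per image frame.
        self.window.append(np.asarray(frame_feature, dtype=float))

    def ready(self):
        return len(self.window) == self.window.maxlen

    def feature(self):
        # Stack the frame descriptors along time into one flat vector.
        return np.concatenate(list(self.window))
```

Once `ready()` is true, `feature()` yields the vector that step S305 compares against the learned patterns.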
Then, the behavior of the obstacle is analyzed using the obstacle behavior recognition information generated in step S305 (S306).
When the behavior of the obstacle has been analyzed, warning information for warning the driver of the obstacle's behavior and vehicle control information based on the behavior analysis are generated, and the driver is warned of the obstacle's behavior using the generated warning information (S306). As a way of conveying the obstacle recognition and analysis result to the driver, a human-machine interface (HMI) can be used to warn with voice, text, or images.
In step S307, a control signal for controlling the speed, braking, etc. of the vehicle is generated using the vehicle control information generated in step S306. That is, according to the result of the recognition and behavior analysis of the obstacle, the braking and driving speed of the vehicle can be controlled to prevent a collision accident. It should be understood that the object of vehicle control can be set in various ways.
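Since the text notes that the object of vehicle control can be set in various ways, the mapping from an analysed behaviour to a warning/control command is left open; the rule table below is purely a hypothetical example (the behaviour names, the 10 m threshold, and the 30 km/h target are all assumptions):

```python
def control_command(behaviour, distance_m):
    """Map an analysed obstacle behaviour to a warning/control command.

    Both the behaviour names and the 10 m / 30 km/h thresholds are
    hypothetical; the patent leaves the control policy open.
    """
    high_risk = {"crossing_roadway", "lateral_approach", "forward_deceleration"}
    if behaviour in high_risk and distance_m < 10.0:
        # Imminent risk: warn the driver and brake.
        return {"warn": True, "brake": True, "target_speed_kmh": 0}
    if behaviour in high_risk:
        # Risk at a distance: warn and slow down.
        return {"warn": True, "brake": False, "target_speed_kmh": 30}
    # No predicted conflict: no intervention.
    return {"warn": False, "brake": False, "target_speed_kmh": None}
```

The `warn` flag would drive the HMI output of step S306 and the remaining fields the control signal of step S307.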
Although the present invention has been described in connection with embodiments thereof with reference to the accompanying drawings, the scope of the present invention is not limited to these specific embodiments, and various modifications, alterations, and changes may be made without departing from the scope of the present invention.
Therefore, the embodiments described herein and the accompanying drawings are intended to illustrate rather than limit the technical spirit of the present invention, and the scope of that technical spirit is not limited by these embodiments and drawings. The scope of protection of the present invention should be construed according to the claims, and all technical ideas within the scope of their equivalents should be interpreted as falling within the scope of the present invention.
10: image acquisition unit 10-1, 10-2, ..., 10-n: cameras
20: preprocessing unit 30: obstacle detection unit
40: storage unit 50: obstacle behavior recognition unit
60: obstacle behavior analysis unit 70: warning processing unit
80: vehicle control unit
Claims (20)
An image acquisition unit for acquiring an image of the exterior of a vehicle;
An obstacle detection unit for detecting a dynamic obstacle from the acquired image;
An obstacle behavior analysis unit that recognizes the behavior of the dynamic obstacle detected by the obstacle detection unit, extracts the motion pattern of the recognized dynamic obstacle, and analyzes the obstacle's behavior by comparing the extracted motion pattern with the various pre-modeled motion patterns of dynamic obstacles; and
A vehicle control unit for controlling the driving state of the vehicle according to the obstacle behavior analysis result,
the above units together constituting a dynamic obstacle motion prediction apparatus.
Wherein the image acquiring unit includes at least one of a mono camera, a stereo camera, and an AVM camera.
Further comprising a preprocessing unit for preprocessing the image frames of the image obtained from the image acquiring unit and providing the image frames to the obstacle sensing unit.
Wherein the preprocessing performed by the preprocessing unit processes at least one of image correction, color conversion, and edge component detection.
Wherein the obstacle detection unit extracts feature vectors from the image frame preprocessed by the preprocessing unit and detects dynamic obstacles such as vehicles and pedestrians using the extracted feature vectors.
And a storage unit for classifying and storing various motion patterns for the modeled dynamic obstacle, respectively.
The obstacle behavior analyzing unit,
A feature vector extraction unit for extracting a spatio-temporal feature vector by accumulating N obstacle areas, from previous frames to the current frame, for the obstacle detected in the image frame provided from the obstacle detection unit;
An obstacle behavior classifier for recognizing a behavior of an obstacle by comparing the spatio-temporal feature vector extracted by the feature vector extraction unit and a learned vehicle or a space-time motion pattern for each pedestrian stored in the storage unit; And
And an analysis unit for generating warning information for warning the driver by analyzing the behavior of the obstacle recognized by the obstacle behavior classifier and control information for controlling the running of the vehicle according to the analyzed behavior of the obstacle, Prediction device.
And an alarm processing unit for outputting a warning signal to the driver through an HMI (Human-Machine Interface) according to the warning information generated by the analysis unit.
Wherein the feature vector extracted by the feature vector extraction unit is extracted using the HOG (Histogram of Oriented Gradients) and LBP (Local Binary Patterns) schemes.
Wherein the various motion patterns of the dynamic obstacle stored in the storage unit are modeled by classifying the behavior of the dynamic obstacle, in the case of a vehicle, into at least one of forward acceleration, forward deceleration, forward lane change, rearward approach, lateral approach, and drowsy driving behaviors, and, in the case of a pedestrian, into at least one of crossing the roadway, walking on the sidewalk, waiting to cross, and behavior that is difficult to predict,
and by collecting a plurality of image data sets for each behavior, extracting for each behavior in the data set a spatio-temporal local feature for expressing and recognizing the behavior in three-dimensional space-time, and learning and storing the spatio-temporal motion pattern.
Acquiring an image of the exterior of a vehicle using at least one camera installed on the vehicle;
Detecting a dynamic obstacle from the acquired image;
Analyzing the behavior of the dynamic obstacle by recognizing the behavior of the detected dynamic obstacle, extracting the motion pattern of the recognized dynamic obstacle, and comparing the extracted motion pattern with the various pre-modeled motion patterns of dynamic obstacles; and
Controlling the driving state of the vehicle according to the obstacle behavior analysis result,
the above steps together constituting a dynamic obstacle motion prediction method.
Wherein the image acquisition camera in the step of acquiring the image uses at least one of a mono camera, a stereo camera, and an AVM camera.
And performing pre-processing on the image frames of the image obtained in the step of acquiring the image.
Wherein the preprocessing performs at least one of an image correction, a color conversion, and an edge component detection operation.
Wherein the step of detecting the obstacle includes extracting a feature vector from the preprocessed image frame and detecting a dynamic obstacle such as a vehicle and a pedestrian using the extracted feature vector.
And classifying and storing various motion patterns for the modeled dynamic obstacle, respectively.
Wherein analyzing the behavior of the dynamic obstacle comprises:
Extracting a space-time feature vector by accumulating N obstacle regions from previous image frames up to the current image frame for the dynamic obstacle detected in the detecting step;
Recognizing the behavior of the obstacle by comparing the extracted space-time feature vector with the learned space-time motion patterns for each vehicle or pedestrian stored in the storage unit; And
And generating, by analyzing the behavior of the recognized obstacle, warning information for warning the driver and control information for controlling the running of the vehicle according to the analyzed behavior of the obstacle.
And outputting a warning signal to the driver through an HMI (Human-Machine Interface) according to the warning information generated in the generating step.
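The warning-generation and HMI-output steps above amount to mapping an analyzed behavior to warning and control information, then presenting the warning to the driver. The risk categories, messages, and function names below are assumptions made for this sketch only.

```python
# Illustrative sketch of the warning-generation and HMI-output steps.

def generate_warning(behavior):
    """Map an analyzed obstacle behavior to warning and control info.
    The set of 'risky' behaviors here is an assumption for the example."""
    risky = {"crossing", "rearward_approach", "lateral_approach"}
    if behavior in risky:
        return {"warning": f"Obstacle behavior: {behavior}",
                "control": "decelerate"}
    return {"warning": None, "control": "maintain"}

def output_to_hmi(info):
    """Output step: return the message an HMI display would show."""
    return info["warning"] or "no warning"

info = generate_warning("crossing")
message = output_to_hmi(info)
```

Separating warning generation from HMI output mirrors the claim structure, where the generating step and the outputting step are distinct.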
Wherein the extraction of the feature vector is performed using a Histogram of Oriented Gradient (HOG) or Local Binary Patterns (LBP).
Wherein the classifying and storing of the various motion patterns for the modeled dynamic obstacle comprises:
Classifying the behavior of the dynamic obstacle, in the case of a vehicle, into a behavior pattern for at least one of a forward acceleration behavior, a forward deceleration behavior, a forward lane change behavior, a rearward approach behavior, a lateral approach behavior, and a drowsy driving behavior, and, in the case of a pedestrian, into at least one of a behavior of crossing the roadway, walking along the roadway, waiting to cross, and a behavior that is difficult to predict;
Collecting a plurality of image data sets for each of the actions;
Extracting a space-time local feature for representing and recognizing a behavior in three-dimensional space-time with respect to each action of the collected data set; And
Learning the temporal and spatial motion patterns using the extracted feature vectors, and then storing the temporal and spatial motion patterns.
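The learning-and-storing loop in the steps above can be sketched as a nearest-pattern classifier: average the feature vectors collected for each behavior, then label a new space-time feature vector by its closest stored pattern. The behavior names, vector sizes, and the mean-vector learning rule are illustrative assumptions; the patent does not specify a particular learning algorithm.

```python
# Hypothetical sketch of the claimed pattern learning and recognition:
# learn one mean feature vector per behavior from a labeled data set,
# then classify a new space-time feature vector by nearest mean.

def learn_patterns(dataset):
    """dataset: {behavior_name: [feature_vector, ...]} -> mean vector each."""
    patterns = {}
    for name, vectors in dataset.items():
        n = len(vectors)
        patterns[name] = [sum(v[i] for v in vectors) / n
                          for i in range(len(vectors[0]))]
    return patterns

def recognize(vector, patterns):
    """Return the behavior whose stored pattern is closest (Euclidean)."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(vector, p))
    return min(patterns, key=lambda name: dist(patterns[name]))

dataset = {
    "forward_acceleration": [[1.0, 0.1], [0.9, 0.2]],
    "forward_deceleration": [[-1.0, 0.1], [-0.8, 0.0]],
}
patterns = learn_patterns(dataset)           # learning step
label = recognize([0.95, 0.15], patterns)    # recognition step
```

The query vector sits near the mean of the acceleration examples, so it is labeled accordingly; any stronger classifier (e.g., an SVM over the same features) would slot into `recognize` without changing the surrounding steps.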
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150068836A KR20160135482A (en) | 2015-05-18 | 2015-05-18 | Apparatus and method for predicting moving of on-road obstable |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150068836A KR20160135482A (en) | 2015-05-18 | 2015-05-18 | Apparatus and method for predicting moving of on-road obstable |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20160135482A true KR20160135482A (en) | 2016-11-28 |
Family
ID=57706813
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150068836A KR20160135482A (en) | 2015-05-18 | 2015-05-18 | Apparatus and method for predicting moving of on-road obstable |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20160135482A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180069147A (en) * | 2016-12-14 | 2018-06-25 | 만도헬라일렉트로닉스(주) | Apparatus for warning pedestrian in vehicle |
KR20190078105A (en) * | 2017-12-26 | 2019-07-04 | 엘지전자 주식회사 | Autonomous vehicle and method of controlling the same |
KR20190093729A (en) * | 2018-01-09 | 2019-08-12 | 삼성전자주식회사 | Autonomous driving apparatus and method for autonomous driving of a vehicle |
CN110126822A (en) * | 2018-02-08 | 2019-08-16 | 本田技研工业株式会社 | Vehicle control system, control method for vehicle and storage medium |
KR20200023707A (en) * | 2018-08-23 | 2020-03-06 | 엘지전자 주식회사 | Moving robot |
WO2020198134A1 (en) * | 2019-03-22 | 2020-10-01 | Vergence Automation, Inc. | Lighting-invariant sensor system for object detection, recognition, and assessment |
CN111886598A (en) * | 2018-03-21 | 2020-11-03 | 罗伯特·博世有限公司 | Fast detection of secondary objects that may intersect the trajectory of a moving primary object |
CN111982143A (en) * | 2020-08-11 | 2020-11-24 | 北京汽车研究总院有限公司 | Vehicle and vehicle path planning method and device |
KR20210099489A (en) * | 2020-02-04 | 2021-08-12 | 인하대학교 산학협력단 | Method and Apparatus for Driver Drowsiness Detection with optimized pre-processing for eyelid-closure classification using SVM |
KR20210106040A (en) * | 2020-02-19 | 2021-08-30 | 재단법인대구경북과학기술원 | Apparatus and method for setting driving route |
US11514594B2 (en) | 2019-10-30 | 2022-11-29 | Vergence Automation, Inc. | Composite imaging systems using a focal plane array with in-pixel analog storage elements |
CN115626159A (en) * | 2021-07-01 | 2023-01-20 | 信扬科技(佛山)有限公司 | Vehicle warning system and method and automobile |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR20160135482A (en) | Apparatus and method for predicting moving of on-road obstable | |
CN106485233B (en) | Method and device for detecting travelable area and electronic equipment | |
CN111033510B (en) | Method and device for operating a driver assistance system, driver assistance system and motor vehicle | |
CN107004363B (en) | Image processing device, on-vehicle display system, display device, and image processing method | |
US20190012551A1 (en) | System and method for vehicle control based on object and color detection | |
US9317752B2 (en) | Method for detecting large size and passenger vehicles from fixed cameras | |
US9449236B2 (en) | Method for object size calibration to aid vehicle detection for video-based on-street parking technology | |
US9363483B2 (en) | Method for available parking distance estimation via vehicle side detection | |
US11804048B2 (en) | Recognizing the movement intention of a pedestrian from camera images | |
KR101912453B1 (en) | Apparatus And Method Detectinc Obstacle And Alerting Collision | |
JP2021504856A (en) | Forward collision control methods and devices, electronics, programs and media | |
US11003928B2 (en) | Using captured video data to identify active turn signals on a vehicle | |
Aytekin et al. | Increasing driving safety with a multiple vehicle detection and tracking system using ongoing vehicle shadow information | |
KR20130118116A (en) | Apparatus and method avoiding collision with moving obstacles in automatic parking assistance system | |
KR20180058624A (en) | Method and apparatus for detecting sudden moving objecj appearance at vehicle | |
JP2021136021A (en) | Dangerous object identification through causal inference using driver-based danger evaluation and intention recognition driving model | |
JP2007249841A (en) | Image recognition device | |
KR102355431B1 (en) | AI based emergencies detection method and system | |
US20220410931A1 (en) | Situational awareness in a vehicle | |
EP3709208A1 (en) | Method and control unit for detecting a region of interest | |
Rajendar et al. | Prediction of stopping distance for autonomous emergency braking using stereo camera pedestrian detection | |
US20120155711A1 (en) | Apparatus and method for analyzing video | |
CN114067292A (en) | Image processing method and device for intelligent driving | |
KR20180047149A (en) | Apparatus and method for risk alarming of collision | |
JP7269694B2 (en) | LEARNING DATA GENERATION METHOD/PROGRAM, LEARNING MODEL AND EVENT OCCURRENCE ESTIMATING DEVICE FOR EVENT OCCURRENCE ESTIMATION |