KR20160135482A - Apparatus and method for predicting movement of an on-road obstacle - Google Patents

Apparatus and method for predicting movement of an on-road obstacle

Info

Publication number
KR20160135482A
KR20160135482A (application KR1020150068836A)
Authority
KR
South Korea
Prior art keywords
obstacle
behavior
image
vehicle
unit
Prior art date
Application number
KR1020150068836A
Other languages
Korean (ko)
Inventor
백장운
엄태정
박미룡
Original Assignee
한국전자통신연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 filed Critical 한국전자통신연구원
Priority to KR1020150068836A priority Critical patent/KR20160135482A/en
Publication of KR20160135482A publication Critical patent/KR20160135482A/en

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/013Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over
    • B60R21/0134Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to imminent contact with an obstacle, e.g. using radar systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • H04N13/02
    • H04N5/225
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • B60W2050/14

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Human Computer Interaction (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

 The present invention relates to a dynamic obstacle motion prediction apparatus and method that predict the motion of a dynamic obstacle around a vehicle based on temporal and spatial motion patterns and control the driving of the vehicle according to the prediction result. The apparatus includes: an image acquisition unit for acquiring images of the exterior of the vehicle; an obstacle detection unit for detecting the position and size of a dynamic obstacle from the acquired images; an obstacle behavior analysis unit that recognizes the behavior of the dynamic obstacle detected by the obstacle detection unit, extracts a motion pattern of the recognized dynamic obstacle, and analyzes the obstacle's behavior by comparing the extracted motion pattern with pre-modeled motion patterns of dynamic obstacles; and a vehicle control unit for controlling the driving state of the vehicle according to the obstacle behavior analysis result.

Figure P1020150068836

Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0001] The present invention relates to an apparatus and method for predicting the motion of a dynamic obstacle.

The present invention relates to an apparatus and method for predicting obstacle motion and, more particularly, to an apparatus and method for predicting the motion of a dynamic obstacle around a vehicle based on temporal and spatial motion patterns and controlling the driving of the vehicle according to the prediction result.

Recently, systems that capture and monitor the surroundings of a vehicle have become increasingly common. As image processing technology develops, technologies are being developed that not only display the vehicle's surroundings to the driver but also detect objects around the vehicle and determine the possibility of a collision.

In the past, a simple camera image was provided to the driver without any viewpoint change. More recently, however, a technique was developed that, during parking, converts the images around the vehicle to a virtual viewpoint overlooking the ground from above the vehicle, so that the driver can clearly see whether the vehicle is about to contact an object.

However, when a single object detection algorithm is applied regardless of the distance from the vehicle or the camera viewpoint, there is a problem in that objects near the vehicle cannot be reliably detected.

In addition, a conventional camera-based vehicle safety system recognizes vehicles or pedestrians and uses their position information in safety functions such as forward collision warning and lane change assistance. Because such a system only detects vehicles or pedestrians and merely issues warnings, it cannot completely prevent vehicle collisions.

However, future autonomous and semi-autonomous vehicles will need to judge the motion (behavior) of surrounding obstacles (pedestrians, vehicles, etc.) in advance.

SUMMARY OF THE INVENTION

Accordingly, the present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide a dynamic obstacle motion prediction apparatus and method that predict the motion of a dynamic obstacle around a vehicle based on spatiotemporal motion patterns and control the driving of the vehicle according to the prediction result. That is, an object of the present invention is to provide a dynamic obstacle motion prediction apparatus and method that recognize surrounding obstacles in image frames input from a plurality of cameras installed at different positions on a vehicle, recognize the temporal and spatial motion patterns of the recognized obstacles to predict the behavior of the obstacles in advance, and control the driving of the vehicle accordingly.

According to an aspect of the present invention, an apparatus for predicting the motion of a dynamic obstacle includes: an image acquisition unit for acquiring images of the exterior of a vehicle; an obstacle detection unit for detecting a dynamic obstacle from the acquired images; an obstacle behavior analysis unit that recognizes the behavior of the dynamic obstacle detected by the obstacle detection unit, extracts a motion pattern of the recognized dynamic obstacle, and analyzes the obstacle's behavior by comparing the extracted motion pattern with pre-modeled motion patterns of dynamic obstacles; and a vehicle control unit for controlling the driving state of the vehicle according to the obstacle behavior analysis result.

The image acquisition unit includes at least one of a mono camera, a stereo camera, and an AVM (Around View Monitoring) camera.

The apparatus may further include a preprocessing unit for preprocessing the image frames of the images obtained from the image acquisition unit and providing them to the obstacle detection unit.

The preprocessing performed by the preprocessing unit includes at least one of image correction, color conversion, and edge component detection.

The obstacle detection unit extracts a feature vector from the image frame preprocessed by the preprocessing unit, and detects a dynamic obstacle such as a vehicle or a pedestrian using the extracted feature vector.

The apparatus may further include a storage unit for classifying and storing the various motion patterns of the modeled dynamic obstacles.

The obstacle behavior analysis unit may include: a feature vector extraction unit for extracting a space-time feature vector by accumulating N obstacle regions, from the (N-1)th previous frame to the current frame, for the obstacle detected in the obstacle detection image frame provided by the obstacle detection unit; an obstacle behavior classifier for recognizing the behavior of the obstacle by comparing the space-time feature vector extracted by the feature vector extraction unit with the learned space-time motion patterns for each vehicle or pedestrian stored in the storage unit; and an analysis unit for analyzing the behavior of the obstacle recognized by the obstacle behavior classifier to generate warning information for alerting the driver and control information for controlling the driving of the vehicle according to the analyzed behavior.

The apparatus may further include a warning processing unit for outputting a warning signal to the driver through a human-machine interface (HMI) according to the warning information generated by the analysis unit.

The feature vector is extracted by the feature vector extraction unit using the HOG (Histogram of Oriented Gradients) or LBP (Local Binary Patterns) scheme.

In modeling the various motion patterns of dynamic obstacles stored in the storage unit, when the dynamic obstacle is a vehicle, its behavior is classified into at least one of forward acceleration, forward deceleration, forward lane change, rearward approach, lateral approach, and drowsy driving behavior patterns; when the dynamic obstacle is a pedestrian, its behavior is classified into at least one of crossing the roadway, walking on the sidewalk, waiting to cross, and behavior that is difficult to predict. A large number of image data sets are collected for each behavior, a space-time local feature for representing and recognizing the behavior in three-dimensional space-time is extracted for each behavior of the data set, and the space-time motion pattern is learned and stored.

According to another aspect of the present invention, a method for predicting the motion of a dynamic obstacle includes: acquiring images of the exterior of a vehicle using at least one camera installed on the vehicle; detecting a dynamic obstacle from the acquired images; analyzing the behavior of the dynamic obstacle by recognizing the behavior of the detected dynamic obstacle, extracting a motion pattern of the recognized dynamic obstacle, and comparing the extracted motion pattern with pre-modeled motion patterns of dynamic obstacles; and controlling the driving state of the vehicle according to the obstacle behavior analysis result.

The camera used in the image acquisition step is at least one of a mono camera, a stereo camera, and an AVM camera.

The method may further include preprocessing the image frames of the images obtained in the acquisition step.

The preprocessing performs at least one of image correction, color conversion, and edge component detection.

In the obstacle detection step, a feature vector is extracted from the preprocessed image frame, and a dynamic obstacle such as a vehicle or a pedestrian is detected using the extracted feature vector.

The method may further include classifying and storing the various motion patterns of the modeled dynamic obstacles.

Analyzing the behavior of the dynamic obstacle includes: extracting a space-time feature vector by accumulating N obstacle regions, from the previous image frames to the current image frame, for the dynamic obstacle detected in the detection step; recognizing the behavior of the obstacle by comparing the extracted space-time feature vector with the learned space-time motion patterns for each vehicle or pedestrian stored in the storage unit; and analyzing the behavior of the recognized obstacle to generate warning information for alerting the driver and control information for controlling the driving of the vehicle according to the analyzed behavior.

The method may further include outputting a warning signal to the driver through a human-machine interface (HMI) according to the warning information generated in the generating step.

The feature vector is extracted using HOG (Histogram of Oriented Gradients) and LBP (Local Binary Patterns).

Classifying and storing the various motion patterns of the modeled dynamic obstacles may include: classifying the behavior of the dynamic obstacle into at least one of forward acceleration, forward deceleration, forward lane change, rearward approach, lateral approach, and drowsy driving behavior when the dynamic obstacle is a vehicle, and into at least one of crossing the roadway, walking on the sidewalk, waiting to cross, and behavior that is difficult to predict when the dynamic obstacle is a pedestrian; collecting a large number of image data sets for each behavior; extracting, for each behavior of the collected data sets, a space-time local feature for representing and recognizing the behavior in three-dimensional space-time; and learning the spatiotemporal motion pattern using the extracted feature vectors and storing it.
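The learning steps above (classify behaviors, collect data sets, extract features, learn and store patterns) can be sketched as follows. The per-class mean prototype used here is an illustrative assumption, since the patent does not specify a learning algorithm; the labels and toy 3-D feature vectors are likewise hypothetical.

```python
import numpy as np

def learn_motion_patterns(dataset):
    """Learn one prototype (mean feature vector) per behavior class.

    dataset: dict mapping behavior label -> list of 1-D space-time
    feature vectors extracted from collected video clips.
    Returns dict label -> prototype vector (the stored "motion pattern").
    """
    return {label: np.mean(np.stack(vecs), axis=0)
            for label, vecs in dataset.items()}

# Hypothetical toy data: two vehicle behaviors, 3-D features.
dataset = {
    "forward_acceleration": [np.array([1.0, 0.0, 0.0]),
                             np.array([0.8, 0.2, 0.0])],
    "forward_deceleration": [np.array([0.0, 1.0, 0.0]),
                             np.array([0.0, 0.8, 0.2])],
}
model = learn_motion_patterns(dataset)
```

In a real system the feature vectors would come from the HOG/LBP-based space-time extraction described above, and the "model" could be any trained classifier rather than class means.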

According to the present invention, vehicle collision accidents can be effectively prevented by predicting the motion of dynamic obstacles around a vehicle based on temporal/spatial motion patterns and controlling the driving of the vehicle according to the prediction result. That is, surrounding obstacles are recognized in image frames input from a plurality of cameras installed at different positions on the vehicle, the temporal and spatial motion patterns of the recognized obstacles are recognized to predict the behavior of the obstacles in advance, and the driving of the vehicle is controlled accordingly, so that vehicle collision accidents can be prevented in advance.

According to the present invention, obstacles are recognized in image frames collected from a plurality of cameras that monitor different areas around the vehicle, that is, the front, sides, and rear, and the driving of the vehicle is controlled by predicting their behavior; the invention can therefore serve as a core technology for future unmanned or semi-autonomous vehicles.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a dynamic obstacle motion prediction apparatus according to the present invention;
FIG. 2 is a view for explaining the detailed configuration of the obstacle behavior recognition unit shown in FIG. 1 and its operation; and
FIG. 3 is a flowchart showing the operation flow of a dynamic obstacle motion prediction method according to the present invention.

The advantages and features of the present invention, and the manner of achieving them, will become apparent with reference to the embodiments described in detail below in conjunction with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout.

In the following description, detailed descriptions of known functions and configurations incorporated herein are omitted when they may obscure the subject matter of the present invention. The following terms are defined in consideration of their functions in the embodiments of the present invention and may vary depending on the intention or custom of the user or operator. Therefore, their definitions should be based on the contents of this entire specification.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, preferred embodiments of a dynamic obstacle motion prediction apparatus and method according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram of a dynamic obstacle motion prediction apparatus according to the present invention.

As shown in FIG. 1, the dynamic obstacle motion prediction apparatus according to the present invention includes an image acquisition unit 10, a preprocessing unit 20, an obstacle detection unit 30, a storage unit 40, an obstacle behavior recognition unit 50, an obstacle behavior analysis unit 60, a warning processing unit 70, and a vehicle control unit 80.

The image acquisition unit 10 may include a plurality of cameras 10-1, 10-2, ..., 10-n or a plurality of vision sensors, which can be installed at appropriate positions on the vehicle so as to capture the front, sides, and rear. Here, at least one of a mono camera, a stereo camera, and an AVM camera may be used. For example, a mono or stereo camera may be installed near the rear-view mirror, cameras may be installed in the side mirrors and at the rear, and, as with an AVM system, cameras may be installed facing all directions.

The image acquisition unit 10, composed of the plurality of cameras 10-1, 10-2, ..., 10-n, captures images of the front, sides, rear, or all sides of the vehicle and sequentially provides the obtained images to the preprocessing unit 20 on a frame-by-frame basis.

The preprocessing unit 20 receives the image frames provided by the cameras 10-1, 10-2, ..., 10-n of the image acquisition unit 10 and preprocesses them sequentially. Here, the preprocessing unit performs preprocessing operations such as image correction, color conversion, and edge component detection for efficient image processing.
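A minimal sketch of such a preprocessing step, using only numpy, might look as follows. The specific operations (luminance-weighted grayscale conversion, gradient-magnitude edge map) are illustrative stand-ins; the patent does not fix the exact correction or edge algorithms.

```python
import numpy as np

def preprocess(frame_rgb):
    """Color conversion (RGB -> grayscale) and edge-component detection
    via finite-difference gradients; a stand-in for the patent's
    unspecified correction/conversion/edge steps."""
    gray = frame_rgb @ np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 weights
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy)      # gradient magnitude as a simple edge map
    return gray, edges

frame = np.random.rand(120, 160, 3)   # stand-in camera frame
gray, edges = preprocess(frame)
```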

The obstacle detection unit 30 detects dynamic obstacles such as pedestrians or vehicles in the preprocessed image frames. That is, dynamic obstacle detection, such as pedestrian or vehicle detection, uses a classifier (not shown). The classifier extracts feature points, i.e., feature vectors, using HOG (Histogram of Oriented Gradients) or LBP (Local Binary Patterns), and detects dynamic obstacles such as vehicles and pedestrians using the extracted feature points. The HOG and LBP schemes are well-known technologies, so a detailed description is omitted.
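To make the LBP feature extraction concrete, here is a minimal numpy implementation of the basic 3x3 variant; the function names are my own, and a production system would use an optimized library implementation.

```python
import numpy as np

def lbp_basic(gray):
    """Basic 3x3 Local Binary Patterns: each pixel is encoded by
    thresholding its 8 neighbours against the centre value."""
    c = gray[1:-1, 1:-1]
    neighbours = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                  gray[2:, :-2], gray[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(gray):
    """Feature vector: normalised histogram of the 256 LBP codes."""
    codes = lbp_basic(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# On a constant image every neighbour ties the centre, so all bits are set.
hist = lbp_histogram(np.ones((10, 10)))
```

The resulting 256-bin histogram is the kind of feature vector a classifier (e.g. an SVM) would consume for vehicle/pedestrian detection.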

That is, the obstacle detection unit 30 detects dynamic obstacles such as pedestrians or vehicles from the image frames preprocessed by the preprocessing unit 20, extracts obstacle list information, such as a pedestrian list and a vehicle list, that includes the position and size of each dynamic obstacle according to the detection result, and outputs the extracted information to the obstacle behavior recognition unit 50.

The obstacle behavior recognition unit 50 recognizes the behavior of each obstacle in the obstacle list output by the obstacle detection unit 30. Obstacle behavior recognition uses a behavior classification model generated through obstacle behavior pattern learning. Here, the behavior classification model is stored in the storage unit 40.

The behavior classification model stored in the storage unit 40 is generated by dynamic obstacle behavior pattern learning; this learning is performed for each obstacle behavior, and for this purpose the behaviors of obstacles are classified as follows.

In the case of vehicles, behavior patterns can be classified into forward acceleration, forward deceleration, forward lane change, rearward approach, lateral approach, and drowsy driving behavior.

Pedestrian behavior, on the other hand, can be categorized as crossing the roadway, walking on the sidewalk, waiting to cross, or difficult to predict. A large number of image data sets are collected for each behavior, and for each behavior of the data set a space-time local feature is extracted to represent and recognize the behavior in three-dimensional space-time, from which the spatio-temporal motion pattern is learned.
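The two taxonomies above can be encoded directly as label tables; the identifier names below are my own English renderings of the patent's categories, and the index-to-label mapping is an assumption about how a classifier's output would be decoded.

```python
# Behavior taxonomy from the patent, encoded as ordered label lists.
VEHICLE_BEHAVIOURS = [
    "forward_acceleration", "forward_deceleration",
    "forward_lane_change", "rearward_approach",
    "lateral_approach", "drowsy_driving",
]
PEDESTRIAN_BEHAVIOURS = [
    "crossing_roadway", "walking_on_sidewalk",
    "waiting_to_cross", "unpredictable",
]

def behaviour_label(obstacle_type, class_index):
    """Decode a classifier output index into a human-readable label."""
    table = {"vehicle": VEHICLE_BEHAVIOURS,
             "pedestrian": PEDESTRIAN_BEHAVIOURS}
    return table[obstacle_type][class_index]
```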

The specific configuration and operation of the obstacle behavior recognition unit 50 will now be described with reference to FIG.

FIG. 2 is a diagram for explaining the detailed configuration of the obstacle behavior recognition unit 50 shown in FIG. 1 and its operation.

As shown in FIG. 2, the obstacle behavior recognition unit 50 may include a feature vector extractor 51 and an obstacle behavior classifier 52.

The feature vector extraction unit 51 extracts a space-time feature vector by accumulating N obstacle regions, from the (N-1)th previous frame to the current frame, for the obstacle detected in the obstacle detection image frame provided by the obstacle detection unit 30, and then provides the extracted space-time feature vector to the obstacle behavior classifier 52.
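The accumulation of N per-frame obstacle regions into one space-time feature vector can be sketched as follows; stacking raw pixel crops and flattening is a deliberately simple stand-in for the space-time local features the patent leaves unspecified.

```python
import numpy as np

def spacetime_feature(obstacle_patches):
    """Stack the tracked obstacle's region from the last N frames into a
    space-time volume and flatten it into one feature vector.

    obstacle_patches: list of N equally sized 2-D arrays (per-frame crops
    of the obstacle, e.g. after resizing to a common size)."""
    volume = np.stack(obstacle_patches)   # shape (N, H, W)
    return volume.ravel()                 # 1-D space-time feature vector

# Toy example: N = 5 frames of an 8x8 obstacle region.
patches = [np.full((8, 8), t, dtype=float) for t in range(5)]
vec = spacetime_feature(patches)
```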

The obstacle behavior classifier 52 compares the space-time feature vector provided by the feature vector extraction unit 51 with the learned space-time feature vectors for each vehicle or pedestrian stored in the storage unit 40, recognizes the behavior of the obstacle, and provides the recognition result, i.e., the obstacle behavior recognition information, to the obstacle behavior analysis unit 60.
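One simple way to realize this comparison is nearest-prototype matching, shown below; the patent does not name a classifier, so the Euclidean nearest-prototype rule and the 2-D toy vectors are assumptions for illustration.

```python
import numpy as np

def classify_behaviour(feature, prototypes):
    """Return the behavior whose learned space-time pattern (prototype)
    is closest, in Euclidean distance, to the extracted feature vector."""
    return min(prototypes,
               key=lambda label: np.linalg.norm(feature - prototypes[label]))

# Hypothetical learned patterns for two pedestrian behaviors.
prototypes = {"crossing_roadway": np.array([1.0, 0.0]),
              "waiting_to_cross": np.array([0.0, 1.0])}
label = classify_behaviour(np.array([0.9, 0.2]), prototypes)
```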

 The obstacle behavior analysis unit 60 analyzes the behavior of the obstacle using the obstacle behavior recognition information provided by the obstacle behavior classifier 52 of the obstacle behavior recognition unit 50.

When the behavior of the obstacle has been analyzed, the obstacle behavior analysis unit 60 generates warning information for alerting the driver to the behavior of the obstacle and provides it to the warning processing unit 70, and also generates vehicle control information and provides it to the vehicle control unit 80.

The warning processing unit 70 warns the driver of the obstacle recognition and analysis result according to the warning information provided by the obstacle behavior analysis unit 60. As a warning method, a human-machine interface (HMI) can be used to warn with voice, text, or video.

Meanwhile, the vehicle control unit 80 generates a control signal for controlling the speed, braking, etc. of the vehicle using the vehicle control information provided by the obstacle behavior analysis unit 60. That is, the vehicle control unit 80 can control the braking and driving speed of the vehicle to prevent a collision according to the obstacle recognition and behavior analysis result. The objects of vehicle control can, of course, be set in various ways.
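A toy sketch of such a control decision is given below. The risky-behavior set, the 10 m threshold, and the command fields are all illustrative assumptions; the patent deliberately leaves the concrete control policy open.

```python
def control_command(behaviour, distance_m):
    """Map an analysed obstacle behavior and its distance to a
    braking/speed command (hypothetical policy and field names)."""
    risky = behaviour in {"crossing_roadway", "rearward_approach",
                          "lateral_approach", "unpredictable"}
    if risky and distance_m < 10.0:
        return {"brake": 1.0, "target_speed": 0.0}    # emergency stop
    if risky:
        return {"brake": 0.3, "target_speed": 20.0}   # slow down
    return {"brake": 0.0, "target_speed": None}       # keep driving

cmd = control_command("crossing_roadway", 5.0)
```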

The dynamic obstacle motion prediction method according to the present invention, which corresponds to the apparatus described above, will now be described step by step with reference to FIG. 3.

FIG. 3 is a flowchart showing an operation flow for a dynamic obstacle motion prediction method according to the present invention.

As shown in FIG. 3, first, the front, rear, and sides of the vehicle are photographed using a plurality of cameras installed on the vehicle to acquire an image for each direction (S301). Here, the plurality of cameras may be installed at appropriate positions on the vehicle so as to capture the front, sides, and rear of the vehicle, and at least one of a mono camera, a stereo camera, and an AVM camera may be used. For example, a mono or stereo camera can be installed near the rear-view mirror, cameras can be installed in the side mirrors and at the rear, and, as with an AVM system, cameras can be installed facing all directions.

Next, the image frames obtained from each camera are received and preprocessed sequentially (S302). Here, the preprocessing performs operations such as image correction, color conversion, and edge component detection for efficient image processing. The preprocessing process is a known technique, so a detailed description is omitted.

After the preprocessing, dynamic obstacles such as pedestrians and vehicles are detected in the preprocessed image frames (S303). That is, the detection of dynamic obstacles such as pedestrians or vehicles uses a classifier. The classifier extracts feature points, i.e., feature vectors, using HOG (Histogram of Oriented Gradients) and LBP (Local Binary Patterns), and uses them to detect dynamic obstacles such as vehicles and pedestrians. The HOG and LBP schemes are well-known technologies, so a detailed description is omitted.

That is, in step S303, dynamic obstacles such as pedestrians or vehicles are detected using the image frames preprocessed in step S302, and obstacle list information, such as a pedestrian list and a vehicle list including the position and size of each dynamic obstacle, is extracted.

Then, the behavior of each obstacle in the obstacle list extracted in step S303 is recognized. Obstacle behavior recognition uses the behavior classification model generated through obstacle behavior pattern learning. Here, the behavior classification model can be stored in a database.

The behavior classification model stored in the database is generated by obstacle behavior pattern learning; this learning is performed for each obstacle behavior, and for this purpose the behaviors of obstacles are classified as follows.

In the case of vehicles, behavior patterns can be classified into forward acceleration, forward deceleration, forward lane change, rearward approach, lateral approach, and drowsy driving behavior.

Pedestrian behavior, on the other hand, can be categorized as crossing the roadway, walking on the sidewalk, waiting to cross, or difficult to predict. A large number of image data sets are acquired for each behavior, and spatio-temporal motion patterns are learned by extracting feature vectors that represent and recognize the behaviors in three-dimensional space-time for each behavior of the data set.

Hereinafter, the behavior of the obstacle will be described in more detail.

As shown in FIG. 2, a space-time feature vector is extracted by accumulating N obstacle regions, from the (N-1)th previous frame to the current frame, for the obstacle detected in the dynamic obstacle detection image frame of step S303 (S304).

In step S305, the behavior of the obstacle is recognized by comparing the space-time feature vector extracted in step S304 with the learned space-time feature vectors, i.e., the temporal/spatial motion patterns, for each vehicle or pedestrian stored in the database.

Then, the behavior of the obstacle is analyzed using the obstacle behavior recognition information generated in step S305 (S306).

When the behavior of the obstacle has been analyzed, warning information for alerting the driver to the behavior of the obstacle and vehicle control information based on the obstacle behavior analysis are generated, and the driver is warned of the behavior of the obstacle using the generated warning information (S306). As a warning method that lets the driver recognize the obstacle recognition and analysis result, a human-machine interface (HMI) can be used to warn with voice, text, or video.

In step S307, a control signal for controlling the speed, braking, etc. of the vehicle is generated using the vehicle control information generated in step S306. That is, the braking and driving speed of the vehicle can be controlled to prevent a collision accident according to the obstacle recognition and behavior analysis result. The objects of vehicle control can, of course, be set in various ways.

Although the present invention has been described in connection with embodiments thereof with reference to the accompanying drawings, the scope of the present invention is not limited to the specific embodiments, and various modifications, alterations, and changes may be made without departing from the scope of the present invention.

Therefore, the embodiments described herein and the accompanying drawings are intended to illustrate rather than limit the technical spirit of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments and drawings. The scope of protection of the present invention should be construed according to the claims, and all technical ideas within the scope of their equivalents should be interpreted as being included in the scope of the present invention.

10: image acquisition unit 10-1, 10-2, ..., 10-n: cameras
20: preprocessing unit 30: obstacle detection unit
40: storage unit 50: obstacle behavior recognition unit
60: obstacle behavior analysis unit 70: warning processing unit
80: vehicle control unit

Claims (20)

A dynamic obstacle movement prediction apparatus comprising:
An image acquisition unit for acquiring an image of the outside of the vehicle;
An obstacle detection unit for detecting a dynamic obstacle from the acquired image;
An obstacle behavior analysis unit for recognizing the behavior of the dynamic obstacle detected by the obstacle detection unit, extracting a motion pattern of the recognized dynamic obstacle, and analyzing the behavior of the dynamic obstacle by comparing the extracted motion pattern with various motion patterns of the pre-modeled dynamic obstacle; and
A vehicle control unit for controlling the running state of the vehicle according to the obstacle behavior analysis result.
The apparatus according to claim 1,
Wherein the image acquisition unit includes at least one of a mono camera, a stereo camera, and an AVM (Around View Monitor) camera.
The apparatus according to claim 1,
Further comprising a preprocessing unit for preprocessing the image frames of the image obtained from the image acquisition unit and providing the preprocessed image frames to the obstacle detection unit.
The apparatus of claim 3,
Wherein the preprocessing unit performs at least one of image correction, color conversion, and edge component detection.
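Two of the preprocessing operations named above, color conversion and edge component detection, can be sketched in a few lines. The sketch below converts RGB to grayscale using the common BT.601 luma weights and detects edge components with a Sobel gradient magnitude; it is illustrative only and not the claimed implementation:

```python
import numpy as np

def to_gray(rgb):
    """Convert an HxWx3 RGB image to an HxW grayscale image (BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def sobel_edges(gray):
    """Return the Sobel gradient magnitude over the valid interior region."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge should produce a strong response at the boundary.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)
gray = to_gray(np.ones((2, 2, 3)))  # all-white pixels -> luma 1.0
```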
5. The apparatus of claim 4,
Wherein the obstacle detection unit extracts feature vectors from the preprocessed image frame and detects dynamic obstacles such as vehicles and pedestrians using the extracted feature vectors.
6. The apparatus of claim 5,
Further comprising a storage unit for classifying and storing the various motion patterns of the modeled dynamic obstacle.
The apparatus according to claim 6,
Wherein the obstacle behavior analysis unit comprises:
A feature vector extraction unit for extracting a spatio-temporal feature vector by accumulating N obstacle regions, from a previous frame to the current frame, for an obstacle detected in the image frame provided from the obstacle detection unit;
An obstacle behavior classifier for recognizing the behavior of the obstacle by comparing the spatio-temporal feature vector extracted by the feature vector extraction unit with the learned spatio-temporal motion patterns, stored in the storage unit, for each vehicle or pedestrian; and
An analysis unit for generating warning information for warning the driver by analyzing the behavior of the obstacle recognized by the obstacle behavior classifier, and control information for controlling the running of the vehicle according to the analyzed behavior of the obstacle.
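As an illustrative stand-in for the obstacle behavior classifier described above, a spatio-temporal feature vector can be matched against stored per-behavior motion patterns by nearest-neighbor distance. The pattern vectors and behavior labels below are hypothetical:

```python
import numpy as np

def classify_behavior(feature, stored_patterns):
    """Return the behavior label whose stored prototype is closest to `feature`.

    stored_patterns: dict mapping behavior label -> prototype feature vector,
    standing in for the learned motion patterns held in the storage unit.
    """
    best_label, best_dist = None, float("inf")
    for label, prototype in stored_patterns.items():
        dist = float(np.linalg.norm(feature - prototype))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

patterns = {
    "crossing_road": np.array([1.0, 0.0, 0.0]),
    "walking_along_road": np.array([0.0, 1.0, 0.0]),
    "waiting_to_cross": np.array([0.0, 0.0, 1.0]),
}
label = classify_behavior(np.array([0.9, 0.1, 0.0]), patterns)
```

A trained classifier (e.g. a discriminative model over the spatio-temporal features) would replace the plain distance comparison in practice.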
8. The apparatus of claim 7,
Further comprising a warning processing unit for outputting a warning signal to the driver through an HMI (Human-Machine Interface) according to the warning information generated by the analysis unit.
9. The apparatus of claim 7,
Wherein the feature vector extracted by the feature vector extraction unit is extracted using a Histogram of Oriented Gradients (HOG) or Local Binary Patterns (LBP) scheme.
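Of the two feature descriptors named in the claim, LBP is the simpler to sketch. Below is a minimal Local Binary Pattern histogram: each interior pixel is encoded as an 8-bit code by thresholding its eight neighbors against the center value, and the normalized code histogram serves as the feature vector. This is an illustrative baseline, not the claimed extractor:

```python
import numpy as np

def lbp_histogram(gray):
    """Return a 256-bin normalized LBP code histogram for a 2-D image."""
    h, w = gray.shape
    # Neighbor offsets, clockwise from top-left; bit i weights neighbor i.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(256)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = gray[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if gray[i + di, j + dj] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist / hist.sum()

# On a constant image every neighbor equals the center, so every code is 255.
flat = np.ones((4, 4))
hist = lbp_histogram(flat)
```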
The apparatus according to claim 6,
Wherein the various motion patterns of the dynamic obstacle stored in the storage unit are modeled by classifying the dynamic obstacle, when it is a vehicle, into behavior patterns for at least one of forward acceleration behavior, forward deceleration behavior, forward lane change behavior, rearward approach behavior, lateral approach behavior, and drowsy driving behavior, and, when it is a pedestrian, into at least one of behavior crossing the roadway, walking along the roadway, waiting to cross, and behavior that is difficult to predict;
And wherein a plurality of image data sets is collected for each of the above behaviors, a spatio-temporal local feature for expressing and recognizing the behavior in three-dimensional space-time is extracted for each behavior of the data set, and the spatio-temporal motion patterns are learned using the extracted feature vectors and then stored.
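The N-frame accumulation described above can be sketched as follows: the obstacle region from the last N frames is stacked into a spatio-temporal volume and summarized, here with a deliberately simple feature (mean absolute temporal difference per frame pair) standing in for the learned spatio-temporal local features:

```python
import numpy as np

def spatiotemporal_feature(obstacle_regions):
    """obstacle_regions: list of N HxW arrays, the same obstacle region
    cropped from N consecutive frames.

    Returns an (N-1)-vector: the mean absolute temporal difference for
    each consecutive frame pair, a toy summary of the motion volume.
    """
    volume = np.stack(obstacle_regions)      # shape (N, H, W) space-time volume
    diffs = np.abs(np.diff(volume, axis=0))  # (N-1, H, W) temporal gradients
    return diffs.mean(axis=(1, 2))           # one value per frame pair

# Three 2x2 regions: static between frames 1-2, then a unit jump everywhere.
regions = [np.zeros((2, 2)), np.zeros((2, 2)), np.ones((2, 2))]
feat = spatiotemporal_feature(regions)
```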
A dynamic obstacle movement prediction method comprising the steps of:
Acquiring an image of the outside of the vehicle using at least one camera installed in the vehicle;
Detecting a dynamic obstacle from the acquired image;
Analyzing the behavior of the dynamic obstacle by recognizing the behavior of the detected dynamic obstacle, extracting a motion pattern of the recognized dynamic obstacle, and comparing the extracted motion pattern with various motion patterns of the pre-modeled dynamic obstacle; and
Controlling the running state of the vehicle according to the obstacle behavior analysis result.
12. The method of claim 11,
Wherein the camera used in the step of acquiring the image is at least one of a mono camera, a stereo camera, and an AVM camera.
12. The method of claim 11,
Further comprising performing preprocessing on the image frames of the image obtained in the step of acquiring the image.
14. The method of claim 13,
Wherein the preprocessing performs at least one of image correction, color conversion, and edge component detection.
15. The method of claim 14,
Wherein the step of detecting the obstacle includes extracting a feature vector from the preprocessed image frame and detecting a dynamic obstacle, such as a vehicle or a pedestrian, using the extracted feature vector.
16. The method of claim 15,
Further comprising classifying and storing the various motion patterns of the modeled dynamic obstacle.
17. The method of claim 16,
Wherein analyzing the behavior of the dynamic obstacle comprises:
Extracting a spatio-temporal feature vector by accumulating N obstacle regions, from a previous image frame to the current image frame, for the dynamic obstacle detected in the detecting step;
Recognizing the behavior of the obstacle by comparing the extracted spatio-temporal feature vector with the learned spatio-temporal motion patterns, stored in the storage unit, for each vehicle or pedestrian; and
Generating warning information for warning the driver by analyzing the behavior of the recognized obstacle, and control information for controlling the running of the vehicle according to the analyzed behavior of the obstacle.
18. The method of claim 17,
Further comprising outputting a warning signal to the driver through an HMI (Human-Machine Interface) according to the warning information generated in the generating step.
19. The method of claim 17,
Wherein the extraction of the feature vector is performed using a Histogram of Oriented Gradient (HOG) or Local Binary Patterns (LBP).
20. The method of claim 16,
Wherein classifying and storing the various motion patterns of the modeled dynamic obstacle comprises:
Classifying the dynamic obstacle, when it is a vehicle, into behavior patterns for at least one of forward acceleration behavior, forward deceleration behavior, forward lane change behavior, rearward approach behavior, lateral approach behavior, and drowsy driving behavior, and, when it is a pedestrian, into at least one of behavior crossing the roadway, walking along the roadway, waiting to cross, and behavior that is difficult to predict;
Collecting a plurality of image data sets for each of the behaviors;
Extracting a spatio-temporal local feature for expressing and recognizing the behavior in three-dimensional space-time for each behavior of the collected data set; and
Learning the spatio-temporal motion patterns using the extracted feature vectors, and then storing the learned motion patterns.

KR1020150068836A 2015-05-18 2015-05-18 Apparatus and method for predicting moving of on-road obstable KR20160135482A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150068836A KR20160135482A (en) 2015-05-18 2015-05-18 Apparatus and method for predicting moving of on-road obstable


Publications (1)

Publication Number Publication Date
KR20160135482A true KR20160135482A (en) 2016-11-28

Family

ID=57706813




Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180069147A (en) * 2016-12-14 2018-06-25 만도헬라일렉트로닉스(주) Apparatus for warning pedestrian in vehicle
KR20190078105A (en) * 2017-12-26 2019-07-04 엘지전자 주식회사 Autonomous vehicle and method of controlling the same
KR20190093729A (en) * 2018-01-09 2019-08-12 삼성전자주식회사 Autonomous driving apparatus and method for autonomous driving of a vehicle
CN110126822A (en) * 2018-02-08 2019-08-16 本田技研工业株式会社 Vehicle control system, control method for vehicle and storage medium
CN111886598A (en) * 2018-03-21 2020-11-03 罗伯特·博世有限公司 Fast detection of secondary objects that may intersect the trajectory of a moving primary object
KR20200133787A (en) * 2018-03-21 2020-11-30 로베르트 보쉬 게엠베하 High-speed detection of secondary objects that can cross the trajectory of the primary object in motion
KR20200023707A (en) * 2018-08-23 2020-03-06 엘지전자 주식회사 Moving robot
WO2020198134A1 (en) * 2019-03-22 2020-10-01 Vergence Automation, Inc. Lighting-invariant sensor system for object detection, recognition, and assessment
US11514594B2 (en) 2019-10-30 2022-11-29 Vergence Automation, Inc. Composite imaging systems using a focal plane array with in-pixel analog storage elements
KR20210099489A (en) * 2020-02-04 2021-08-12 인하대학교 산학협력단 Method and Apparatus for Driver Drowsiness Detection with optimized pre-processing for eyelid-closure classification using SVM
KR20210106040A (en) * 2020-02-19 2021-08-30 재단법인대구경북과학기술원 Apparatus and method for setting driving route
CN111982143A (en) * 2020-08-11 2020-11-24 北京汽车研究总院有限公司 Vehicle and vehicle path planning method and device
CN111982143B (en) * 2020-08-11 2024-01-19 北京汽车研究总院有限公司 Vehicle and vehicle path planning method and device
CN115626159A (en) * 2021-07-01 2023-01-20 信扬科技(佛山)有限公司 Vehicle warning system and method and automobile
