CN106326866B - Early warning method and device for vehicle collision - Google Patents


Info

Publication number
CN106326866B
Authority
CN
China
Prior art keywords
image information
key points
obstacle
determining
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610730383.7A
Other languages
Chinese (zh)
Other versions
CN106326866A (en
Inventor
余道明
陈强
兴军亮
张康
董健
黄君实
杨浩
龙鹏
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd and Qizhi Software Beijing Co Ltd
Priority to CN201610730383.7A
Publication of CN106326866A
Application granted
Publication of CN106326866B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a vehicle collision early warning method and device. The method comprises: determining whether an obstacle exists in the vehicle's direction of travel in first driving image information; if so, performing key point detection on the obstacle based on the first driving image information and determining a first centroid relative distance of the obstacle from the detected key points; tracking second driving image information based on the key points detected in the first driving image information, and determining a second centroid relative distance of the obstacle from the corresponding key points located by tracking, where the second driving image information is an image acquired after the first driving image information; and calculating the vehicle's pre-collision time from the first and second centroid relative distances, and performing a collision early warning operation according to that time. The technical scheme of the invention solves the problem of how to warn reliably, and in real time, of a possible vehicle collision.

Description

Early warning method and device for vehicle collision
Technical Field
The invention relates to the technical field of vehicle-mounted terminal equipment, in particular to a vehicle collision early warning method and a vehicle collision early warning device.
Background
At present, with the increasing intelligence of automobiles, driver-assistance technology has become one of the main directions of research and development. Such technology provides the driver with necessary information and/or warnings while driving, so as to avoid dangerous situations such as a vehicle collision or the vehicle leaving the road. As automobile intelligence develops further, it is even hoped that unmanned driving can be achieved through driver-assistance techniques. For such technology, an important problem is how to give timely, accurate and effective early warning of an impending vehicle collision.
In the prior art, the distance to a vehicle ahead can be measured by laser radar, and a warning issued when a collision appears likely. However, the hardware this requires is extremely expensive and ordinary users can rarely afford laser radar equipment; moreover, installing it is complex and may alter the vehicle's appearance. Vision-based collision warning schemes also exist, divided into binocular and monocular approaches. The drawback of existing monocular schemes is that determining an accurate relative distance between the vehicle ahead and one's own vehicle places high demands on algorithm accuracy. The drawback of binocular schemes is that a disparity map must be computed; the disparity-map algorithm is complex, and current terminal hardware cannot support computing it in real time.
Therefore, it is desirable to provide a vehicle collision early warning scheme capable of effectively providing a warning service in real time.
Disclosure of Invention
In order to overcome, or at least partially solve, the above technical problems, the following technical solutions are proposed:
one embodiment of the invention provides a vehicle collision early warning method, which comprises the following steps:
judging whether an obstacle exists in the vehicle advancing direction in the first driving image information;
if yes, key point detection is carried out on the obstacle based on the first driving image information, and a first centroid relative distance of the obstacle is determined according to the detected key points;
tracking second driving image information based on the key points detected in the first driving image information, and determining a second centroid relative distance of the obstacle according to the corresponding key points in the second driving image information determined by tracking, wherein the second driving image information is an image acquired after the first driving image information;
and calculating and determining the vehicle pre-collision time according to the first centroid relative distance and the second centroid relative distance, and performing collision early warning operation according to the vehicle pre-collision time.
Preferably, the determining whether there is an obstacle in the vehicle traveling direction in the first traveling image information includes:
detecting whether an obstacle exists in the first driving image information;
if so, determining the position of the obstacle;
the traveling direction of the vehicle is detected based on the first traveling image information, and it is determined whether the position of the obstacle is in the traveling direction of the vehicle according to the detection result.
Preferably, determining a first centroid relative distance of the obstacle from the detected keypoints comprises:
dividing the detected key points according to a preset dividing rule, and respectively determining the key points in the divided areas;
and respectively carrying out centroid calculation on the key points in the divided regions, and determining the first centroid relative distance of the obstacle according to a plurality of centroid calculation results.
Preferably, the dividing the detected key points according to a predetermined dividing rule includes:
and dividing the detected key points according to the image area where the obstacle is located.
Preferably, the determining the key points in the divided regions respectively comprises:
selecting a preset number of key points in each divided area;
wherein performing centroid calculation on the key points in the divided regions respectively includes:
and respectively carrying out centroid calculation on the selected key points with the preset number in the divided areas.
Preferably, tracking the second driving image information based on the key points detected in the first driving image information includes:
calculating tracking values of corresponding key points in the second driving image information through a preset image tracking algorithm based on the selected preset number of key points in the first driving image information;
determining the key points corresponding to the tracking values larger than a preset tracking threshold value as key points to be tracked;
wherein, according to the corresponding key point in the second driving image information determined by tracking, determining the second centroid relative distance of the obstacle comprises:
and determining a second centroid relative distance of the obstacle according to the key point to be tracked.
Preferably, determining the second centroid relative distance of the obstacle according to the keypoints to be tracked comprises:
sorting, for each divided area, the tracking values corresponding to the key points to be tracked, and extracting the top preset number of key points in each divided area according to the sorted order;
and respectively carrying out centroid calculation based on the extracted key points in each divided region, and determining a second centroid relative distance of the obstacle according to a centroid calculation result.
Preferably, the keypoints comprise at least one of FAST, ORB, and Harris feature points.
Preferably, the collision warning operation is performed according to a pre-collision time of the vehicle, and includes:
and if the pre-collision time of the vehicle is judged to be smaller than the preset early warning time threshold value, performing collision early warning operation.
Another embodiment of the present invention provides a vehicle collision warning apparatus, including:
the judging module is used for judging whether an obstacle exists in the vehicle advancing direction in the first driving image information;
the detection module is used for detecting key points of the obstacles based on the first driving image information when judging that the obstacles exist in the driving direction of the vehicle in the first driving image information, and determining a first centroid relative distance of the obstacles according to the detected key points;
the tracking module is used for tracking the second driving image information based on the key points detected in the first driving image information and determining a second centroid relative distance of the obstacle according to the corresponding key points in the second driving image information determined by tracking, wherein the second driving image information is an image acquired after the first driving image information;
and the early warning module is used for calculating and determining the vehicle pre-collision time according to the first centroid relative distance and the second centroid relative distance and carrying out collision early warning operation according to the vehicle pre-collision time.
Preferably, the judging module includes:
the obstacle detection unit is used for detecting whether an obstacle exists in the first driving image information;
a position determination unit for determining a position of an obstacle when it is detected that the obstacle exists in the first traveling image information;
and the detection and judgment unit is used for detecting the advancing direction of the vehicle based on the first driving image information and judging whether the position of the obstacle is in the advancing direction of the vehicle according to the detection result.
Preferably, the detection module comprises:
the region dividing unit is used for dividing the detected key points according to a preset dividing rule and respectively determining the key points in the divided regions;
and the distance calculation unit is used for respectively carrying out centroid calculation on the key points in the divided regions and determining the first centroid relative distance of the obstacle according to a plurality of centroid calculation results.
Preferably, the area dividing unit is configured to divide the detected key point according to an image area where the obstacle is located.
Preferably, the region dividing unit is configured to select a predetermined number of key points in each divided region;
the distance calculation unit is used for respectively carrying out centroid calculation on the selected key points with the preset number in the divided areas.
Preferably, the tracking module comprises:
the tracking value calculation unit is used for calculating the tracking value of the corresponding key point in the second driving image information through a preset image tracking algorithm based on the selected key points with the preset number in the first driving image information;
the key point determining unit is used for determining the key points corresponding to the tracking values larger than the preset tracking threshold value as the key points to be tracked;
and the distance determining unit is used for determining the second centroid relative distance of the obstacle according to the key point to be tracked.
Preferably, the distance determination unit includes:
the key point extracting subunit is used for sorting, for each divided area, the tracking values corresponding to the key points to be tracked, and extracting the top preset number of key points in each divided area according to the sorted order;
and the distance determining subunit is used for respectively performing centroid calculation on the basis of the extracted key points in each divided region and determining a second centroid relative distance of the obstacle according to a centroid calculation result.
Preferably, the keypoints comprise at least one of FAST, ORB, and Harris feature points.
Preferably, the early warning module is used for performing collision early warning operation when judging that the vehicle pre-collision time is smaller than a preset early warning time threshold value.
The technical scheme of the invention solves the problem of how to warn reliably, and in real time, of a possible vehicle collision. If an obstacle exists in the vehicle's direction of travel in the first driving image information, key point detection is performed on the obstacle based on the first driving image information and the first centroid relative distance of the obstacle is determined from the detected key points; this step in effect determines the length of the line segments connecting the obstacle's key points. The second driving image information is then tracked based on the key points detected in the first, and the second centroid relative distance of the obstacle is determined from the corresponding key points found by tracking; this correspondingly determines the length of the line segments connecting the key points after the obstacle has moved. The vehicle's pre-collision time is then calculated from the first and second centroid relative distances, and a collision early warning operation is performed according to it; comparing the lengths of the line segments connecting the obstacle's key points before and after movement allows the relative distance between the vehicle and the obstacle to be judged. Because the relative distance determined this way is highly accurate, the pre-collision time derived from it is also highly accurate, which improves the accuracy of the early warning given for an impending vehicle collision.
In addition, the algorithm adopted by the scheme of the invention has low complexity, so terminal hardware can support the computation in real time, providing a reliable safeguard for the driver's personal safety.
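As a rough illustration of the final step, assume a pinhole camera, so that the apparent size of the obstacle (here, the centroid relative distance measured in the image) scales inversely with range, and assume a constant closing speed. These assumptions, and the threshold value, are this sketch's, not the patent's, which does not give its exact formula in this excerpt:

```python
def time_to_collision(d1, d2, dt):
    """Pre-collision time from two centroid relative distances.

    Under a pinhole model, apparent size is proportional to 1/range,
    so range Z_i = k / d_i for some constant k. With constant closing
    speed v = (Z1 - Z2) / dt, the time to collision measured from the
    second frame is Z2 / v, which simplifies to dt * d1 / (d2 - d1).
    """
    if d2 <= d1:
        return float('inf')  # obstacle not growing in the image: not closing in
    return dt * d1 / (d2 - d1)


def should_warn(ttc, threshold_s=2.0):
    """Warn when the pre-collision time drops below a preset
    early-warning threshold (the 2 s value is illustrative)."""
    return ttc < threshold_s
```

For instance, if the centroid relative distance grows from 100 px to 110 px over 0.1 s, the estimated pre-collision time is about 1.0 s, which would trigger a warning under a 2 s threshold.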
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a method for warning of a vehicle collision according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a vehicle collision warning device according to another embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, a "terminal" as used herein includes both devices having only a wireless signal receiver without transmit capability, and devices having receive and transmit hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device, with or without a multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; or a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "terminal" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. A "terminal device" may also be a communication terminal, a web terminal or a music/video playing terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, or a smart TV, set-top box, etc.
Fig. 1 is a flowchart illustrating a vehicle collision warning method according to an embodiment of the present invention.
Step S110: and judging whether an obstacle exists in the vehicle advancing direction in the first driving image information.
The first driving image information can be acquired by a vehicle-mounted video capture device, such as a driving recorder or a 360-degree panoramic camera.
Preferably, the step of determining whether there is an obstacle in the vehicle traveling direction in the first traveling image information includes step S111, step S112, and step S113: step S111: detecting whether an obstacle exists in the first driving image information; step S112: if so, determining the position of the obstacle; step S113: the traveling direction of the vehicle is detected based on the first traveling image information, and it is determined whether the position of the obstacle is in the traveling direction of the vehicle according to the detection result.
Wherein detecting the traveling direction of the vehicle based on the first driving image information further includes: judging whether a lane line exists in the first driving image information; if so, determining the extending direction of the lane line; and determining the traveling direction of the vehicle according to that extending direction. It should be noted that the prior art includes various methods for detecting the traveling direction of a vehicle, which are not repeated here.
Step S120: and if so, performing key point detection on the obstacle based on the first driving image information, and determining a first centroid relative distance of the obstacle according to the detected key point.
Preferably, the keypoints include, but are not limited to, FAST, ORB, and Harris feature points.
A FAST feature point is determined from the gray values of the image around a candidate pixel: the gray values of the pixels in the region surrounding the candidate are examined, and if the number of surrounding pixels whose gray-value difference from the candidate exceeds a preset difference threshold is itself greater than a preset pixel count, the candidate pixel is determined to be a FAST feature point.
Harris feature points may also be referred to as corner points; a corner is usually identified within a local area or window. If moving a predetermined window in every direction over the image information produces a large change in the gray values inside the window, a corner point inside the window can be determined; if the gray values change little when the window moves in every direction, there is no corner point inside the window; and if the gray values change greatly when the window moves in one direction but little when it moves in another, the image information inside the window can be determined to be a line segment.
ORB feature points build on the two detectors above: FAST feature points are first detected, the Harris corner response value of each FAST point is computed, and the top predetermined number of FAST points, ranked by Harris response from largest to smallest, are selected as the ORB feature points.
It should be noted that the key points of the obstacle in the first driving image information are determined by detecting the characteristic points such as FAST, ORB, and Harris, so as to be tracked by the related image tracking algorithm.
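To make the FAST criterion concrete, here is a minimal pure-Python sketch of the segment test on a grayscale image stored as a list of lists. The 16-pixel circle, the threshold of 20, and the contiguity count of 12 are standard FAST-style choices made for illustration, not values taken from the patent, and a real detector would add non-maximum suppression:

```python
# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]


def is_fast_corner(img, x, y, thresh=20, n=12):
    """Simplified FAST segment test: (x, y) is a corner if at least n
    contiguous pixels on the surrounding circle are all brighter than
    img[y][x] + thresh, or all darker than img[y][x] - thresh."""
    center = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (1, -1):  # check "all brighter", then "all darker"
        flags = [sign * (p - center) > thresh for p in ring]
        doubled = flags + flags  # duplicate to handle wrap-around runs
        run = best = 0
        for f in doubled:
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False
```

A bright pixel on a dark background passes the test (its whole ring is darker), while a uniform patch does not.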
Preferably, the step of determining the first centroid relative distance of the obstacle according to the detected key points includes steps S121 and S122: step S121: dividing the detected key points according to a preset dividing rule, and respectively determining the key points in the divided areas; step S122: and respectively carrying out centroid calculation on the key points in the divided regions, and determining the first centroid relative distance of the obstacle according to a plurality of centroid calculation results.
More preferably, the step of dividing the detected key points according to a predetermined dividing rule specifically includes: and dividing the detected key points according to the image area where the obstacle is located.
Specifically, firstly, detecting an image area where the obstacle is located, and framing the image area where the obstacle is located in a framing mode; and then, dividing the selected area, wherein the selected area may be divided uniformly or non-uniformly according to the area of the selected area, which is not limited in the embodiment of the present invention. In addition, the detected key points may be divided according to a predetermined division rule, or may be divided according to an area in which the detected key points are gathered, or divided evenly according to the number of the detected key points.
The divided regions can be determined in the above manner, and then, the key points within each divided region can be determined.
And finally, respectively carrying out centroid calculation on the key points in the divided regions, and determining the first centroid relative distance of the obstacle according to a plurality of centroid calculation results.
For example, first, it is determined that the obstacle in the traveling direction of the vehicle is a truck A; then the image area in which truck A is located is detected and marked with a bounding box; the boxed area is then divided evenly into nine blocks, and the key points in the nine divided areas are determined in turn as 18, 7, 15, 19, 18, 14, 20 and 21; next, centroid calculation is performed on the key points in each of the nine divided regions, giving nine discrete centroids; finally, the mutual distances between the nine discrete centroids are determined, and the first centroid relative distance of the obstacle is determined from those mutual distances. The first centroid relative distance is thus calculated from key points on the obstacle; in physical terms, it can represent the length of the line segments formed by connecting the obstacle's key points.
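The nine-block example above can be sketched as follows. The bounding-box format, the reading of "centroid relative distance" as the sum of pairwise centroid distances, and the skipping of empty cells are all assumptions made for illustration, since this excerpt of the patent does not pin those details down:

```python
import math
from itertools import combinations


def grid_cells(box, rows=3, cols=3):
    """Split a bounding box (x, y, w, h) into rows * cols equal cells."""
    x, y, w, h = box
    cw, ch = w / cols, h / rows
    return [(x + c * cw, y + r * ch, cw, ch)
            for r in range(rows) for c in range(cols)]


def cell_centroids(keypoints, box, rows=3, cols=3):
    """Centroid of the keypoints falling inside each grid cell.

    Cells containing no keypoints are skipped (an assumption; the
    patent does not say how empty regions are handled)."""
    centroids = []
    for cx, cy, cw, ch in grid_cells(box, rows, cols):
        pts = [(px, py) for px, py in keypoints
               if cx <= px < cx + cw and cy <= py < cy + ch]
        if pts:
            centroids.append((sum(p[0] for p in pts) / len(pts),
                              sum(p[1] for p in pts) / len(pts)))
    return centroids


def centroid_relative_distance(centroids):
    """Sum of pairwise distances between the discrete centroids, one
    plausible reading of the 'centroid relative distance'."""
    return sum(math.dist(a, b) for a, b in combinations(centroids, 2))
```

Running the same computation on the tracked key points of the second frame then yields the second centroid relative distance.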
It should be noted that, although the key points in the divided regions can be determined according to the above method, the number of the determined key points is too large to facilitate the subsequent related calculation operations for the key points, and therefore, the following will describe a method that can preferably select the key points in the divided regions, thereby reducing the overhead of the calculation amount for the key points.
Preferably, the step of determining the key points in the divided regions respectively specifically includes: selecting a preset number of key points in each divided area; wherein, the step of respectively carrying out centroid calculation on the key points in the divided regions further comprises the following steps: and respectively carrying out centroid calculation on the selected key points with the preset number in the divided areas.
For example, the boxed image area of the obstacle is evenly divided into nine blocks, and the key points in the nine divided areas are determined in turn as 18, 7, 15, 19, 18, 14, 20 and 21; then nine key points are selected in each divided region as the key points for subsequent calculation. Note that one of the nine regions contains only seven key points; for divided regions with fewer than nine key points, all of their key points are selected for subsequent calculation. Finally, centroid calculation is performed on the selected key points in each divided region.
Step S130: and tracking the second driving image information based on the key points detected in the first driving image information, and determining a second centroid relative distance of the obstacle according to the corresponding key points in the second driving image information determined by tracking, wherein the second driving image information is an image acquired after the first driving image information.
It should be noted that the second driving image information is an image acquired after the first driving image information, and the time difference between the acquisition of the second driving image information and the first driving image information is generally set to a small value, for example, the second driving image information is acquired one frame or N frames after the first driving image information is acquired. Since the time difference between the second driving image information and the first driving image information is small, the second driving image information does not change much compared with the first driving image information, and therefore the second driving image information can be tracked based on the key points detected in the first driving image information to determine the trend of the change of the key points detected in the first driving image information relative to the corresponding key points in the second driving image information.
The tracking operation of the key points can be performed by image tracking algorithms, including but not limited to the particle filter algorithm, the MeanShift algorithm and the KLT (Kanade-Lucas-Tomasi) algorithm. The KLT algorithm is preferred in the scheme of the invention: based on the instantaneous velocity of the pixels of a spatially moving object in the image information, it uses the temporal change of pixel values in the image sequence and the correlation between adjacent frames to determine the correspondence between the first driving image information and the second, and thereby computes the motion of the moving object between adjacent frames. Such algorithms fall into three categories:
(1) region-based or feature-based matching methods;
(2) frequency domain based methods;
(3) gradient-based methods.
Briefly, the KLT algorithm determines the motion of each pixel position using the temporal variation and correlation of the pixel intensity data in the image sequence.
The precondition assumptions for applying the KLT algorithm include:
(1) the brightness between adjacent image information is constant;
(2) the acquisition time of the adjacent image information is continuous, or the movement range of a moving object between the adjacent image information is small;
(3) the space in the adjacent image information keeps consistent, that is, the pixel points of the adjacent image information have the same motion.
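Under these three assumptions, the core of a KLT/Lucas-Kanade tracking step is solving a small least-squares system for the displacement of a window between two frames. The following is a minimal illustrative sketch in pure Python on a synthetic image (it is not the patent's implementation; the function names and the Gaussian test pattern are invented for illustration):

```python
import math

def make_blob(w, h, cx, cy, sigma=4.0):
    """Synthetic image: a smooth Gaussian blob centred at (cx, cy)."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def lk_step(img1, img2, cx, cy, half=6):
    """One Lucas-Kanade step: estimate the displacement (u, v) of the
    window centred at (cx, cy) from img1 to img2 by solving
        [sum Ix^2   sum IxIy] [u]   [-sum Ix*It]
        [sum IxIy   sum Iy^2] [v] = [-sum Iy*It]
    where Ix, Iy are spatial gradients and It the temporal difference."""
    a = b = c = d1 = d2 = 0.0
    for y in range(cy - half, cy + half + 1):
        for x in range(cx - half, cx + half + 1):
            ix = (img1[y][x + 1] - img1[y][x - 1]) / 2.0  # spatial gradient x
            iy = (img1[y + 1][x] - img1[y - 1][x]) / 2.0  # spatial gradient y
            it = img2[y][x] - img1[y][x]                  # temporal difference
            a += ix * ix; b += ix * iy; c += iy * iy
            d1 += ix * it; d2 += iy * it
    det = a * c - b * b
    # Invert the 2x2 structure matrix and apply it to the right-hand side.
    u = (-c * d1 + b * d2) / det
    v = (b * d1 - a * d2) / det
    return u, v
```

A production tracker would run this per key point over an image pyramid, but the 2x2 system above is the essential computation.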
Preferably, the step of tracking the second driving image information based on the key points detected in the first driving image information includes steps S131 and S132. Step S131: based on a selected predetermined number of key points in the first driving image information, calculating tracking values of the corresponding key points in the second driving image information through a predetermined image tracking algorithm. Step S132: determining the key points whose tracking values are larger than a predetermined tracking threshold as key points to be tracked. The step of determining the second centroid relative distance of the obstacle from the corresponding key points in the second driving image information determined by tracking then comprises: determining the second centroid relative distance of the obstacle according to the key points to be tracked.
For example, the area in the first driving image information where the framed obstacle is located is evenly divided into nine blocks, and the predetermined numbers of key points selected in the nine divided areas are determined in turn to be 9, 7, 9, 9, 9, 9, 9, 9 and 9; then, the tracking values of the corresponding key points in the second driving image information are calculated through the KLT algorithm; finally, the key points whose tracking values are larger than the predetermined tracking threshold are determined as the key points to be tracked, where the size of a tracking value represents how strongly the key point responds to the image tracking algorithm during tracking.
Preferably, the step of determining the second centroid relative distance of the obstacle according to the keypoints to be tracked comprises: sorting the tracking values corresponding to the key points to be tracked aiming at each divided area, and extracting a preset number of key points in each divided area before sorting; and respectively carrying out centroid calculation based on the extracted key points in each divided region, and determining a second centroid relative distance of the obstacle according to a centroid calculation result.
For example, the area in the first driving image information where the framed obstacle is located is evenly divided into nine blocks, and the numbers of key points to be tracked determined in the nine divided areas from the corresponding tracking values of the key points are 7, 4, 3, 6, 8, 5, 6 and 9 in turn; then, for each divided area, the tracking values corresponding to the key points to be tracked are sorted, and the top three key points in each divided area are extracted; finally, centroid calculation is performed based on the three extracted key points in each divided region, and the second centroid relative distance of the obstacle is determined from the nine centroids obtained by calculation.
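The per-region selection and centroid computation described above can be sketched as follows (pure Python; aggregating the region centroids into a single scalar as the sum of their pairwise distances is an assumption made for illustration, since the text only states that the distance is determined from the centroids):

```python
import math
from itertools import combinations

def centroid_relative_distance(regions, top_k=3):
    """regions: list of divided areas, each a list of (x, y, tracking_value)
    key points.  For each area, keep the top_k key points by tracking value,
    compute the area's centroid, then aggregate all centroids into one
    scalar as the sum of pairwise centroid distances (assumed aggregation)."""
    centroids = []
    for pts in regions:
        best = sorted(pts, key=lambda p: p[2], reverse=True)[:top_k]
        cx = sum(p[0] for p in best) / len(best)
        cy = sum(p[1] for p in best) / len(best)
        centroids.append((cx, cy))
    return sum(math.dist(p, q) for p, q in combinations(centroids, 2))
```

With two areas this reduces to the single distance between their two centroids.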
Step S140: and calculating and determining the vehicle pre-collision time according to the first centroid relative distance and the second centroid relative distance, and performing collision early warning operation according to the vehicle pre-collision time.
Specifically, the vehicle pre-crash time may be determined by calculation from the following equation:
d(t+1)/d(t) = S; formula (1)
Tm = Δ(t)/(S - 1); formula (2)
wherein d(t+1) represents the second centroid relative distance; d(t) represents the first centroid relative distance; Δ(t) represents the time difference between the acquisition of the second driving image information and the acquisition of the first driving image information; and Tm represents the vehicle pre-collision time.
Of course, the pre-collision time of the vehicle may also be determined by calculating according to the first relative centroid distance and the second relative centroid distance through other algorithms, which is not limited in the embodiment of the present invention.
Preferably, the step of performing collision warning operation according to the pre-collision time of the vehicle specifically includes: and if the pre-collision time of the vehicle is judged to be smaller than the preset early warning time threshold value, performing collision early warning operation.
For example, the vehicle pre-collision time is determined to be 10 s, and it is judged whether this is smaller than the predetermined early warning time threshold of 15 s; since 10 s is less than 15 s, the vehicle is about to collide, so the collision early warning operation is performed to prompt the driver to decelerate or brake in advance to ensure personal safety.
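Formulas (1) and (2) together with the threshold check can be sketched as follows (variable names are illustrative):

```python
def pre_collision_time(d_t, d_t1, delta_t):
    """Formula (1): S = d(t+1)/d(t); formula (2): Tm = delta_t / (S - 1).
    As the vehicle approaches the obstacle, the connected key-point line
    segments grow between frames, so S > 1 and Tm is positive."""
    s = d_t1 / d_t                # formula (1)
    return delta_t / (s - 1.0)    # formula (2)

def should_warn(tm, threshold_s=15.0):
    """Collision early warning when 0 < Tm < the predetermined early
    warning time threshold (15 s in the text's example)."""
    return 0.0 < tm < threshold_s
```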
It should be noted that the first driving image information may be the first frame of a video recording obtained in real time, and the second driving image information may be its second frame; in practical applications, the second driving image information only needs to be an image acquired after the first driving image information. Further, third driving image information may be acquired after the second driving image information. When detecting key points in the second driving image information for tracking the corresponding key points in the third driving image information, the position coordinates of the key points detected in the second driving image information and those detected in the first driving image information are first determined; it is then judged whether the difference between them is within a predetermined position difference range, and if so, only one of the key points is retained and the key points at similar positions are removed, thereby saving the overhead of tracking computation on similar key points.
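The de-duplication of key points at similar positions might look like the following sketch (the Euclidean-distance test and the threshold value are assumptions; the text does not fix the exact difference measure):

```python
import math

def deduplicate_keypoints(points, min_separation=2.0):
    """Among key points whose position difference falls within a
    predetermined range, keep only one, so that similar positions
    are tracked only once.  points: list of (x, y) tuples."""
    kept = []
    for p in points:
        # Keep p only if it is far enough from every already-kept point.
        if all(math.dist(p, q) >= min_separation for q in kept):
            kept.append(p)
    return kept
```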
The technical scheme of the invention solves the problem of how to reliably perform, in real time, early warning of a possible vehicle collision accident. If an obstacle exists in the vehicle traveling direction in the first driving image information, key point detection is performed on the obstacle based on the first driving image information, and the first centroid relative distance of the obstacle is determined from the detected key points; this step determines the lengths of the line segments connecting the key points of the obstacle. The second driving image information is then tracked based on the key points detected in the first driving image information, and the second centroid relative distance of the obstacle is determined from the corresponding key points found by tracking; this correspondingly determines the lengths of the line segments connecting the key points of the obstacle after movement. Finally, the vehicle pre-collision time is calculated from the first and second centroid relative distances, and the collision early warning operation is performed accordingly; comparing the lengths of the line segments connecting the obstacle's key points before and after movement yields the relative distance between the vehicle and the obstacle. Because this relative distance is highly accurate, the vehicle pre-collision time determined from it is also highly accurate, which improves the accuracy of the early warning of an impending vehicle collision.
In addition, the algorithm adopted by the scheme of the invention has low complexity, so the hardware of the terminal device can support real-time computation, thereby reliably safeguarding the driver's personal safety.
Fig. 2 is a schematic structural diagram of a vehicle collision warning device according to another embodiment of the present invention.
The judging module 110 judges whether an obstacle exists in the vehicle traveling direction in the first driving image information.
The first driving image information can be acquired through a vehicle-mounted video capture device, wherein the vehicle-mounted video capture device can be a driving recorder, a 360-degree panoramic camera and the like.
Preferably, the judging module 110 includes an obstacle detecting unit, a position determining unit, and a detecting and judging unit: the obstacle detection unit detects whether an obstacle exists in the first driving image information; the position determining unit determines the position of the obstacle when detecting that the obstacle exists in the first driving image information; the detection and judgment unit detects the traveling direction of the vehicle based on the first traveling image information, and judges whether the position of the obstacle is in the traveling direction of the vehicle according to the detection result.
The detection and judgment unit is further used for judging whether a lane line exists in the first driving image information; if so, determining the extending direction of the lane line, and determining the traveling direction of the vehicle according to the extending direction of the lane line. It should be noted that the prior art includes various methods and apparatuses for detecting the traveling direction of a vehicle, which are not described again here.
The detection module 220 detects a key point of the obstacle based on the first driving image information when judging that the obstacle exists in the first driving image information in the driving direction of the vehicle, and determines a first centroid relative distance of the obstacle according to the detected key point.
Preferably, the keypoints include, but are not limited to, FAST, ORB, and Harris feature points.
A FAST feature point is determined from the gray values of the image around a candidate pixel: the gray values of the pixels in the region around the candidate pixel are examined, and if the number of surrounding pixels whose gray-value difference from the candidate pixel is larger than a predetermined difference threshold exceeds a predetermined pixel count, the candidate pixel is determined to be a FAST feature point.
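The simplified criterion described here (counting the differing circle pixels, without the contiguity requirement of the full FAST detector) can be sketched as:

```python
# The 16 pixel offsets of the Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_keypoint(img, x, y, diff_thresh=20, count_thresh=9):
    """Simplified FAST test as described in the text: count the circle
    pixels whose gray-value difference from the candidate exceeds
    diff_thresh; the candidate is a key point if that count reaches
    count_thresh.  (The full FAST detector additionally requires the
    differing pixels to be contiguous on the circle.)"""
    center = img[y][x]
    n = sum(1 for dx, dy in CIRCLE
            if abs(img[y + dy][x + dx] - center) > diff_thresh)
    return n >= count_thresh
```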
Harris feature points may also be referred to as corner points, and a corner is usually identified within a local area or window. If moving a predetermined window in any direction in the image information produces a large change in the gray values inside the window, a corner point inside the window can be determined. If the gray values change little no matter which direction the window is moved, there is no corner point in the window. If the gray values change greatly when the window moves in one direction but little when it moves in another direction, the image information in the window can be determined to be a line segment (an edge).
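This window-based corner/edge/flat distinction is captured by the standard Harris response R = det(M) - k * trace(M)^2 of the window's structure tensor M; below is a minimal sketch on a synthetic quadrant image (illustrative only, not the patent's implementation):

```python
def harris_response(img, cx, cy, half=2, k=0.04):
    """Harris response of the window centred at (cx, cy):
    R > 0 for a corner (large gray-value change in every direction),
    R < 0 for an edge (large change in one direction only),
    R ~ 0 for a flat region (little change in any direction)."""
    a = b = c = 0.0
    for y in range(cy - half, cy + half + 1):
        for x in range(cx - half, cx + half + 1):
            ix = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            a += ix * ix; b += ix * iy; c += iy * iy
    return a * c - b * b - k * (a + c) ** 2

# Synthetic 16x16 image: a bright quadrant whose corner sits at (8, 8).
quadrant = [[1.0 if x >= 8 and y >= 8 else 0.0 for x in range(16)]
            for y in range(16)]
```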
The ORB feature points are derived from the two kinds of feature points above: among the detected FAST feature points, those whose corresponding Harris corner response ranks within a predetermined number of top positions, sorted from large to small, are selected as ORB feature points.
It should be noted that the key points of the obstacle in the first driving image information are determined by detecting the characteristic points such as FAST, ORB, and Harris, so as to be tracked by the related image tracking algorithm.
Preferably, the detection module 220 includes an area dividing unit and a distance calculating unit: the region dividing unit divides the detected key points according to a preset dividing rule and respectively determines the key points in the divided regions; the distance calculation unit respectively performs centroid calculation on the key points in the divided regions, and determines a first centroid relative distance of the obstacle according to a plurality of centroid calculation results.
More preferably, the area dividing unit is specifically configured to divide the detected key points according to an image area where the obstacle is located.
Specifically, the image area where the obstacle is located is first detected and marked with a bounding box; the boxed area is then divided, and the division may be even or uneven depending on the size of the boxed area, which is not limited in the embodiment of the present invention. Besides the above, the operation of dividing the detected key points according to the predetermined division rule may also be performed by dividing the areas in which the detected key points cluster, by dividing the detected key points into groups of equal number, and so on.
The divided regions may be determined by the region dividing unit, and then, the key points within each divided region may be determined.
And finally, the distance calculation unit respectively carries out centroid calculation on the key points in the divided regions, and determines the first centroid relative distance of the obstacle according to a plurality of centroid calculation results.
For example, first, the obstacle in the traveling direction of the vehicle is determined to be truck A; then, the image area where truck A is located is detected and marked with a bounding box; the boxed area is evenly divided into nine blocks, and 18, 7, 15, 19, 18, 14, 20 and 21 key points are determined in the nine divided areas in turn; next, centroid calculation is performed on the key points in each of the nine divided regions, yielding nine discrete centroids; finally, the mutual distances between the nine discrete centroids are determined, and the first centroid relative distance of the obstacle is determined based on these mutual distances. The first centroid relative distance is thus calculated from the key points on the obstacle; in physical terms, it can represent the lengths of the line segments formed by connecting the key points within the obstacle.
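The even division of the bounding box and the bucketing of detected key points into the resulting regions can be sketched as follows (the bounding-box representation and function name are assumptions for illustration):

```python
def divide_into_grid(keypoints, box, rows=3, cols=3):
    """Evenly divide the obstacle's bounding box into rows*cols regions
    and bucket the detected key points into them.
    box: (x0, y0, x1, y1); keypoints: list of (x, y) tuples."""
    x0, y0, x1, y1 = box
    cell_w = (x1 - x0) / cols
    cell_h = (y1 - y0) / rows
    grid = [[[] for _ in range(cols)] for _ in range(rows)]
    for (x, y) in keypoints:
        # Clamp so points on the far edge fall into the last cell.
        cidx = min(int((x - x0) / cell_w), cols - 1)
        ridx = min(int((y - y0) / cell_h), rows - 1)
        grid[ridx][cidx].append((x, y))
    return grid
```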
It should be noted that the above apparatus can determine the key points in the divided regions, but the number of determined key points may be too large for the subsequent calculation operations on them to be efficient; an apparatus for optimizing the key points in the divided regions is therefore described below to reduce the computational overhead of processing the key points.
Preferably, the region dividing unit is specifically configured to select a predetermined number of key points in each divided region; the distance calculation unit is specifically configured to perform centroid calculation on a predetermined number of selected key points in the divided regions.
For example, the boxed area where the image of the obstacle is located is evenly divided into nine blocks, and 18, 7, 15, 19, 18, 14, 20 and 21 key points are determined in the nine divided areas in turn; then, nine key points are selected in each divided area as the key points for the subsequent calculation operations. It should be noted that one of the nine divided areas contains only seven key points; for divided areas with fewer than nine key points, all of their key points are selected for the subsequent calculation operations. Finally, centroid calculation is performed on the selected key points in each divided region.
The tracking module 230 tracks the second driving image information based on the key point detected in the first driving image information, and determines a second centroid relative distance of the obstacle according to the corresponding key point in the second driving image information determined by tracking, where the second driving image information is an image acquired after the first driving image information.
It should be noted that the second driving image information is an image acquired after the first driving image information, and the time difference between the acquisition of the second driving image information and the first driving image information is generally set to a small value, for example, the second driving image information is acquired one frame or N frames after the first driving image information is acquired. Since the time difference between the second driving image information and the first driving image information is small, the second driving image information does not change much compared with the first driving image information, and therefore the second driving image information can be tracked based on the key points detected in the first driving image information to determine the trend of the change of the key points detected in the first driving image information relative to the corresponding key points in the second driving image information.
The tracking operation of the key points can be performed by an image tracking algorithm; candidate image tracking algorithms include, but are not limited to, the particle filter algorithm, the MeanShift algorithm, and the KLT (Kanade-Lucas-Tomasi) algorithm. The KLT algorithm is preferred in the scheme of the invention. Based on the instantaneous velocity of the pixel motion of a spatially moving object in the image information, the algorithm uses the temporal variation of pixels across the image sequence and the correlation between adjacent image information to determine the correspondence between the first driving image information and the second driving image information, and thereby calculates the motion of the moving object between adjacent image information. Optical-flow methods of this kind can be divided into three categories:
(1) region-based or feature-based matching methods;
(2) frequency domain based methods;
(3) gradient-based methods.
Briefly, the KLT algorithm determines the motion of each pixel position using the temporal variation and correlation of the pixel intensity data in the image sequence.
The precondition assumptions for applying the KLT algorithm include:
(1) the brightness between adjacent image information is constant;
(2) the acquisition time of the adjacent image information is continuous, or the movement range of a moving object between the adjacent image information is small;
(3) the space in the adjacent image information keeps consistent, that is, the pixel points of the adjacent image information have the same motion.
Preferably, the tracking module 230 includes a tracking value calculation unit, a keypoint determination unit, and a distance determination unit: the tracking value calculation unit calculates the tracking value of the corresponding key point in the second driving image information through a preset image tracking algorithm based on the selected preset number of key points in the first driving image information; the key point determining unit determines the key points with the tracking values larger than a preset tracking threshold value as key points to be tracked; the distance determining unit determines a second centroid relative distance of the obstacle according to the key point to be tracked.
For example, the area in the first driving image information where the framed obstacle is located is evenly divided into nine blocks, and the predetermined numbers of key points selected in the nine divided areas are determined in turn to be 9, 7, 9, 9, 9, 9, 9, 9 and 9; then, the tracking values of the corresponding key points in the second driving image information are calculated through the KLT algorithm; finally, the key points whose tracking values are larger than the predetermined tracking threshold are determined as the key points to be tracked, where the size of a tracking value represents how strongly the key point responds to the image tracking algorithm during tracking.
Preferably, the distance determining unit includes a key point extracting sub-unit, a distance determining sub-unit: the key point extraction subunit sequences the tracking values corresponding to the key points to be tracked aiming at each divided area and extracts a preset number of key points in each divided area before sequencing; the distance determining subunit performs centroid calculation based on the extracted key points in each divided region, and determines a second centroid relative distance of the obstacle according to a centroid calculation result.
For example, the area in the first driving image information where the framed obstacle is located is evenly divided into nine blocks, and the numbers of key points to be tracked determined in the nine divided areas from the corresponding tracking values of the key points are 7, 4, 3, 6, 8, 5, 6 and 9 in turn; then, for each divided area, the tracking values corresponding to the key points to be tracked are sorted, and the top three key points in each divided area are extracted; finally, centroid calculation is performed based on the three extracted key points in each divided region, and the second centroid relative distance of the obstacle is determined from the nine centroids obtained by calculation.
The early warning module 240 determines the pre-collision time of the vehicle according to the first relative distance of the center of mass and the second relative distance of the center of mass, and performs a collision early warning operation according to the pre-collision time of the vehicle.
Specifically, the vehicle pre-crash time may be determined by calculation from the following equation:
d(t+1)/d(t) = S; formula (1)
Tm = Δ(t)/(S - 1); formula (2)
wherein d(t+1) represents the second centroid relative distance; d(t) represents the first centroid relative distance; Δ(t) represents the time difference between the acquisition of the second driving image information and the acquisition of the first driving image information; and Tm represents the vehicle pre-collision time.
Of course, the pre-collision time of the vehicle may also be determined by calculating according to the first relative centroid distance and the second relative centroid distance through other algorithms, which is not limited in the embodiment of the present invention.
Preferably, the early warning module 240 is specifically configured to perform a collision early warning operation when it is determined that the pre-collision time of the vehicle is less than a predetermined early warning time threshold.
For example, the vehicle pre-collision time is determined to be 10 s, and it is judged whether this is smaller than the predetermined early warning time threshold of 15 s; since 10 s is less than 15 s, the vehicle is about to collide, so the collision early warning operation is performed to prompt the driver to decelerate or brake in advance to ensure personal safety.
It should be noted that the first driving image information may be the first frame of a video recording obtained in real time, and the second driving image information may be its second frame; in practical applications, the second driving image information only needs to be an image acquired after the first driving image information. Furthermore, the embodiment of the invention also comprises a duplication elimination module for acquiring third driving image information, which is acquired after the second driving image information. When detecting key points in the second driving image information for tracking the corresponding key points in the third driving image information, the module first determines the position coordinates of the key points detected in the second driving image information and those detected in the first driving image information, and then judges whether the difference between them is within a predetermined position difference range; if so, only one of the key points is retained and the key points at similar positions are removed, thereby saving the overhead of tracking computation on similar key points.
The technical scheme of the invention solves the problem of how to reliably perform, in real time, early warning of a possible vehicle collision accident. If an obstacle exists in the vehicle traveling direction in the first driving image information, key point detection is performed on the obstacle based on the first driving image information, and the first centroid relative distance of the obstacle is determined from the detected key points; this step determines the lengths of the line segments connecting the key points of the obstacle. The second driving image information is then tracked based on the key points detected in the first driving image information, and the second centroid relative distance of the obstacle is determined from the corresponding key points found by tracking; this correspondingly determines the lengths of the line segments connecting the key points of the obstacle after movement. Finally, the vehicle pre-collision time is calculated from the first and second centroid relative distances, and the collision early warning operation is performed accordingly; comparing the lengths of the line segments connecting the obstacle's key points before and after movement yields the relative distance between the vehicle and the obstacle. Because this relative distance is highly accurate, the vehicle pre-collision time determined from it is also highly accurate, which improves the accuracy of the early warning of an impending vehicle collision.
In addition, the algorithm adopted by the scheme of the invention has low complexity, so the hardware of the terminal device can support real-time computation, thereby reliably safeguarding the driver's personal safety.
Those skilled in the art will appreciate that the present invention includes apparatus directed to performing one or more of the operations described in the present application. These devices may be specially designed and manufactured for the required purposes, or they may comprise known devices in general-purpose computers. These devices have stored therein computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device-readable (e.g., computer-readable) medium, including, but not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memories, magnetic cards, or optical cards, that is, any type of medium suitable for storing electronic instructions, each coupled to a bus. A readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
It will be understood by those within the art that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. Those skilled in the art will appreciate that the computer program instructions may be implemented by a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the features specified in the block or blocks of the block diagrams and/or flowchart illustrations of the present disclosure.
Those of skill in the art will appreciate that various operations, methods, steps in the processes, acts, or solutions discussed in the present application may be alternated, modified, combined, or deleted. Further, various operations, methods, steps in the flows, which have been discussed in the present application, may be interchanged, modified, rearranged, decomposed, combined, or eliminated. Further, steps, measures, schemes in the various operations, methods, procedures disclosed in the prior art and the present invention can also be alternated, changed, rearranged, decomposed, combined, or deleted.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (16)

1. A method for early warning of a vehicle collision, comprising:
judging whether an obstacle exists in the vehicle advancing direction in the first driving image information;
if yes, performing key point detection on the obstacle based on the first driving image information, and determining a first centroid relative distance of the obstacle according to the detected key point;
tracking second driving image information based on key points detected in the first driving image information, and determining a second centroid relative distance of the obstacle according to corresponding key points in the second driving image information determined by tracking, wherein the second driving image information is an image acquired after the first driving image information;
calculating and determining the vehicle pre-collision time according to the first centroid relative distance and the second centroid relative distance, and performing collision early warning operation according to the vehicle pre-collision time;
judging whether an obstacle exists in the vehicle advancing direction in the first driving image information, comprising:
detecting whether an obstacle exists in the first driving image information;
if so, determining the position of the obstacle;
detecting the traveling direction of the vehicle based on the first driving image information, and judging whether the position of the obstacle is in the traveling direction of the vehicle according to the detection result;
wherein the detecting of the traveling direction of the vehicle based on the first traveling image information includes:
judging whether a lane line exists in the first driving image information;
if so, determining the extending direction of the lane line;
and determining the traveling direction of the vehicle according to the extending direction of the lane line.
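The pre-collision time in claim 1 can be derived from how the centroid relative distance grows between the two frames. The sketch below is a minimal illustration under an assumed constant closing speed; the function name and the scale-based formula are our own simplification, not language from the patent.

```python
def estimate_ttc(d1, d2, frame_interval):
    """Estimate time-to-collision (seconds) from the first and second
    centroid relative distances (pixels), measured frame_interval
    seconds apart. As the obstacle approaches, its image grows, so the
    inter-centroid distance increases; under constant closing speed the
    scale change d2/d1 gives TTC = frame_interval / (d2/d1 - 1)."""
    if d2 <= d1:
        return float("inf")  # not growing in the image: no imminent approach
    return frame_interval / (d2 / d1 - 1.0)
```

For example, if the distance grows from 100 px to 110 px over a 0.1 s frame interval, the estimated time to collision is about 1 s.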
2. The method of claim 1, wherein determining a first centroid relative distance of the obstacle from the detected keypoints comprises:
dividing the detected key points according to a preset dividing rule, and respectively determining the key points in the divided areas;
and respectively carrying out centroid calculation on the key points in the divided regions, and determining a first centroid relative distance of the obstacle according to a plurality of centroid calculation results.
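One way to realize the region division and centroid calculation of claim 2 is sketched below: split the obstacle's bounding box into quadrants, take the centroid of the key points in each quadrant, and use the mean pairwise distance between the centroids as the centroid relative distance. The quadrant rule and the mean-pairwise-distance measure are hypothetical choices; the patent leaves the predetermined dividing rule unspecified.

```python
import math

def centroid(points):
    """Mean position of a list of (x, y) points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def centroid_relative_distance(keypoints, bbox):
    """Divide keypoints within the obstacle's bounding box (x0, y0, x1, y1)
    into four quadrants, compute one centroid per non-empty quadrant, and
    return the mean pairwise distance between centroids (pixels)."""
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    regions = {0: [], 1: [], 2: [], 3: []}
    for x, y in keypoints:
        idx = (1 if x >= cx else 0) + (2 if y >= cy else 0)
        regions[idx].append((x, y))
    cents = [centroid(pts) for pts in regions.values() if pts]
    dists = [math.dist(a, b) for i, a in enumerate(cents) for b in cents[i + 1:]]
    return sum(dists) / len(dists) if dists else 0.0
```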
3. The method of claim 2, wherein the dividing the detected keypoints according to a predetermined division rule comprises:
and dividing the detected key points according to the image area where the obstacle is located.
4. The method of claim 2, wherein determining keypoints in the partitioned regions separately comprises:
selecting a preset number of key points in each divided area;
wherein, respectively carrying out centroid calculation on the key points in the divided regions comprises:
and respectively carrying out centroid calculation on the selected key points with the preset number in the divided regions.
5. The method of claim 4, wherein tracking the second driving image information based on key points detected in the first driving image information comprises:
calculating tracking values of corresponding key points in the second driving image information through a preset image tracking algorithm based on the selected preset number of key points in the first driving image information;
determining the key points corresponding to the tracking values larger than a preset tracking threshold value as key points to be tracked;
wherein, determining a second centroid relative distance of the obstacle according to the corresponding key point in the second driving image information determined by tracking comprises:
determining a second centroid relative distance of the obstacle according to the keypoint to be tracked.
6. The method of claim 5, wherein determining a second centroid relative distance of the obstacle from the keypoint to be tracked comprises:
sorting, for each divided area, the tracking values corresponding to the key points to be tracked, and extracting the top predetermined number of key points in the sorted order for each divided area;
and respectively carrying out centroid calculation based on the extracted key points in each divided region, and determining a second centroid relative distance of the obstacle according to a centroid calculation result.
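Claims 5 and 6 together describe a two-stage selection: discard tracked key points whose tracking value falls below a threshold, then keep only the best-scored points per divided area. A minimal sketch of that selection is below; the data layout (a mapping from region id to (keypoint, tracking value) pairs) and the names are illustrative assumptions, not structures from the patent.

```python
def select_tracked_keypoints(tracked, threshold, top_n):
    """tracked: mapping region_id -> list of (keypoint, tracking_value)
    pairs produced by the tracker. Keep points whose tracking value
    exceeds the threshold, then take the top_n highest-scored points
    per region."""
    selected = {}
    for region, pairs in tracked.items():
        kept = [(kp, v) for kp, v in pairs if v > threshold]
        kept.sort(key=lambda p: p[1], reverse=True)  # best tracking value first
        selected[region] = [kp for kp, _ in kept[:top_n]]
    return selected
```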
7. The method of any one of claims 1-6, wherein the keypoints comprise at least one of FAST, ORB, and Harris keypoints.
8. The method of claim 1, wherein performing a pre-crash warning operation based on the pre-crash time of the vehicle comprises:
and if the vehicle pre-collision time is judged to be smaller than a preset early warning time threshold value, performing collision early warning operation.
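The warning decision in claim 8 is a simple threshold test on the pre-collision time. A sketch follows; the 2-second default is an illustrative tuning value, not a figure from the patent.

```python
def should_warn(ttc_seconds, warning_threshold_seconds=2.0):
    """Fire the collision warning when the predicted pre-collision time
    drops below the predetermined early-warning time threshold."""
    return ttc_seconds < warning_threshold_seconds
```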
9. A vehicle collision warning device, comprising:
the judging module is used for judging whether an obstacle exists in the vehicle advancing direction in the first driving image information;
the detection module is used for detecting key points of the obstacles based on the first driving image information when judging that the obstacles exist in the driving direction of the vehicle in the first driving image information, and determining a first centroid relative distance of the obstacles according to the detected key points;
the tracking module is used for tracking second driving image information based on the key points detected in the first driving image information and determining a second centroid relative distance of the obstacle according to the corresponding key points in the second driving image information determined by tracking, wherein the second driving image information is an image acquired after the first driving image information;
the early warning module is used for calculating and determining the vehicle pre-collision time according to the first centroid relative distance and the second centroid relative distance and carrying out collision early warning operation according to the vehicle pre-collision time;
the judging module comprises:
an obstacle detection unit configured to detect whether an obstacle exists in the first traveling image information;
a position determination unit configured to determine a position of an obstacle when the obstacle is detected to be present in the first traveling image information;
a detection and judgment unit for detecting the traveling direction of the vehicle based on the first traveling image information and judging whether the position of the obstacle is in the traveling direction of the vehicle according to the detection result;
the detection and judgment unit is specifically configured to judge whether a lane line exists in the first driving image information, determine an extending direction of the lane line if the lane line exists, and determine a traveling direction of the vehicle according to the extending direction of the lane line.
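The lane-line step in claims 1 and 9 determines the vehicle's traveling direction from the extending direction of a detected lane line. One simplified stand-in, assuming the lane line is available as image points, is a least-squares line fit whose unit direction vector serves as the travel direction; this reduction is our own, not the patent's stated method.

```python
def lane_travel_direction(lane_points):
    """Fit a straight line to detected lane-line points (least squares)
    and return its unit direction vector (dx, dy) as an estimate of the
    direction of travel in image coordinates."""
    n = len(lane_points)
    mx = sum(p[0] for p in lane_points) / n
    my = sum(p[1] for p in lane_points) / n
    sxx = sum((p[0] - mx) ** 2 for p in lane_points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in lane_points)
    if sxx == 0:  # lane line vertical in image coordinates
        return (0.0, 1.0)
    slope = sxy / sxx
    norm = (1 + slope ** 2) ** 0.5
    return (1 / norm, slope / norm)
```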
10. The apparatus of claim 9, wherein the detection module comprises:
the region dividing unit is used for dividing the detected key points according to a preset dividing rule and respectively determining the key points in the divided regions;
and the distance calculation unit is used for respectively carrying out centroid calculation on the key points in the divided regions and determining the first centroid relative distance of the obstacle according to a plurality of centroid calculation results.
11. The apparatus according to claim 10, wherein the area dividing unit is configured to divide the detected key points according to an image area where the obstacle is located.
12. The apparatus of claim 10, wherein the region dividing unit is configured to select a predetermined number of key points in each divided region;
the distance calculation unit is used for respectively carrying out centroid calculation on the selected key points with the preset number in the divided areas.
13. The apparatus of claim 12, wherein the tracking module comprises:
the tracking value calculation unit is used for calculating the tracking value of the corresponding key point in the second driving image information through a preset image tracking algorithm based on the selected key points with the preset number in the first driving image information;
the key point determining unit is used for determining the key points corresponding to the tracking values larger than a preset tracking threshold value as key points to be tracked;
and the distance determining unit is used for determining a second centroid relative distance of the obstacle according to the key point to be tracked.
14. The apparatus of claim 13, wherein the distance determining unit comprises:
the key point extracting subunit is used for sorting, for each divided area, the tracking values corresponding to the key points to be tracked, and extracting the top predetermined number of key points in the sorted order for each divided area;
and the distance determining subunit is used for respectively performing centroid calculation on the basis of the extracted key points in each divided region and determining a second centroid relative distance of the obstacle according to a centroid calculation result.
15. The apparatus of any one of claims 9-14, wherein the keypoints comprise at least one of FAST, ORB, and Harris keypoints.
16. The device of claim 9, wherein the early warning module is configured to perform a collision early warning operation when it is determined that the pre-collision time of the vehicle is less than a predetermined early warning time threshold.
CN201610730383.7A 2016-08-25 2016-08-25 Early warning method and device for vehicle collision Active CN106326866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610730383.7A CN106326866B (en) 2016-08-25 2016-08-25 Early warning method and device for vehicle collision


Publications (2)

Publication Number Publication Date
CN106326866A CN106326866A (en) 2017-01-11
CN106326866B true CN106326866B (en) 2020-01-17

Family

ID=57791109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610730383.7A Active CN106326866B (en) 2016-08-25 2016-08-25 Early warning method and device for vehicle collision

Country Status (1)

Country Link
CN (1) CN106326866B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6764378B2 (en) * 2017-07-26 2020-09-30 株式会社Subaru External environment recognition device
CN109131084A (en) * 2018-09-12 2019-01-04 广州星凯跃实业有限公司 360 panorama of active forewarning driving auxiliary control method and system
CN111339808B (en) * 2018-12-19 2024-04-23 北京嘀嘀无限科技发展有限公司 Vehicle collision probability prediction method, device, electronic equipment and storage medium
CN109993074A (en) * 2019-03-14 2019-07-09 杭州飞步科技有限公司 Assist processing method, device, equipment and the storage medium driven
CN110132302A (en) * 2019-05-20 2019-08-16 中国科学院自动化研究所 Merge binocular vision speedometer localization method, the system of IMU information
CN111554098B (en) * 2020-04-27 2021-10-15 上海智能交通有限公司 Expressway emergency lane occupation detection system and implementation method thereof
CN116118784B (en) * 2023-04-17 2023-06-13 禾多科技(北京)有限公司 Vehicle control method, apparatus, electronic device, and computer-readable medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101391589A (en) * 2008-10-30 2009-03-25 上海大学 Vehicle intelligent alarming method and device
CN103914688A (en) * 2014-03-27 2014-07-09 北京科技大学 Urban road obstacle recognition system
CN104299244A (en) * 2014-09-26 2015-01-21 东软集团股份有限公司 Obstacle detection method and device based on monocular camera
US9342746B1 (en) * 2011-03-17 2016-05-17 UtopiaCompression Corporation Maneuverless passive range estimation using monocular image sequences
CN105718888A (en) * 2016-01-22 2016-06-29 北京中科慧眼科技有限公司 Obstacle prewarning method and obstacle prewarning device




Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240109

Address after: 100088 room 112, block D, 28 new street, new street, Xicheng District, Beijing (Desheng Park)

Patentee after: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Address before: 100088 room 112, block D, 28 new street, new street, Xicheng District, Beijing (Desheng Park)

Patentee before: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Patentee before: Qizhi software (Beijing) Co.,Ltd.
