CN113511198B - Method, apparatus and storage medium for predicting blind zone collision when self-vehicle turns - Google Patents


Info

Publication number
CN113511198B
CN113511198B (application CN202111077078.XA)
Authority
CN
China
Prior art keywords
vehicle, self, target, collision, calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111077078.XA
Other languages
Chinese (zh)
Other versions
CN113511198A (en)
Inventor
徐显杰
马玉珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suoto Hangzhou Automotive Intelligent Equipment Co Ltd
Tianjin Soterea Automotive Technology Co Ltd
Original Assignee
Suoto Hangzhou Automotive Intelligent Equipment Co Ltd
Tianjin Soterea Automotive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suoto Hangzhou Automotive Intelligent Equipment Co Ltd, Tianjin Soterea Automotive Technology Co Ltd filed Critical Suoto Hangzhou Automotive Intelligent Equipment Co Ltd
Priority to CN202111077078.XA
Publication of CN113511198A
Application granted
Publication of CN113511198B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095: Predicting travel path or likelihood of collision
    • B60W30/0956: Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W30/0953: Predicting travel path or likelihood of collision, the prediction being responsive to vehicle dynamic parameters
    • B60W2520/00: Input parameters relating to overall vehicle dynamics
    • B60W2520/06: Direction of travel
    • B60W2554/00: Input parameters relating to objects
    • B60W2554/20: Static objects

Abstract

The invention relates to the field of camera-based detection, and discloses a method, a device, and a storage medium for predicting blind-zone collisions while the own vehicle turns. The method comprises the following steps: acquiring a plurality of images captured by a vehicle-mounted BSD camera while the vehicle is turning; performing target recognition and tracking on the blind zones in the images, and identifying a pre-collision static target; drawing, from the current position of the static target, a tangent line to the predicted trajectory of the steering-side rear wheel to obtain the tangent-point position; and calculating the collision duration from the current position of the own vehicle and the tangent-point position. The embodiment thereby predicts the collision duration for a static blind-zone target while the own vehicle turns.

Description

Method, apparatus and storage medium for predicting blind zone collision when self-vehicle turns
Technical Field
The invention relates to the field of collision prediction, and in particular to a method, a device, and a storage medium for predicting blind-zone collisions while the own vehicle turns.
Background
Vehicle-mounted BSD (Blind Spot Detection) cameras (hereinafter referred to as cameras) are installed on both sides of the rear of the own vehicle and are used to monitor the blind zones on both rear sides while the vehicle is driving. When a pedestrian, a rider, or the like is detected approaching the vehicle, an early warning is triggered, such as a flashing light or a horn.
Current BSD products (including the above cameras) provide only an early-warning function; they have no emergency-braking function and cannot even predict whether a collision risk exists. In a turning scenario in particular, the probability of collision is higher, which increases the driving risk. In view of this, the present invention is proposed.
Disclosure of Invention
To solve this technical problem, the invention provides a method, a device, and a storage medium for predicting blind-zone collisions while the own vehicle turns, which predict the collision duration for a static target in the blind zone during a turn.
An embodiment of the invention provides a method for predicting blind-zone collisions while the own vehicle turns, comprising the following steps:
acquiring a plurality of images captured by a vehicle-mounted BSD camera while the vehicle is turning;
performing target recognition and tracking on the blind zones in the images, and identifying a pre-collision static target;
drawing, from the current position of the static target, a tangent line to the predicted trajectory of the steering-side rear wheel to obtain the tangent-point position;
and calculating the collision duration from the current position of the own vehicle and the tangent-point position.
An embodiment of the present invention provides an electronic device, including:
a processor and a memory;
the processor is used for executing the steps of the blind area collision prediction method during the turning of the self-vehicle according to any embodiment by calling the program or the instructions stored in the memory.
Embodiments of the present invention provide a computer-readable storage medium storing a program or instructions for causing a computer to execute the steps of the method for predicting a blind area collision when a vehicle turns according to any of the embodiments.
The embodiment of the invention has the following technical effects: a static target is determined by target recognition and tracking; the predicted trajectory of the steering-side rear wheel is calculated and a tangent line is drawn to obtain the tangent point; and the collision duration is calculated from the current position and the tangent-point position. Because the steering-side rear wheel may collide with a static target on the tangent line exactly when it reaches the tangent point, this approach accounts for the actual motion of the own vehicle along its trajectory and accurately predicts both the collision duration and whether a collision risk exists.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a method for predicting blind zone collision during turning of a vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the location of a blind area in an image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a coordinate system and a trajectory of a vehicle according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a tangent point and a tangent line provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The blind area collision prediction method during the turning of the self-vehicle is mainly suitable for predicting the collision condition of the static target in the blind area during the turning of the self-vehicle. The blind area collision prediction method during the turning of the self-vehicle provided by the embodiment of the invention can be executed by electronic equipment.
Fig. 1 is a flowchart of a method for predicting a blind area collision when a vehicle turns. Referring to fig. 1, the method for predicting the blind area collision during the turning of the self-vehicle specifically comprises the following steps:
and S110, acquiring a plurality of images shot by the vehicle-mounted BSD camera in the turning process of the vehicle.
Whether the own vehicle is turning can be determined in various ways, for example from whether a turn signal is on, or by measuring the tire steering angle or the steering-wheel angle. However, the turn signal is prone to misjudgment, and steering-angle sensors are expensive and not fitted to all vehicles, whereas vehicles are typically equipped with an inertial measurement unit such as a six-axis gyroscope. This embodiment therefore uses the yaw rate (yawRate) measured by the six-axis gyroscope to determine whether the vehicle is turning. The method comprises the following three steps.
The first step is as follows: a yaw rate of the host vehicle is obtained, and a turning radius of the host vehicle is calculated from the yaw rate. See the following formula.
r = v / (yawRate · π/180)
where r is the turning radius of the own vehicle in m, v is the vehicle speed in m/s, and yawRate is the yaw rate in °/s (the factor π/180 converts it to rad/s).
The second step is that: the steering-wheel angle θ (i.e., the front-wheel steering angle) is calculated from the turning radius r and the wheelbase L of the own vehicle. See the following formula.
θ = arctan(L / r)
The third step: if the steering-wheel angle exceeds a set value, the vehicle-mounted BSD camera is started to capture a plurality of images.
The set value is the steering-wheel-angle threshold for a turn and may be calibrated to, for example, 30°. If the turn signal is on and the steering-wheel angle (or the yaw rate) exceeds its set value, the vehicle is determined to be turning, i.e., moving from one road onto another where the included angle between the two roads is below a certain value, such as 100°; from that moment on, the images continuously captured by the vehicle-mounted BSD camera are acquired, so they are taken during the turn. If only the steering-wheel angle exceeds the set value, the vehicle may merely be performing a small-amplitude manoeuvre, such as a lane change or obstacle avoidance, rather than a turn.
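The three-step turn detection above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the formulas r = v/ω (with yawRate converted from °/s to rad/s) and θ = arctan(L/r) are reconstructions from the stated units and standard bicycle-model kinematics, and all names are illustrative.

```python
import math

def turning_radius(v_mps: float, yaw_rate_deg_s: float) -> float:
    # r = v / omega, with yawRate converted from deg/s to rad/s
    return v_mps / math.radians(yaw_rate_deg_s)

def steering_wheel_angle(r_m: float, wheelbase_m: float) -> float:
    # Bicycle-model approximation of the front-wheel angle, in degrees
    return math.degrees(math.atan(wheelbase_m / r_m))

def is_turning(v_mps: float, yaw_rate_deg_s: float, wheelbase_m: float,
               threshold_deg: float = 30.0) -> bool:
    # Third step: start BSD image capture when the angle exceeds the set value
    r = abs(turning_radius(v_mps, yaw_rate_deg_s))
    return steering_wheel_angle(r, wheelbase_m) > threshold_deg
```

For example, at 5 m/s a yaw rate of 30°/s gives r of about 9.55 m, i.e. about 22.7° for a 4 m wheelbase, which is below the 30° threshold.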
And S120, performing target recognition and tracking on the blind zones in the images, and identifying the pre-collision static target.
The blind zone is the ground area that the driver cannot see through the rear-view mirrors. Fig. 2 is a schematic diagram of the blind-zone positions in an image according to an embodiment of the present invention. Assume a vehicle-mounted BSD camera is mounted on the body panel on each side of the vehicle, near the tail. The effective blind zone each camera can monitor is a rectangular area 15 meters long and 4 meters wide, whose edge near the vehicle coincides with the lower edge of the vehicle body on that side and whose edge near the tail is flush with the corresponding camera.
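As a small sketch of the blind-zone geometry just described (the rectangle's exact placement relative to the camera is an assumption for illustration; the frame is the own-vehicle coordinate system defined later, origin at the camera's ground point, X lateral, Y longitudinal):

```python
def in_blind_zone(x_m: float, y_m: float,
                  length_m: float = 15.0, width_m: float = 4.0) -> bool:
    # The monitored zone is modelled as a 15 m x 4 m rectangle whose near
    # edge lies on the vehicle-body side (x = 0) and whose rear edge is
    # flush with the camera (y = 0); placement details are assumptions.
    return 0.0 <= x_m <= width_m and 0.0 <= y_m <= length_m
```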
Targets in the blind zone are recognized with image-recognition methods and then tracked, the tracking algorithm including but not limited to mean shift, Kalman filtering, particle filtering, and the like, so as to determine a static target (static in the world coordinate system) that is about to collide (i.e., pre-collide) with the own vehicle. Optionally, whether the target is a pre-collision target is determined first, and then whether it is static or dynamic. The procedure comprises the following three steps.
The first step is as follows: perform target recognition on the blind zones in the images to obtain the recognized targets.
The targets in this embodiment are potential collision objects, such as pedestrians, riders, and other vehicles. Each image is processed with a target-detection model to obtain the position and category of the targets in the blind zone.
The second step is that: construct the trajectory of the target by tracking it across the images.
Optionally, the target is tracked only if it is within both the blind zone and the drivable area. The drivable area comprises all road-surface regions reachable from the own vehicle with no obstacle in between. Since the drivable area is one of the segmentation classes in a semantic-segmentation result, the images can be semantically segmented to obtain it, and it can then be judged whether a target in the blind zone lies within the drivable area.
Since the image is shot by the vehicle-mounted BSD camera, a coordinate system which is matched with the image and moves synchronously with the vehicle is set in the embodiment, and the coordinate system is called as a vehicle coordinate system, so that the information tracked and identified in the image can be directly mapped into the vehicle coordinate system for processing.
Fig. 3 is a schematic diagram of a coordinate system and a trajectory of a vehicle according to an embodiment of the present invention. The origin of the own-vehicle coordinate system is the vertical projection of the optical centre of the vehicle-mounted BSD camera onto the ground, which is also the contact point between the steering-side rear wheel and the ground; the Y axis points toward the vehicle head (the longitudinal direction of the vehicle), and the X axis (the lateral direction) is perpendicular to the Y axis and points toward the steering side. As the vehicle travels, the position of this coordinate system in the world coordinate system changes in real time, but the coordinate system remains stationary relative to the vehicle.
The same target is tracked across several images (for example, 5 consecutive images ending at the current time) to obtain its positions in the image coordinate system, whose origin may be the lower-left corner of the image, with the width and height directions forming the two axes. The target's position is then projected from the image coordinate system into the own-vehicle coordinate system at the corresponding moment, yielding a series of track points. Specifically, an intrinsic matrix, an extrinsic matrix, and a translation matrix are established from the camera's intrinsic parameters and mounting position, and the target's position is projected from the image coordinate system into the world coordinate system using these matrices. The position of the own vehicle in the world coordinate system is obtained by localization, which gives the pose of the own-vehicle coordinate system; the target's position is finally projected from the world coordinate system into the own-vehicle coordinate system using the transformation between the two.
And then, performing curve fitting on the plurality of track points to obtain a track. It should be noted that, if the target is a static target, the trajectory is actually a section of curve; if the target moves synchronously with the self-vehicle, the track is actually a point.
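The point-versus-curve observation above suggests a simple heuristic, sketched below under illustrative assumptions (the patent's actual decision uses the velocity comparison described later; the 0.5 m threshold and all names are invented for the example):

```python
def trajectory_extent(points):
    # Bounding-box diagonal of the fitted track points in the ego frame
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return ((max(xs) - min(xs)) ** 2 + (max(ys) - min(ys)) ** 2) ** 0.5

def looks_static_in_world(points, min_extent_m: float = 0.5) -> bool:
    # A world-static target sweeps a curve in the ego frame, while a target
    # moving in sync with the own vehicle collapses to (nearly) one point
    return trajectory_extent(points) >= min_extent_m
```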
The third step: if the trajectory of the target lies within the predicted trajectory range of the own vehicle, the target is determined to be a pre-collision target; if it is also static, a pre-collision static target has been identified. The own-vehicle trajectory range is the region enclosed between the trajectories swept by the steering-side rear wheel and the non-steering-side front wheel, essentially an annular region. Note that the own-vehicle trajectory range must be projected into the own-vehicle coordinate system at the current moment, so that the determination is made in a single coordinate system.
Referring to fig. 3, the rectangle represents the own vehicle; the lower-right corner of the rectangle is the position of the steering-side rear wheel, and the upper-left corner is the position of the non-steering-side front corner. The three marked regions together lie within the own-vehicle trajectory range: the first region is the area covered by the vehicle body at time t0, the second is the area covered by the vehicle body at time t1, and the third is the predicted partial trajectory range of the vehicle. If the trajectory of the target at time t0 or t1 lies within these three regions of fig. 3, the target is determined to be a pre-collision target.
This embodiment provides a novel own-vehicle coordinate system whose origin corresponds to the optical-centre position of the vehicle-mounted BSD camera, so that targets in the image can conveniently be projected into it for processing; and, by constructing the target's trajectory and checking whether it lies within the above range, it is determined whether the target is a pre-collision target. If the target's trajectory lies neither within the coverage of the own vehicle nor within the range corresponding to the predicted trajectory, the target will not collide with the own vehicle and no further processing is performed.
When determining whether the target is a static target, the vehicle speed components in the X-axis (i.e., in the lateral direction of the vehicle) and Y-axis (i.e., in the longitudinal direction of the vehicle) directions of the vehicle coordinate system are first calculated from the steering wheel angle. See the following equations.
vx=v×sinθ
vy=v×cosθ
where vx is the vehicle-speed component on the X axis and vy is the vehicle-speed component on the Y axis.
Then, the relative-velocity components of the target in the lateral and longitudinal directions of the vehicle are calculated from the target's positions and the image-capture times. Specifically, the difference between the target's positions in the own-vehicle coordinate system in two adjacent images is divided by the interval between their capture times to obtain the relative velocity (the velocity of the target relative to the vehicle), which is then decomposed into its X-axis and Y-axis components. The target is thus tracked in the own-vehicle coordinate system by both trajectory and velocity.
If the differences between the vehicle-speed components and the corresponding relative-speed components in both the lateral and the longitudinal direction of the vehicle are smaller than a set value, the target's speed relative to the own vehicle matches the vehicle speed, and the target is determined to be static. The set value is the speed threshold distinguishing static from dynamic targets and can be chosen according to the actual situation, for example 0.2 m/s. Conversely, if the difference in either the X-axis or the Y-axis direction is equal to or greater than the set value, the target is determined to be dynamic.
In this embodiment, by exploiting the fact that the own-vehicle coordinate system moves with the own vehicle, static and dynamic targets are distinguished simply by comparing the target's speed in that coordinate system with the vehicle speed, without introducing additional parameters or equipment.
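The velocity-based static/dynamic test can be sketched as follows. Assumptions of this sketch: the per-axis comparison uses speed magnitudes (the relative velocity of a world-static target opposes the ego motion), positions are consecutive samples in the own-vehicle frame, and all names are illustrative.

```python
import math

def velocity_components(v_mps: float, theta_deg: float):
    # vx = v*sin(theta) on the X (lateral) axis, vy = v*cos(theta) on Y
    th = math.radians(theta_deg)
    return v_mps * math.sin(th), v_mps * math.cos(th)

def relative_velocity(p0, p1, dt_s: float):
    # Target velocity relative to the vehicle from two consecutive positions
    return (p1[0] - p0[0]) / dt_s, (p1[1] - p0[1]) / dt_s

def is_static(v_mps, theta_deg, p0, p1, dt_s, tol_mps: float = 0.2) -> bool:
    # Static if, on both axes, the relative speed mirrors the ego speed
    vx, vy = velocity_components(v_mps, theta_deg)
    rvx, rvy = relative_velocity(p0, p1, dt_s)
    return abs(vx - abs(rvx)) < tol_mps and abs(vy - abs(rvy)) < tol_mps
```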
And S130, drawing, from the current position of the static target, a tangent line to the predicted trajectory of the steering-side rear wheel to obtain the tangent-point position.
Referring to fig. 3, during a turn the rear-wheel track approximates a circular arc: the turning-inner side of the vehicle body corresponds to a tangent line of the arc, and the steering-side rear wheel corresponds to the tangent point. Hence, as soon as the steering-side rear wheel reaches the tangent point, it may collide with a static target on that tangent line. The invention uses this property to compute the tangent point of the static target on the trajectory.
Fig. 4 is a schematic diagram of a tangent point and a tangent line according to an embodiment of the invention. First, the trajectory of the steering-side rear wheel must be predicted: the turning radius of the own vehicle is obtained, and the trajectory is predicted from the vehicle's current position and the turning radius, i.e., a circular arc of that radius starting from the current steering-side rear-wheel position. The calculation of the turning radius is described in the embodiments above and is not repeated here. Of course, the turning radius can also be determined from the rear-wheel steering angle.
Then, from the current position of the static target, a tangent line is drawn to the trajectory to obtain the tangent-point position. Referring to fig. 4, the own-vehicle coordinate system at time t0 takes the contact point of the right rear wheel with the ground as the origin (point O'), the longitudinal direction of the vehicle as the Y axis, and the lateral direction as the X axis. The centre of the trajectory is point O. The predicted trajectory of the right rear wheel is the edge of the third region toward the circle centre, and the static target (a pedestrian) is located at point A in that region. A tangent line is drawn from point A to the right-rear-wheel trajectory; let the tangent point be O''.
And S140, calculating the collision duration according to the current position of the self vehicle and the position of the tangent point.
The collision duration is the time taken to travel from the current position to the tangent-point position; it can be obtained by dividing the arc distance between the two positions by the vehicle speed.
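In that baseline form, with the central angle w between the two positions on the arc, the computation reduces to arc length over speed. A sketch (w is assumed known here; it is derived geometrically in the steps below):

```python
def ttc_arc_distance(w_rad: float, r_m: float, v_mps: float) -> float:
    # Collision duration = arc distance / vehicle speed; equivalent to
    # w / yawRate, since v = yawRate * r along the arc
    return (w_rad * r_m) / v_mps
```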
Using the yaw rate in the own-vehicle coordinate system, this embodiment provides a novel way to calculate the collision duration. Specifically: calculate the distance r1 from the static target to the circle centre of the trajectory; from the distance r1, the turning radius r, and the current position A of the static target, calculate the central angle w swept by the steering-side rear wheel from its current position to the tangent-point position O''; and calculate the collision duration from the central angle w and the yaw rate of the vehicle.
As is known, ∠AO''O is a right angle because AO'' is tangent to the arc. The length of segment OA is r1, and point A has coordinates (fx, fy) in the own-vehicle coordinate system at time t0. Let B be the projection of point A onto the X axis; then, for the right triangle AOB:
AB = fy, OB = r - fx, and r1 = OA = √((r - fx)² + fy²)
With continued reference to fig. 4, there is the following equation:
w'=arccos(r/r1)
w'+w=arctan(AB/OB)=arctan(fy/(r-fx))
where w is the central angle swept by the steering-side rear wheel from the current position to the tangent-point position, and w' is the central angle from the tangent-point position to the intersection of the trajectory with the centripetal direction through the static target. Then w = arctan(fy/(r - fx)) - arccos(r/r1).
The collision duration TTC is the time taken by the right rear wheel to travel from the current position to the tangent point; see the following formula.
TTC = w / (yawRate · π/180)
with w in radians and yawRate in °/s.
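Putting the geometry together, a sketch of the central-angle computation (circle centre O at (r, 0), target A at (fx, fy) in the own-vehicle frame; TTC = w/ω is a reconstruction of the formula image, consistent with v = ωr, and valid when r1 ≥ r):

```python
import math

def ttc_central_angle(fx: float, fy: float, r_m: float,
                      yaw_rate_deg_s: float) -> float:
    # r1 = |OA|, w' = arccos(r/r1), and w + w' = arctan(fy / (r - fx))
    r1 = math.hypot(r_m - fx, fy)
    w = math.atan2(fy, r_m - fx) - math.acos(r_m / r1)
    # Sweep the central angle w at the yaw rate (converted to rad/s)
    return w / math.radians(yaw_rate_deg_s)
```

For example, with r = 10 m, a target at (2, 6) lies on the arc itself (r1 = 10, so w' = 0), and at a yaw rate of 10°/s the rear wheel needs roughly 3.7 s to sweep the angle arctan(6/8) to reach it.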
In some embodiments, if the vehicle body is long and the static target is close to the tangent point, the body or the head of the vehicle may hit the static target before the steering-side rear wheel reaches the tangent point, so the collision duration needs a correction. Specifically, another collision duration is calculated from the static target's current longitudinal relative speed in the own-vehicle coordinate system and its current longitudinal distance to the vehicle head: the distance component of the static target from the vehicle head in the longitudinal direction, divided by the longitudinal relative-speed component, gives this second duration. Referring to fig. 4, the current longitudinal distance can be obtained by subtracting the length from the vehicle head to the steering-side rear wheel from the length of AB. The smaller of the two durations is taken as the final collision duration.
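The correction for a long vehicle body can be sketched as taking the minimum of the two durations (the names and the sign convention for the closing speed are illustrative assumptions):

```python
def corrected_ttc(ttc_arc_s: float, fy_m: float,
                  head_to_rear_wheel_m: float, v_long_rel_mps: float) -> float:
    # Longitudinal distance from the vehicle head to the target is AB (= fy)
    # minus the head-to-steering-side-rear-wheel length; dividing by the
    # longitudinal closing speed gives the second duration
    ttc_body_s = (fy_m - head_to_rear_wheel_m) / v_long_rel_mps
    return min(ttc_arc_s, ttc_body_s)
```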
In the embodiment shown in fig. 1, a static target is determined by target recognition and tracking; the predicted trajectory of the steering-side rear wheel is calculated and a tangent line is drawn to obtain the tangent point; and the collision duration is calculated from the current position and the tangent-point position. By exploiting the fact that the steering-side rear wheel may collide with a static target on the tangent line when it reaches the tangent point, the method accounts for the actual motion of the own vehicle along the trajectory, accurately predicting the collision duration and whether a collision risk exists.
On the basis of the above embodiments, after the collision duration is calculated from the current position of the vehicle and the tangent-point position, a suitable control strategy, including the braking strategy, is selected according to the collision duration. For example, if the collision duration is equal to or greater than a set value, the driver is warned; if it is smaller than the set value, emergency braking is applied. The set value is the time threshold for initiating braking and can be determined according to the hard-braking/soft-braking driving style.
During braking, images are still acquired in real time by the vehicle-mounted BSD camera, and targets in the blind zone are detected. If the static target disappears, or it is no longer within the range corresponding to the predicted trajectory of the steering-side rear wheel, or the vehicle has braked to a stop, braking is cancelled. For the predicted-trajectory range of the steering-side rear wheel, refer to the description in the embodiments above, which is not repeated here.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 5, the electronic device 400 includes one or more processors 401 and memory 402.
The processor 401 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 400 to perform desired functions.
Memory 402 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 401 to implement the method for predicting a blind spot collision while turning a self-vehicle of any of the embodiments of the present invention described above and/or other desired functions. Various contents such as initial external parameters, threshold values, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 400 may further include an input device 403 and an output device 404, interconnected by a bus system and/or another form of connection mechanism (not shown). The input device 403 may include, for example, a keyboard and a mouse. The output device 404 can output various information to the outside, including warning prompt information and braking force, and may include, for example, a display, speakers, a printer, and a communication network with its connected remote output devices.
Of course, for simplicity, only some of the components of the electronic device 400 relevant to the present invention are shown in fig. 5, omitting components such as buses, input/output interfaces, and the like. In addition, electronic device 400 may include any other suitable components depending on the particular application.
In addition to the above-described methods and apparatuses, an embodiment of the present invention may also be a computer program product including computer program instructions that, when executed by a processor, cause the processor to perform the steps of the method for predicting a blind zone collision when a vehicle turns, provided by any of the embodiments of the present invention.
The computer program product may write program code for carrying out operations of embodiments of the present invention in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, an embodiment of the present invention may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to execute the steps of the method for predicting a blind area collision when an own vehicle turns, provided by any embodiment of the present invention.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in the specification and claims of this application, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other like elements in a process, method, or apparatus that comprises that element.
It is further noted that terms such as "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," and "outer" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience and simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation or be constructed and operated in a particular orientation; thus they should not be construed as limiting the invention. Unless expressly stated or limited otherwise, the terms "mounted," "connected," "coupled," and the like are to be construed broadly: a connection may be fixed, removable, or integral; mechanical or electrical; direct or through an intervening medium; or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present invention.

Claims (9)

1. A blind area collision prediction method during self-vehicle turning is characterized by comprising the following steps:
in the turning process of the self-vehicle, acquiring a plurality of images shot by a vehicle-mounted blind area monitoring BSD camera;
carrying out target identification and tracking on the blind areas in the multiple images, and determining and identifying a pre-collided static target;
from the current position of the static target, making a tangent line to the track predicted by the rear wheel at the steering side of the self-vehicle to obtain the position of a tangent point;
calculating collision duration according to the current position of the self-vehicle and the position of the tangent point;
the identifying and tracking the targets of the blind areas in the images and determining and identifying the pre-collided static target comprise:
carrying out target identification on the blind areas in the multiple images to obtain identified targets;
constructing a trajectory of the target by tracking the target in a plurality of images;
if the track of the target is located in the range of the predicted track of the vehicle and the target is determined to be a static target, determining to identify the static target of the pre-collision;
the predicted vehicle track range is a range surrounded by tracks swept by the rear wheels on the steering side of the vehicle and the front angles on the non-steering side of the vehicle.
2. The method of claim 1, wherein the acquiring of the plurality of images captured by the vehicle-mounted BSD camera during the turning of the vehicle comprises:
acquiring the yaw rate of the self-vehicle, and calculating the turning radius of the self-vehicle according to the yaw rate;
calculating a steering wheel angle according to the turning radius and the wheelbase of the self-vehicle;
and if the steering wheel angle exceeds a set value, starting the vehicle-mounted BSD camera to shoot a plurality of images.
3. The method of claim 1, wherein the determining that the target is a static target comprises:
calculating speed components in the transverse direction and the longitudinal direction of the vehicle according to the steering wheel rotation angle;
calculating relative speed components of the target in the transverse direction and the longitudinal direction of the vehicle according to the position of the target and the image shooting time;
and if the difference between the speed component and the relative speed component in the transverse direction of the self-vehicle is smaller than a set value, and the difference between the speed component and the relative speed component in the longitudinal direction of the self-vehicle is smaller than a set value, determining that the target is a static target.
4. The method of claim 1, wherein the making a tangent line, from the current position of the static target, to the trajectory predicted for the rear wheel on the steering side of the self-vehicle to obtain the position of the tangent point comprises:
acquiring the turning radius of the self-vehicle, and predicting the track of the rear wheel at the steering side according to the current position and the turning radius of the self-vehicle;
and from the current position of the static target, making a tangent line to the track to obtain the position of a tangent point.
5. The method according to claim 4, wherein the calculating of the collision duration according to the current position of the host vehicle and the position of the tangent point comprises:
calculating the distance from the static target to the circle center corresponding to the track;
calculating a central angle rotated by the rear wheel at the steering side from the current position to the position of the tangent point according to the distance, the turning radius and the current position of the static target;
and calculating the collision duration according to the central angle and the yaw rate of the vehicle.
6. The method of claim 5, further comprising, after said calculating a collision duration based on said central angle and a yaw rate of the host vehicle:
calculating another collision duration according to the relative speed of the current front edge of the static target and the self-vehicle and the current longitudinal distance between the current front edge of the static target and the head of the self-vehicle in the longitudinal direction of the self-vehicle;
the smaller of the collision duration and the other collision duration is taken as a final collision duration.
7. The method according to any one of claims 1 to 6, further comprising, after the calculating the collision duration according to the current position of the own vehicle and the position of the tangent point:
if the collision duration is less than a set value, emergency braking is carried out;
and in the braking process, cancelling the braking if the static target disappears, or the static target is not located within the range corresponding to the trajectory predicted for the rear wheel on the steering side, or the self-vehicle has braked to a stop.
8. An electronic device, characterized in that the electronic device comprises:
a processor and a memory;
the processor is configured to execute the steps of the blind area collision prediction method when the host vehicle turns a curve according to any one of claims 1 to 7 by calling a program or instructions stored in the memory.
9. A computer-readable storage medium characterized in that the computer-readable storage medium stores a program or instructions for causing a computer to execute the steps of the blind area collision prediction method when the own vehicle turns, according to any one of claims 1 to 7.
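The kinematic relations referenced by claims 2 and 5 can be sketched under a bicycle-model assumption. The vehicle-speed input and the steering ratio of 16 are assumptions not stated in the claims, and all names are illustrative.

```python
import math

def turning_radius(speed_mps, yaw_rate_rps):
    """Claim 2: turning radius from the yaw rate; R = v / omega, with the
    vehicle speed assumed as the second input."""
    return speed_mps / yaw_rate_rps

def steering_wheel_angle_deg(radius_m, wheelbase_m, steering_ratio=16.0):
    """Claim 2: bicycle-model front-wheel angle delta = atan(L / R),
    scaled by an assumed steering ratio to a steering wheel angle."""
    return math.degrees(math.atan(wheelbase_m / radius_m)) * steering_ratio

def collision_duration_s(central_angle_rad, yaw_rate_rps):
    """Claim 5: time for the steering-side rear wheel to sweep the central
    angle to the tangent point at the current yaw rate."""
    return central_angle_rad / yaw_rate_rps
```

For example, at 10 m/s with a yaw rate of 0.5 rad/s the radius is 20 m, and a central angle of π/4 rad is swept in (π/4) / 0.5 ≈ 1.57 s.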
CN202111077078.XA 2021-09-15 2021-09-15 Method, apparatus and storage medium for predicting blind zone collision when self-vehicle turns Active CN113511198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111077078.XA CN113511198B (en) 2021-09-15 2021-09-15 Method, apparatus and storage medium for predicting blind zone collision when self-vehicle turns


Publications (2)

Publication Number Publication Date
CN113511198A CN113511198A (en) 2021-10-19
CN113511198B true CN113511198B (en) 2021-12-31

Family

ID=78063411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111077078.XA Active CN113511198B (en) 2021-09-15 2021-09-15 Method, apparatus and storage medium for predicting blind zone collision when self-vehicle turns

Country Status (1)

Country Link
CN (1) CN113511198B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115056802B (en) * 2022-08-17 2023-01-17 北京主线科技有限公司 Automatic driving method, device, equipment and storage medium for vehicle
CN115953328B (en) * 2023-03-13 2023-05-30 天津所托瑞安汽车科技有限公司 Target correction method and system and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428662A (en) * 2019-08-23 2019-11-08 玖安智能科技(杭州)有限公司 Right side intelligent collision pre-warning management system and working method based on millimeter wave
JP2020097275A (en) * 2018-12-17 2020-06-25 本田技研工業株式会社 Travel trajectory determining device and automatic driving device
CN111361557A (en) * 2020-02-13 2020-07-03 江苏大学 Early warning method for collision accident during turning of heavy truck
CN112193244A (en) * 2020-09-30 2021-01-08 浙江大学 Automatic driving vehicle motion planning method based on linear constraint

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9505413B2 (en) * 2015-03-20 2016-11-29 Harman International Industries, Incorporated Systems and methods for prioritized driver alerts



Similar Documents

Publication Publication Date Title
US10832578B2 (en) System and method for collision prevention
US10144474B2 (en) Collision detection
CN106043297B (en) Collision avoidance based on front wheel off tracking during reverse operation
KR101628503B1 (en) Driver assistance apparatus and method for operating thereof
US9583003B2 (en) Vehicle danger notification control apparatus
CN113511197A (en) Method, apparatus and storage medium for predicting blind zone collision when self-vehicle turns
CN113511198B (en) Method, apparatus and storage medium for predicting blind zone collision when self-vehicle turns
CN110073429B (en) Method for monitoring the surroundings of a vehicle combination and monitoring system
CN113362607B (en) Steering state-based blind area early warning method, device, equipment and medium
JP2010083314A (en) Driving support device for vehicle
KR20110132437A (en) Method for automatically detecting a driving maneuver of a motor vehicle and a driver assistance system comprising said method
CN108734081B (en) Vehicle Lane Direction Detection
US11142193B2 (en) Vehicle and method for performing inter-vehicle distance control
CN114523963B (en) System and method for predicting road collisions with host vehicles
CN113370992B (en) Vehicle line pressing reminding method and device and storage medium
CN107139921B (en) A kind of steering collision-proof method and system for vehicle
CN116872921A (en) Method and system for avoiding risks of vehicle, vehicle and storage medium
CN113335311B (en) Vehicle collision detection method and device, vehicle and storage medium
CN112389392B (en) Vehicle active braking method, device, equipment and storage medium
US20220234581A1 (en) Vehicle control method, vehicle control device, and vehicle control system including same
US11195417B2 (en) Vehicle and method for predicating collision
CN110371025A (en) Method, system, equipment and the storage medium of the preposition collision detection for operating condition of overtaking other vehicles
CN113022593B (en) Obstacle processing method and device and traveling equipment
CN113077656B (en) Parking road section anti-collision early warning method based on vehicle-to-vehicle RFID communication
US11667295B2 (en) Apparatus and method for recognizing object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant