CN113978456A - Reversing image method with target display - Google Patents
- Publication number
- CN113978456A (application CN202111211609.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- coordinates
- image
- steering
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0956—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146—Display means
Abstract
The invention belongs to the technical field of the Internet of Vehicles and relates to a reversing image method with target detection. The method comprises the steps of: obtaining the steering wheel angle and steering direction and calculating the front wheel angle; calculating the reversing travel area from the front wheel angle; calculating the target position from the detected pixel coordinates of the target; determining the relation between the target position and the travel area; calculating the rear wheel trajectory lines; and displaying the reversing assistance image. A traditional reversing image displays only the reversing trajectory, and the driver cannot reliably judge from estimation alone whether a target or obstacle behind the vehicle poses a collision risk. The method automatically detects targets in the rear-view image and, by computing the distance between each target and the travel area in real time, warns the user of potential collision hazards. The invention improves the safety-assistance performance of the reversing image and lets the user observe potential risks during reversing more intuitively.
Description
Technical Field
The invention belongs to the technical field of the Internet of Vehicles and relates to a reversing image method with target display.
Background
The reversing image, also called a parking assist system, is widely used for reversing and driving-safety assistance in vehicles of all sizes. When reverse gear is engaged, the system automatically switches to the camera at the rear of the vehicle and shows the scene behind the vehicle on the central control display. More advanced reversing image systems overlay guide lines on the video image; these guide lines update in real time as the steering wheel turns, tracing the reversing trajectory accurately.
As technology advances, more and more vehicles are equipped with automatic parking controllers. However, the efficiency of automatic parking and the range of parking scenarios it covers are still limited, so the reversing image remains practical.
In typical vehicle network layouts, an automatic parking controller integrates the automatic parking, 360-degree panoramic view, and reversing image functions. Because the vehicle cameras use ultra-wide-angle or fisheye lenses with severe distortion, a user can see pedestrians or obstacles in the reversing image but cannot easily judge their positions.
Disclosure of Invention
The invention aims to solve the problem that reversing images in the prior art cannot automatically detect whether a target poses a collision hazard, and provides a reversing image method with target display.
To improve the safety-assistance capability of the reversing image, the invention implements a reversing image method with target display. The method not only displays the reversing trajectory but also detects specific targets using a deep-learning-based target detection technique and indicates their position information. Detected target classes include pedestrians, riders, trolleys, buckets, garbage cans, strollers, roadblocks, parking-space ground locks, cats, and dogs.
To solve the above technical problems, the invention adopts the following technical solution, described below with reference to the accompanying drawings:
it is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
A reversing image method with target display comprises the following steps:
Step one: acquire the steering wheel angle and steering direction, and calculate the front wheel angle;
Step two: calculate the reversing travel area from the front wheel angle;
Step three: detect the target's pixel coordinates in the rear-view image and calculate the target position;
Step four: determine the relation between the target position and the travel area;
Step five: calculate the rear wheel trajectory lines;
Step six: fuse the rear-view image, target frame information, and trajectory lines into one display.
In step one, the steering wheel angle and steering direction are obtained and the front wheel angle is calculated, as follows:
Acquire the steering wheel angle and steering direction from the vehicle network bus, and divide the steering wheel angle by the steering ratio to obtain the front wheel angle γ. Denote the steering direction as D, with −1 for left steering and +1 for right steering.
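As a minimal sketch of step one (not taken from the patent), the bus signals can be converted as follows; the steering-ratio value and the convention of applying D as the sign of γ are assumptions made here for illustration:

```python
# Hedged sketch of step one: front wheel angle from vehicle-bus signals.
# STEERING_RATIO is a hypothetical per-vehicle constant; the text only
# says "divide the steering wheel angle by the steering ratio".
STEERING_RATIO = 15.0

def front_wheel_angle(steering_wheel_deg: float, direction: int) -> float:
    """Return the signed front wheel angle gamma in degrees.

    direction is D from the text: -1 for left steering, +1 for right.
    Using D as the sign of gamma is an assumption of this sketch.
    """
    gamma = steering_wheel_deg / STEERING_RATIO
    return direction * gamma
```

With a 15:1 ratio, a 150-degree steering wheel turn to the left gives a front wheel angle of −10 degrees.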
In step two, the reversing travel area is calculated from the front wheel angle, as follows:
When the front wheel angle γ is 0, the reversing trajectory of the vehicle is a straight line; when γ is not 0, the reversing trajectory is a circle. The radius of the rear-axle center circle is R = L_d / tan γ, where L_d is the wheelbase. Taking the initial coordinates of the rear-axle center as (0, 0), the center O of the rear-axle trajectory circle is at (D × R, 0), and every point on the vehicle body travels along a circle centered at O.
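This turning geometry follows the standard bicycle model; a minimal sketch under that reading:

```python
import math

def rear_axle_turn(gamma_rad: float, wheelbase: float, direction: int):
    """Turning geometry of step two.

    Returns (R, O): R = L_d / tan(gamma) is the rear-axle circle radius
    and O = (D * R, 0) is the circle center, with the rear-axle midpoint
    at (0, 0). For gamma == 0 the reversing track is a straight line,
    signalled here by an infinite radius and no center.
    """
    if gamma_rad == 0:
        return math.inf, None
    R = wheelbase / math.tan(gamma_rad)
    return R, (direction * R, 0.0)
```

For example, a 2.8 m wheelbase and tan γ = 0.5 give a 5.6 m rear-axle turning radius.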
Further, the travel area is calculated as follows:
Sample the rear-axle trajectory at intervals of Δ; the sampling point coordinates are (x_0, y_0), (x_1, y_1), …, (x_n, y_n), where (x_0, y_0) is the initial coordinate (0, 0). The sampled trajectory is calculated as follows:
The sampling point coordinates of arc A are (x′_0, y′_0), (x′_1, y′_1), …, (x′_h, y′_h), calculated as follows:
The sampling point coordinates of arc B are (x″_0, y″_0), (x″_1, y″_1), …, (x″_m, y″_m), calculated as follows:
W is the vehicle width, d_1 is the distance from the vehicle front to the rear axle, and d_2 is the distance from the vehicle rear to the rear axle;
The area between arc A and arc B is the area the vehicle is about to travel through; any target in this area poses a collision risk.
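The exact formulas for the arc samples are not reproduced in the text above. As an illustrative sketch only, the following samples the rear-axle arc every Δ of arc length and offsets it radially by ± W/2 to approximate the bounding arcs A and B; the patent's true arcs also account for d_1 and d_2, which this simplification omits:

```python
import math

def sample_travel_region(R: float, W: float, delta: float, n: int):
    """Approximate sampling of the travel region of step two.

    Samples the rear-axle arc of radius R about O = (R, 0), starting at
    (0, 0), every delta of arc length, and offsets it by +/- W/2 to get
    an outer arc (A) and inner arc (B). This is a simplification, not
    the patent's elided formulas.
    """
    pts_center, pts_A, pts_B = [], [], []
    for k in range(n + 1):
        phi = k * delta / R          # arc-length step delta -> angle step
        # rear-axle arc: starts at (0, 0), circle center at (R, 0)
        pts_center.append((R - R * math.cos(phi), R * math.sin(phi)))
        # radial offsets about the same center give the bounding arcs
        for r_off, out in ((R + W / 2, pts_A), (R - W / 2, pts_B)):
            out.append((R - r_off * math.cos(phi), r_off * math.sin(phi)))
    return pts_center, pts_A, pts_B
```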
In step three, the target's pixel coordinates are detected in the rear-view image and the target position is calculated, as follows:
Targets in the rear-view image are detected by a deep-learning-based target recognition algorithm. The detected target position information consists of pixel coordinates in the image, from which the target's position coordinates in the physical world (real-world position coordinates) are calculated.
Let the pixel coordinates be (X, Y). Because of the fisheye camera's distortion, the distortion-corrected coordinates (X_c, Y_c) must be calculated from the intrinsic matrix K and the distortion coefficients (k_0, k_1, k_2, k_3).
(u, v) are the normalized image plane coordinates, θ is the incidence angle, and θ_d is the incidence angle after distortion;
The corrected coordinates (X_c, Y_c) are mapped to the ground coordinates (X_w, Y_w) through the mapping matrix A; the calculation process is as follows:
(X_t, Y_t, Z_t) are the mapped coordinates.
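A sketch of this pixel-to-ground mapping under the equidistant fisheye model implied by the coefficients (k_0, …, k_3). The intrinsic matrix K, the coefficients, and the mapping matrix A are calibration inputs assumed to be given; inverting the distortion polynomial by Newton iteration is an implementation choice of this sketch, not something the text specifies:

```python
import numpy as np

def pixel_to_ground(px, K, dist, A, iters=20):
    """Map a detected pixel (X, Y) to ground coordinates (Xw, Yw).

    K: 3x3 intrinsic matrix; dist = (k0, k1, k2, k3): fisheye distortion
    coefficients; A: 3x3 ground mapping (homography) matrix, treated here
    as a given calibration result.
    """
    k0, k1, k2, k3 = dist
    # pixel -> distorted normalized image coordinates
    xd, yd, _ = np.linalg.inv(K) @ np.array([px[0], px[1], 1.0])
    theta_d = np.hypot(xd, yd)
    # invert theta_d = theta + k0*theta^3 + ... + k3*theta^9 (Newton)
    theta = theta_d
    for _ in range(iters):
        f = theta + k0*theta**3 + k1*theta**5 + k2*theta**7 + k3*theta**9 - theta_d
        fp = 1 + 3*k0*theta**2 + 5*k1*theta**4 + 7*k2*theta**6 + 9*k3*theta**8
        theta -= f / fp
    # undistorted (pinhole) normalized image plane coordinates (u, v)
    scale = np.tan(theta) / theta_d if theta_d > 1e-12 else 1.0
    u, v = xd * scale, yd * scale
    # map the corrected point to the ground plane through A
    Xt, Yt, Zt = A @ np.array([u, v, 1.0])
    return Xt / Zt, Yt / Zt
```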
In step four, the relation between the target position and the travel area is determined, as follows:
The distance between the target's physical coordinates (X_w, Y_w) and the travel area along the x axis is used as the judgment basis, and target information is displayed according to the following rules:
(1) when the target is within 1 meter of the vehicle, mark it with a red frame;
(2) when the target is more than 1 meter from the vehicle and inside the travel track, mark it with a yellow frame;
(3) when the target is more than 1 meter from the vehicle and outside the travel track, mark it with a green frame.
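These three display rules reduce to a small selection function; the sketch below assumes the distance and an in-track flag are already computed from steps two and three:

```python
def frame_color(distance_m: float, in_track: bool) -> str:
    """Frame colour per step four's rules (distance in metres)."""
    if distance_m <= 1.0:
        return "red"        # within 1 m of the vehicle
    return "yellow" if in_track else "green"
```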
In step five, the rear wheel trajectory lines are calculated, as follows:
As above, sample the left and right rear wheel trajectories at intervals of Δ;
The left rear wheel trajectory sampling points (X′_0, Y′_0), (X′_1, Y′_1), …, (X′_i, Y′_i) are calculated as follows:
The right rear wheel trajectory sampling points (X″_0, Y″_0), (X″_1, Y″_1), …, (X″_j, Y″_j) are calculated as follows:
Take any point (x, y) on a trajectory line and map it to the pixel coordinates (x_P, y_P) of the rear-view image; the calculation process is as follows:
Calculate the coordinates (x_c, z_c, y_c) in the camera coordinate system:
From the camera coordinates (x_c, z_c, y_c), calculate the normalized image plane coordinates (u, v);
Apply the distortion operation to the normalized image plane coordinates (u, v) to obtain the coordinates (x_P, y_P) mapped into the rear-view image.
θ_d = θ + k_0·θ³ + k_1·θ⁵ + k_2·θ⁷ + k_3·θ⁹
θ is the incidence angle, θ_d is the distorted incidence angle, and K is the intrinsic matrix.
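The projection of a camera-frame trajectory point into the fisheye image, using the distortion polynomial θ_d above, can be sketched as follows. The text orders the camera coordinates (x_c, z_c, y_c); this sketch takes the point as (x_c, y_c, z_c) with z_c as the depth axis, which is an assumption, and it omits the extrinsic mapping from ground (x, y) to camera coordinates, which the text does not reproduce:

```python
import numpy as np

def world_to_fisheye_pixel(pc, K, dist):
    """Project a camera-frame point onto the fisheye image (step five).

    pc: (xc, yc, zc) point in camera coordinates, zc being depth (an
    assumed axis convention); K: 3x3 intrinsic matrix; dist: (k0..k3).
    """
    k0, k1, k2, k3 = dist
    xc, yc, zc = pc
    u, v = xc / zc, yc / zc                  # normalized image plane coords
    r = np.hypot(u, v)
    theta = np.arctan(r)                     # incidence angle
    theta_d = theta + k0*theta**3 + k1*theta**5 + k2*theta**7 + k3*theta**9
    scale = theta_d / r if r > 1e-12 else 1.0
    xdist, ydist = u * scale, v * scale      # distorted normalized coords
    xp, yp, _ = K @ np.array([xdist, ydist, 1.0])
    return xp, yp
```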
In step six, the rear-view image, target frame information, and trajectory lines are fused and displayed, as follows:
The rear-view image, target frame information, and trajectory lines are fused into one image and displayed on the vehicle's central control screen.
Compared with the prior art, the invention has the following beneficial effects:
A traditional reversing image displays only the reversing trajectory, and the driver cannot reliably judge from estimation alone whether a target or obstacle behind the vehicle poses a collision risk. The method automatically detects targets in the rear-view image and, by computing the distance between each target and the travel area in real time, warns the user of potential collision hazards. The invention improves the safety-assistance performance of the reversing image and lets the user observe potential risks during reversing more intuitively.
Drawings
The invention is further described with reference to the accompanying drawings in which:
FIG. 1 is a flowchart of the reversing image method with target display according to the invention;
FIG. 2 is a schematic diagram of the reversing trajectory.
Detailed Description
In order to make the implementation objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described in more detail below with reference to the accompanying drawings in the embodiments of the present invention. In the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The described embodiments are only some, but not all embodiments of the invention. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present invention and for simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be taken as limiting the scope of the present invention.
The invention is described in detail below with reference to the accompanying drawings:
Referring to FIG. 1 and FIG. 2, a reversing image method with target display comprises the following steps:
Step one: acquire the steering wheel angle and steering direction, and calculate the front wheel angle;
Acquire the steering wheel angle and steering direction from the vehicle network bus, and divide the steering wheel angle by the steering ratio to obtain the front wheel angle γ. Denote the steering direction as D, with −1 for left steering and +1 for right steering.
Step two: calculate the reversing travel area from the front wheel angle;
When the front wheel angle γ is 0, the reversing trajectory of the vehicle is a straight line; when γ is not 0, the reversing trajectory is a circle. The radius of the rear-axle center circle is R = L_d / tan γ, where L_d is the wheelbase. Taking the initial coordinates of the rear-axle center as (0, 0), the center O of the rear-axle trajectory circle is at (D × R, 0), and every point on the vehicle body travels along a circle centered at O.
The travel area is calculated as follows:
Sample the rear-axle trajectory at intervals of Δ; the sampling point coordinates are (x_0, y_0), (x_1, y_1), …, (x_n, y_n), where (x_0, y_0) is the initial coordinate (0, 0). The sampled trajectory is calculated as follows:
The sampling point coordinates of arc A are (x′_0, y′_0), (x′_1, y′_1), …, (x′_h, y′_h), calculated as follows:
The sampling point coordinates of arc B are (x″_0, y″_0), (x″_1, y″_1), …, (x″_m, y″_m), calculated as follows:
W is the vehicle width, d_1 is the distance from the vehicle front to the rear axle, and d_2 is the distance from the vehicle rear to the rear axle.
The area between arc A and arc B is the area the vehicle is about to travel through; any target in this area poses a collision risk.
Step three: detect the target's pixel coordinates and calculate the target position;
Targets in the rear-view image are detected by a deep-learning-based target recognition algorithm. The detected target position information consists of pixel coordinates in the image, from which the target's position coordinates in the physical world are calculated.
Let the pixel coordinates be (X, Y). Because of the fisheye camera's distortion, the distortion-corrected coordinates (X_c, Y_c) must be calculated from the intrinsic matrix K and the distortion coefficients (k_0, k_1, k_2, k_3).
The corrected coordinates (X_c, Y_c) are mapped to the physical coordinates (X_w, Y_w) through the mapping matrix A; the calculation process is as follows:
Step four: determine the relation between the target position and the travel area;
The distance between the target's physical coordinates (X_w, Y_w) and the travel area along the x axis is used as the judgment basis, and the user is prompted according to the following rules:
(1) when the target is within 1 meter of the vehicle, mark it with a red frame;
(2) when the target is more than 1 meter from the vehicle and inside the travel track, mark it with a yellow frame;
(3) when the target is more than 1 meter from the vehicle and outside the travel track, mark it with a green frame.
Step five: calculate the rear wheel trajectory lines;
As above, sample the left and right rear wheel trajectories at intervals of Δ.
The left rear wheel trajectory sampling points (X′_0, Y′_0), (X′_1, Y′_1), …, (X′_i, Y′_i) are calculated as follows:
The right rear wheel trajectory sampling points (X″_0, Y″_0), (X″_1, Y″_1), …, (X″_j, Y″_j) are calculated as follows:
The two rear wheel trajectories are in physical-world coordinates. Take any point (x, y) on a trajectory line and map it to the pixel coordinates (x_P, y_P) of the rear-view image; the calculation process is as follows:
Map the physical coordinates (x, y) to the camera coordinates (x_c, z_c, y_c):
From the camera coordinates (x_c, z_c, y_c), calculate the imaging plane coordinates (u, v);
Apply the distortion operation to the imaging plane coordinates (u, v) to obtain the coordinates (x_P, y_P) mapped into the rear-view image.
Step six: display the reversing assistance image.
The image fusing the rear-view camera image, target information, and rear wheel trajectory lines is displayed on the vehicle's central control screen.
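Tying the steps together, a self-contained end-to-end sketch of one frame follows (rendering is omitted). The in-track test is a deliberate simplification, since the exact arc formulas are elided in the text, and the Euclidean distance stands in for the x-axis distance; both substitutions are assumptions of this sketch:

```python
import math

def reversing_frame(steering_wheel_deg, steering_ratio, direction,
                    wheelbase, targets):
    """One frame of the pipeline: steps one, two, and four combined.

    targets: list of (x_w, y_w) ground positions already produced by
    step three. Returns the turn radius and one colour per target.
    """
    gamma = math.radians(steering_wheel_deg / steering_ratio)      # step one
    R = math.inf if gamma == 0 else wheelbase / math.tan(gamma)    # step two
    center = (direction * R, 0.0)
    colours = []
    for xw, yw in targets:                                         # step four
        if math.hypot(xw, yw) <= 1.0:
            colours.append("red")
            continue
        if math.isinf(R):
            # straight reversing: lateral band of assumed 1 m half-width
            in_track = abs(xw) <= 1.0
        else:
            # within ~1 m of the rear-axle arc (a simplification)
            r = math.hypot(xw - center[0], yw - center[1])
            in_track = abs(r - abs(R)) <= 1.0
        colours.append("yellow" if in_track else "green")
    return R, colours
```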
The above description merely illustrates the invention; all modifications, equivalents, and improvements within the spirit and scope of the invention as defined by the appended claims are intended to be covered. Matters not described in detail in this specification are well within the skill of those in the art.
Claims (8)
1. A reversing image method with target display, characterized by comprising the following steps:
Step one: acquire the steering wheel angle and steering direction, and calculate the front wheel angle;
Step two: calculate the reversing travel area from the front wheel angle;
Step three: detect the target's pixel coordinates in the rear-view image and calculate the target position;
Step four: determine the relation between the target position and the travel area;
Step five: calculate the rear wheel trajectory lines;
Step six: fuse the rear-view image, target frame information, and trajectory lines into one display.
2. The reversing image method with target display according to claim 1, characterized in that:
in step one, the steering wheel angle and steering direction are obtained and the front wheel angle is calculated, as follows:
acquire the steering wheel angle and steering direction from the vehicle network bus, and divide the steering wheel angle by the steering ratio to obtain the front wheel angle γ; denote the steering direction as D, with −1 for left steering and +1 for right steering.
3. The reversing image method with target display according to claim 2, characterized in that:
in step two, the reversing travel area is calculated from the front wheel angle, as follows:
when the front wheel angle γ is 0, the reversing trajectory of the vehicle is a straight line; when γ is not 0, the reversing trajectory is a circle; the radius of the rear-axle center circle is R = L_d / tan γ, where L_d is the wheelbase; taking the initial coordinates of the rear-axle center as (0, 0), the center O of the rear-axle trajectory circle is at (D × R, 0), and every point on the vehicle body travels along a circle centered at O.
4. The reversing image method with target display according to claim 3, characterized in that:
the travel area is calculated as follows:
sample the rear-axle trajectory at intervals of Δ; the sampling point coordinates are (x_0, y_0), (x_1, y_1), …, (x_n, y_n), where (x_0, y_0) is the initial coordinate (0, 0); the sampled trajectory is calculated as follows:
the sampling point coordinates of arc A are (x′_0, y′_0), (x′_1, y′_1), …, (x′_h, y′_h), calculated as follows:
the sampling point coordinates of arc B are (x″_0, y″_0), (x″_1, y″_1), …, (x″_m, y″_m), calculated as follows:
W is the vehicle width, d_1 is the distance from the vehicle front to the rear axle, and d_2 is the distance from the vehicle rear to the rear axle;
the area between arc A and arc B is the area the vehicle is about to travel through; any target in this area poses a collision risk.
5. The reversing image method with target display according to claim 4, characterized in that:
in step three, the target's pixel coordinates are detected in the rear-view image and the target position is calculated, as follows:
targets in the rear-view image are detected by a deep-learning-based target recognition algorithm; the detected target position information consists of pixel coordinates in the image, from which the target's position coordinates in the physical world are calculated;
let the pixel coordinates be (X, Y); because of the fisheye camera's distortion, the distortion-corrected coordinates (X_c, Y_c) must be calculated from the intrinsic matrix K and the distortion coefficients (k_0, k_1, k_2, k_3);
(u, v) are the normalized image plane coordinates, θ is the incidence angle, and θ_d is the incidence angle after distortion;
the corrected coordinates (X_c, Y_c) are mapped to the ground coordinates (X_w, Y_w) through the mapping matrix A; the calculation process is as follows:
(X_t, Y_t, Z_t) are the mapped coordinates.
6. The reversing image method with target display according to claim 5, characterized in that:
in step four, the relation between the target position and the travel area is determined, as follows:
the distance between the target's physical coordinates (X_w, Y_w) and the travel area along the x axis is used as the judgment basis, and target information is displayed according to the following rules:
(1) when the target is within 1 meter of the vehicle, mark it with a red frame;
(2) when the target is more than 1 meter from the vehicle and inside the travel track, mark it with a yellow frame;
(3) when the target is more than 1 meter from the vehicle and outside the travel track, mark it with a green frame.
7. The reversing image method with target display according to claim 6, characterized in that:
in step five, the rear wheel trajectory lines are calculated, as follows:
as above, sample the left and right rear wheel trajectories at intervals of Δ;
the left rear wheel trajectory sampling points (X′_0, Y′_0), (X′_1, Y′_1), …, (X′_i, Y′_i) are calculated as follows:
the right rear wheel trajectory sampling points (X″_0, Y″_0), (X″_1, Y″_1), …, (X″_j, Y″_j) are calculated as follows:
take any point (x, y) on a trajectory line and map it to the pixel coordinates (x_P, y_P) of the rear-view image; the calculation process is as follows:
calculate the coordinates (x_c, z_c, y_c) in the camera coordinate system:
from the camera coordinates (x_c, z_c, y_c), calculate the normalized image plane coordinates (u, v);
apply the distortion operation to the normalized image plane coordinates (u, v) to obtain the coordinates (x_P, y_P) mapped into the rear-view image;
θ_d = θ + k_0·θ³ + k_1·θ⁵ + k_2·θ⁷ + k_3·θ⁹
θ is the incidence angle, θ_d is the distorted incidence angle, and K is the intrinsic matrix.
8. The reversing image method with target display according to claim 7, characterized in that:
in step six, the rear-view image, target frame information, and trajectory lines are fused and displayed, as follows: the rear-view image, target frame information, and trajectory lines are fused into one image and displayed on the vehicle's central control screen.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111211609.XA CN113978456B (en) | 2021-10-18 | 2021-10-18 | Reversing image display method with target display |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113978456A (en) | 2022-01-28
CN113978456B CN113978456B (en) | 2024-04-02 |
Family
ID=79739192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111211609.XA Active CN113978456B (en) | 2021-10-18 | 2021-10-18 | Reversing image display method with target display |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113978456B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100861543B1 (en) * | 2007-04-06 | 2008-10-02 | 주식회사 만도 | Parking assistant apparatus and method for avoiding collision with obstacle |
CN101582164A (en) * | 2009-06-24 | 2009-11-18 | 北京锦恒佳晖汽车电子系统有限公司 | Image processing method of parking assist system |
DE102010028911A1 (en) * | 2010-05-12 | 2011-11-17 | Robert Bosch Gmbh | Method for monitoring movement of vehicle i.e. fork lift lorry, involves detecting collision hazard of vehicle by obstruction placed in region of curved travel path, or leaving curved travel path by vehicle |
CN106740866A (en) * | 2016-11-28 | 2017-05-31 | 南京安驾信息科技有限公司 | The computational methods and device of a kind of steering wheel angle |
CN112339762A (en) * | 2020-10-29 | 2021-02-09 | 三一专用汽车有限责任公司 | Reversing image safety early warning method, mixer truck and computer readable storage medium |
CN112598734A (en) * | 2020-12-17 | 2021-04-02 | 的卢技术有限公司 | Image-based method for accurately positioning pedestrians around vehicle body |
Also Published As
Publication number | Publication date |
---|---|
CN113978456B (en) | 2024-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110517521B (en) | Lane departure early warning method based on road-vehicle fusion perception | |
US11688183B2 (en) | System and method of determining a curve | |
CN105539430B (en) | A kind of people's car mutual intelligent parking method based on handheld terminal | |
CN101894271B (en) | Visual computing and prewarning method of deviation angle and distance of automobile from lane line | |
CN107507131B (en) | 360-degree panoramic reverse image generation method based on single camera | |
CN105678787A (en) | Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera | |
CN106054174A (en) | Fusion method for cross traffic application using radars and camera | |
CN110203210A (en) | A kind of lane departure warning method, terminal device and storage medium | |
CN107133985A (en) | A kind of vehicle-mounted vidicon automatic calibration method for the point that disappeared based on lane line | |
CN109624851B (en) | Augmented reality-based driving assistance method and system and readable storage medium | |
CN103204104B (en) | Monitored control system and method are driven in a kind of full visual angle of vehicle | |
CN108909625B (en) | Vehicle bottom ground display method based on panoramic all-round viewing system | |
CN111547045B (en) | Automatic parking method and device for vertical parking spaces | |
US20140160287A1 (en) | Guide method of a reverse guideline system | |
CN113320474A (en) | Automatic parking method and device based on panoramic image and human-computer interaction | |
Krasner et al. | Automatic parking identification and vehicle guidance with road awareness | |
CN113518995A (en) | Method for training and using neural networks to detect self-component position | |
WO2022062000A1 (en) | Driver assistance method based on transparent a-pillar | |
CN116101325A (en) | Narrow road traffic processing method and narrow road traffic processing device | |
CN109814115B (en) | Angle identification and correction method for vertical parking | |
CN113978456A (en) | Reversing image method with target display | |
CN111547048B (en) | Automatic parking method and device for inclined parking spaces | |
CN113353071B (en) | Narrow area intersection vehicle safety auxiliary method and system based on deep learning | |
CN115657037A (en) | Detection method and system for judging whether vehicles can pass through barrier road section | |
CN111547046B (en) | Parallel parking space pre-occupation type automatic parking method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |