CN112270311A - Near-target rapid detection method and system based on vehicle-mounted panoramic inverse projection

Near-target rapid detection method and system based on vehicle-mounted panoramic inverse projection

Info

Publication number
CN112270311A
Authority
CN
China
Prior art keywords
vehicle
picture
determining
obstacle
image
Prior art date
Legal status
Granted
Application number
CN202011344562.XA
Other languages
Chinese (zh)
Other versions
CN112270311B (en)
Inventor
胡钊政
伍锦祥
肖汉彪
彭超
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology WUT
Priority to CN202011344562.XA
Publication of CN112270311A
Application granted
Publication of CN112270311B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06T 5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20068 Projection on vertical or horizontal image axis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/247 Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a near-target rapid detection method and system based on vehicle-mounted panoramic inverse projection. The method comprises: acquiring pictures shot by a plurality of fisheye cameras; determining corrected pictures by distortion-correction preprocessing of the shot pictures; determining inverse projection images unified under the same coordinate system by inverse projection of the corrected pictures; determining a binarized difference image by image difference processing of the inverse projection images; and determining a region of interest in the difference image according to the positioning information of the obstacle, traversing the obstacle pixel points in the region of interest, and determining the distance from the obstacle to the vehicle. The invention requires only a simple image transformation algorithm, without complex algorithms such as explicit target detection or three-dimensional reconstruction, and thereby realizes rapid near-target obstacle detection and positioning while ensuring obstacle detection accuracy and saving cost.

Description

Near-target rapid detection method and system based on vehicle-mounted panoramic inverse projection
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a method and a system for quickly detecting a near target based on vehicle-mounted panoramic inverse projection.
Background
In recent years, with the rapid development of vehicle intelligence technologies, many driving assistance and automatic driving technologies have been widely studied, in particular the Parking Assistance System for intelligent vehicles; intelligent parking is also an indispensable component of higher-level unmanned driving systems.
In a vehicle intelligent parking system, it is necessary to detect quickly and accurately whether an obstacle exists in a parking space. Existing parking-space obstacle detection technologies can be divided into two types according to their working principle. The first type comprises three-dimensional reconstruction methods based on sensors such as ultrasonic radar, millimeter-wave radar and laser radar, as well as depth detection based on stereo cameras. In this type of method, the sensing accuracy depends on the position of adjacent vehicles, the detection speed and accuracy can hardly meet practical requirements, and the high cost of these sensors hinders their commercialization as automobile accessories. The second type is based on target detection and recognition, generally employing deep learning to detect the obstacle condition of the parking space; although such methods perform well in recognizing specific targets (vehicles, people, and the like), their robustness to other obstacles (roadblocks, pillars, and the like) is poor, and they are only suitable for a single scene. In summary, the existing parking-space obstacle detection technology is complex to implement, and its detection speed and accuracy can hardly meet real-time requirements, so how to provide a method for quickly and accurately detecting near-target obstacles is a problem to be solved urgently.
Disclosure of Invention
In view of the above, it is necessary to provide a near-target fast detection method and system based on vehicle-mounted looking-around inverse projection, to solve the problems that the obstacle detection methods in the prior art are complex and that their detection speed and accuracy can hardly meet the real-time requirement.
The invention provides a near-target rapid detection method based on vehicle-mounted panoramic inverse projection, which comprises the following steps:
acquiring shot pictures of a plurality of fisheye cameras, wherein the fisheye cameras are used for acquiring the shot pictures of roads in different directions in the driving process of a vehicle;
determining a correction picture according to distortion correction preprocessing aiming at the shot picture;
according to the combined calibration external parameters of the fisheye cameras, determining and unifying the reverse projection images of the corrected pictures in the same coordinate system;
determining a binarized differential image according to image difference processing for the inverse projection images unified under the same coordinate system, wherein a pixel value of 1 represents an obstacle pixel point and a pixel value of 0 represents a ground point;
and determining an interested area in the differential image according to the positioning information of the obstacle, traversing the obstacle pixel points in the interested area, and determining the distance from the obstacle to the vehicle.
Further, the shot pictures comprise a first direction picture and a second direction picture, wherein a front-view fisheye camera or a rear-view fisheye camera is used for shooting the first direction picture, and a left-view fisheye camera or a right-view fisheye camera is used for shooting the second direction picture.
Further, the determining a rectified picture according to the distortion correction preprocessing for the taken picture includes:
correcting the first direction picture and the second direction picture respectively, through a fisheye camera imaging model and the distortion correction principle, according to the individually calibrated internal parameters of the fisheye cameras, wherein the individually calibrated internal parameters are determined by individually calibrating the fisheye cameras;
carrying out graying processing on the corrected first direction picture to determine the corresponding first correction picture;
and carrying out graying processing on the corrected second direction picture to determine the corresponding second correction picture.
Further, the jointly calibrated external parameters include the R and T matrices between the fisheye cameras corresponding to the first corrected picture and the second corrected picture, and the determining, according to the jointly calibrated external parameters of the plurality of fisheye cameras, the inverse projection images of the corrected pictures unified under the same coordinate system includes:
respectively carrying out inverse mapping on the first correction picture and the second correction picture according to the individually calibrated external parameters of the plurality of fisheye cameras;
and determining a corresponding first inverse projection image and a corresponding second inverse projection image which are unified under the same coordinate system through coordinate system conversion according to R, T matrixes between the fisheye cameras corresponding to the first corrected picture and the second corrected picture.
Further, the determining a difference image after binarization according to image difference processing for inverse projection images unified under the same coordinate system includes:
determining a corresponding disparity map according to a difference result of the first inverse projection image and the second inverse projection image;
and carrying out binarization processing on the corresponding disparity map according to a preset binarization threshold value, and determining the difference image after binarization.
Further, after the step of determining a difference image after binarization according to image difference processing for inverse projection images unified under the same coordinate system, the method further includes:
determining a dilation-erosion kernel according to the baseline length between the fisheye cameras corresponding to the first corrected picture and the second corrected picture;
and performing morphological filtering of erosion followed by dilation on the differential image through the dilation-erosion kernel.
Further, the determining an interested area in the differential image according to the positioning information of the obstacle, traversing the obstacle pixel points in the interested area, and determining the distance from the obstacle to the vehicle includes:
determining an interested area in the differential image according to the positioning information of the obstacle;
traversing barrier pixel points in the region of interest, and determining a first barrier pixel point closest to the midpoint of the camera;
and determining the distance from the barrier to the vehicle according to the sum of a first distance and a second distance, wherein the first distance is the distance between the back projection starting point and the first barrier pixel point, and the second distance is the distance between the back projection starting point and the vehicle.
Further, the method further comprises:
counting barrier pixel points in the region of interest, and determining the proportion corresponding to the region of interest;
and judging whether a vehicle blind area detected correspondingly to the first direction picture and the second direction picture has an obstacle or not according to a comparison result of the ratio and a preset ratio threshold, wherein the vehicle blind area is a common area of the visual field between a camera corresponding to the first direction picture and a camera corresponding to the second direction picture.
The invention also provides a near-target rapid detection system based on vehicle-mounted looking-around back projection, which comprises a near-target rapid detection device based on vehicle-mounted looking-around back projection, a plurality of fisheye cameras, a voltage-stabilized direct-current power supply, an image data transmission line and a power line. The near-target rapid detection device based on vehicle-mounted looking-around back projection comprises a processor and a memory, the memory stores a computer program, and when the computer program is executed by the processor the above near-target rapid detection method based on vehicle-mounted looking-around back projection is realized. The fisheye cameras are used for acquiring shot pictures of roads in different directions during the driving of the vehicle; the image data transmission line is used for transmitting the shot pictures to the near-target rapid detection device based on vehicle-mounted looking-around back projection; and the voltage-stabilized direct-current power supply and the power line are used for providing power.
Compared with the prior art, the invention has the following beneficial effects: distortion-correction preprocessing is performed on the shot pictures to ensure the accuracy of image processing; further, through the jointly calibrated external parameters, the inverse projection images of the corrected pictures are effectively unified under the same coordinate system and their actual position coordinates are determined; the approximate position and shape of the obstacle are obtained through difference processing, and more accurate and stable obstacle contour information is further obtained through binarization; and finally, traversal is performed in combination with the positioning information of the obstacle, realizing accurate positioning of the obstacle in the blind area. In conclusion, the invention requires only a simple image transformation algorithm, without complex algorithms such as explicit target detection or three-dimensional reconstruction, and realizes rapid near-target obstacle detection and positioning while ensuring obstacle detection accuracy and saving cost.
Drawings
FIG. 1 is a schematic flow chart of a near-target rapid detection method based on vehicle-mounted panoramic reverse projection according to the present invention;
FIG. 2 is a schematic flow chart of the back projection provided by the present invention;
FIG. 3 is a schematic diagram of a back projection model provided by the present invention;
FIG. 4 is a schematic plan projection provided by the present invention;
FIG. 5 is a schematic flow chart of the difference processing provided by the present invention;
FIG. 6 is a schematic flow chart of obstacle ranging provided in the present invention;
FIG. 7 is a schematic diagram of obstacle detection provided by the present invention;
FIG. 8 is a schematic structural diagram of a near-target rapid detection system based on vehicle-mounted looking-around inverse projection provided by the present invention;
fig. 9 is a schematic structural diagram of a near-target rapid detection device based on vehicle-mounted looking-around inverse projection provided by the invention.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
Example 1
The embodiment of the present invention provides a method for rapidly detecting a near target based on a vehicle-mounted looking-around back projection, and referring to fig. 1, fig. 1 is a schematic flow chart of the method for rapidly detecting a near target based on a vehicle-mounted looking-around back projection provided by the present invention, where the method for rapidly detecting a near target based on a vehicle-mounted looking-around back projection includes steps S1 to S5, where:
in step S1, pictures shot by a plurality of fisheye cameras are acquired, the fisheye cameras being used to capture pictures of the road in different directions during the driving of the vehicle;
in step S2, a correction picture is determined according to distortion correction preprocessing for a taken picture;
in step S3, determining and unifying the inverse projection images of the rectification picture in the same coordinate system according to the joint calibration external parameters of the plurality of fisheye cameras;
in step S4, a binarized difference image is determined according to image difference processing for the reverse projection images unified in the same coordinate system, where a pixel of 1 represents an obstacle pixel point and a pixel of 0 represents a ground point;
in step S5, an interested region in the difference image is determined according to the positioning information of the obstacle, and the obstacle pixel points in the interested region are traversed to determine the distance from the obstacle to the vehicle.
In the embodiment of the invention, the pictures shot by two fisheye cameras are preprocessed; the two views acquired by the two fisheye cameras are then inversely projected onto the reference plane to generate two inverse projection images; a difference image is generated from the difference between the two inverse projection images, realizing obstacle detection and yielding the approximate position and shape of the obstacle; and the obstacle position information is calculated from the difference image, realizing accurate positioning of the obstacle in the blind area.
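For illustration only, this processing chain can be sketched in Python with OpenCV and NumPy as follows; the individual steps are detailed further below. The function name detect_near_obstacle, the precomputed undistortion maps and the ground-plane homographies H_a, H_b are placeholders assumed for this sketch (they are not named in the patent), the inverse projection is approximated here by a planar homography, which is equivalent to the back projection only for points lying on the ground plane, and Otsu's method is used merely as a stand-in for the iterative threshold method of the embodiment.

import cv2
import numpy as np

def detect_near_obstacle(img_a, img_b, map_a, map_b, H_a, H_b,
                         bev_size, roi, kernel_size, tau=None):
    # 1) distortion correction with precomputed remap tables (individual calibration)
    rect_a = cv2.remap(img_a, map_a[0], map_a[1], cv2.INTER_LINEAR)
    rect_b = cv2.remap(img_b, map_b[0], map_b[1], cv2.INTER_LINEAR)
    # 2) graying
    g_a = cv2.cvtColor(rect_a, cv2.COLOR_BGR2GRAY)
    g_b = cv2.cvtColor(rect_b, cv2.COLOR_BGR2GRAY)
    # 3) inverse projection of both views into one ground-plane coordinate system
    #    (approximated here by ground-plane homographies H_a, H_b)
    bev_a = cv2.warpPerspective(g_a, H_a, bev_size)
    bev_b = cv2.warpPerspective(g_b, H_b, bev_size)
    # 4) image difference and binarization (1 = obstacle pixel, 0 = ground pixel)
    diff = cv2.absdiff(bev_a, bev_b)
    if tau is None:
        # the embodiment uses an iterative threshold method; Otsu is only a stand-in here
        tau, _ = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = (diff > tau).astype(np.uint8)
    # 5) morphological opening (erosion then dilation) to suppress noise
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    filtered = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # 6) obstacle statistics inside the region of interest (x, y, width, height)
    x, y, w, h = roi
    patch = filtered[y:y + h, x:x + w]
    ratio = float(patch.sum()) / patch.size
    return filtered, ratio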
Preferably, the shot pictures comprise a first direction picture and a second direction picture, wherein the front-view fisheye camera or the back-view fisheye camera is used for shooting the first direction picture, and the left-view fisheye camera or the right-view fisheye camera is used for shooting the second direction picture.
Therefore, by combining the first direction picture and the second direction picture, the invention can effectively detect multiple blind areas, including the left-front, right-front, left-rear and right-rear visual blind areas of the vehicle.
It should be noted that the fisheye cameras include the front-view, rear-view, left-view and right-view cameras of the vehicle-mounted surround-view camera set, which are used to take pictures of the blind areas on each side of the vehicle, each picture containing at least the ground information of the blind area. The front-view camera is embedded at the front LOGO position of the vehicle; the left and right cameras are installed below the left and right rear-view mirrors, at an angle to the vertical direction, with their views facing outward from the vehicle; the rear-view camera is installed below the duck-tail spoiler of the trunk, at an angle to the vertical direction, with its view facing backward. In addition, the fisheye cameras have a wide shooting angle, and the four fisheye cameras need to be calibrated individually and jointly before use.
Preferably, the correction pictures include a first correction picture and a second correction picture, and the step S2 specifically includes the following three steps:
respectively correcting the first direction picture and the second direction picture according to the independent calibration internal parameters of the fisheye cameras through a fisheye camera imaging model and a distortion correction principle, wherein the independent calibration internal parameters are determined by independently calibrating the fisheye cameras;
graying the corrected first direction picture to determine a corresponding first correction picture;
and carrying out graying processing on the corrected second direction picture to determine a corresponding second correction picture.
Therefore, a fisheye camera has a larger imaging field of view than an ordinary camera, but the picture it shoots is severely distorted, so distortion correction must be performed before the picture is used. For example, after the on-board computer receives the pictures shot by the front-view camera and the right-view camera, it preprocesses them: each picture is corrected according to the imaging model and distortion correction principle of the fisheye camera, and the corrected picture is then grayed, turning the color picture into a gray-scale picture and facilitating the subsequent back projection processing.
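As a concrete illustration of this preprocessing step, the following sketch uses OpenCV's fisheye camera model to correct distortion and gray the picture; the intrinsic matrix K and the distortion coefficients D stand for the individually calibrated internal parameters of one fisheye camera, and the numerical values shown are placeholders rather than calibration results of the invention.

import cv2
import numpy as np

def undistort_and_gray(img_bgr, K, D, balance=0.0):
    # Correct fisheye distortion with the individually calibrated intrinsics (K, D),
    # then convert the corrected picture to grayscale.
    h, w = img_bgr.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    rectified = cv2.remap(img_bgr, map1, map2, interpolation=cv2.INTER_LINEAR)
    gray = cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY)
    return gray, new_K

# Example with placeholder intrinsics (replace with the individually calibrated values)
K = np.array([[320.0, 0.0, 640.0],
              [0.0, 320.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0]).reshape(4, 1)   # k1..k4 of the fisheye model
img = np.zeros((720, 1280, 3), np.uint8)              # stand-in for a shot picture
gray, new_K = undistort_and_gray(img, K, D)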
Preferably, referring to fig. 2, fig. 2 is a schematic flow chart of the back projection provided by the present invention, and step S3 specifically includes steps S31 to S32, where:
in step S31, inverse mapping is performed on the first correction picture and the second correction picture according to the individually calibrated external parameters of the plurality of fisheye cameras;
in step S32, the corresponding first and second inverse projection images unified in the same coordinate system are determined by coordinate system conversion from the R, T matrix between the fisheye cameras corresponding to the first and second rectified pictures.
Therefore, the obtained first corrected picture and second corrected picture are each inversely projected, that is, the camera images are inversely mapped using the camera external parameters obtained by calibration, the actual coordinates of the plane image under the world coordinate system are obtained, and three-dimensional point information is recovered from the two-dimensional points. The transformation mainly involves perspective deformation and the three-dimensional characteristics of the object; since it is the three-dimensional characteristics of the object itself that are of interest, the perspective deformation needs to be removed by means of coordinate system conversion. According to the R and T matrices between the two fisheye cameras and the inverse projection formula, the inverse projections of the two fisheye cameras can be unified under the same coordinate system through coordinate system conversion, which facilitates the subsequent image difference transformation and ensures the accuracy of the inverse projection processing.
In a specific embodiment of the present invention, taking the projection models of the left-view camera and the front-view camera as an example, referring to fig. 3 and fig. 4, fig. 3 is a schematic diagram of the inverse projection model provided by the present invention and fig. 4 is a schematic plane projection provided by the present invention. The camera height is h, point O is the vertical projection of the optical center of the left-view camera C1 onto the ground, O is defined as the origin, and OC1 is taken as the Z-axis to establish the camera coordinate system shown in the figure. ABCD is the trapezoidal area on the road plane photographed by the camera, and the points corresponding to A, B, C, D and P in the projection plane are shown in fig. 4 as the corner points A1, B1, C1, D1 and the point P1 of the image-plane rectangle. The image coordinate system is established by taking the midpoint of the rectangle as the origin, with the u and v directions parallel to the X and Y axes; H and W are the height and width of the image plane, respectively.
If (x_l, y_l)^T is a point on the road plane, the coordinates of its corresponding point in the projection plane of the forward-looking camera can be written down, and the back-projection formula of the forward-looking camera is obtained as equation (1) (the formula is given as an image in the original document), where h is the mounting height of the camera, 2β0 is the horizontal field angle of the camera lens, 2α0 is the vertical field angle of the camera lens, and γ0 is the pitch angle of the camera.
Given the back-projection coordinates of one camera, the corresponding coordinates of the other camera can be obtained through the external-parameter calibration between the two cameras. Suppose a point in space (x_l, y_l, z_l)^T; according to the imaging principle, its position in the projection plane of the front-view camera is given by the imaging formula (2), and the imaging formula (3) of the right-view camera is obtained in the same way (both formulas are given as images in the original document). Since the two camera coordinate systems can be converted into each other through the space matrix M, where R is a rotation matrix and T is a translation matrix, the space matrix M is represented by the following equation:
M=[R T] (4)
The position transformation relationship between the forward-looking camera and the right-looking camera is expressed as follows:
(x_r, y_r, z_r)^T = M · (x_l, y_l, z_l, 1)^T = R · (x_l, y_l, z_l)^T + T (5)
where R and T can be obtained by calibrating the external parameters between the cameras. The back-projection coordinates of the right-view camera then follow as equation (6), and by combining equations (1) and (6) the back-projection equation of the right-view fisheye camera is obtained as equation (7) (equations (6) and (7) are given as images in the original document). Through formulas (1) and (7), the back projections of the two fisheye cameras can be unified under the same coordinate system, which facilitates the subsequent image difference transformation.
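Equations (1) to (7) express the back projection through the mounting height and field angles of the cameras. As a hedged, equivalent illustration, the sketch below back-projects undistorted pixels onto the ground plane z = 0 by ray-plane intersection and then composes the jointly calibrated R and T of a second camera with the pose of the first camera, so that both back projections land in the same ground coordinate system; all poses and numerical values are placeholders, and the patent's own formulation remains that of equations (1) and (7).

import numpy as np

def backproject_to_ground(uv, K, R_wc, t_wc):
    # uv: (N, 2) pixel coordinates in the undistorted image
    # K: 3x3 intrinsic matrix; R_wc, t_wc: camera-to-world rotation and translation
    uv1 = np.hstack([uv, np.ones((uv.shape[0], 1))])   # homogeneous pixel coordinates
    rays_cam = np.linalg.inv(K) @ uv1.T                # viewing rays in the camera frame
    rays_w = R_wc @ rays_cam                           # viewing rays in the world frame
    c = t_wc.reshape(3, 1)                             # camera centre in the world frame
    s = -c[2] / rays_w[2]                              # scale that brings each ray to z = 0
    ground = c + rays_w * s                            # intersection with the ground plane
    return ground[:2].T                                # (N, 2) ground coordinates

# Placeholder pose: camera 1 looks straight down from a height of 1 m.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R1 = np.diag([1.0, -1.0, -1.0])
t1 = np.array([0.0, 0.0, 1.0])
pix = np.array([[320.0, 240.0], [400.0, 300.0]])
print(backproject_to_ground(pix, K, R1, t1))

# Joint calibration gives the pose of camera 2 relative to camera 1:
# X_c1 = R_12 @ X_c2 + T_12.  Composing it with camera 1's world pose puts
# both back projections into the same ground coordinate system.
R_12 = np.eye(3)                        # placeholder relative rotation
T_12 = np.array([0.5, 0.0, 0.0])        # placeholder relative translation (baseline)
R2 = R1 @ R_12
t2 = R1 @ T_12 + t1
print(backproject_to_ground(pix, K, R2, t2))

Ray-plane intersection and a ground-plane homography give the same result for points that really lie on the road surface; points above the ground back-project to different ground coordinates from the two cameras, which is exactly what the subsequent difference step exploits.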
Preferably, referring to fig. 5, fig. 5 is a schematic flowchart of the difference processing provided by the present invention, and step S4 specifically includes steps S41 to S42, where:
in step S41, determining a corresponding disparity map from the difference result between the first and second inverse projection images;
in step S42, a binarization process is performed on the corresponding disparity map according to a preset binarization threshold, and a binarized difference image is determined.
Thus, the shape and position of the obstacle are obtained by the difference calculation, and the contour of the obstacle is further obtained by binarization.
In a specific embodiment of the invention, P_l(x_l, y_l) and P_r(x_r, y_r) are defined as the first and second inverse projection images after coordinate compensation, and the disparity map D(x, y) is expressed by the following equation:
D(x, y) = P_l(x_l, y_l) - P_r(x_r, y_r) (8)
The disparity map is binarized with a set binarization threshold to highlight the target area; the binarization process is expressed as the following formula:
B(x, y) = 1 if |D(x, y)| > τ, and B(x, y) = 0 otherwise (9)
where B(x, y) is the binarized difference image and τ is the binarization threshold; B(x, y) = 0 indicates the ground surface, and B(x, y) = 1 indicates an obstacle other than the ground surface. Note that the binarization threshold τ is determined by an iterative threshold method.
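A minimal sketch of the difference and binarization step follows; the iterative threshold selection is implemented here as the common isodata-style procedure, which is assumed to correspond to the "iterative threshold method" mentioned above, and the array names are placeholders.

import numpy as np

def iterative_threshold(diff, eps=0.5):
    # Iterative (isodata-style) threshold selection on the absolute disparity map:
    # start from the global mean, split the pixels into two groups, and move the
    # threshold to the mean of the two group means until it converges.
    tau = diff.mean()
    while True:
        low = diff[diff <= tau]
        high = diff[diff > tau]
        if low.size == 0 or high.size == 0:
            return tau
        new_tau = 0.5 * (low.mean() + high.mean())
        if abs(new_tau - tau) < eps:
            return new_tau
        tau = new_tau

def binarize_difference(ipm_left, ipm_right):
    # D(x, y) = P_l - P_r; B(x, y) = 1 for obstacle pixels, 0 for ground pixels.
    diff = np.abs(ipm_left.astype(np.int16) - ipm_right.astype(np.int16))
    tau = iterative_threshold(diff.astype(np.float64))
    return (diff > tau).astype(np.uint8), tau

# Toy example: two 8-bit inverse-projection images that differ in one patch.
left = np.full((100, 100), 120, np.uint8)
right = left.copy()
right[40:60, 40:60] = 200          # stand-in for an off-ground obstacle
B, tau = binarize_difference(left, right)
print(tau, int(B.sum()))            # converges to about 40 and marks 400 obstacle pixels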
Preferably, after the step of determining the binarized difference image according to image difference processing for the inverse projection images unified under the same coordinate system, the method further comprises:
determining a dilation-erosion kernel according to the baseline length between the fisheye cameras corresponding to the first corrected picture and the second corrected picture;
and performing morphological filtering of erosion followed by dilation on the differential image through the dilation-erosion kernel.
Thus, after the difference map is segmented by binarization, the image usually still contains many noise points, so the difference map needs to be post-processed to further distinguish ground-plane points from out-of-plane points. To this end, the invention adopts a morphological filtering method based on the opening operation to eliminate noise: an erosion followed by a dilation is applied to filter the difference image, ensuring that the outline of the obstacle is clearly highlighted.
In a specific embodiment of the present invention, the noise is removed by morphological filtering with an opening operation, which is expressed as:
G(x, y) = (B ⊖ S) ⊕ S
where G(x, y) is the filtered difference image, B(x, y) is the binarized difference image, ⊖ denotes erosion, ⊕ denotes dilation, and S is the dilation-erosion kernel.
It should be noted that the dilation-erosion kernel S provided by the present invention is set according to the baseline length d between the two fisheye cameras, i.e., the size of S is less than or equal to d.
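One possible realization of this filtering step, with the structuring-element size bounded by the baseline length d, is sketched below with OpenCV; the conversion from metres to inverse-projection-image pixels through a ground-plane resolution parameter is an assumption made for illustration.

import cv2
import numpy as np

def open_difference_image(binary, baseline_m, metres_per_pixel):
    # Morphological opening (erosion followed by dilation) of the binarized
    # difference image.  The kernel side length is chosen no larger than the
    # camera baseline d, expressed in inverse-projection-image pixels.
    max_side = max(1, int(baseline_m / metres_per_pixel))   # S <= d in pixels
    side = min(max_side, 5)                                  # any size up to the bound may be used
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (side, side))
    eroded = cv2.erode(binary, kernel)
    opened = cv2.dilate(eroded, kernel)
    # equivalently: cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    return opened

B = np.zeros((100, 100), np.uint8)
B[40:60, 40:60] = 1                 # obstacle blob
B[5, 5] = 1                         # isolated noise pixel
G = open_difference_image(B, baseline_m=0.8, metres_per_pixel=0.02)
print(int(B.sum()), int(G.sum()))   # the isolated noise pixel is removed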
Preferably, referring to fig. 6, fig. 6 is a schematic flow chart of obstacle ranging provided by the present invention, and step S5 includes steps S51 to S53, where:
in step S51, a region of interest in the difference image is determined based on the positioning information of the obstacle;
in step S52, traversing barrier pixel points in the region of interest, and determining a first barrier pixel point closest to a midpoint of the camera;
in step S53, a distance from the obstacle to the vehicle is determined according to a sum of a first distance and a second distance, wherein the first distance is a distance between the back projection start point and the first obstacle pixel point, and the second distance is a distance between the back projection start point and the vehicle.
Therefore, the point where G (x, y) ═ 1 is traversed in the region of interest (ROI region), the corresponding coordinates are acquired, the point closest to the camera is set as the starting point of the obstacle position calculation, the first distance in the world coordinate system can be obtained from the distance between the two points on the image, the back projection position cannot start from the origin O due to the influence of the field angles of the two fisheye cameras, and therefore the back projection starting point has a deviation distance from the vehicle, that is, the second distance (the second distance is the distance between O and cd as seen in fig. 2), and finally the distance from the obstacle to the vehicle is calculated as the sum of the first distance and the second distance.
Specifically, referring to fig. 7, fig. 7 is a schematic diagram of obstacle detection provided by the present invention, in which the cameras move forward with the vehicle at speed V; at a certain moment, the optical center of the front-view camera is point C1 and the optical center of the right-view camera is point C2. As can be seen from the figure, for a point Q1 on the ground, its projections onto the ground from C1 and C2 coincide at Q1 and have the same gray value; whereas for a point Q2 above the ground, its projections onto the ground are Q3 and Q4 respectively, and the pixel values differ.
Preferably, the method further comprises:
counting barrier pixel points in the region of interest, and determining the proportion of the barrier pixel points in the region of interest;
and judging whether the vehicle blind areas correspondingly detected by the first direction picture and the second direction picture have obstacles or not according to the comparison result of the ratio and a preset ratio threshold.
Therefore, since the binary difference image is obtained from the inverse projection images of the two fisheye cameras, the detected obstacle points can be used directly for positioning, and by setting a ratio threshold it can be determined whether the detected right-front blind area of the vehicle contains an obstacle, i.e. whether the parking space is occupied.
In a specific embodiment of the present invention, to obtain the position and distance information of the obstacle, a region to be detected is first set as a region of interest (ROI region) in the picture after binarization; for the obstacle detection of the parking space, a parking space line needs to be identified by using methods such as Hough transform and the like, the position information of the parking space is acquired, and meanwhile, a region of interest (ROI) is set. The number of pixels on the binary image in the ROI can be calculated and the proportion of the obstacle pixel points can be calculated by the following formula:
ratio = Σ G(x, y) / [Σ G(x, y) + Σ (1 - G(x, y))]
where (x, y) are the coordinates within the ROI region; Σ G(x, y) is the statistical sum of the pixels for which G(x, y) = 1 in the filtered difference image, that is, the number of obstacle pixel points; and Σ (1 - G(x, y)) is the statistical sum of the pixels for which G(x, y) = 0, that is, the number of ground pixel points.
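The occupancy decision described here can be sketched as follows; the ratio threshold value used in the example is illustrative only and is not a value specified by the patent.

import numpy as np

def roi_obstacle_ratio(G, roi):
    # Proportion of obstacle pixels (G == 1) among all pixels in the ROI:
    # ratio = sum(G) / (sum(G) + sum(1 - G)).
    x, y, w, h = roi
    patch = G[y:y + h, x:x + w].astype(np.float64)
    obstacle = patch.sum()
    ground = (1.0 - patch).sum()
    return obstacle / (obstacle + ground)

def roi_is_occupied(G, roi, ratio_threshold=0.05):
    # Declare the blind area / parking space occupied when the obstacle ratio
    # exceeds a preset threshold (0.05 is only an example value).
    return roi_obstacle_ratio(G, roi) > ratio_threshold

G = np.zeros((200, 200), np.uint8)
G[150:160, 90:110] = 1
print(roi_obstacle_ratio(G, (50, 100, 100, 100)))   # 200 / 10000 = 0.02
print(roi_is_occupied(G, (50, 100, 100, 100)))      # False with the 0.05 example threshold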
It should be noted that, referring to fig. 8, fig. 8 is a schematic structural diagram of the near-target obstacle rapid detection system provided by the present invention; the system includes, for example, an on-board computer 1, a front-view camera 2, a right-view camera 3, a regulated direct-current power supply 4, an image data transmission line 5, a power line 6, the current vehicle 7, and an obstacle 8 (which may be a person, a roadblock, a vehicle, etc.). The figure shows the detection of the right-front blind area 9 of the current vehicle 7, i.e. the common field of view of the front-view camera and the right-view camera.
The front-view camera 2 and the right-view camera 3 are used to shoot pictures of the right-front blind area of the current vehicle, i.e. image data including the ground information in the right-front blind area and the information of obstacles protruding from the ground. The image data transmission line 5 of each camera is connected to the on-board computer 1 and transmits the captured image data to it. The internal parameters obtained by calibrating the front-view camera 2 and the right-view camera 3 individually, and the external parameters obtained by calibrating them jointly, are pre-stored in the on-board computer 1, which processes the captured right-front blind area pictures by invoking the above algorithm.
Example 2
An embodiment of the present invention provides a device for quickly detecting a near-target obstacle, and referring to fig. 9, fig. 9 is a schematic structural diagram of the device for quickly detecting a near-target obstacle provided by the present invention, where the device 900 for quickly detecting a near-target obstacle includes:
an obtaining unit 901, configured to obtain captured pictures of a plurality of fisheye cameras, where the fisheye cameras are used to acquire road information in different directions during a vehicle driving process;
a preprocessing unit 902, configured to determine a correction picture according to distortion correction preprocessing for a captured picture;
an inverse projection unit 903, configured to determine inverse projection images unified in the same coordinate system according to inverse projection processing for the rectified picture;
a difference unit 904, configured to determine a difference image after binarization according to image difference processing for an inverse projection image, where a pixel is 1 to represent an obstacle pixel point, and a pixel is 0 to represent a ground point;
the data processing unit 905 is configured to determine an interested region in the difference image according to the positioning information of the obstacle, traverse obstacle pixel points in the interested region, and determine a distance from the obstacle to the vehicle.
Example 3
The embodiment of the invention provides a device for quickly detecting a near-target obstacle, which comprises a processor and a memory, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the method for quickly detecting the near target based on the vehicle-mounted looking-around back projection is realized.
Example 4
The embodiment of the invention provides a system for quickly detecting a near-target obstacle, which comprises a device for quickly detecting the near-target obstacle provided by embodiment 3, and further comprises a plurality of fisheye cameras, a voltage-stabilized direct-current power supply 4, an image data transmission line 5 and a power line 6, wherein the fisheye cameras are used for acquiring shot pictures of roads in different directions in the driving process of a vehicle, the image data transmission line 5 is used for transmitting the shot pictures to the device for quickly detecting the near-target obstacle, and the voltage-stabilized direct-current power supply 4 and the power line 6 are used for providing power.
Referring to fig. 8, the device for rapidly detecting a near-target obstacle provided in embodiment 3 of the present invention includes, but is not limited to, the vehicle-mounted computer 1 in fig. 8, as long as the related image processing function can be implemented.
Example 5
The embodiment of the invention provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for rapidly detecting a near target based on vehicle-mounted looking-around back projection is realized.
The invention discloses a near-target rapid detection method and system based on vehicle-mounted panoramic inverse projection. In the method, distortion-correction preprocessing is performed on the shot pictures to ensure the accuracy of image processing; further, through inverse projection processing, the corrected pictures are effectively unified under one coordinate system and their actual position coordinates are determined; the approximate position and shape of the obstacle are obtained through difference processing, and more accurate and stable obstacle contour information is quickly obtained through binarization; and finally, traversal is performed in combination with the positioning information of the obstacle, quickly realizing accurate positioning of the obstacle in the blind area.
According to the technical scheme, only a simple image transformation algorithm is needed, without complex algorithms such as explicit target detection or three-dimensional reconstruction; on the basis of this fast and simple algorithm, rapid near-target obstacle detection and positioning are realized, the obstacle detection accuracy is guaranteed, and the cost is saved.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (9)

1. A near target rapid detection method based on vehicle-mounted panoramic inverse projection is characterized by comprising the following steps:
acquiring shot pictures of a plurality of fisheye cameras, wherein the fisheye cameras are used for acquiring the shot pictures of roads in different directions in the driving process of a vehicle;
determining a correction picture according to distortion correction preprocessing aiming at the shot picture;
according to the combined calibration external parameters of the fisheye cameras, determining and unifying the reverse projection images of the corrected pictures in the same coordinate system;
determining a binarized differential image according to image difference processing for the inverse projection images unified under the same coordinate system, wherein a pixel value of 1 represents an obstacle pixel point and a pixel value of 0 represents a ground point;
and determining an interested area in the differential image according to the positioning information of the obstacle, traversing the obstacle pixel points in the interested area, and determining the distance from the obstacle to the vehicle.
2. The method for rapidly detecting the near target based on the vehicle-mounted looking-around back projection as claimed in claim 1, wherein the captured pictures comprise a first direction picture and a second direction picture, wherein a front-view fisheye camera or a rear-view fisheye camera is used for capturing the first direction picture, and a left-view fisheye camera or a right-view fisheye camera is used for capturing the second direction picture.
3. The method for rapidly detecting the near target based on the vehicle-mounted looking-around inverse projection as claimed in claim 2, wherein the rectification pictures comprise a first rectification picture and a second rectification picture, and the determining the rectification pictures according to the distortion correction preprocessing aiming at the shot pictures comprises:
correcting the first direction picture and the second direction picture respectively through a fisheye camera imaging model and a distortion correction principle according to the individual calibration internal parameters of the fisheye cameras, wherein the individual calibration internal parameters are determined through individual calibration of the fisheye cameras;
carrying out graying processing on the corrected first direction picture to determine the corresponding first correction picture;
and carrying out graying processing on the corrected second direction picture to determine the corresponding second correction picture.
4. The method for rapidly detecting the near target based on the vehicle-mounted looking-around inverse projection as claimed in claim 3, wherein the jointly calibrated external parameters include R, T matrixes between the fisheye cameras corresponding to the first corrected picture and the second corrected picture, and the determining and unifying the inverse projection images of the corrected pictures under the same coordinate system according to the jointly calibrated external parameters of the plurality of fisheye cameras comprises:
respectively carrying out inverse mapping on the first correction picture and the second correction picture according to the individually calibrated external parameters of the plurality of fisheye cameras;
and determining a corresponding first inverse projection image and a corresponding second inverse projection image which are unified under the same coordinate system through coordinate system conversion according to R, T matrixes between the fisheye cameras corresponding to the first corrected picture and the second corrected picture.
5. The method for rapidly detecting the near target based on the vehicle-mounted looking-around inverse projection as claimed in claim 4, wherein the determining the binarized difference image according to the image difference processing for the inverse projection image unified under the same coordinate system comprises:
determining a corresponding disparity map according to a difference result of the first inverse projection image and the second inverse projection image;
and carrying out binarization processing on the corresponding disparity map according to a preset binarization threshold value, and determining the difference image after binarization.
6. The method for rapidly detecting the near target based on the vehicle-mounted looking-around inverse projection as claimed in claim 5, further comprising, after determining the difference image after binarization according to the image difference processing for the inverse projection images unified under the same coordinate system:
determining a dilation-erosion kernel according to the baseline length between the fisheye cameras corresponding to the first corrected picture and the second corrected picture;
and performing morphological filtering of erosion followed by dilation on the differential image through the dilation-erosion kernel.
7. The method for rapidly detecting the near target based on the vehicle-mounted looking-around back projection as claimed in claim 5, wherein the determining the region of interest in the differential image according to the positioning information of the obstacle, traversing the obstacle pixel points in the region of interest, and determining the distance from the obstacle to the vehicle comprises:
determining an interested area in the differential image according to the positioning information of the obstacle;
traversing barrier pixel points in the region of interest, and determining a first barrier pixel point closest to the midpoint of the camera;
and determining the distance from the barrier to the vehicle according to the sum of a first distance and a second distance, wherein the first distance is the distance between the back projection starting point and the first barrier pixel point, and the second distance is the distance between the back projection starting point and the vehicle.
8. The method for rapidly detecting the near target based on the vehicle-mounted looking-around inverse projection as claimed in claim 7, further comprising:
counting barrier pixel points in the region of interest, and determining the proportion corresponding to the region of interest;
and judging whether a vehicle blind area detected correspondingly to the first direction picture and the second direction picture has an obstacle or not according to a comparison result of the ratio and a preset ratio threshold, wherein the vehicle blind area is a common area of the visual field between a camera corresponding to the first direction picture and a camera corresponding to the second direction picture.
9. A near-target rapid detection system based on vehicle-mounted looking-around back projection, characterized by comprising a near-target rapid detection device based on vehicle-mounted looking-around back projection, a plurality of fisheye cameras, a voltage-stabilized direct-current power supply, an image data transmission line, and a power line, wherein:
the device for rapidly detecting the near target based on the vehicle-mounted looking-around back projection comprises a processor and a memory, wherein the memory is stored with a computer program, and the computer program is executed by the processor to realize the method for rapidly detecting the near target based on the vehicle-mounted looking-around back projection according to any one of claims 1 to 8;
the fisheye cameras are used for acquiring shot pictures of roads in different directions in the driving process of the vehicle;
the image data transmission line is used for transmitting the shot picture to the near-target rapid detection device based on the vehicle-mounted looking-around back projection;
the voltage-stabilizing direct-current power supply and the power line are used for providing power.
CN202011344562.XA 2020-11-25 2020-11-25 Near-target rapid detection method and system based on vehicle-mounted looking-around back projection Active CN112270311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011344562.XA CN112270311B (en) 2020-11-25 2020-11-25 Near-target rapid detection method and system based on vehicle-mounted looking-around back projection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011344562.XA CN112270311B (en) 2020-11-25 2020-11-25 Near-target rapid detection method and system based on vehicle-mounted looking-around back projection

Publications (2)

Publication Number Publication Date
CN112270311A true CN112270311A (en) 2021-01-26
CN112270311B CN112270311B (en) 2023-12-19

Family

ID=74340851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011344562.XA Active CN112270311B (en) 2020-11-25 2020-11-25 Near-target rapid detection method and system based on vehicle-mounted looking-around back projection

Country Status (1)

Country Link
CN (1) CN112270311B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976460A (en) * 2010-10-18 2011-02-16 胡振程 Generating method of virtual view image of surveying system of vehicular multi-lens camera
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method based on
CN105678787A (en) * 2016-02-03 2016-06-15 西南交通大学 Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera
CN109270534A (en) * 2018-05-07 2019-01-25 西安交通大学 A kind of intelligent vehicle laser sensor and camera online calibration method
CN109446909A (en) * 2018-09-27 2019-03-08 山东省科学院自动化研究所 A kind of monocular ranging auxiliary parking system and method
CN109712194A (en) * 2018-12-10 2019-05-03 深圳开阳电子股份有限公司 Vehicle-mounted viewing system and its stereo calibration method and computer readable storage medium
CN111860270A (en) * 2020-07-13 2020-10-30 辽宁石油化工大学 Obstacle detection method and device based on fisheye camera

Also Published As

Publication number Publication date
CN112270311B (en) 2023-12-19

Similar Documents

Publication Publication Date Title
US11270134B2 (en) Method for estimating distance to an object via a vehicular vision system
US10354151B2 (en) Method of detecting obstacle around vehicle
CN107577988B (en) Method, device, storage medium and program product for realizing side vehicle positioning
CN109631896B (en) Parking lot autonomous parking positioning method based on vehicle vision and motion information
KR101188588B1 (en) Monocular Motion Stereo-Based Free Parking Space Detection Apparatus and Method
CN107133985B (en) Automatic calibration method for vehicle-mounted camera based on lane line vanishing point
CN108765496A (en) A kind of multiple views automobile looks around DAS (Driver Assistant System) and method
EP3678096A1 (en) Method for calculating a tow hitch position
US11393126B2 (en) Method and apparatus for calibrating the extrinsic parameter of an image sensor
US11410334B2 (en) Vehicular vision system with camera calibration using calibration target
US20230351625A1 (en) A method for measuring the topography of an environment
JP4344860B2 (en) Road plan area and obstacle detection method using stereo image
Matuszyk et al. Stereo panoramic vision for monitoring vehicle blind-spots
CN111860270B (en) Obstacle detection method and device based on fisheye camera
KR102065337B1 (en) Apparatus and method for measuring movement information of an object using a cross-ratio
CN111243021A (en) Vehicle-mounted visual positioning method and system based on multiple combined cameras and storage medium
CN112270311B (en) Near-target rapid detection method and system based on vehicle-mounted looking-around back projection
EP3629292A1 (en) Reference point selection for extrinsic parameter calibration
CN113674361B (en) Vehicle-mounted all-round-looking calibration implementation method and system
CN111986248B (en) Multi-vision sensing method and device and automatic driving automobile
WO2022141262A1 (en) Object detection
JP7311407B2 (en) Posture estimation device and posture estimation method
JP3447461B2 (en) Moving obstacle detecting apparatus and method
CN114762019A (en) Camera system
CN113705403B (en) Front target vehicle collision early warning method fused with panoramic imaging system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant