CN110909620A - Vehicle detection method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110909620A
Authority
CN
China
Prior art keywords
vanishing point
vehicle
image
determining
traffic scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911047352.1A
Other languages
Chinese (zh)
Inventor
刘志康
张弛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201911047352.1A
Publication of CN110909620A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a vehicle detection method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring an image sequence, wherein the image sequence is obtained by performing image acquisition on a target traffic scene; determining a first vanishing point, a second vanishing point and a third vanishing point of the target traffic scene in the image according to the image sequence, wherein the first vanishing point is the vanishing point in the vehicle driving direction, the second vanishing point is the vanishing point perpendicular to the vehicle driving direction, and the third vanishing point is the vanishing point perpendicular to the road surface; determining the vehicle contour of a vehicle in the image under the target traffic scene according to the image sequence; and determining a 3D vehicle frame of the vehicle under the target traffic scene according to the first vanishing point, the second vanishing point, the third vanishing point and the vehicle contour. With this method, vehicle detection based on 2D images requires no manual participation, the operation process can be simplified, and time consumption and cost are reduced.

Description

Vehicle detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a vehicle detection method and apparatus, an electronic device, and a storage medium.
Background
In recent years, with the rapid development of image processing technology, 3D vehicle detection based on 2D images has played an increasingly important role in technical fields such as unmanned driving and automatic traffic supervision. In the prior art, 3D vehicle detection mainly relies on manually obtained camera calibration information, which makes the detection process complicated to operate, time-consuming, and costly.
Disclosure of Invention
The embodiments of the present invention provide a vehicle detection method and device, an electronic device, and a storage medium, so as to solve the technical problems in the prior art that 3D vehicle detection is complicated to operate, time-consuming, and costly.
According to a first aspect of the present invention, a vehicle detection method is disclosed, which is applied to an electronic device, the method comprising:
acquiring an image sequence, wherein the image sequence is obtained by carrying out image acquisition on a target traffic scene by an image acquisition device;
determining a first vanishing point, a second vanishing point and a third vanishing point of the target traffic scene in the image according to the image sequence, wherein the first vanishing point is the vanishing point in the vehicle driving direction, the second vanishing point is the vanishing point perpendicular to the vehicle driving direction, and the third vanishing point is the vanishing point perpendicular to the road surface;
determining the vehicle contour of the vehicle in the image under the target traffic scene according to the image sequence;
and determining a 3D vehicle frame of the vehicle under the target traffic scene according to the first vanishing point, the second vanishing point, the third vanishing point and the vehicle outline.
Optionally, as an embodiment, the determining, according to the image sequence, a first vanishing point, a second vanishing point and a third vanishing point of the target traffic scene in the image includes:
determining a first vanishing point of the target traffic scene in an image according to the image sequence;
determining a second vanishing point of the target traffic scene in the image according to the image sequence and the first vanishing point;
and determining a third vanishing point according to the first vanishing point and the second vanishing point.
Optionally, as an embodiment, the determining, according to the image sequence, a first vanishing point of the target traffic scene in an image includes:
acquiring a vehicle optical flow of a vehicle in the image sequence under the target traffic scene, and performing Hough transform voting processing on the vehicle optical flow to obtain the first vanishing point; or,
acquiring a lane line of the road surface in the image sequence under the target traffic scene, and performing Hough transform voting processing on the lane line to obtain the first vanishing point.
Optionally, as an embodiment, the determining a second vanishing point of the target traffic scene in the image according to the image sequence and the first vanishing point includes:
acquiring the edge of a moving object in the target traffic scene in the image sequence;
determining a target edge pixel point pointing to a second vanishing point in the edge according to the spatial position relationship between the edge and the first vanishing point;
and carrying out Hough transform voting processing on the target edge pixel points to obtain the second vanishing point.
Optionally, as an embodiment, the determining a third vanishing point according to the first vanishing point and the second vanishing point includes:
and calculating the position coordinate of the third vanishing point according to the position coordinate of the first vanishing point, the position coordinate of the second vanishing point and a vector cross-product formula, wherein the cross-product formula is used to calculate a third vector perpendicular to two given vectors.
Optionally, as an embodiment, the determining, according to the first vanishing point, the second vanishing point, the third vanishing point and the vehicle contour, a 3D vehicle frame of the vehicle in the target traffic scene includes:
for each vehicle contour, drawing a tangent line of the vehicle contour by using the first vanishing point, the second vanishing point and the third vanishing point to obtain 6 vertexes for determining a 3D vehicle frame;
obtaining another 2 vertexes for determining the 3D vehicle frame according to the 6 vertexes and the shape rule of the 3D vehicle frame;
and performing line connection on the obtained 8 vertexes to obtain the 3D vehicle frame.
Optionally, as an embodiment, the hough transform voting process includes the following operations:
defining a voting space for a hough transform;
converting a straight line to be processed in an image space into a coordinate system of the voting space to obtain a corresponding conversion straight line;
determining the position where the transformation straight lines intersect most under the voting space;
and converting the position into the image space to obtain a corresponding vanishing point.
According to a second aspect of the present invention, there is disclosed a vehicle detection apparatus applied to an electronic device, the apparatus comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring an image sequence, and the image sequence is obtained by image acquisition of a target traffic scene by an image acquisition device;
the first determining module is used for determining a first vanishing point, a second vanishing point and a third vanishing point of the target traffic scene in the image according to the image sequence, wherein the first vanishing point is the vanishing point in the vehicle driving direction, the second vanishing point is the vanishing point perpendicular to the vehicle driving direction, and the third vanishing point is the vanishing point perpendicular to the road surface;
the second determining module is used for determining the vehicle outline of the vehicle in the image under the target traffic scene according to the image sequence;
and the third determining module is used for determining a 3D vehicle frame of the vehicle in the target traffic scene according to the first vanishing point, the second vanishing point, the third vanishing point and the vehicle outline.
Optionally, as an embodiment, the first determining module includes:
the first determining submodule is used for determining a first vanishing point of the target traffic scene in an image according to the image sequence;
the second determining submodule is used for determining a second vanishing point of the target traffic scene in the image according to the image sequence and the first vanishing point;
and the third determining submodule is used for determining a third vanishing point according to the first vanishing point and the second vanishing point.
Optionally, as an embodiment, the first determining sub-module includes:
the first processing unit is used for acquiring a vehicle optical flow of a vehicle in the target traffic scene in the image sequence, and performing Hough transform voting processing on the vehicle optical flow to obtain the first vanishing point; or,
the second processing unit is used for acquiring a lane line of the road surface under the target traffic scene in the image sequence, and performing Hough transform voting processing on the lane line to obtain the first vanishing point.
Optionally, as an embodiment, the second determining sub-module includes:
the edge acquisition unit is used for acquiring the edge of the moving object in the target traffic scene in the image sequence;
a target edge pixel point determining unit, configured to determine a target edge pixel point pointing to a second vanishing point in the edge according to a spatial position relationship between the edge and the first vanishing point;
and the third processing unit is used for carrying out Hough transform voting processing on the target edge pixel point to obtain the second vanishing point.
Optionally, as an embodiment, the third determining sub-module includes:
and the calculating unit is used for calculating the position coordinate of the third vanishing point according to the position coordinate of the first vanishing point, the position coordinate of the second vanishing point and a vector cross-product formula, wherein the cross-product formula is used to calculate a third vector perpendicular to two given vectors.
Optionally, as an embodiment, the third determining module includes:
a first processing submodule, configured to draw, for each vehicle contour, a tangent line of the vehicle contour using the first vanishing point, the second vanishing point, and the third vanishing point, to obtain 6 vertices for determining a 3D vehicle frame;
the second processing submodule is used for obtaining another 2 vertexes for determining the 3D vehicle frame according to the 6 vertexes and the shape rule of the 3D vehicle frame;
and the third processing submodule is used for performing line connection on the obtained 8 vertexes to obtain the 3D vehicle frame.
Optionally, as an embodiment, the hough transform voting process includes the following operations:
defining a voting space for a hough transform;
converting a straight line to be processed in an image space into a coordinate system of the voting space to obtain a corresponding conversion straight line;
determining the position where the transformation straight lines intersect most under the voting space;
and converting the position into the image space to obtain a corresponding vanishing point.
According to a third aspect of the present invention, there is disclosed an electronic device comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps in any of the vehicle detection methods as described above.
According to a fourth aspect of the invention, a computer readable storage medium is disclosed, having a computer program stored thereon, which, when being executed by a processor, carries out the steps of any of the vehicle detection methods as described above.
In the embodiment of the invention, for a traffic scene that needs to be monitored, 2D images of the traffic scene can be collected; the vanishing point of the traffic scene in the vehicle driving direction in the image, the vanishing point perpendicular to the vehicle driving direction, the vanishing point perpendicular to the road surface, and the vehicle contour of a vehicle in the traffic scene in the image are obtained according to the collected images, and the 3D vehicle frame of the vehicle in the traffic scene is determined according to the determined vanishing points and the vehicle contour, so that the vehicle in the traffic scene can be detected according to the determined 3D vehicle frame. Therefore, in the embodiment of the invention, vehicle detection based on 2D images requires no manual participation, the operation process can be simplified, and time consumption and cost are reduced.
Drawings
FIG. 1 is a flow chart of a vehicle detection method of one embodiment of the present invention;
FIG. 2 is an exemplary diagram of a vanishing point visualization of one embodiment of the invention;
FIG. 3 is an exemplary illustration of a vehicle profile of one embodiment of the present invention;
FIG. 4 is an exemplary diagram of a vehicle 3D frame of one embodiment of the present invention;
fig. 5 is a block diagram showing the structure of a vehicle detection device according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
In recent years, with the rapid development of image processing technology, 3D vehicle detection based on 2D images has played an increasingly important role in technical fields such as unmanned driving and automatic traffic supervision. In the prior art, 3D vehicle detection mainly relies on manually obtained camera calibration information, which makes the detection process complicated to operate, time-consuming, and costly.
In order to solve the technical problem, embodiments of the present invention provide a vehicle detection method, an apparatus, an electronic device, and a storage medium.
The following first describes a vehicle detection method provided by an embodiment of the present invention.
FIG. 1 is a flow chart of a vehicle detection method performed by an electronic device. As shown in FIG. 1, the method may include the following steps: step 101, step 102, step 103 and step 104, wherein,
in step 101, an image sequence is acquired, wherein the image sequence is acquired by an image acquisition device through image acquisition of a target traffic scene.
In the embodiment of the invention, the image acquisition device can be a part of the electronic equipment, and under the condition, the electronic equipment can directly instruct the image acquisition device to acquire the images of the traffic scene (namely the target traffic scene) to be monitored to obtain the image sequence; alternatively, the image capturing apparatus may also be a part of other electronic devices, in which case the electronic device acquires the image sequence from the other electronic devices, which is not limited in this embodiment of the present invention. In practical application, the image acquisition device can be a monocular camera.
In step 102, determining a first vanishing point, a second vanishing point and a third vanishing point of the target traffic scene in the image according to the image sequence; the first vanishing point is a vanishing point in the vehicle running direction, the second vanishing point is a vanishing point perpendicular to the vehicle running direction, and the third vanishing point is a vanishing point perpendicular to the road surface.
In the embodiment of the invention, in order to obtain the 3D frame of a vehicle in the target traffic scene, the correspondence between the 2D image and the 3D world needs to be obtained, and this correspondence can be established through vanishing point detection. The vanishing point is defined as follows: in a camera view, line segments that are parallel in the real world, such as lane lines, eventually intersect at a point in the image, and this point is called a vanishing point; if line segments that are parallel in the real world do not intersect in the image, the vanishing point is said to be at infinity.
In the embodiment of the invention, the first vanishing point, the second vanishing point and the third vanishing point are used for determining the 3D vehicle frame of the vehicle in the target traffic scene, wherein the first vanishing point, the second vanishing point and the third vanishing point can be determined independently and can also be determined based on the relationship among the first vanishing point, the second vanishing point and the third vanishing point. For ease of understanding, fig. 2 shows an example view of a vanishing point visualization made up of the vanishing points determined in step 102.
When the first vanishing point, the second vanishing point, and the third vanishing point are determined based on the relationship between the first vanishing point, the second vanishing point, and the third vanishing point, in an embodiment of the present invention, the step 102 may specifically include the following steps (not shown in the figure): step 1021, step 1022, and step 1023, wherein,
in step 1021, a first vanishing point of the target traffic scene in the image is determined according to the image sequence.
Optionally, in an embodiment provided by the present invention, the first vanishing point may be determined based on an optical flow of the vehicle, in this case, the step 1021 may specifically include the following steps:
and acquiring a vehicle optical flow of the vehicle in the image sequence under the target traffic scene, and performing Hough transform voting processing on the vehicle optical flow to obtain a first vanishing point.
In the embodiment of the invention, the vehicle optical flow can be obtained by tracking the movement tracks of feature points on the vehicle through the image sequence. When the vehicle moves straight, the vehicle optical flow points toward the first vanishing point; that is, all the optical flows in the video can be voted on by Hough transform to obtain the first vanishing point.
In the embodiment of the present invention, the hough transform voting process may include the following operations: defining a voting space for a hough transform; converting the straight line to be processed in the image space into a coordinate system of a voting space to obtain a corresponding conversion straight line; determining the position where the transformation straight lines in the voting space intersect most; the position is transformed into image space to obtain the corresponding vanishing point.
That is, the Hough transform voting can be roughly described as follows: a voting space is defined, a straight line found in the original image is converted into a straight line under the coordinate system of the voting space, and this transformation is applied to all straight lines used for voting in the coordinate system of the original image. After all straight lines are projected into the voting space, the position where the most straight lines intersect is found, and this position is projected back into the original image coordinate system to obtain the position of the final vanishing point. In practical applications, the voting space may be a polar coordinate system, in which case the Hough transform voting converts straight lines in the original x-y coordinate system into the polar coordinate system.
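By way of illustration only, the following sketch (in Python, assuming OpenCV and NumPy are available) shows one simplified way to combine the optical-flow tracking and the voting described above. For simplicity it votes in a coarse accumulator over the image plane itself rather than in a transformed voting space such as the polar coordinate system, and all function names, parameter values and thresholds are illustrative assumptions rather than part of the present disclosure.

```python
import cv2
import numpy as np

def first_vanishing_point(frames, grid=4):
    """Let vehicle optical-flow vectors vote for the vanishing point in the
    driving direction. Votes are accumulated in a coarse grid over the image
    plane as a simplified stand-in for the transformed voting space."""
    h, w = frames[0].shape[:2]
    acc = np.zeros((h // grid, w // grid), dtype=np.float32)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01, minDistance=7)
        if pts is not None:
            nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
            for p0, p1, ok in zip(pts.reshape(-1, 2), nxt.reshape(-1, 2), status.ravel()):
                if not ok or np.linalg.norm(p1 - p0) < 1.0:
                    continue  # untracked point or too little motion
                d = (p1 - p0) / np.linalg.norm(p1 - p0)
                a = (p0 - 2000 * d) / grid          # extend the flow vector far
                b = (p0 + 2000 * d) / grid          # in both directions
                vote = np.zeros(acc.shape, dtype=np.uint8)
                cv2.line(vote, (int(a[0]), int(a[1])), (int(b[0]), int(b[1])), 1, 1)
                acc += vote                         # each flow line casts one vote per cell
        prev = gray
    vy, vx = np.unravel_index(np.argmax(acc), acc.shape)
    return (vx * grid, vy * grid)                   # first vanishing point (x, y)
```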
Optionally, in another embodiment provided by the present invention, the first vanishing point may be determined based on the lane line, in this case, the step 1021 specifically includes the following steps:
obtaining a lane line of a road surface in an image sequence under a target traffic scene, and performing Hough transform voting processing on the lane line to obtain a first vanishing point.
In the embodiment of the invention, the lane lines on the road surface can be directly voted on by Hough transform to obtain the first vanishing point. The lane lines on the road surface can be obtained by means of a segmentation mask; specifically, a large number of training samples containing lane line images can be obtained, an initial model created based on U-Net is trained to obtain a lane line segmentation model, and the image sequence is processed by the lane line segmentation model to obtain the lane lines in the image sequence.
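As an illustrative sketch only, assuming a binary lane-line mask has already been produced by such a segmentation model (the model itself is not shown), the lane-line segments can be extracted and voted on as follows; the function name and parameter values are assumptions for illustration.

```python
import cv2
import numpy as np

def first_vp_from_lane_mask(lane_mask, grid=4):
    """Vote for the driving-direction vanishing point with lane-line segments.
    `lane_mask` is a binary (H, W) uint8 mask from a lane segmentation model."""
    h, w = lane_mask.shape
    acc = np.zeros((h // grid, w // grid), dtype=np.float32)
    segments = cv2.HoughLinesP(lane_mask, rho=1, theta=np.pi / 180,
                               threshold=50, minLineLength=40, maxLineGap=10)
    if segments is None:
        return None
    for x1, y1, x2, y2 in segments.reshape(-1, 4):
        d = np.array([x2 - x1, y2 - y1], dtype=np.float32)
        n = float(np.linalg.norm(d))
        if n < 1e-3:
            continue
        d /= n
        a = (np.array([x1, y1]) - 2000 * d) / grid   # extend the segment into a full line
        b = (np.array([x1, y1]) + 2000 * d) / grid
        vote = np.zeros(acc.shape, dtype=np.uint8)
        cv2.line(vote, (int(a[0]), int(a[1])), (int(b[0]), int(b[1])), 1, 1)
        acc += vote
    vy, vx = np.unravel_index(np.argmax(acc), acc.shape)
    return (vx * grid, vy * grid)
```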
In step 1022, a second vanishing point of the target traffic scene in the image is determined according to the image sequence and the first vanishing point.
Optionally, in an embodiment provided by the present invention, the step 1022 specifically includes the following steps (not shown in the figure): step 10221, step 10222 and step 10223, wherein in step 10221, the edge of the moving object in the image sequence under the target traffic scene is acquired;
in the embodiment of the present invention, an edge detector (e.g., canny operator) based on a visual algorithm may be used to obtain all edges in the image sequence, and at the same time, a gradient map of each frame of image in the image sequence is subtracted from a previous frame to determine the position of the moving object in the image sequence, so as to obtain the edges of all moving objects in the image sequence.
In step 10222, determining a target edge pixel point pointing to a second vanishing point in the acquired edge according to a spatial position relationship between the acquired edge and the first vanishing point;
in step 10223, a hough transform voting process is performed on the target edge pixel point to obtain the second vanishing point.
In the embodiment of the invention, after the edges of the moving objects in the image sequence are obtained, the angle difference between the edge direction and the direction toward the first vanishing point can be used as a weight to screen out the edges that point toward the second vanishing point (edge pixels whose weight exceeds a preset threshold are selected as the target edge pixels), and each obtained edge is represented by a plurality of pixel points in the image.
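A simplified sketch of this screening step is given below, assuming grayscale frames and a previously estimated first vanishing point. The frame-differencing motion mask, the gradient-based edge direction and the fixed angle threshold are illustrative assumptions rather than the exact weighting used in the embodiment.

```python
import cv2
import numpy as np

def second_vp_candidate_pixels(prev_gray, gray, vp1, angle_thresh_deg=30.0):
    """Collect edge pixels of moving objects whose local edge direction is not
    aligned with the direction toward the first vanishing point; these are the
    target edge pixels that (approximately) point toward the second vanishing point."""
    edges = cv2.Canny(gray, 80, 160)
    moving = cv2.absdiff(gray, prev_gray) > 15        # crude motion mask from frame differencing
    edges[~moving] = 0
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    ys, xs = np.nonzero(edges)
    keep = []
    for x, y in zip(xs, ys):
        edge_dir = np.arctan2(gx[y, x], -gy[y, x])    # edge tangent (perpendicular to gradient)
        to_vp1 = np.arctan2(vp1[1] - y, vp1[0] - x)   # direction from this pixel toward VP1
        diff = abs(np.arctan2(np.sin(edge_dir - to_vp1), np.cos(edge_dir - to_vp1)))
        diff = min(diff, np.pi - diff)                # undirected lines: fold onto [0, pi/2]
        if np.degrees(diff) > angle_thresh_deg:       # angle-difference weight above threshold
            keep.append((int(x), int(y), float(edge_dir)))
    return keep  # later fed to the same Hough-style voting to locate the second vanishing point
```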
In step 1023, a third vanishing point is determined based on the first vanishing point and the second vanishing point.
Optionally, in an embodiment provided by the present invention, the step 1023 may specifically include the following steps:
and calculating the position coordinate of the third vanishing point according to the position coordinate of the first vanishing point, the position coordinate of the second vanishing point and a correlation formula of a vector product, wherein the correlation formula of the vector product is used for calculating a third vector vertical to the two vectors.
In the embodiment of the present invention, the vector cross-product formulas may, for example, take the following form:

\overrightarrow{PX_1} = (X_1 - P,\; f)

\overrightarrow{PX_2} = (X_2 - P,\; f)

X_3 = P + \frac{f}{(\overrightarrow{PX_1} \times \overrightarrow{PX_2})_z}\left((\overrightarrow{PX_1} \times \overrightarrow{PX_2})_x,\; (\overrightarrow{PX_1} \times \overrightarrow{PX_2})_y\right)
wherein X_1 is the position coordinate of the first vanishing point, X_2 is the position coordinate of the second vanishing point, X_3 is the position coordinate of the third vanishing point, f is the focal length of the image acquisition device, P is the coordinate of the optical center point (the position of the lens of the image acquisition device, which by default is generally taken to be at the center of the image), \overrightarrow{PX_1} is the straight line from the optical center point to the first vanishing point, and \overrightarrow{PX_2} is the straight line from the optical center point to the second vanishing point.
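Under the symbol definitions above, one consistent way to carry out this computation is sketched below (Python with NumPy); it assumes the focal length and the optical center point are known, and the function name is illustrative.

```python
import numpy as np

def third_vanishing_point(vp1, vp2, principal_point, focal_length):
    """Lift the first two vanishing points to 3D rays through the optical centre,
    take their cross product to get a third, mutually perpendicular direction,
    and project that direction back onto the image plane."""
    px, py = principal_point
    r1 = np.array([vp1[0] - px, vp1[1] - py, focal_length], dtype=np.float64)  # along PX1
    r2 = np.array([vp2[0] - px, vp2[1] - py, focal_length], dtype=np.float64)  # along PX2
    r3 = np.cross(r1, r2)                    # perpendicular to both rays
    if abs(r3[2]) < 1e-9:
        return None                          # third vanishing point lies at infinity
    scale = focal_length / r3[2]             # back onto the plane z = focal_length
    return (px + scale * r3[0], py + scale * r3[1])
```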
In step 103, vehicle contours of vehicles in the images of the target traffic scene are determined from the sequence of images.
In the embodiment of the present invention, the vehicle contour of a vehicle in the image under the target traffic scene may be obtained by means of a segmentation mask. Specifically, a large number of training samples containing vehicle images may be obtained, an initial model created based on the RCNN is trained to obtain a vehicle contour segmentation model, and the image sequence is processed by the vehicle contour segmentation model to obtain the vehicle contours in the image sequence, as shown in fig. 3, which shows an example of a vehicle contour.
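As an illustrative sketch, assuming such a segmentation model has already produced one binary mask per vehicle (the model itself is not shown), each mask can be converted into a contour polygon as follows; the function name is an assumption for illustration.

```python
import cv2
import numpy as np

def masks_to_contours(instance_masks):
    """Convert per-vehicle binary masks (list of HxW uint8 arrays, values 0/255)
    into contour polygons usable for the tangent-line construction below."""
    contours = []
    for mask in instance_masks:
        found, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not found:
            continue
        largest = max(found, key=cv2.contourArea)   # keep the dominant blob
        contours.append(largest.reshape(-1, 2))     # (N, 2) array of (x, y) points
    return contours
```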
In step 104, a 3D vehicle frame of the vehicle in the target traffic scene is determined according to the first vanishing point, the second vanishing point, the third vanishing point and the vehicle contour.
Optionally, in an embodiment provided by the present invention, the step 104 may specifically include the following steps (not shown in the figure): step 1041, step 1042, and step 1043, wherein,
in step 1041, for each vehicle contour, drawing a tangent line of the vehicle contour using the first vanishing point, the second vanishing point, and the third vanishing point, to obtain 6 vertices for determining the 3D vehicle frame;
in step 1042, according to the 6 vertices and the shape rule of the 3D vehicle frame, another 2 vertices for determining the 3D vehicle frame are obtained;
that is, given the 6 vertices of the cube, the other 2 vertices of the cube are determined according to the rule that these 6 vertices are parallel to the opposite edges of the cube.
In step 1043, the obtained 8 vertices are connected by lines to obtain a 3D vehicle frame.
In one example, as shown in fig. 4, drawing tangent lines to the 2D vehicle contour from the first vanishing point, the second vanishing point and the third vanishing point yields 6 tangent intersection points: point A, point B, point C, point D, point E and point F, i.e. 6 vertices of the 3D frame. Then, according to the obtained 6 vertices and the shape rule of the 3D frame of the vehicle, the other 2 vertices of the 3D frame are obtained: point G and point H. Finally, the 8 vertices, point A through point H, are connected to obtain the 3D frame of the vehicle.
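The geometric primitives involved in this construction can be sketched as follows (Python with NumPy), assuming each vehicle contour is an (N, 2) array of pixel coordinates in the image. The tangent search shown here ignores angle wrap-around, the assignment of intersection points to the labels A through H depends on the camera viewpoint and is not handled, and all helper names are illustrative assumptions rather than part of the present disclosure.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points (cross product of the homogeneous points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection point of two homogeneous lines."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

def tangent_lines_from_vp(contour, vp):
    """The two tangent lines drawn from a vanishing point to a vehicle contour:
    the lines through `vp` that touch the contour at its extreme angular positions."""
    angles = np.arctan2(contour[:, 1] - vp[1], contour[:, 0] - vp[0])
    lo = contour[np.argmin(angles)]
    hi = contour[np.argmax(angles)]
    return line_through(vp, lo), line_through(vp, hi)

# Example of how a missing vertex can be recovered once 6 vertices are known:
# such a vertex lies on the line through one known vertex and one vanishing point
# and, simultaneously, on the line through another known vertex and another
# vanishing point, because opposite edges of the cuboid share a vanishing point, e.g.:
# G = intersect(line_through(A, vp2), line_through(B, vp3))
```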
In the embodiment of the invention, after the 3D vehicle frame of the vehicle in the target traffic scene is obtained, the vehicle in the target traffic scene can be detected according to the 3D vehicle frame. For example, whether the vehicle presses a lane line is judged according to the positional relationship between the 3D vehicle frame and the lane line in the image sequence, so as to detect the vehicle in the target traffic scene; or, the 3D position coordinates of the vehicle are determined according to the 3D vehicle frame, the focal length and the actual size of the vehicle, and any related technique may be used in the specific determination process, which is not described herein again.
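As one possible illustration of the lane-pressing check, assuming the bottom face of the 3D vehicle frame and the lane line are both available as 2D point sets in the image, a simple segment-intersection test can be used; the function names are assumptions for illustration.

```python
def _segments_intersect(p1, p2, q1, q2):
    """2D segment intersection test via orientation signs (collinear touching ignored)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def presses_lane_line(bottom_face, lane_polyline):
    """bottom_face: 4 image points of the 3D frame's bottom face, in order;
    lane_polyline: list of image points sampled along one lane line."""
    edges = [(bottom_face[i], bottom_face[(i + 1) % 4]) for i in range(4)]
    lane_segs = list(zip(lane_polyline[:-1], lane_polyline[1:]))
    return any(_segments_intersect(a, b, c, d)
               for a, b in edges for c, d in lane_segs)
```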
As can be seen from the foregoing embodiments, in this embodiment, for a traffic scene that needs to be monitored, 2D images of the traffic scene may be collected; the vanishing point of the traffic scene in the vehicle driving direction in the image, the vanishing point perpendicular to the vehicle driving direction, the vanishing point perpendicular to the road surface, and the vehicle contour of a vehicle in the traffic scene in the image are obtained according to the collected images, and the 3D vehicle frame of the vehicle in the traffic scene is determined according to the determined vanishing points and the vehicle contour, so that the vehicle in the traffic scene is detected according to the determined 3D vehicle frame. Therefore, in the embodiment of the invention, vehicle detection based on 2D images requires no manual participation, the operation process can be simplified, and time consumption and cost are reduced.
Fig. 5 is a block diagram of a vehicle detection device according to an embodiment of the present invention, and as shown in fig. 5, the vehicle detection device 500 may include: an acquisition module 501, a first determination module 502, a second determination module 503, and a third determination module 504, wherein,
an obtaining module 501, configured to obtain an image sequence, where the image sequence is obtained by an image acquisition device performing image acquisition on a target traffic scene;
a first determining module 502, configured to determine, according to the image sequence, a first vanishing point, a second vanishing point, and a third vanishing point of the target traffic scene in the image, where the first vanishing point is a vanishing point in a vehicle driving direction, the second vanishing point is a vanishing point perpendicular to the vehicle driving direction, and the third vanishing point is a vanishing point perpendicular to a road surface;
a second determining module 503, configured to determine a vehicle contour of the vehicle in the image according to the image sequence;
a third determining module 504, configured to determine a 3D vehicle frame of the vehicle in the target traffic scene according to the first vanishing point, the second vanishing point, the third vanishing point, and the vehicle contour.
As can be seen from the foregoing embodiments, in this embodiment, for a traffic scene that needs to be monitored, 2D images of the traffic scene may be collected; the vanishing point of the traffic scene in the vehicle driving direction in the image, the vanishing point perpendicular to the vehicle driving direction, the vanishing point perpendicular to the road surface, and the vehicle contour of a vehicle in the traffic scene in the image are obtained according to the collected images, and the 3D vehicle frame of the vehicle in the traffic scene is determined according to the determined vanishing points and the vehicle contour, so that the vehicle in the traffic scene is detected according to the determined 3D vehicle frame. Therefore, in the embodiment of the invention, vehicle detection based on 2D images requires no manual participation, the operation process can be simplified, and time consumption and cost are reduced.
Optionally, as an embodiment, the first determining module 502 may include:
the first determining submodule is used for determining a first vanishing point of the target traffic scene in an image according to the image sequence;
the second determining submodule is used for determining a second vanishing point of the target traffic scene in the image according to the image sequence and the first vanishing point;
and the third determining submodule is used for determining a third vanishing point according to the first vanishing point and the second vanishing point.
Optionally, as an embodiment, the first determining sub-module may include:
the first processing unit is used for acquiring a vehicle optical flow of a vehicle in the target traffic scene in the image sequence, and performing Hough transform voting processing on the vehicle optical flow to obtain the first vanishing point; or,
the second processing unit is used for acquiring a lane line of the road surface under the target traffic scene in the image sequence, and performing Hough transform voting processing on the lane line to obtain the first vanishing point.
Optionally, as an embodiment, the second determining sub-module may include:
the edge acquisition unit is used for acquiring the edge of the moving object in the target traffic scene in the image sequence;
a target edge pixel point determining unit, configured to determine a target edge pixel point pointing to a second vanishing point in the edge according to a spatial position relationship between the edge and the first vanishing point;
and the third processing unit is used for carrying out Hough transform voting processing on the target edge pixel point to obtain the second vanishing point.
Optionally, as an embodiment, the third determining sub-module may include:
and the calculating unit is used for calculating the position coordinate of the third vanishing point according to the position coordinate of the first vanishing point, the position coordinate of the second vanishing point and a vector cross-product formula, wherein the cross-product formula is used to calculate a third vector perpendicular to two given vectors.
Optionally, as an embodiment, the third determining module 504 may include:
a first processing submodule, configured to draw, for each vehicle contour, a tangent line of the vehicle contour using the first vanishing point, the second vanishing point, and the third vanishing point, to obtain 6 vertices for determining a 3D vehicle frame;
the second processing submodule is used for obtaining another 2 vertexes for determining the 3D vehicle frame according to the 6 vertexes and the shape rule of the 3D vehicle frame;
and the third processing submodule is used for performing line connection on the obtained 8 vertexes to obtain the 3D vehicle frame.
Optionally, as an embodiment, the hough transform voting process includes the following operations:
defining a voting space for a hough transform;
converting a straight line to be processed in an image space into a coordinate system of the voting space to obtain a corresponding conversion straight line;
determining the position where the transformation straight lines intersect most under the voting space;
and converting the position into the image space to obtain a corresponding vanishing point.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
According to still another embodiment of the present invention, there is also provided an electronic apparatus including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps in the vehicle detection method according to any one of the embodiments described above.
According to still another embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps in the vehicle detection method according to any one of the above-mentioned embodiments.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The vehicle detection method, the vehicle detection device, the electronic device and the storage medium provided by the invention are described in detail, specific examples are applied in the description to explain the principle and the implementation of the invention, and the description of the embodiments is only used to help understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A vehicle detection method is applied to an electronic device, and is characterized by comprising the following steps:
acquiring an image sequence, wherein the image sequence is obtained by carrying out image acquisition on a target traffic scene by an image acquisition device;
determining a first vanishing point, a second vanishing point and a third vanishing point of the target traffic scene in the image according to the image sequence, wherein the first vanishing point is the vanishing point in the vehicle driving direction, the second vanishing point is the vanishing point perpendicular to the vehicle driving direction, and the third vanishing point is the vanishing point perpendicular to the road surface;
determining the vehicle contour of the vehicle in the image under the target traffic scene according to the image sequence;
and determining a 3D vehicle frame of the vehicle under the target traffic scene according to the first vanishing point, the second vanishing point, the third vanishing point and the vehicle outline.
2. The method of claim 1, wherein determining the first vanishing point, the second vanishing point, and the third vanishing point of the target traffic scene in the image according to the sequence of images comprises:
determining a first vanishing point of the target traffic scene in an image according to the image sequence;
determining a second vanishing point of the target traffic scene in the image according to the image sequence and the first vanishing point;
and determining a third vanishing point according to the first vanishing point and the second vanishing point.
3. The method of claim 2, wherein determining the first vanishing point of the target traffic scene in the image from the sequence of images comprises:
acquiring a vehicle optical flow of a vehicle in the image sequence under the target traffic scene, and performing Hough transform voting processing on the vehicle optical flow to obtain the first vanishing point; or,
acquiring a lane line of the road surface in the image sequence under the target traffic scene, and performing Hough transform voting processing on the lane line to obtain the first vanishing point.
4. The method of claim 2, wherein determining a second vanishing point of the target traffic scene in the image from the sequence of images and the first vanishing point comprises:
acquiring the edge of a moving object in the target traffic scene in the image sequence;
determining a target edge pixel point pointing to a second vanishing point in the edge according to the spatial position relationship between the edge and the first vanishing point;
and carrying out Hough transform voting processing on the target edge pixel points to obtain the second vanishing point.
5. The method of any of claims 2 to 4, wherein determining a third vanishing point based on the first vanishing point and the second vanishing point comprises:
and calculating the position coordinate of the third vanishing point according to the position coordinate of the first vanishing point, the position coordinate of the second vanishing point and a vector cross-product formula, wherein the cross-product formula is used to calculate a third vector perpendicular to two given vectors.
6. The method of claim 1, wherein determining the 3D vehicle frame of the vehicle in the target traffic scene from the first vanishing point, the second vanishing point, the third vanishing point, and the vehicle contour comprises:
for each vehicle contour, drawing a tangent line of the vehicle contour by using the first vanishing point, the second vanishing point and the third vanishing point to obtain 6 vertexes for determining a 3D vehicle frame;
obtaining another 2 vertexes for determining the 3D vehicle frame according to the 6 vertexes and the shape rule of the 3D vehicle frame;
and performing line connection on the obtained 8 vertexes to obtain the 3D vehicle frame.
7. The method of claim 3 or 4, wherein the Hough transform voting process comprises the following operations:
defining a voting space for a hough transform;
converting a straight line to be processed in an image space into a coordinate system of the voting space to obtain a corresponding conversion straight line;
determining the position where the transformation straight lines intersect most under the voting space;
and converting the position into the image space to obtain a corresponding vanishing point.
8. A vehicle detection device applied to electronic equipment is characterized by comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring an image sequence, and the image sequence is obtained by image acquisition of a target traffic scene by an image acquisition device;
the first determining module is used for determining a first vanishing point, a second vanishing point and a third vanishing point of the target traffic scene in the image according to the image sequence, wherein the first vanishing point is the vanishing point in the vehicle driving direction, the second vanishing point is the vanishing point perpendicular to the vehicle driving direction, and the third vanishing point is the vanishing point perpendicular to the road surface;
the second determining module is used for determining the vehicle outline of the vehicle in the image under the target traffic scene according to the image sequence;
and the third determining module is used for determining a 3D vehicle frame of the vehicle in the target traffic scene according to the first vanishing point, the second vanishing point, the third vanishing point and the vehicle outline.
9. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps in the vehicle detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps in the vehicle detection method according to any one of claims 1 to 7.
CN201911047352.1A 2019-10-30 2019-10-30 Vehicle detection method and device, electronic equipment and storage medium Pending CN110909620A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911047352.1A CN110909620A (en) 2019-10-30 2019-10-30 Vehicle detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911047352.1A CN110909620A (en) 2019-10-30 2019-10-30 Vehicle detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110909620A true CN110909620A (en) 2020-03-24

Family

ID=69815025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911047352.1A Pending CN110909620A (en) 2019-10-30 2019-10-30 Vehicle detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110909620A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950662A (en) * 2021-03-24 2021-06-11 电子科技大学 Traffic scene space structure extraction method
CN113033456A (en) * 2021-04-08 2021-06-25 阿波罗智联(北京)科技有限公司 Method and device for determining grounding point of vehicle wheel, road side equipment and cloud control platform
CN113034484A (en) * 2021-04-08 2021-06-25 阿波罗智联(北京)科技有限公司 Method and device for determining boundary points of bottom surface of vehicle, road side equipment and cloud control platform

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510590A (en) * 2017-02-24 2018-09-07 北京图森未来科技有限公司 A kind of method and device generating three-dimensional boundaries frame
CN108645625A (en) * 2018-03-21 2018-10-12 北京纵目安驰智能科技有限公司 3D vehicle checking methods, system, terminal and the storage medium that tail end is combined with side
CN109446917A (en) * 2018-09-30 2019-03-08 长安大学 A kind of vanishing Point Detection Method method based on cascade Hough transform
CN110307791A (en) * 2019-06-13 2019-10-08 东南大学 Vehicle length and speed calculation method based on three-dimensional vehicle bounding box

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510590A (en) * 2017-02-24 2018-09-07 北京图森未来科技有限公司 A kind of method and device generating three-dimensional boundaries frame
CN108645625A (en) * 2018-03-21 2018-10-12 北京纵目安驰智能科技有限公司 3D vehicle checking methods, system, terminal and the storage medium that tail end is combined with side
CN109446917A (en) * 2018-09-30 2019-03-08 长安大学 A kind of vanishing Point Detection Method method based on cascade Hough transform
CN110307791A (en) * 2019-06-13 2019-10-08 东南大学 Vehicle length and speed calculation method based on three-dimensional vehicle bounding box

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JAKUB SOCHOR et al.: "BoxCars: Improving Fine-Grained Recognition of Vehicles Using 3-D Bounding Boxes in Traffic Surveillance", IEEE Transactions on Intelligent Transportation Systems *
MARKÉTA DUBSKÁ et al.: "Automatic Camera Calibration for Traffic Understanding", Proceedings British Machine Vision Conference 2014 *
MARKÉTA DUBSKÁ et al.: "Real Projective Plane Mapping for Detection of Orthogonal Vanishing Points", British Machine Vision Conference *
奔跑的汉堡包: "Paper reading series (2): detecting vanishing points in photos using the diamond space (Real Projective Plane Mapping for Detection of Orthogonal Vanishing Points)", CSDN *
李婵 et al.: "Automatic calibration of expressway PTZ cameras", Journal of Image and Graphics *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950662A (en) * 2021-03-24 2021-06-11 电子科技大学 Traffic scene space structure extraction method
CN112950662B (en) * 2021-03-24 2022-04-01 电子科技大学 Traffic scene space structure extraction method
CN113033456A (en) * 2021-04-08 2021-06-25 阿波罗智联(北京)科技有限公司 Method and device for determining grounding point of vehicle wheel, road side equipment and cloud control platform
CN113034484A (en) * 2021-04-08 2021-06-25 阿波罗智联(北京)科技有限公司 Method and device for determining boundary points of bottom surface of vehicle, road side equipment and cloud control platform
CN113033456B (en) * 2021-04-08 2023-12-19 阿波罗智联(北京)科技有限公司 Method and device for determining grounding point of vehicle wheel, road side equipment and cloud control platform

Similar Documents

Publication Publication Date Title
Shin et al. Vision-based navigation of an unmanned surface vehicle with object detection and tracking abilities
JP6230751B1 (en) Object detection apparatus and object detection method
CN112444242B (en) Pose optimization method and device
EP2874097A2 (en) Automatic scene parsing
CN108648194B (en) Three-dimensional target identification segmentation and pose measurement method and device based on CAD model
US9524557B2 (en) Vehicle detecting method and system
CN110909620A (en) Vehicle detection method and device, electronic equipment and storage medium
CN111209770A (en) Lane line identification method and device
CN111340922A (en) Positioning and mapping method and electronic equipment
KR20190030474A (en) Method and apparatus of calculating depth map based on reliability
CN111738032B (en) Vehicle driving information determination method and device and vehicle-mounted terminal
CN111738033B (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
CN112950725A (en) Monitoring camera parameter calibration method and device
Jung et al. Object detection and tracking-based camera calibration for normalized human height estimation
CN112257668A (en) Main and auxiliary road judging method and device, electronic equipment and storage medium
Guo et al. Visibility detection approach to road scene foggy images
US20200191577A1 (en) Method and system for road image reconstruction and vehicle positioning
CN110738696B (en) Driving blind area perspective video generation method and driving blind area view perspective system
EP4250245A1 (en) System and method for determining a viewpoint of a traffic camera
Kiran et al. Automatic hump detection and 3D view generation from a single road image
CN113450457B (en) Road reconstruction method, apparatus, computer device and storage medium
CN111260538A (en) Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera
van de Wouw et al. Hierarchical 2.5-d scene alignment for change detection with large viewpoint differences
CN113011212B (en) Image recognition method and device and vehicle
CN112868049B (en) Efficient self-motion estimation using patch-based projection correlation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination