Tail end and side surface combined 3D vehicle detection method, system, terminal and storage medium
Technical Field
The invention relates to the technical field of automotive electronics, and in particular to a method, a system, a terminal and a storage medium for 3D vehicle detection that combines the tail-end face of the vehicle with its side surface.
Background
ADAS, the advanced driver assistance system, is also called an active safety system. It mainly comprises electronic stability control (ESC), adaptive cruise control (ACC), lane departure warning (LDW), lane keeping assist (LKA), forward collision warning (FCW), door open warning (DOW), automatic emergency braking (AEB), traffic sign recognition (TSR), blind spot detection (BSD), night vision (NV), the automatic parking system (APS), and the like.
At present, most image-based vehicle detection marks vehicles with planar (2D) bounding boxes. A planar box, however, cannot convey a vehicle's orientation or its extent in depth, which limits the accuracy of obstacle positioning.
Disclosure of Invention
In order to solve the above and other potential technical problems, the present invention provides a method, a system, a terminal and a storage medium for 3D vehicle detection combining the tail-end face with the side surface. A three-dimensional vehicle bounding box is obtained from the captured image, which provides a basis for accurate three-dimensional positioning of obstacles and supplies the basic conditions for predicting information such as vehicle head and tail coordinates, component visibility, and the vehicle's own size.
A 3D vehicle detection method combining the tail end and the side surface comprises the following steps:
s01: acquiring a captured image, detecting the end face at the tail end of the vehicle, and predicting the vertical line at the visible top corner of the vehicle head;
s02: judging, with a side classifier, whether the region enclosed by the vertical line at the vehicle head and the tail-end face is the side face of the vehicle;
s03: if the region is judged to be the vehicle side, predicting wheel candidate positions; if it is judged not to be the vehicle side, returning to step S02;
s04: judging, with a wheel classifier, whether wheels exist at the wheel candidate positions obtained in step S03;
s05: if a wheel is found, determining the vehicle body inclination angle from the wheels and the vertical line at the vehicle head; if no wheel is found, returning to step S03;
s06: determining the bottom edge of the vehicle side from the vehicle body inclination angle and the tail-end face vertex nearest the wheel;
s07: completing the three-dimensional frame with the bottom edge of the vehicle side, the tail-end face and the vertical line at the vehicle head as references, so as to obtain the three-dimensional vehicle bounding box.
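The control flow of steps S01 to S07 can be sketched as a skeleton in which every detector and classifier is a pluggable stand-in. All callable names here are hypothetical placeholders for illustration, not the patent's actual implementations:

```python
def detect_3d_vehicle(image, detect_tail_face, predict_head_lines,
                      side_classifier, predict_wheel_candidates,
                      wheel_classifier, build_box):
    """Skeleton of steps S01-S07; every callable is a pluggable stand-in."""
    tail_face = detect_tail_face(image)             # S01: tail-end face
    for head_line in predict_head_lines(image):     # S01: candidate head verticals
        region = (tail_face, head_line)
        if not side_classifier(image, region):      # S02: is this the vehicle side?
            continue                                # try the next candidate line
        for cand in predict_wheel_candidates(image, region):   # S03
            if wheel_classifier(image, cand):       # S04: real wheel?
                # S05-S07: inclination angle, bottom edge, 3D box completion
                return build_box(tail_face, head_line, cand)
    return None  # no side/wheel evidence found
```

With stub callables plugged in, the loop returns as soon as a side region with a confirmed wheel is found.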
Further, the specific steps of predicting the vertical line at the visible top corner of the vehicle head in step S01 are as follows:
Texture boundary lines between the vehicle and the road surface are obtained from the texture information of the vehicle and of the road surface at the vehicle-head position, and these boundary lines are preliminarily judged to be the vertical lines at the visible top corner of the vehicle head; there may be one or more such vertical lines.
Further, the specific steps of predicting the vertical line at the visible top corner of the vehicle head in step S01 are as follows:
s011: acquiring a captured image and detecting the end face at the tail end of the vehicle;
s012: extracting texture information from the detected tail-end face, and computing and analyzing it to find the vertical lines that belong to the vehicle-height direction, thereby obtaining the direction of the vehicle-height vertical lines in the captured image;
s013: obtaining texture boundary lines between the vehicle and the road surface from the texture information of the vehicle and of the road surface at the vehicle-head position, and screening the boundary lines against the vehicle-height direction determined from the tail-end-face texture in step S012, so as to keep the vertical lines at the visible top corner of the vehicle head; there may be one or more such lines.
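The screening in step S013 amounts to keeping only those boundary-line candidates whose orientation agrees with the vehicle-height direction estimated from the tail face. A minimal sketch, in which the function names and the angular tolerance are assumptions for illustration:

```python
import math

def line_angle(p1, p2):
    """Orientation of segment p1->p2 in degrees, mapped into [0, 180)."""
    ang = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    return ang % 180.0

def screen_head_verticals(candidates, tail_vertical_angle, tol_deg=10.0):
    """Keep candidate boundary segments whose orientation lies within
    tol_deg of the vehicle-height direction learned from the tail face."""
    kept = []
    for p1, p2 in candidates:
        diff = abs(line_angle(p1, p2) - tail_vertical_angle)
        diff = min(diff, 180.0 - diff)   # orientations wrap around at 180 degrees
        if diff <= tol_deg:
            kept.append((p1, p2))
    return kept
```

A near-vertical boundary segment survives the screen while a road-surface horizontal is rejected.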
Further, the side classifier in step S02 is generated by learning classification rules from known training data of vehicle side images of a given class; the captured image is classified by the side classifier to find matching regions, which are preliminarily identified as the vehicle side.
Further, the specific steps of predicting the wheel candidate positions in step S03 are as follows:
A sliding window is moved over the vehicle-side region according to a rule; each time the window reaches a position, the feature information within the window is computed. The feature information is then screened by a trained wheel-candidate classifier to judge whether the current window is a wheel candidate position: if it is, the position is kept; if not, the window slides on to the next position. When the window has traversed the whole vehicle-side region, all wheel candidate positions have been collected.
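The sliding-window traversal above can be sketched in a few lines. The window size, stride and the `classify` callable are stand-ins for the trained wheel-candidate classifier described in the text:

```python
def collect_wheel_candidates(region_w, region_h, win, stride, classify):
    """Slide a win x win window over a region of size region_w x region_h.
    classify(x, y, win) stands in for the trained candidate classifier and
    returns True when the window at (x, y) may contain a wheel."""
    candidates = []
    for y in range(0, region_h - win + 1, stride):
        for x in range(0, region_w - win + 1, stride):
            if classify(x, y, win):        # keep this candidate position
                candidates.append((x, y))
            # otherwise simply slide on to the next position
    return candidates
```

After the loops finish, `candidates` holds every retained wheel candidate position, matching the "collect all, then verify" structure of steps S03 and S04.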
Further, when the wheel classifier in step S04 classifies the wheel candidate positions, it uses at least one of the following: wheel body features extracted from the wheel body, wheel periphery features extracted from the area around the wheel, and association features between the wheel body and its periphery.
Further, in step S05, the specific method for determining the vehicle body inclination angle from the wheels and the vertical line at the vehicle head is as follows:
The tail-end face of the vehicle and the vertical line at the vehicle head serve as the vertical references of the vehicle side. For the horizontal reference, the first tangent point between the front wheel and the road surface and the second tangent point between the rear wheel and the road surface, both determined in step S04, are connected to form a horizontal connecting line, and its two ends are extended to the tail-end face and to the vertical line at the vehicle head, respectively. The included angle between this horizontal connecting line and the vertical line at the vehicle head is taken as the vehicle body inclination angle.
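The angle computation of step S05 reduces to measuring the angle between the wheel ground-contact line and the vertical head line. A minimal sketch, assuming image coordinates and an exactly vertical head line; the function name and coordinate convention are illustrative assumptions:

```python
import math

def body_inclination(front_tangent, rear_tangent):
    """Included angle (degrees) between the horizontal connecting line
    through the two wheel/road tangent points and a vertical head line,
    following step S05. Points are (x, y) pairs."""
    dx = front_tangent[0] - rear_tangent[0]
    dy = front_tangent[1] - rear_tangent[1]
    # angle between direction (dx, dy) and the vertical direction (0, 1)
    dot = abs(dy)                       # |(dx, dy) . (0, 1)|
    norm = math.hypot(dx, dy)
    return math.degrees(math.acos(dot / norm))
```

A level contact line yields 90 degrees (perpendicular to the vertical reference), and a contact line parallel to the head line yields 0.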
Further, in step S06, the specific method for determining the bottom edge of the vehicle side from the vehicle body inclination angle and the tail-end face vertex nearest the wheel is as follows:
Taking the tail-end face vertex nearest the wheel as the origin and the extending direction of the horizontal connecting line as the direction, a horizontal line is drawn along the vehicle side; it is extended until it intersects the vertical line at the vehicle head, and the resulting segment forms the bottom edge of the vehicle side.
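Geometrically, step S06 extends a ray from the tail-face vertex along the contact-line direction until it meets the head line. A sketch under the simplifying assumption that the head line is vertical at a known image x-coordinate (names are illustrative):

```python
def side_bottom_edge(origin, direction, head_line_x):
    """Extend a ray from the tail-face vertex `origin` along `direction`
    (the wheel contact-line direction) until it meets the vertical head
    line x = head_line_x; return the bottom edge as (origin, endpoint)."""
    ox, oy = origin
    dx, dy = direction
    if dx == 0:
        raise ValueError("direction is parallel to the head line")
    t = (head_line_x - ox) / dx        # ray parameter at the intersection
    return (origin, (head_line_x, oy + t * dy))
```

The returned segment is the side bottom edge used as a reference in step S07.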
Further, when the three-dimensional frame is completed in step S07 from the bottom edge of the vehicle side, the tail-end face and the vertical line at the vehicle head, it is completed according to the parallelogram rule.
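Applying the parallelogram rule here means translating the four tail-face vertices by the side bottom-edge vector to obtain the remaining projected box corners. A minimal sketch with illustrative names, treating all points as 2D image coordinates:

```python
def complete_box(tail_face, bottom_edge):
    """Complete the 3D frame by the parallelogram rule: translate the four
    tail-face vertices by the side bottom-edge vector to get the four
    head-end vertices. tail_face: list of 4 (x, y); bottom_edge: (p, q)."""
    (px, py), (qx, qy) = bottom_edge
    vx, vy = qx - px, qy - py           # side edge as a translation vector
    head_face = [(x + vx, y + vy) for x, y in tail_face]
    return tail_face + head_face        # eight projected box vertices
```

The output is the eight-vertex projected bounding box that the method calls the three-dimensional vehicle bounding box.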
A 3D vehicle detection system combining the tail end and the side surface comprises a tail-end detection module, a head-end detection module, a side detection module and a three-dimensional completion module;
the tail-end detection module detects the position of the vehicle in the captured image and, from that position, detects the end face at the tail end of the vehicle;
the head-end detection module predicts the position of the vertical line at the visible top corner of the vehicle head in the captured image;
the side detection module comprises a wheel candidate position prediction module, a wheel identification module, a side bottom-edge generation module and a side completion module. The wheel candidate position prediction module screens feature information with a trained wheel-candidate classifier and judges whether the current sliding-window region is a wheel candidate position. The wheel identification module identifies whether a candidate is a real wheel from the wheel body features, wheel periphery features and association features. The side bottom-edge generation module forms a horizontal connecting line from the wheel positions, extends it to the vertical line at the vehicle head, takes the included angle between them as the vehicle body inclination angle, and then, taking the one of the four tail-end-face vertices nearest the wheel as the origin and the extending direction of the horizontal connecting line as the direction, generates the bottom edge of the vehicle side. The side completion module completes the side face of the vehicle;
and the three-dimensional completion module completes the three-dimensional frame of the vehicle with the bottom edge of the vehicle side, the tail-end face and the vertical line at the vehicle head as references.
A 3D vehicle detection terminal comprises a processor and a memory, wherein the memory stores program instructions and the processor executes the program instructions to implement the steps of the method described above.
A computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the steps of the method described above.
As described above, the present invention has the following advantageous effects:
first, a three-dimensional vehicle enclosure is obtained from the captured image, providing a basis for accurate three-dimensional positioning of obstacles.
Secondly, a three-dimensional vehicle enclosure frame is obtained from the captured image, and basic conditions are provided for predicting information such as vehicle head and tail coordinates, component visibility, and vehicle size.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 shows a flow chart of the present invention.
Fig. 2 shows the vehicle side region between the tail-end face and the vertical line at the vehicle head according to the invention.
FIG. 3 is a diagram illustrating a 3D frame detected by the present invention.
FIG. 4 is a diagram of a 3D frame detected by the present invention in another embodiment.
FIG. 5 is a diagram of a 3D frame detected by the present invention in another embodiment.
FIG. 6 is a diagram of a 3D frame detected by the present invention in another embodiment.
FIG. 7 is a diagram of a 3D frame detected by the present invention in another embodiment.
FIG. 8 is a diagram of a 3D frame detected by the present invention in another embodiment.
Fig. 9 is a schematic view of the tail-end face and the vehicle side detected by the present invention.
Fig. 10 is a schematic view of the vehicle body inclination angle detected by the present invention.
In the figures: 1 - tail-end face of the vehicle; 2 - vehicle side.
Detailed Description
The embodiments of the present invention are described below with reference to specific examples; those skilled in the art will readily understand other advantages and effects of the invention from the disclosure of this specification. The invention may also be implemented or applied through other, different embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit and scope of the invention. It should be noted that the features of the following embodiments and examples may be combined with each other as long as they do not conflict.
It should be understood that the structures, ratios and sizes shown in the drawings are used only to match the content disclosed in the specification, so that those skilled in the art can understand and read it; they are not intended to limit the conditions under which the invention can be implemented and therefore carry no technical significance on their own. Any structural modification, change of ratio or adjustment of size that does not affect the efficacy or purpose of the invention still falls within its scope. In addition, the terms "upper", "lower", "left", "right", "middle" and "one" used in this specification are for clarity of description only and are not intended to limit the implementable scope; changes or adjustments of their relative relationships, without substantial technical change, are also considered within the implementable scope of the invention.
Referring to figs. 1 to 10, a 3D vehicle detection method combining the tail end and the side surface includes the following steps:
s01: acquiring a captured image, detecting the end face at the tail end of the vehicle, and predicting the vertical line at the visible top corner of the vehicle head;
s02: judging, with a side classifier, whether the region enclosed by the vertical line at the vehicle head and the tail-end face is the side face of the vehicle;
s03: if the region is judged to be the vehicle side, predicting wheel candidate positions; if it is judged not to be the vehicle side, returning to step S02;
s04: judging, with a wheel classifier, whether wheels exist at the wheel candidate positions obtained in step S03;
s05: if a wheel is found, determining the vehicle body inclination angle from the wheels and the vertical line at the vehicle head; if no wheel is found, returning to step S03;
s06: determining the bottom edge of the vehicle side from the vehicle body inclination angle and the tail-end face vertex nearest the wheel;
s07: completing the three-dimensional frame with the bottom edge of the vehicle side, the tail-end face and the vertical line at the vehicle head as references, so as to obtain the three-dimensional vehicle bounding box.
As a preferred embodiment, the specific steps of predicting the vertical line at the visible top corner of the vehicle head in step S01 are as follows:
Texture boundary lines between the vehicle and the road surface are obtained from the texture information of the vehicle and of the road surface at the vehicle-head position, and these boundary lines are preliminarily judged to be the vertical lines at the visible top corner of the vehicle head; there may be one or more such vertical lines.
As a preferred embodiment, the specific steps of predicting the vertical line at the visible top corner of the vehicle head in step S01 are as follows:
s011: acquiring a captured image and detecting the end face at the tail end of the vehicle;
s012: extracting texture information from the detected tail-end face, and computing and analyzing it to find the vertical lines that belong to the vehicle-height direction, thereby obtaining the direction of the vehicle-height vertical lines in the captured image;
s013: obtaining texture boundary lines between the vehicle and the road surface from the texture information of the vehicle and of the road surface at the vehicle-head position, and screening the boundary lines against the vehicle-height direction determined from the tail-end-face texture in step S012, so as to keep the vertical lines at the visible top corner of the vehicle head; there may be one or more such lines.
As a preferred embodiment, the side classifier in step S02 is generated by learning classification rules from known training data of vehicle side images of a given class; the captured image is classified by the side classifier to find matching regions, which are preliminarily identified as the vehicle side.
As a preferred embodiment, the specific steps of predicting the wheel candidate positions in step S03 are as follows:
A sliding window is moved over the vehicle-side region according to a rule; each time the window reaches a position, the feature information within the window is computed. The feature information is then screened by a trained wheel-candidate classifier to judge whether the current window is a wheel candidate position: if it is, the position is kept; if not, the window slides on to the next position. When the window has traversed the whole vehicle-side region, all wheel candidate positions have been collected.
In a preferred embodiment, when the wheel classifier in step S04 classifies the wheel candidate positions, it uses at least one of the following: wheel body features extracted from the wheel body, wheel periphery features extracted from the area around the wheel, and association features between the wheel body and its periphery.
As a preferred embodiment, the specific method in step S05 for determining the vehicle body inclination angle from the wheels and the vertical line at the vehicle head is as follows:
The tail-end face of the vehicle and the vertical line at the vehicle head serve as the vertical references of the vehicle side. For the horizontal reference, the first tangent point between the front wheel and the road surface and the second tangent point between the rear wheel and the road surface, both determined in step S04, are connected to form a horizontal connecting line, and its two ends are extended to the tail-end face and to the vertical line at the vehicle head, respectively. The included angle between this horizontal connecting line and the vertical line at the vehicle head is taken as the vehicle body inclination angle.
As a preferred embodiment, the specific method in step S06 for determining the bottom edge of the vehicle side from the vehicle body inclination angle and the tail-end face vertex nearest the wheel is as follows:
Taking the tail-end face vertex nearest the wheel as the origin and the extending direction of the horizontal connecting line as the direction, a horizontal line is drawn along the vehicle side; it is extended until it intersects the vertical line at the vehicle head, and the resulting segment forms the bottom edge of the vehicle side.
In a preferred embodiment, when the three-dimensional frame is completed in step S07 from the bottom edge of the vehicle side, the tail-end face and the vertical line at the vehicle head, it is completed according to the parallelogram rule.
A 3D vehicle detection system combining the tail end and the side surface comprises a tail-end detection module, a head-end detection module, a side detection module and a three-dimensional completion module;
the tail-end detection module detects the position of the vehicle in the captured image and, from that position, detects the end face at the tail end of the vehicle;
the head-end detection module predicts the position of the vertical line at the visible top corner of the vehicle head in the captured image;
the side detection module comprises a wheel candidate position prediction module, a wheel identification module, a side bottom-edge generation module and a side completion module. The wheel candidate position prediction module screens feature information with a trained wheel-candidate classifier and judges whether the current sliding-window region is a wheel candidate position. The wheel identification module identifies whether a candidate is a real wheel from the wheel body features, wheel periphery features and association features. The side bottom-edge generation module forms a horizontal connecting line from the wheel positions, extends it to the vertical line at the vehicle head, takes the included angle between them as the vehicle body inclination angle, and then, taking the one of the four tail-end-face vertices nearest the wheel as the origin and the extending direction of the horizontal connecting line as the direction, generates the bottom edge of the vehicle side. The side completion module completes the side face of the vehicle;
and the three-dimensional completion module completes the three-dimensional frame of the vehicle with the bottom edge of the vehicle side, the tail-end face and the vertical line at the vehicle head as references.
A 3D vehicle detection terminal comprises a processor and a memory, wherein the memory stores program instructions and the processor executes the program instructions to implement the steps of the method described above.
A computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the steps of the method described above.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes accomplished by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall still be covered by the appended claims of the present invention.