CN115082901B - Vehicle import detection method, device and equipment based on algorithm fusion - Google Patents
Vehicle merging detection method, device and equipment based on algorithm fusion
- Publication number
- Publication number: CN115082901B (application number CN202210856052.3A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- algorithm
- image
- pixel point
- lane line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The application relates to a vehicle merging detection method, device and equipment based on algorithm fusion. The method comprises: acquiring an image of a vehicle; applying a target detection algorithm to the image to obtain a first algorithm image, the first algorithm image including a contour box around the vehicle; applying a semantic segmentation algorithm to the image to obtain a second algorithm image, the second algorithm image including a vehicle feature region of the vehicle and a lane line feature region; adding the contour box to the vehicle feature region to obtain a single vehicle region; and comparing the single vehicle region with the lane line feature region, and judging that a vehicle is merging if the two regions overlap. By acquiring the vehicle image, combining the two algorithms to obtain the contour box, the vehicle feature region and the lane line feature region, marking the vehicle feature region with the contour box to obtain a single vehicle region, and comparing that region with the lane line feature region, the method, device and equipment judge whether a vehicle may merge into the host lane, improving detection accuracy.
Description
Technical Field
The invention relates to the technical field of intelligent driving, and in particular to a vehicle merging detection method, device and equipment based on algorithm fusion.
Background
In the field of intelligent driving, network-based navigation is mainly used to generate road-condition assessments and driving-safety handling schemes for a vehicle while it is driving. For example, when a vehicle in an adjacent lane merges into the lane to which the host vehicle belongs (the host lane), existing intelligent driving technology generally performs inference with a target detection algorithm and/or a semantic segmentation algorithm. The former frames the target vehicle with a rectangular box and judges from the edges of the box whether the target vehicle may enter the host lane; the latter performs pixel-level segmentation and prediction of the target vehicle.
However, both detection algorithms have shortcomings. The rectangular box produced by the target detection algorithm may enclose non-vehicle areas, so a non-vehicle area can be wrongly judged as merging into the host lane, affecting the road-condition judgment. The semantic segmentation algorithm, which recognizes targets at the pixel level, can handle irregular shapes but cannot accurately separate overlapping vehicles ahead. An algorithm fusion method is therefore needed to improve the detection accuracy of vehicle merging.
Disclosure of Invention
The invention provides a vehicle merging detection method, device and equipment based on algorithm fusion, aiming to improve the detection accuracy of vehicle merging by combining a target detection algorithm and a semantic segmentation algorithm.
In a first aspect, an embodiment of the present invention provides a vehicle merging detection method based on algorithm fusion, including:
acquiring an image of a vehicle in an adjacent lane ahead;
applying a target detection algorithm to the image to obtain a first algorithm image, the first algorithm image including a contour box around the vehicle;
applying a semantic segmentation algorithm to the image to obtain a second algorithm image, the second algorithm image including a vehicle feature region of the vehicle and a lane line feature region of the lane line shared by the host lane and the adjacent lane;
adding the contour box to the vehicle feature region to obtain a single vehicle region within the vehicle feature region;
and comparing the single vehicle region with the lane line feature region, and judging that a vehicle is merging if the two regions overlap.
Optionally, acquiring an image of the vehicle specifically includes:
acquiring video of the vehicle in real time;
and splitting the video into frames to obtain the image.
Optionally, adding the contour box to the vehicle feature region to obtain a single vehicle region within the vehicle feature region specifically includes:
establishing a world coordinate system with a vertex of the first algorithm image as a first origin, the horizontal direction as the x axis and the vertical direction as the y axis;
acquiring the coordinate values corresponding to the contour box in the world coordinate system;
copying the world coordinate system to the second algorithm image with the corresponding vertex in the second algorithm image as a second origin;
adding the contour box at the coordinate values in the second algorithm image;
and obtaining the single vehicle region according to the contour box.
Optionally, comparing the single vehicle region with the lane line feature region and judging that a vehicle is merging if the two regions overlap specifically includes:
in the world coordinate system, selecting first pixel points on the edge of the single vehicle region to obtain the vehicle contour of the vehicle; the first pixel points are pixel points of the vehicle feature region in the second algorithm image;
selecting a specific pixel point of the vehicle contour, the specific pixel point having the vertical coordinate with the maximum absolute value in the vehicle contour;
and comparing the specific pixel point with second pixel points, the second pixel points being pixel points of the lane line feature region in the second algorithm image, and judging that a vehicle is merging if the specific pixel point coincides with a second pixel point.
Optionally, comparing the specific pixel point with the second pixel points, the second pixel points being pixel points of the lane line feature region in the second algorithm image, and judging that a vehicle is merging if they coincide, specifically includes:
acquiring a lane line parameter equation from the coordinate values of the second pixel points in the world coordinate system, the lane line parameter equation describing the set of second pixel points in the world coordinate system;
if the coordinate values of the specific pixel point make the lane line parameter equation less than or equal to 0, the specific pixel point coincides with the lane line feature region and it is judged that a vehicle is merging;
otherwise, it is judged that no vehicle is merging.
In a second aspect, the present invention further provides a vehicle merging detection device based on algorithm fusion, which applies the vehicle merging detection method provided in the first aspect and includes:
an image acquisition module, configured to acquire an image of a vehicle in an adjacent lane ahead;
a first algorithm image acquisition module, configured to apply a target detection algorithm to the image to obtain a first algorithm image, the first algorithm image including a contour box around the vehicle;
a second algorithm image acquisition module, configured to apply a semantic segmentation algorithm to the image to obtain a second algorithm image, the second algorithm image including a vehicle feature region of the vehicle and a lane line feature region of the lane line shared by the host lane and the adjacent lane;
a single vehicle region acquisition module, configured to add the contour box to the vehicle feature region to obtain a single vehicle region within the vehicle feature region;
and a vehicle merging judgment module, configured to compare the single vehicle region with the lane line feature region and judge that a vehicle is merging if the two regions overlap.
Optionally, the image acquisition module is configured to perform the following operations:
acquiring video of the vehicle in real time;
and splitting the video into frames to obtain the image.
Optionally, the single vehicle region acquisition module is configured to perform the following operations:
establishing a world coordinate system with a vertex of the first algorithm image as a first origin, the horizontal direction as the x axis and the vertical direction as the y axis;
acquiring the coordinate values corresponding to the contour box in the world coordinate system;
copying the world coordinate system to the second algorithm image with the corresponding vertex in the second algorithm image as a second origin;
adding the contour box at the coordinate values in the second algorithm image;
and obtaining the single vehicle region according to the contour box.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the vehicle merging detection method based on algorithm fusion provided by any embodiment of the invention.
In a fourth aspect, embodiments of the present invention provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the vehicle merging detection method based on algorithm fusion provided by any embodiment of the present invention.
The invention provides a vehicle merging detection method, device and equipment based on algorithm fusion. An image of a vehicle is acquired; a contour box, a vehicle feature region and a lane line feature region are obtained by combining a target detection algorithm and a semantic segmentation algorithm; the vehicle feature region is divided by the contour box to obtain a single vehicle region; and the single vehicle region is compared with the lane line feature region to judge whether the vehicle may merge into the host lane, improving the detection accuracy of vehicle merging.
Drawings
Fig. 1 is a flowchart of a vehicle merging detection method based on algorithm fusion according to the first embodiment of the present invention;
fig. 2 is a flowchart of image acquisition in the vehicle merging detection method based on algorithm fusion according to the first embodiment of the present invention;
fig. 3 is a flowchart of dividing a single vehicle region in the vehicle merging detection method based on algorithm fusion according to the second embodiment of the present invention;
fig. 4 is a flowchart of comparing the single vehicle region with the lane line feature region in the vehicle merging detection method based on algorithm fusion according to the second embodiment of the present invention;
fig. 5 is a flowchart of comparing the specific pixel point with the second pixel points in the vehicle merging detection method based on algorithm fusion according to the second embodiment of the present invention;
fig. 6 is an effect diagram of the vehicle merging detection method based on algorithm fusion according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a vehicle merging detection device based on algorithm fusion according to the third embodiment of the present invention;
fig. 8 is a schematic structural diagram of a vehicle merging detection device based on algorithm fusion according to the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
In view of the above disadvantages, the present invention provides a vehicle merging detection method based on algorithm fusion, as shown in fig. 1 and 2, comprising:
S10: acquiring an image of a vehicle in an adjacent lane ahead; wherein step S10 specifically includes:
S11: acquiring video of the vehicle in real time; the video may be acquired, for example, with a vehicle-mounted camera;
S12: splitting the video into frames to obtain images. It should be added that, since the subsequent target detection and semantic segmentation steps adopt a dual-network structure, the video of the vehicle is preferably acquired with a vehicle-mounted front-view camera. Frame division splits the video into units of one frame, each yielding an image.
Each frame image is stored and copied to obtain multiple image files per frame (for example, two), and the image files of the same frame are fed to the subsequent target detection algorithm and semantic segmentation algorithm respectively.
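The frame-division and per-frame duplication just described can be sketched as follows. This is an illustrative sketch only: it treats a decoded video clip as a NumPy array of stacked frames rather than using a specific camera or decoder API, and the function name is hypothetical.

```python
import numpy as np

def split_and_duplicate(video):
    """Split a decoded video clip of shape (num_frames, H, W, 3) into
    single-frame images, and copy each frame once so the same image can
    be fed to both the target detection network and the semantic
    segmentation network (the dual-network structure of S20/S30)."""
    pairs = []
    for i in range(video.shape[0]):
        frame = video[i]
        # one independent copy per downstream network
        pairs.append((frame.copy(), frame.copy()))
    return pairs

# synthetic 5-frame "video" of 4x6 RGB images
video = np.zeros((5, 4, 6, 3), dtype=np.uint8)
pairs = split_and_duplicate(video)
```

In a real system the array would come from a front-view camera stream decoded frame by frame; the copying guarantees the two networks never mutate each other's input.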
S20: applying a target detection algorithm to the image to obtain a first algorithm image, the first algorithm image including a contour box around the vehicle. The contour box frames the vehicles in the first algorithm image, but the framing operation easily encloses non-vehicle areas, which can lead to wrongly judging that a vehicle is merging into the host lane.
S30: applying a semantic segmentation algorithm to the image to obtain a second algorithm image, the second algorithm image including a vehicle feature region of the vehicle and a lane line feature region of the lane line shared by the host lane and the adjacent lane.
The vehicle feature region is the region of the image occupied by vehicles; similarly, the lane line feature region is the region of the image occupied by the lane line.
The vehicle feature region may be an image region formed by several overlapping vehicles. For example, in an adjacent lane, front and rear vehicles may visually overlap because of the shooting angle, so semantic segmentation easily produces a single vehicle feature region in which the front and rear vehicles are connected.
Semantic segmentation alone cannot separate the areas of different vehicles within such a vehicle feature region; and even a single vehicle's feature region cannot be isolated when it overlaps the lane line feature region.
Therefore, in S40 the contour box is added to the vehicle feature region to obtain a single vehicle region within it: where the contour box encloses all or part of the vehicle feature region, the enclosed portion is a single vehicle region.
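A minimal sketch of step S40 — intersecting the segmentation mask with a detection box to isolate one vehicle — might look like the following. The boolean-mask representation and the (x0, y0, x1, y1) box convention are illustrative assumptions, not the patent's prescribed data layout.

```python
import numpy as np

def single_vehicle_region(vehicle_mask, box):
    """Keep only the vehicle-feature pixels that fall inside the contour
    box; the enclosed portion is the single vehicle region (S40)."""
    x0, y0, x1, y1 = box
    region = np.zeros_like(vehicle_mask)
    region[y0:y1, x0:x1] = vehicle_mask[y0:y1, x0:x1]
    return region

# two visually connected vehicles form one merged feature region
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 0:8] = True            # merged blob spanning columns 0-7
box = (0, 0, 4, 10)              # detection box around the left vehicle
region = single_vehicle_region(mask, box)
```

Here the merged blob that segmentation alone cannot split is reduced to the left vehicle only, because the detection box supplies the boundary the mask lacks.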
The contour box may be added to the vehicle feature region in either of two ways:
(1) Establish a world coordinate system on the first algorithm image, establish the same world coordinate system on the second algorithm image (the two images have the same pixel values and size information when no image processing is applied), and copy the coordinate values of the contour box from the first algorithm image to the corresponding coordinate values in the second algorithm image.
(2) Establish coordinate systems on the first and second algorithm images separately, and copy the contour box from one coordinate system to the other according to the same principle that the two images share identical pixel values and size information, which is not described again here.
S50: comparing the single vehicle region with the lane line feature region, and judging that a vehicle is merging if the two regions overlap. As shown in fig. 6, the comparison may be a direct visual check or a calculation of the positional relationship between the contour information of the single vehicle region and the lane line.
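Putting steps S40 and S50 together, the decision might be sketched as below. The mask/box representation and the `lane_eq` closure (returning kx + b − y for a straight lane line, as developed in the second embodiment) are assumptions for illustration, not the patent's definitive implementation.

```python
import numpy as np

def detect_merging(vehicle_mask, box, lane_eq):
    """Crop the vehicle feature mask by the contour box (S40), pick the
    contour pixel with the largest |y| (near the rear tire), and test it
    against the lane line equation: merging when the value is <= 0."""
    x0, y0, x1, y1 = box
    region = np.zeros_like(vehicle_mask)
    region[y0:y1, x0:x1] = vehicle_mask[y0:y1, x0:x1]
    ys, xs = np.nonzero(region)
    if len(ys) == 0:
        return False                   # box holds no vehicle pixels
    i = int(np.argmax(np.abs(ys)))     # the "specific pixel point"
    return lane_eq(int(xs[i]), int(ys[i])) <= 0

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 1:5] = True                  # one vehicle in the adjacent lane
box = (0, 0, 10, 10)
```

The same function works whether the lane line is straight or curved, as long as `lane_eq` evaluates the chosen parameter equation at a pixel.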
The invention provides a vehicle merging detection method based on algorithm fusion: an image of a vehicle is acquired; a contour box, a vehicle feature region and a lane line feature region are obtained by combining a target detection algorithm and a semantic segmentation algorithm; the vehicle feature region is divided by the contour box to obtain a single vehicle region; and the single vehicle region is compared with the lane line feature region to judge whether the vehicle may merge into the host lane, improving the detection accuracy of vehicle merging.
Example two
As further shown in fig. 3, this embodiment refines the above technical solution. Adding the contour box to the vehicle feature region to obtain a single vehicle region within it specifically includes:
S41: establishing a world coordinate system with a vertex of the first algorithm image as the first origin, the horizontal direction as the x axis and the vertical direction as the y axis. Preferably, the left vertex of the first algorithm image is selected as the first origin; the first algorithm image comprises sequentially arranged pixel points, each corresponding to a coordinate value in the world coordinate system.
S42: acquiring the coordinate values corresponding to the contour box in the world coordinate system. In an alternative embodiment, the coordinate values of the vertices of the contour box are acquired, each obtainable from the coordinate value of the pixel point at that vertex.
S43: copying the world coordinate system to the second algorithm image with the corresponding vertex of the second algorithm image as the second origin, again taking the horizontal direction as the x axis and the vertical direction as the y axis. Correspondingly, the left vertex of the second algorithm image is selected as the second origin.
S44: adding the contour box at the coordinate values in the second algorithm image. One way is to add the 4 vertices of the contour box to the second algorithm image and then connect them in sequence.
For example, a vertex of the contour box whose coordinate value in the first coordinate system is (144, 250) is added by marking the vertex with the same coordinate value in the second coordinate system; the remaining vertices are handled analogously, completing the addition of the contour box.
S45: obtaining the single vehicle region according to the contour box. The single vehicle region corresponds to the single vehicle enclosed by the contour box in the first algorithm image.
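Steps S42-S45 can be illustrated with a small sketch that marks the four copied vertices in the second algorithm image and connects them into the box outline. The boolean-canvas representation and the clockwise vertex ordering are assumptions made for this example.

```python
import numpy as np

def add_contour_box(image_shape, vertices):
    """Draw the contour box in the second algorithm image (S44) from
    four vertices whose coordinate values were copied unchanged from the
    first algorithm image, ordered clockwise from the top-left."""
    canvas = np.zeros(image_shape, dtype=bool)
    (x0, y0), (x1, y1) = vertices[0], vertices[2]   # opposite corners
    canvas[y0, x0:x1 + 1] = True    # top edge
    canvas[y1, x0:x1 + 1] = True    # bottom edge
    canvas[y0:y1 + 1, x0] = True    # left edge
    canvas[y0:y1 + 1, x1] = True    # right edge
    return canvas

# a box with vertices (2, 3), (7, 3), (7, 8), (2, 8)
outline = add_contour_box((10, 10), [(2, 3), (7, 3), (7, 8), (2, 8)])
```

Because both images share the same pixel grid, no coordinate transformation is needed: the vertex values are used verbatim in the second image's coordinate system.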
Further, in a preferred embodiment, as shown in figs. 4 and 5, step S50 (comparing the single vehicle region with the lane line feature region and judging that a vehicle is merging if they overlap) includes the following steps:
S51: in the world coordinate system (established with the vertex of the first algorithm image as the first origin, the horizontal direction as the x axis and the vertical direction as the y axis, and copied onto the second algorithm image), selecting first pixel points on the edge of the single vehicle region to obtain the vehicle contour; the first pixel points are pixel points of the vehicle feature region in the second algorithm image. In the second algorithm image the vehicle feature region consists of first pixel points, and the ring of first pixel points along the edge of the single vehicle region is the vehicle contour.
S52: selecting a specific pixel point of the vehicle contour, which may be the point whose vertical coordinate has the maximum absolute value in the vehicle contour. Visually, the specific pixel point falls near the vehicle's rear tire (left or right).
S53: comparing the specific pixel point with the second pixel points, the second pixel points being pixel points of the lane line feature region in the second algorithm image, and judging that a vehicle is merging if the specific pixel point coincides with a second pixel point. When the coordinate value of the specific pixel point coincides with that of any second pixel point, the vehicle is judged to be pressing the lane line and may be merging.
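Steps S51-S53 can be sketched as follows; representing the vehicle contour as an array of (x, y) points and the lane line as a set of pixels is an assumption made for illustration.

```python
import numpy as np

def specific_pixel(contour_points):
    """S52: the contour pixel whose vertical coordinate has the largest
    absolute value (visually near the vehicle's rear tire)."""
    i = int(np.argmax(np.abs(contour_points[:, 1])))
    return tuple(int(v) for v in contour_points[i])

def judges_merging(pixel, lane_pixels):
    """S53: a vehicle is judged to be merging when the specific pixel
    coincides with any second (lane line) pixel point."""
    return pixel in lane_pixels

contour = np.array([[3, 2], [4, 5], [5, 3]])
p = specific_pixel(contour)
```

Storing the second pixel points in a set makes the coincidence test a constant-time lookup, although the parameter-equation approach below avoids enumerating them at all.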
Since the second pixel points are numerous, the comparison of the specific pixel point with the lane line feature region can be simplified by establishing a parameter equation for the second pixel points. Specifically, as shown in fig. 5, the method includes:
S531: acquiring a lane line parameter equation from the coordinate values of the second pixel points in the world coordinate system; the lane line parameter equation describes the set of second pixel points in the world coordinate system.
the acquisition mode of the lane line parameter equation can be realized by selecting any two second pixel points (such as intersection points with an x axis and a y axis) of the lane line characteristic region to respectively obtain coordinate values of the two second pixel points; calculating to obtain a calculation formula of the characteristic area of the lane line: kx + b-y =0. Under actual road conditions, the lane line characteristic region can be a straight line or an arc line, and a quadratic equation or a cubic equation calculation formula is required to be adopted for the calculation formula of the arc lane line characteristic region according to conditions, such as a quadratic curve ax 2 + bx + c-y =0 to describe the line type of the curved lane line characteristic region.
S532: if the coordinate values of the specific pixel point make the lane line parameter equation less than or equal to 0, the specific pixel point coincides with the lane line feature region and it is judged that a vehicle is merging.
S533: otherwise, if the lane line parameter equation evaluates to greater than 0, it is judged that no vehicle is merging.
Taking a straight lane line feature region as an example, suppose the lane line parameter equation calculated from the coordinate values of the second pixel points is 2x + 4 − y = 0. When the coordinate value of the specific pixel point is (1, 1), substituting it gives a value greater than 0, so it is judged that no vehicle is merging; when the coordinate value is (−2, 1), substituting it gives a value less than 0, so it is judged that a vehicle is merging.
On the basis of the first embodiment, this embodiment copies the world coordinate system of the first algorithm image onto the second algorithm image via origins with identical coordinate values, obtains the vehicle contour (and the specific pixel point), establishes the lane line parameter equation of the lane line feature region, and compares the positional relationship between the specific pixel point and the second pixel points represented by the equation, thereby judging whether the vehicle may merge into the host lane and improving the detection accuracy of vehicle merging.
EXAMPLE III
An embodiment of the present invention provides a vehicle merging detection device based on algorithm fusion, which applies the vehicle merging detection method provided in the foregoing embodiments, as shown in fig. 7, and includes:
an image acquisition module 01, configured to acquire an image of a vehicle in an adjacent lane ahead;
a first algorithm image acquisition module 02, configured to apply a target detection algorithm to the image to obtain a first algorithm image including a contour box around the vehicle;
a second algorithm image acquisition module 03, configured to apply a semantic segmentation algorithm to the image to obtain a second algorithm image including a vehicle feature region of the vehicle and a lane line feature region of the lane line;
a single vehicle region acquisition module 04, configured to add the contour box to the vehicle feature region and divide out a single vehicle region;
and a vehicle merging judgment module 05, configured to compare the single vehicle region with the lane line feature region and judge that a vehicle is merging if the two regions overlap.
Further, the image acquisition module 01 is configured to perform the following operations:
acquiring video of the vehicle in real time;
and splitting the video into frames to obtain the image.
Optionally, the single vehicle region acquisition module 04 is further configured to perform the following operations:
establishing a world coordinate system by taking the vertex of the first algorithm image as a first origin, taking the horizontal direction as an x axis and taking the vertical direction as a y axis;
obtaining a coordinate value corresponding to the contour frame under a world coordinate system;
copying a world coordinate system to the second algorithm image by taking the corresponding vertex in the second algorithm image as a second origin;
adding a contour frame at the coordinate value in the second algorithm image;
according to the outline box, a single vehicle area is obtained.
Optionally, the vehicle merging judgment module 05 is further configured to perform the following operations:
in the world coordinate system, selecting first pixel points on the edge of the single vehicle region to obtain the vehicle contour; the first pixel points are pixel points of the vehicle feature region in the second algorithm image;
selecting a specific pixel point of the vehicle contour, the specific pixel point having the vertical coordinate with the maximum absolute value in the vehicle contour;
and comparing the specific pixel point with the second pixel points (pixel points of the lane line feature region in the second algorithm image), and judging that a vehicle is merging if they coincide. In a preferred embodiment, the comparison is performed as follows:
acquiring a lane line parameter equation from the coordinate values of the second pixel points in the world coordinate system, the equation describing the set of second pixel points;
if the coordinate values of the specific pixel point make the lane line parameter equation less than or equal to 0, the specific pixel point coincides with the lane line feature region and it is judged that a vehicle is merging;
otherwise, it is judged that no vehicle is merging.
The vehicle merging detection device based on algorithm fusion provided by this embodiment of the present invention adopts the same technical means as the vehicle merging detection method based on algorithm fusion and achieves the same technical effects, which are not repeated herein.
Example four
Fig. 8 is a schematic structural diagram of a vehicle merging detection device based on algorithm fusion according to a fourth embodiment of the present invention. As shown in Fig. 8, the device includes a processor 410, a memory 420, an input device 430 and an output device 440. The number of processors 410 in the device may be one or more; one processor 410 is taken as an example in Fig. 8. The processor 410, the memory 420, the input device 430 and the output device 440 may be connected by a bus or by other means; connection by a bus is taken as an example in Fig. 8.
The memory 420, as a computer-readable storage medium, may be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the vehicle merging detection method based on algorithm fusion in the embodiments of the present invention (for example, the image acquisition module 01, the first algorithm image acquisition module 02, the second algorithm image acquisition module 03, the single vehicle region acquisition module 04 and the vehicle merging determination module 05). The processor 410 executes the various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 420, that is, implements the vehicle merging detection method based on algorithm fusion.
The memory 420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 420 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory 420 may further include memory located remotely from the processor 410, which may be connected to the vehicle merging detection device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The input device 430 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the vehicle merging detection device. The output device 440 may include a display device such as a display screen.
Example five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a vehicle merging detection method based on algorithm fusion, the method including:
acquiring an image of a vehicle in a front adjacent lane;
applying a target detection algorithm to the image to obtain a first algorithm image, the first algorithm image including a contour box about the vehicle;
applying a semantic segmentation algorithm to the image to obtain a second algorithm image, the second algorithm image including a vehicle characteristic region of the vehicle and a lane line characteristic region of the lane line shared by the present lane and the adjacent lane;
adding the contour box to the vehicle characteristic region to obtain a single vehicle region in the vehicle characteristic region;
and comparing the single vehicle region with the lane line characteristic region, and if the two coincide, judging that vehicle merging exists.
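The method steps above can be sketched end-to-end as follows, assuming the two algorithms' outputs are already available as boolean class masks and contour boxes; overlap of the boxed vehicle pixels with the lane-line pixels is taken directly as the coincidence test:

```python
import numpy as np

def detect_merge(vehicle_mask: np.ndarray, lane_mask: np.ndarray, boxes) -> bool:
    """Fused per-frame decision: detection boxes + segmentation masks.

    `vehicle_mask` and `lane_mask` are assumed boolean outputs of the
    semantic segmentation (vehicle class, shared-lane-line class);
    `boxes` are (x1, y1, x2, y2) contour boxes from the target detector.
    A vehicle is judged to be merging when its boxed vehicle pixels
    overlap the shared lane-line pixels.
    """
    for x1, y1, x2, y2 in boxes:
        vehicle = vehicle_mask[y1:y2, x1:x2]   # single vehicle region
        lane = lane_mask[y1:y2, x1:x2]         # lane line inside the box
        if np.any(vehicle & lane):             # coincidence => merging
            return True
    return False
```

The actual models (detector and segmenter) are not specified here; any pair producing boxes and per-class masks for the same frame fits this fusion scheme.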
Of course, the storage medium containing computer-executable instructions provided by the embodiments of the present invention is not limited to the above method operations, and may also perform related operations in the vehicle merging detection method based on algorithm fusion provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented by software plus the necessary general-purpose hardware, and certainly may also be implemented by hardware alone, although the former is the better implementation in most cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and which includes several instructions for enabling a computer device (which may be a personal computer, a server or a network device) to execute the methods described in the embodiments of the present invention.
It should be noted that, in the above embodiment of the vehicle merging detection device based on algorithm fusion, the included units and modules are divided only according to functional logic, but the division is not limited thereto, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for ease of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
Although the invention has been described in detail hereinabove by way of general description, specific embodiments and experiments, it will be apparent to those skilled in the art that many modifications and improvements can be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.
Claims (9)
1. A vehicle merging detection method based on algorithm fusion, characterized by comprising the following steps:
acquiring images of vehicles in front adjacent lanes;
applying a target detection algorithm to the image to obtain a first algorithm image, the first algorithm image including a contour box about the vehicle;
applying a semantic segmentation algorithm to the image to obtain a second algorithm image, wherein the second algorithm image comprises a vehicle characteristic region of the vehicle and a lane line characteristic region of a shared lane line of the lane and the adjacent lane;
adding the outline frame to the vehicle characteristic region to obtain a single vehicle region in the vehicle characteristic region;
comparing the single vehicle area with the lane line characteristic area, and if the two coincide, judging that vehicle merging exists; wherein the comparing specifically comprises:
selecting first pixel points on the edge of the single vehicle area under a world coordinate system to obtain a vehicle contour of the vehicle, the first pixel points being pixel points of the vehicle characteristic region in the second algorithm image;
selecting a specific pixel point of the vehicle contour, the specific pixel point being the pixel point whose vertical coordinate has the largest absolute value in the vehicle contour;
and comparing the specific pixel point with second pixel points, the second pixel points being pixel points of the lane line characteristic region in the second algorithm image, and if the specific pixel point coincides with a second pixel point, judging that vehicle merging exists.
2. The vehicle merging detection method based on algorithm fusion according to claim 1, wherein the acquiring an image of a vehicle in a front adjacent lane specifically comprises:
acquiring video about the vehicle in real time;
and splitting the video into frames to obtain the image.
3. The vehicle merging detection method based on algorithm fusion according to claim 1, wherein the adding the contour frame to the vehicle characteristic region to obtain a single vehicle region in the vehicle characteristic region specifically comprises:
establishing a world coordinate system by taking the vertex of the first algorithm image as a first origin, the horizontal direction as an x axis and the vertical direction as a y axis;
obtaining coordinate values corresponding to the outline frame under the world coordinate system;
copying the world coordinate system to the second algorithm image by taking the corresponding vertex in the second algorithm image as a second origin;
adding the outline frame to the coordinate value in the second algorithm image;
and obtaining the single vehicle area according to the outline frame.
4. The method according to claim 1, wherein the comparing the specific pixel point with second pixel points, the second pixel points being pixel points of the lane line characteristic region in the second algorithm image, and judging that vehicle merging exists if the specific pixel point coincides with a second pixel point, specifically comprises:
acquiring a lane line parameter equation from the coordinate values of the second pixel points under the world coordinate system, the lane line parameter equation describing the set of second pixel points under the world coordinate system;
if the coordinate values of the specific pixel point make the lane line parameter equation less than or equal to 0, the specific pixel point coincides with the lane line characteristic region, and it is judged that vehicle merging exists;
otherwise, it is judged that no vehicle merging exists.
5. A vehicle merging detection device based on algorithm fusion, characterized by comprising:
an image acquisition module, configured to acquire an image of a vehicle in a front adjacent lane;
a first algorithm image acquisition module for applying a target detection algorithm to the image to obtain a first algorithm image, the first algorithm image including a contour box about the vehicle;
the second algorithm image acquisition module is used for applying a semantic segmentation algorithm to the image to obtain a second algorithm image, and the second algorithm image comprises a vehicle characteristic region of the vehicle and a lane line characteristic region of a shared lane line of the lane and the adjacent lane;
the single vehicle area obtaining module is used for adding the outline frame to the vehicle characteristic area to obtain a single vehicle area in the vehicle characteristic area;
a vehicle merging determination module, configured to compare the single vehicle area with the lane line characteristic area, and if the two coincide, judge that vehicle merging exists; select first pixel points on the edge of the single vehicle area under a world coordinate system to obtain a vehicle contour of the vehicle, the first pixel points being pixel points of the vehicle characteristic region in the second algorithm image;
select a specific pixel point of the vehicle contour, the specific pixel point being the pixel point whose vertical coordinate has the largest absolute value in the vehicle contour;
and compare the specific pixel point with second pixel points, the second pixel points being pixel points of the lane line characteristic region in the second algorithm image, and if the specific pixel point coincides with a second pixel point, judge that vehicle merging exists.
6. The vehicle merging detection device according to claim 5, wherein the image acquisition module is configured to perform the following operations:
acquiring video about the vehicle in real time;
and splitting the video into frames to obtain the image.
7. The vehicle merging detection device according to claim 5, wherein the single vehicle region acquisition module is configured to:
establishing a world coordinate system by taking the vertex of the first algorithm image as a first origin, the horizontal direction as an x axis and the vertical direction as a y axis;
obtaining coordinate values corresponding to the outline frame under the world coordinate system;
copying the world coordinate system to the second algorithm image by taking the corresponding vertex in the second algorithm image as a second origin;
adding the outline frame to the coordinate value in the second algorithm image;
and obtaining the single vehicle area according to the outline frame.
8. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the vehicle merging detection method based on algorithm fusion according to any one of claims 1-4.
9. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the vehicle merging detection method based on algorithm fusion according to any one of claims 1-4.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210856052.3A CN115082901B (en) | 2022-07-21 | 2022-07-21 | Vehicle import detection method, device and equipment based on algorithm fusion |
PCT/CN2023/104337 WO2024017003A1 (en) | 2022-07-21 | 2023-06-30 | Vehicle merging detection method and apparatus based on combined algorithms, and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115082901A CN115082901A (en) | 2022-09-20 |
CN115082901B true CN115082901B (en) | 2023-01-17 |
Family
ID=83258776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210856052.3A Active CN115082901B (en) | 2022-07-21 | 2022-07-21 | Vehicle import detection method, device and equipment based on algorithm fusion |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115082901B (en) |
WO (1) | WO2024017003A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115082901B (en) * | 2022-07-21 | 2023-01-17 | 天津所托瑞安汽车科技有限公司 | Vehicle import detection method, device and equipment based on algorithm fusion |
CN115830881A (en) * | 2023-02-20 | 2023-03-21 | 常州海图信息科技股份有限公司 | Parking detection method and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018059585A1 (en) * | 2016-09-30 | 2018-04-05 | 比亚迪股份有限公司 | Vehicle identification method and device, and vehicle |
CN208238805U (en) * | 2018-03-30 | 2018-12-14 | 郑州宇通客车股份有限公司 | A kind of automatic Pilot car context aware systems and automatic Pilot car |
CN109002797A (en) * | 2018-07-16 | 2018-12-14 | 腾讯科技(深圳)有限公司 | Vehicle lane change detection method, device, storage medium and computer equipment |
CN113232650A (en) * | 2021-05-31 | 2021-08-10 | 吉林大学 | Vehicle collision avoidance control system and method for converging vehicles with front sides |
CN113997934A (en) * | 2021-12-09 | 2022-02-01 | 中国第一汽车股份有限公司 | Lane changing method, lane changing device, computer equipment and storage medium |
CN114120254A (en) * | 2021-10-29 | 2022-03-01 | 上海高德威智能交通系统有限公司 | Road information identification method, device and storage medium |
CN114299468A (en) * | 2021-12-29 | 2022-04-08 | 苏州智加科技有限公司 | Method, device, terminal, storage medium and product for detecting convergence of lane |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6902899B2 (en) * | 2017-03-30 | 2021-07-14 | 株式会社日立情報通信エンジニアリング | Lane line recognition device and lane line recognition program |
CN110458050B (en) * | 2019-07-25 | 2023-06-06 | 清华大学苏州汽车研究院(吴江) | Vehicle cut-in detection method and device based on vehicle-mounted video |
CN110781768A (en) * | 2019-09-30 | 2020-02-11 | 奇点汽车研发中心有限公司 | Target object detection method and device, electronic device and medium |
CN111401186B (en) * | 2020-03-10 | 2024-05-28 | 北京精英智通科技股份有限公司 | Vehicle line pressing detection system and method |
US11989951B2 (en) * | 2020-04-30 | 2024-05-21 | Boe Technology Group Co., Ltd. | Parking detection method, system, processing device and storage medium |
CN115082901B (en) * | 2022-07-21 | 2023-01-17 | 天津所托瑞安汽车科技有限公司 | Vehicle import detection method, device and equipment based on algorithm fusion |
- 2022-07-21: CN application CN202210856052.3A, granted as CN115082901B (active)
- 2023-06-30: WO application PCT/CN2023/104337, published as WO2024017003A1
Also Published As
Publication number | Publication date |
---|---|
CN115082901A (en) | 2022-09-20 |
WO2024017003A1 (en) | 2024-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115082901B (en) | Vehicle import detection method, device and equipment based on algorithm fusion | |
US10922843B2 (en) | Calibration method and calibration device of vehicle-mounted camera, vehicle and storage medium | |
US9113049B2 (en) | Apparatus and method of setting parking position based on AV image | |
CN110634153A (en) | Target tracking template updating method and device, computer equipment and storage medium | |
JP4416039B2 (en) | Striped pattern detection system, striped pattern detection method, and striped pattern detection program | |
JPH10208056A (en) | Line detection method | |
CN111144330B (en) | Deep learning-based lane line detection method, device and equipment | |
US20130016915A1 (en) | Crosswalk detection device, crosswalk detection method and recording medium | |
JP5728080B2 (en) | Wrinkle detection device, wrinkle detection method and program | |
CN112528807B (en) | Method and device for predicting running track, electronic equipment and storage medium | |
EP3631675B1 (en) | Advanced driver assistance system and method | |
JPH11195127A (en) | Method for recognizing white line and device therefor | |
CN113283347B (en) | Assembly job guidance method, device, system, server and readable storage medium | |
US11458892B2 (en) | Image generation device and image generation method for generating a composite image | |
CN113297939B (en) | Obstacle detection method, obstacle detection system, terminal device and storage medium | |
CN112232175B (en) | Method and device for identifying state of operation object | |
CN113112846A (en) | Parking navigation method, device, equipment and storage medium | |
CN113954836A (en) | Segmented navigation lane changing method and system, computer equipment and storage medium | |
CN115123291B (en) | Behavior prediction method and device based on obstacle recognition | |
JP5773334B2 (en) | Optical flow processing apparatus and display radius map generation apparatus | |
CN113657277A (en) | System and method for judging shielded state of vehicle | |
CN110163829B (en) | Image generation method, device and computer readable storage medium | |
EP3896387A1 (en) | Image processing device | |
TW201619577A (en) | Image monitoring system and method | |
CN116612194B (en) | Position relation determining method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||