CN116152753A - Vehicle information identification method and system, storage medium and electronic device


Info

Publication number
CN116152753A
Authority
CN
China
Prior art keywords
vehicle
target
frame
frames
axle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211711893.1A
Other languages
Chinese (zh)
Inventor
蔡鄂
舒瑞康
杨勇刚
石楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Wanji Photoelectric Technology Co Ltd
Original Assignee
Beijing Wanji Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wanji Technology Co Ltd
Priority to CN202211711893.1A
Publication of CN116152753A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Abstract

The application discloses a vehicle information identification method and system, a storage medium and an electronic device, wherein the method comprises the following steps: when a laser sensor is triggered, selecting, from video frames acquired by a target camera, a reference video frame whose acquisition time matches the trigger time of the laser sensor, wherein the laser section of the laser sensor is located within the field of view of the target camera; when the vehicle head of a target vehicle is identified from the reference video frame, performing vehicle ending detection on the video frames acquired by the target camera to obtain an ending detection result of the target vehicle; when the ending detection result indicates that the target vehicle has ended, determining a group of vehicle frames of the target vehicle from the video frames acquired by the target camera; and identifying, from a group of preset vehicle types, a target vehicle type matching the target vehicle according to the group of vehicle frames.

Description

Vehicle information identification method and system, storage medium and electronic device
Technical Field
The present application relates to the field of vehicle identification, and in particular, to a vehicle information identification method and system, a storage medium, and an electronic device.
Background
At present, free-flow vehicle type recognition systems detect vehicles by fusing a laser sensor with video, with the laser sensor playing the leading role: the laser sensor detects vehicles within its laser section in real time, and when the end of a vehicle is detected it outputs a laser vehicle type and sends an ending trigger signal to the camera; the camera stitches frames according to the trigger times of the vehicle head and tail to obtain a whole-vehicle image, and then determines and outputs a video vehicle type; the final vehicle type is determined by fusing the laser vehicle type and the video vehicle type.
However, in rainy and snowy weather laser ranging is strongly affected by rain and water mist, so the ranging becomes inaccurate and the laser is prone to false triggers and missed triggers, which degrades the final fusion result. It can be seen that the vehicle information identification methods in the related art suffer from low vehicle identification accuracy because laser ranging is susceptible to weather.
Disclosure of Invention
The embodiments of the present application provide a vehicle information identification method and system, a storage medium and an electronic device, so as at least to solve the problem in related-art vehicle information identification methods that the accuracy of vehicle information identification is low because laser ranging is easily affected by weather.
According to an aspect of the embodiments of the present application, there is provided a vehicle information identification method, including: under the condition that a laser sensor is triggered, selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by a target camera, wherein the laser section of the laser sensor is positioned in the visual field range of the target camera; under the condition that the vehicle head of the target vehicle is identified from the reference video frames, carrying out vehicle ending detection on the video frames collected by the target camera to obtain an ending detection result of the target vehicle; determining a group of vehicle frames of the target vehicle from video frames collected by the target camera under the condition that the ending detection result is used for indicating that the target vehicle has ended; and identifying a target vehicle type matched with the target vehicle from a group of preset vehicle types according to the group of vehicle frames.
According to another aspect of the embodiments of the present application, there is also provided a vehicle information identification system including: a laser sensor for detecting a vehicle passing through a laser section of the laser sensor; the target camera is used for collecting video frames of the visual field range of the target camera, wherein the laser section of the laser sensor is positioned in the visual field range of the target camera; the data processing component is used for selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by the target camera under the condition that the laser sensor is triggered; under the condition that the vehicle head of the target vehicle is identified from the reference video frames, carrying out vehicle ending detection on the video frames collected by the target camera to obtain an ending detection result of the target vehicle; determining a group of vehicle frames of the target vehicle from video frames collected by the target camera under the condition that the ending detection result is used for indicating that the target vehicle has ended; and identifying a target vehicle type matched with the target vehicle from a group of preset vehicle types according to the group of vehicle frames.
According to still another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the above-described vehicle information identification method when run.
According to still another aspect of the embodiments of the present application, there is further provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the vehicle information identification method described above through the computer program.
In the embodiments of the present application, a mode of combining laser triggering with vehicle video frames to determine the vehicle type is adopted: when the laser sensor is triggered, a reference video frame whose acquisition time matches the trigger time of the laser sensor is selected from the video frames acquired by the target camera, wherein the laser section of the laser sensor is located within the field of view of the target camera; when the vehicle head of the target vehicle is identified from the reference video frame, vehicle ending detection is performed on the video frames acquired by the target camera to obtain an ending detection result of the target vehicle; when the ending detection result indicates that the target vehicle has ended, a group of vehicle frames of the target vehicle is determined from the video frames acquired by the target camera; and a target vehicle type matching the target vehicle is identified from a group of preset vehicle types according to the group of vehicle frames. Because the laser is used only to trigger recognition on the video frames acquired by the camera, and the head and tail detection times of the vehicle are determined by analysing the video frames, and weather affects the video frames acquired by the camera far less than it affects laser ranging, the influence of weather is weakened, the technical effect of improving the accuracy of vehicle information identification is achieved, and the problem in the related art that the accuracy of vehicle information identification is low because laser ranging is easily affected by weather is solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment of an alternative vehicle information identification method according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative vehicle information identification method according to an embodiment of the present application;
FIG. 3 is a schematic illustration of an alternative vehicle information identification method according to an embodiment of the present application;
FIG. 4 is a schematic illustration of another alternative vehicle information identification method according to an embodiment of the present application;
FIG. 5 is a schematic illustration of yet another alternative vehicle information identification method according to an embodiment of the present application;
FIG. 6 is a schematic illustration of yet another alternative vehicle information identification method according to an embodiment of the present application;
FIG. 7 is a schematic illustration of yet another alternative vehicle information identification method according to an embodiment of the present application;
FIG. 8 is a schematic illustration of yet another alternative vehicle information identification method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of yet another alternative vehicle information identification method according to an embodiment of the present application;
fig. 10 is a block diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one aspect of the embodiments of the present application, a vehicle information identification method is provided. Optionally, in the present embodiment, the above vehicle information identification method may be applied to a hardware environment including the identification component 102 and the server 104 as shown in fig. 1. As shown in fig. 1, the server 104 is connected to the identification component 102 through a network and may be used to identify vehicle information, for example the vehicle type, based on the detection data of the identification component 102; a database may be provided on the server, or independently of the server, to provide a data storage service for the server 104. Here, the identification component 102 and the server 104 may both belong to the vehicle information identification system.
The network may include, but is not limited to, at least one of: wired network, wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network, and the wireless network may include, but is not limited to, at least one of: WIFI (Wireless Fidelity ), bluetooth. The identification component 102 can include a laser sensor and a camera, wherein the laser sensor can be, but is not limited to, a lidar.
The vehicle information identification method of the embodiments of the present application may be executed by the server 104, by the identification component 102, or by both the server 104 and the identification component 102 together. Taking the case where the vehicle information identification method of the present embodiment is executed by the server 104 as an example, fig. 2 is a schematic flow chart of an alternative vehicle information identification method according to an embodiment of the present application; as shown in fig. 2, the method may include the following steps:
Step S202, selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by the target camera under the condition that the laser sensor is triggered, wherein the laser section of the laser sensor is positioned in the field of view of the target camera.
The vehicle information identification method in this embodiment may be applied to scenes in which vehicle information identification is performed on vehicles passing through a preset area, and may be applied to a vehicle information identification system, which may include a free-flow vehicle type identification system. The preset area may be a highway or another area where vehicle information needs to be identified, and may include multiple lanes. The vehicle information may include the vehicle type, where the vehicle types may include various toll vehicle types, for example class 1 to class 5 trucks among trucks, and class 1 to class 3 buses among buses. Some examples of this embodiment are described by taking a free-flow vehicle type recognition system applied to an expressway as an example.
The free-flow vehicle type recognition system may include recognition components for vehicle information recognition, including, but not limited to, a laser sensor and a target camera. Here, the laser sensor may be a laser radar that uses laser light as its signal source, receives the laser light returned from objects, and calculates distances. The target camera may be a camera that performs real-time image acquisition of the lanes and the vehicles on the lanes.
In order to improve overall traffic efficiency, more and more expressway entrances are adopting free-flow vehicle type recognition systems to recognize the types of toll vehicles, so that charging of a toll vehicle is completed while the vehicle passes through the ramp without stopping or slowing down. At present, a free-flow vehicle type recognition system generally adopts a laser and video fusion mode with the laser sensor as the main detector: when the laser sensor detects the ending of a vehicle, it outputs a laser vehicle type and also sends ending trigger information to the camera; the camera splices frames according to the trigger times of the vehicle head and the vehicle tail to obtain a whole-vehicle image and then obtains a video vehicle type; finally, the laser vehicle type and the video vehicle type are fused to obtain the toll vehicle type.
For example, as shown in fig. 3, the laser detects vehicles under its section in real time; when the laser determines that a vehicle has ended, it gives the laser vehicle type and sends ending trigger information to the camera; the triggered camera finds the corresponding video frames according to the trigger times of the vehicle head and the vehicle tail and splices them to obtain a whole-vehicle image; the video vehicle type is obtained through model identification, and finally the toll vehicle type is obtained by fusing the laser and video vehicle types.
However, the recognition mode in which the laser sensor is primary and video detection is auxiliary is suited to sunny days: because laser ranging is relatively accurate on sunny days, the obtained laser vehicle type is relatively accurate, and fusing the video vehicle type then has a positive, reinforcing effect on the accuracy of vehicle type recognition. In abnormal weather such as rain, rainwater and water mist strongly affect laser ranging, the ranging accuracy becomes relatively poor, and errors appear in the judgment of vehicle features and dimensions, so fusing the video actually produces a negative, degrading effect and the vehicle type recognition accuracy after fusion is reduced instead.
Besides the mode in which the laser sensor is primary and video detection is auxiliary, a pure-video vehicle type recognition mode may also be adopted: information such as the vehicle head and the vehicle tail is obtained by recognizing the video frames, and the vehicle type is obtained by recognizing vehicle features. In a sunny environment, the laser-primary, video-auxiliary mode is superior to the pure-video mode. In abnormal weather such as rain, laser ranging is inaccurate and laser false triggers and missed triggers are more frequent, so the fused vehicle type recognition accuracy is lower than that of pure video. Here, a video frame may be an image.
However, video detection alone can hardly solve the vehicle occlusion problem in a free-flow environment unless one camera is configured per lane, which increases system cost and integration difficulty. In order to solve at least some of the problems described above, in this embodiment a vehicle type recognition mode of laser triggering combined with video detection of the vehicle ending may be adopted: by strengthening the qualitative triggering role of the laser, the problem of inaccurate quantitative laser ranging in rainy days is weakened, so as to address the reduced traffic-count and vehicle type accuracy caused by inaccurate ranging in rain. Meanwhile, the laser triggering mode effectively handles occlusion when vehicles merge, and with a high-frequency scanning laser sensor a single laser sensor can cover several lanes at once, which makes the occlusion problem easier to handle. In addition, the laser triggering mode effectively reduces system resource consumption: most current video detection uses deep learning models, which generally require hardware platforms with strong computing power such as a GPU (Graphics Processing Unit), an edge computer, or an AI (Artificial Intelligence) chip, and laser triggering effectively reduces how often the system calls the model, relieving the system's operating pressure.
In this embodiment, in order to facilitate vehicle information detection, the laser sensor may be installed above the center of the detection lanes, which helps prevent occlusion. The laser section of the laser sensor may lie within the field of view of the target camera; here, the laser section is the scanning section. Because some distortion appears at the edges of the picture captured by the target camera, and to prevent large distortion of the vehicle image in the acquired video frames, the laser section of the laser sensor is perpendicular to the road surface and the intersection line of the laser section and the road surface falls in the middle region of the target camera's field of view, which helps the vehicle type recognition algorithm calculate the optimal frame-stitching position and gives the algorithm a certain fault tolerance.
The laser sensor and the target camera may both detect in real time, or one may detect in real time while the other works under its triggering. In normal weather, the recognition mode with the laser sensor as primary and video detection as auxiliary may be adopted; when it is determined that the current weather is abnormal, the recognition mode of laser triggering combined with video detection of the vehicle ending may be adopted. In this recognition mode, the laser sensor is used only for triggering. The laser sensor detects in real time whether a vehicle passes under the laser section; if a vehicle passes, trigger information is sent to the target camera (the evidence-collecting camera), and the laser trigger detection algorithm is designed to trigger whenever possible and to miss as few triggers as possible. Here, because of interference from water mist during detection, the laser sensor may trigger multiple times or mistake the water mist for a vehicle; such cases may first be treated as normal vehicle triggers by the laser trigger detection algorithm and then removed by identifying the vehicle head with a model or by other means.
In this embodiment, when the laser sensor detects the headstock, a reference video frame whose acquisition time matches the trigger time of the laser sensor may be selected from video frames acquired by the target camera. Here, matching the acquisition time with the trigger time of the laser sensor may mean that the acquisition time is the same as or close to the trigger time. The reference video frame may include one or more video frames.
Optionally, vehicle head identification may be performed on the selected reference video frame, for example by detecting with a head model whether a vehicle head exists in each video frame. The vehicle head model can be built through deep learning; the deep learning model confirms whether the trigger information really corresponds to a vehicle, which effectively addresses the traffic-count problem in rainy days.
Optionally, in addition to the above laser triggering manner, the camera may be configured to be self-triggering, that is, the camera has a self-triggering function. For cases where the laser trigger detection algorithm fails to detect a vehicle, the camera's own triggering can solve the problem of missed vehicles. In view of the computing resources of the field computer, the camera's self-detection may be performed once every several frames.
In step S204, in the case that the vehicle head of the target vehicle is identified from the reference video frame, the video frame collected by the target camera is subjected to vehicle ending detection, so as to obtain the ending detection result of the target vehicle.
The vehicle head of the target vehicle may or may not be identified from the reference video frame. For the case that the vehicle head of the target vehicle is identified from the reference video frame, the target vehicle can be considered to enter (the target vehicle is a new vehicle, namely, a new vehicle passes through the laser section), and the video frame acquired by the target camera can be subjected to vehicle ending detection, so that an ending detection result of the target vehicle can be obtained. Here, the vehicle ending detection may be to detect a video frame after an acquisition time corresponding to a video frame identifying a vehicle head of the target vehicle to determine whether the target vehicle has ended. The ending detection result may be that the target vehicle has ended or that the target vehicle has not ended.
Alternatively, it may be determined whether the target vehicle has ended by comparing the differences of the adjacent two frames of video frames. When a plurality of identical video frames appear in succession, it may be determined that the target vehicle has ended. The difference between two adjacent frames of video frames can be determined by a frame difference method. Here, the frame difference method may be to take a sequence of two consecutive frames, subtract the previous frame from the next frame, determine whether the two frames are different from each other according to the result thereof, and determine the sizes of the different portions.
Optionally, in order to avoid errors in vehicle ending detection caused by highly uniform vehicle sides, feature point matching may be performed between a video frame acquired by the target camera (a video frame after the reference video frame) and a background; when the acquired video frame matches the background to a sufficiently high degree, the tail-frame detection may be considered correct. The background may be a background image obtained by photographing the lane when no vehicle is passing, and may be all or part of that background image.
In step S206, in the case that the ending detection result is used to indicate that the target vehicle has ended, a set of vehicle frames of the target vehicle is determined from the video frames collected by the target camera.
In this embodiment, if the ending detection result is used to indicate that the target vehicle has ended, a set of vehicle frames of the target vehicle may be determined from the video frames collected by the target camera. Here, the set of vehicle frames may be a head frame (may be a reference video frame, or a certain frame among the reference video frames) including a vehicle head of the target vehicle identified for the first time, a tail frame of the target vehicle, and one or more video frames whose acquisition times are located between the acquisition times of the head frame and the tail frame.
Step S208, identifying a target vehicle type matched with the target vehicle from a group of preset vehicle types according to a group of vehicle frames.
In this embodiment, according to the determined set of vehicle frames, a target vehicle type that matches the target vehicle may be identified from a set of preset vehicle types. The preset vehicle type may be various vehicle types which are preset and need to be charged, and may include, but not limited to, the various charging vehicle types described above, and may also include vehicle types which do not need to be charged, such as ambulances, fire-fighting vehicles, service vehicles for executing tasks, and the like. The target vehicle model may be one of preset vehicle models. Based on the determined target vehicle type, subsequent vehicle charging, vehicle passing, and the like processes may be performed.
Through the above steps S202 to S208, when the laser sensor is triggered, a reference video frame whose acquisition time matches the trigger time of the laser sensor is selected from the video frames acquired by the target camera, wherein the laser section of the laser sensor is located within the field of view of the target camera; when the vehicle head of the target vehicle is identified from the reference video frame, vehicle ending detection is performed on the video frames acquired by the target camera to obtain an ending detection result of the target vehicle; when the ending detection result indicates that the target vehicle has ended, a group of vehicle frames of the target vehicle is determined from the video frames acquired by the target camera; and a target vehicle type matching the target vehicle is identified from a group of preset vehicle types according to the group of vehicle frames. This solves the problem in the related art that the accuracy of vehicle information identification is low because laser ranging is easily affected by weather, and improves the accuracy of vehicle information identification.
In one exemplary embodiment, selecting a reference video frame whose acquisition time matches a trigger time of a laser sensor from video frames acquired by a target camera, includes:
s11, selecting a target video frame with the minimum time interval between the acquisition time and the trigger time from video frames acquired by the target camera, wherein the reference video frame comprises the target video frame.
Because the acquisition time of the target camera and the scanning time of the laser sensor may not be completely consistent, and meanwhile, the laser sensor may have a scanning error in the case of rainy days, when selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor, a video frame with the acquisition time closest to the triggering time (i.e., the video frame with the minimum time interval) can be selected from video frames acquired by the target camera, namely, the target video frame, and vehicle head identification is performed on the target video frame. Correspondingly, the reference video frame comprises a target video frame.
If the vehicle head of the target vehicle is identified from the target video frame, the subsequent processing flow can be executed; if no vehicle head is identified from the target video frame, it could simply be concluded that no new vehicle has entered and the laser was falsely triggered. However, to ensure the accuracy of vehicle detection, when no vehicle head is identified from the target video frame, vehicle head identification is also performed on several frames before and after the target video frame.
Correspondingly, selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by the target camera further comprises:
s12, selecting the front M frames and/or the rear M frames of the target video frames from the video frames acquired by the target camera under the condition that the vehicle head of the vehicle is not identified from the target video frames.
In the case where the vehicle head is not recognized from the target video frames, the front M frames and/or the rear M frames of the target video frames may be selected from the video frames acquired by the target camera. Here, M may be a positive integer greater than or equal to 1. Correspondingly, the reference video frames may include the target video frame, and may further include the first M frames and/or the last M frames of the target video frame.
Alternatively, when no vehicle head is recognized in the first M frames and/or the last M frames of the target video frame, it may be considered that no new vehicle has entered and the laser was falsely triggered, and the subsequent operations such as vehicle ending detection and vehicle type recognition are not performed.
For example, after a laser trigger, the video frame closest in time to the laser trigger is taken and checked for a vehicle head; if there is none, the M frames before and after that frame are checked; if there is still no vehicle head, the laser is considered to have been falsely triggered and no vehicle processing is performed, that is, the subsequent operations such as vehicle ending detection and vehicle type recognition are not performed.
According to this embodiment, vehicle head identification is performed on one or more video frames whose acquisition times are close to the trigger time, so that missed vehicles caused by laser triggering errors can be avoided and the accuracy of vehicle type identification is improved.
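As a rough illustration of this selection logic, the following sketch (Python) shows how the frame closest to the trigger time can be checked first, followed by the M frames before and after it; the frame structure, the head-detection callback and the value of M are illustrative assumptions, not details prescribed by the application.

```python
# A minimal sketch of the reference-frame selection described above; the frame
# container, the head detector and the value of M are illustrative assumptions.
from typing import Callable, Optional, Sequence

def select_reference_frame(frames: Sequence[dict],
                           trigger_time: float,
                           has_head: Callable[[dict], bool],
                           m: int = 2) -> Optional[dict]:
    """Each frame is assumed to be {'time': float, 'image': ...}.
    Returns the frame in which a vehicle head is found, or None if the
    trigger is treated as a false laser trigger."""
    if not frames:
        return None
    # Target frame: acquisition time closest to the laser trigger time.
    idx = min(range(len(frames)), key=lambda i: abs(frames[i]['time'] - trigger_time))
    if has_head(frames[idx]):
        return frames[idx]
    # Fall back to the M frames before and after the target frame.
    for offset in range(1, m + 1):
        for j in (idx - offset, idx + offset):
            if 0 <= j < len(frames) and has_head(frames[j]):
                return frames[j]
    return None  # no head found: treat as a false trigger, do not create a vehicle
```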
In an exemplary embodiment, after selecting a reference video frame whose acquisition time matches the trigger time of the laser sensor from among video frames acquired from the target camera, the method further includes:
s21, in the case that the vehicle head of the candidate vehicle is identified from the reference video frame, determining a first detection moment corresponding to the vehicle head of the candidate vehicle;
and S22, performing de-duplication processing on the candidate vehicle in the case that the time difference between the first detection time and the second detection time is smaller than a first time threshold, wherein the second detection time is before the first detection time when the head of the vehicle is detected last time.
After the reference video frame is selected, if a vehicle head is identified from it, that head can be considered the head of a newly entering vehicle, i.e., the vehicle head of the target vehicle. Considering that in rainy weather the laser sensor may not only be falsely triggered but also triggered multiple times by a single vehicle, treating every laser trigger as a new entering vehicle would reduce the accuracy of vehicle information identification. To avoid the abnormal situation in which one vehicle is identified as several vehicles because of multiple laser triggers, in this embodiment a same-lane de-duplication operation may be added when the vehicle head is identified: the detection time of the vehicle is compared with that of the previous vehicle, and when the two are too close, the vehicle is considered to be the previous vehicle and its scan information is deleted.
In the present embodiment, if a vehicle head is identified from the reference video frame, a vehicle corresponding to the vehicle head identified from the reference video frame may be noted as a candidate vehicle, and a first detection time corresponding to the vehicle head of the candidate vehicle may be determined. The first detection time here may be a time at which the target camera collects the vehicle head of the candidate vehicle.
After the first detection time is determined, a time at which the vehicle head was last detected before the first detection time, that is, a second detection time, may be determined. When the time difference between the first detection time and the second detection time is smaller than the first time threshold, the candidate vehicle and the previous vehicle can be considered to be the same vehicle, and the candidate vehicle is subjected to the de-duplication processing. Here, the first time threshold may be a value set in advance according to the safe travel distance of the two vehicles, for example, 300ms. Alternatively, the preceding vehicle of the candidate vehicle and the candidate vehicle may be vehicles in the same lane, and the lane number of the candidate vehicle may be determined according to the contact width ratio of the head of the candidate vehicle with each lane in the corresponding video frame.
For example, if a vehicle head is detected in the video frame closest to the laser trigger time or in the M frames before or after it, the de-duplication operation is performed immediately. During de-duplication, it may first be determined whether the difference between the head detection times of the front and rear vehicles in the same lane exceeds 300 ms; if the difference is less than 300 ms, the rear vehicle is deleted directly.
According to the embodiment, the abnormal conditions that the same vehicle is identified as a plurality of vehicles and the like caused by laser multi-triggering are eliminated by comparing the time of detecting the vehicle head twice before and after, and the accuracy of vehicle information identification can be improved.
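A minimal sketch of this same-lane de-duplication check is given below (Python); the 300 ms value follows the example above and the argument names are assumptions made for illustration only.

```python
# A minimal sketch of the same-lane de-duplication described above; the 300 ms
# first time threshold follows the example, the argument names are assumed.
FIRST_TIME_THRESHOLD = 0.300  # seconds between head detections of front and rear vehicles

def should_deduplicate(first_detection_time: float,
                       second_detection_time: float) -> bool:
    """second_detection_time: the last time a vehicle head was detected in the
    same lane before first_detection_time. Returns True if the new head should
    be treated as the same vehicle and its scan information deleted."""
    return (first_detection_time - second_detection_time) < FIRST_TIME_THRESHOLD
```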
In one exemplary embodiment, after determining the first detection time corresponding to the vehicle head of the candidate vehicle, the method further includes:
s31, determining the candidate vehicle as a target vehicle when the time difference between the first detection time and the second detection time is greater than or equal to a first time threshold value and the time difference between the first detection time and the vehicle ending time of the previous vehicle is greater than or equal to a second time threshold value;
s32, when the time difference between the first detection time and the second detection time is larger than or equal to a first time threshold value and the time difference between the first detection time and the vehicle ending time of the previous vehicle is smaller than a second time threshold value, vehicle head recognition is carried out on the video frames after the reference video frames, and a vehicle head recognition result is obtained;
s33, in the case that the vehicle head recognition result is used for indicating that the vehicle head is recognized from the video frames after the reference video frame, determining the candidate vehicle as a target vehicle;
And S34, performing de-duplication processing on the candidate vehicle in the case that the vehicle head identification result is used for indicating that the vehicle head is not identified in the video frames after the reference video frame.
Because vehicle information identification may still be inaccurate when a vehicle travels slowly or when vehicle ending detection is inaccurate, a time-difference threshold between the tail detection of the front vehicle and the head detection of the rear vehicle, i.e., a second time threshold, may be set in addition to the time-difference threshold between the head detections of the front and rear vehicles (the first time threshold). If the head-to-head time difference is greater than or equal to the first time threshold and the front-tail-to-rear-head time difference is greater than or equal to the second time threshold, the two are not considered the same vehicle. Here, the second time threshold may be a value preset according to the safe driving distance between the two vehicles, and may be smaller than the first time threshold, for example 120 ms.
For the candidate vehicle, when the time difference between the first detection time and the second detection time is greater than or equal to the first time threshold and the time difference between the first detection time and the ending time of the preceding vehicle is greater than or equal to the second time threshold, the candidate vehicle and the preceding vehicle may be considered not to be the same vehicle, and the candidate vehicle may be determined as the target vehicle. For example, if the difference between the head detection times of the front and rear vehicles in the same lane exceeds 300 ms and the difference between the front vehicle's ending time and the rear vehicle's head time exceeds 120 ms, a new vehicle is considered to have entered and a new vehicle cache may be created.
If the time difference between the first detection time and the second detection time is greater than or equal to the first time threshold but the time difference between the first detection time and the ending time of the previous vehicle is less than the second time threshold, it cannot be directly determined whether the candidate vehicle is a newly entered vehicle; in this case, vehicle head recognition may be performed on the video frames after the reference video frame to obtain a vehicle head recognition result. To improve the efficiency of head detection while avoiding detecting the head of the next real vehicle, the number of frames after the reference video frame used for head recognition may be limited, for example to 2 frames.
Based on the vehicle head recognition result, it may be determined whether the candidate vehicle is a newly entered vehicle. In the case where the vehicle head recognition result is used to indicate that the vehicle head is recognized from the video frame subsequent to the reference video frame, the candidate vehicle may be determined as the target vehicle. And in the case where the vehicle head recognition result is used to indicate that the vehicle head is not recognized from the video frame subsequent to the reference video frame, the laser sensor may be considered to be multi-triggered, the vehicle head of the candidate vehicle is erroneously recognized, and the deduplication process is performed on the candidate vehicle.
For example, if the time difference between the two head detection times exceeds 300 ms but the time difference between the front vehicle's ending time and the rear vehicle's head time is less than 120 ms, head detection may continue on the next 2 frames; if both contain a vehicle head, a new vehicle is considered to have entered and a new vehicle cache is created.
According to this embodiment, by comparing the head detection times of the front and rear vehicles and comparing the front vehicle's ending time with the rear vehicle's head time, abnormal situations such as one vehicle being identified as several vehicles because of multiple laser triggers can be avoided, improving the accuracy of vehicle information identification.
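The decision logic of this embodiment can be summarized in the following hedged sketch (Python); the 300 ms and 120 ms values follow the examples above, and the callback that re-checks the following frames is an assumed helper rather than part of the application.

```python
# A hedged sketch of the head/tail time-threshold decision described above;
# thresholds follow the examples, the following-frames check is an assumed helper.
from enum import Enum
from typing import Callable

FIRST_TIME_THRESHOLD = 0.300   # head-to-head time difference, seconds
SECOND_TIME_THRESHOLD = 0.120  # previous-vehicle-end to new-head difference, seconds

class HeadDecision(Enum):
    NEW_VEHICLE = 1
    DUPLICATE = 2

def decide_new_vehicle(first_detection_time: float,
                       second_detection_time: float,
                       prev_vehicle_end_time: float,
                       head_in_following_frames: Callable[[], bool]) -> HeadDecision:
    head_gap = first_detection_time - second_detection_time
    end_gap = first_detection_time - prev_vehicle_end_time
    if head_gap < FIRST_TIME_THRESHOLD:
        return HeadDecision.DUPLICATE          # same vehicle triggered the laser again
    if end_gap >= SECOND_TIME_THRESHOLD:
        return HeadDecision.NEW_VEHICLE        # both gaps large enough: new vehicle
    # Ambiguous case: re-check e.g. the next 2 frames for a vehicle head.
    return HeadDecision.NEW_VEHICLE if head_in_following_frames() else HeadDecision.DUPLICATE
```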
In an exemplary embodiment, the vehicle ending detection is performed on a video frame collected by a target camera to obtain an ending detection result of the target vehicle, including:
s41, respectively extracting the characteristics of each video frame in a group of video frames acquired by the target camera after the reference video frame to obtain the video characteristics of each video frame;
s42, determining the feature difference degree of adjacent video frames in a group of video frames according to the video features of each video frame, wherein the adjacent video frames comprise two adjacent video frames in the group of video frames, and the feature difference degree of the adjacent video frames is used for representing the difference between the video features of the two video frames in the adjacent video frames;
S43, determining the previous video frame of N video frames in a group of video frames as a vehicle ending frame of the target vehicle under the condition that the feature difference degree of any one of N continuous video frames in the group of video frames is smaller than or equal to a difference degree threshold value according to the feature difference degree of the adjacent video frames, wherein N is a positive integer larger than or equal to 2.
In this embodiment, the difference between two adjacent frames may be detected by the frame difference method, so as to determine whether the vehicle ends. For a group of video frames acquired by the target camera after the reference video frame, feature extraction can be performed on each video frame in the group of video frames respectively, so as to obtain the video feature of each video frame. And determining the feature difference degree of the adjacent video frames in the group of video frames according to the video features of each video frame. Here, an adjacent video frame may comprise any adjacent two video frames in a group of video frames. The feature difference degree of adjacent video frames may be used to represent feature differences between video features of two of the adjacent video frames.
When it is determined that the feature difference degree of any one of the N consecutive video frames in the group of video frames is smaller than or equal to the difference degree threshold according to the feature difference degrees of the adjacent video frames, most of the image contents in the N consecutive frames can be considered to be the same, and at this time, the target vehicle can be considered to have ended, and the previous video frame of the N video frames in the group of video frames is determined to be the vehicle ending frame of the target vehicle. The difference threshold may be preset here for determining whether the two video frames are substantially identical. N can be a positive integer greater than or equal to 2, and when N frames are identical in succession, the vehicle ending can be judged.
According to the embodiment, the difference degree of two adjacent frames is detected through the frame difference method, and whether the vehicle ends or not is judged based on the difference degree of the two adjacent frames, so that the accuracy of vehicle end detection can be improved.
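As a rough illustration only, the following sketch (Python with OpenCV) shows how a run of N nearly identical adjacent frames can be used to pick the vehicle ending frame; the mean absolute difference measure, the threshold value and N are illustrative assumptions rather than the application's prescribed feature difference degree.

```python
# A minimal sketch of the adjacent-frame-difference ending check; the mean
# absolute difference, the threshold and N are illustrative assumptions.
import cv2
import numpy as np

def find_vehicle_end_frame(frames, n: int = 3, diff_threshold: float = 4.0):
    """frames: list of grayscale images acquired after the reference frame.
    Returns the index of the vehicle ending frame (the frame before the run of
    N nearly identical frames), or None if the vehicle has not ended yet."""
    run = 0
    for i in range(1, len(frames)):
        # Mean absolute difference between two adjacent frames as the feature difference degree.
        diff = np.mean(cv2.absdiff(frames[i - 1], frames[i]))
        run = run + 1 if diff <= diff_threshold else 0
        if run >= n:
            return i - n  # the frame immediately before the N identical frames
    return None
```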
In one exemplary embodiment, after determining a previous video frame of the N video frames in the set of video frames as a vehicle ending frame of the target vehicle, the method further comprises:
s51, determining a background area corresponding to the head position of a target vehicle, wherein a target road where the target vehicle is located comprises a plurality of lanes, the head position is the position of the head of the vehicle in a reference video frame, and the background area is an area selected from non-vehicle frames acquired by a target camera according to the head position;
s52, performing feature point matching on each video frame in the N video frames and the background area to obtain a feature point matching result corresponding to each video frame;
s53, determining that the vehicle ending frame is verified to pass under the condition that N video frames are matched with the background area according to the characteristic point matching result corresponding to each video frame;
and S54, when it is determined according to the feature point matching results that some video frames among the N video frames do not match the background area, determining that the verification of the vehicle ending frame has failed, and performing vehicle ending detection again on the video frames acquired by the target camera.
When the vehicle ending is judged only from the frame-difference degree of two adjacent frames, false detection of the vehicle ending easily occurs for vehicles whose sides are largely uniform (such as container trucks). To improve the accuracy of vehicle ending detection and avoid ending-detection errors caused by uniform vehicle sides, in this embodiment a feature point matching step may be added: vehicle ending detection is performed by the adjacent-frame difference method combined with feature point matching, where the current frame is matched against a background area to determine whether the vehicle has ended.
The lane region of the lane in which the vehicle is located could be used as the background region for feature point matching, but this no longer works when the vehicle straddles lanes. To prevent an incorrectly selected background region when a vehicle straddles lanes (driving along the boundary between lanes), which would cause the feature points not to match and lead to a wrong ending judgment, in this embodiment the background region for feature point matching may be determined according to the head position of the vehicle, so that the position of the background region stays consistent with the head position; this dynamic adjustment of the background region improves the accuracy of vehicle ending detection. Here, the head position of the target vehicle may be the position of the vehicle head of the target vehicle in the reference video frame.
For a target vehicle, a background region corresponding to a head position of the target vehicle may be determined. Here, the target road on which the target vehicle is located may include a plurality of lanes, and the background area may be an area selected from non-vehicle frames acquired from the target camera according to the head position. Alternatively, the non-vehicle frames used to select the background area may be video frames captured by the target camera when no vehicle passes under the laser profile for a duration (e.g., one minute).
When the feature point matching is performed with the background area, feature point matching can be performed on each video frame in the N video frames and the background area, and a feature point matching result corresponding to each video frame is obtained. And under the condition that N video frames are matched with the background area according to the characteristic point matching result, the verification of the vehicle ending frame can be determined to pass. Here, matching may mean that each video frame is mostly identical to the feature point of the background area, for example, 80% or more of the feature points are identical, that is, the video frame is considered to be matched to the background area.
Correspondingly, when the feature point matching results show that some of the N video frames do not match the background area, it can be determined that the verification of the vehicle ending frame has failed: those video frames are probably side frames of the target vehicle and the target vehicle has not ended, so vehicle ending detection needs to be performed again on the video frames acquired by the target camera until the ending of the target vehicle is actually detected.
According to this embodiment, the candidate vehicle ending frame is verified by feature point matching against the selected background area, which improves the accuracy of the vehicle ending judgment and therefore the accuracy of vehicle type recognition.
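One possible realization of this background check is sketched below (Python with OpenCV); the use of ORB features, brute-force matching and the 80% match ratio are illustrative assumptions rather than the application's prescribed method.

```python
# A hedged sketch of the background feature-point check used to verify the
# ending frame; ORB + brute-force matching and the 80% ratio are assumptions.
import cv2

def matches_background(frame_gray, background_gray, match_ratio: float = 0.8) -> bool:
    """True if the candidate ending frame matches the background region well
    enough to confirm that the vehicle has left the field of view."""
    orb = cv2.ORB_create()
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    kp_b, des_b = orb.detectAndCompute(background_gray, None)
    if des_f is None or des_b is None or len(kp_b) == 0:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_f, des_b)
    # Keep only reasonably good matches before computing the ratio (assumed cut-off).
    good = [m for m in matches if m.distance < 50]
    return len(good) / len(kp_b) >= match_ratio

def verify_ending_frames(end_frames, background_gray) -> bool:
    """All N candidate frames must match the background region; otherwise the
    ending detection is re-run on subsequent frames."""
    return all(matches_background(f, background_gray) for f in end_frames)
```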
In one exemplary embodiment, the target road on which the target vehicle is located includes a plurality of lanes, and two cameras are disposed opposite each other on the two sides of the target road. For example, as shown in fig. 4, the laser detection unit (i.e., the laser sensor) is located directly above the middle of the lanes with its scanning section perpendicular to the road surface, the camera detection units (i.e., the first camera and the second camera) are located on the two sides of the gantry, and the intersection line of the laser scanning section and the road surface falls in the middle region of the camera field of view.
To avoid a reduction in the accuracy of vehicle ending detection caused by occlusion between vehicles in different lanes, in this embodiment the method further includes, during the image acquisition of the target vehicle by the target camera:
S61, when the vehicle head of the target vehicle is identified from the reference video frame, determining the target lane where the target vehicle is located according to the position area occupied by the vehicle head in the reference video frame, wherein the target camera is the first camera of the two cameras, which currently shoots the target vehicle;
S62, determining that the target vehicle is in an occluded state when the target lane is not the lane closest to the first camera among the plurality of lanes and a preset occlusion condition is met;
S63, switching the camera shooting the target vehicle from the first camera to the second camera of the two cameras to obtain an updated target camera;
wherein the preset occlusion condition comprises at least one of the following: a vehicle is detected in the adjacent lane in the vehicle frames corresponding to the reference video frame; the time difference between the acquisition time of the reference video frame and the entering time and/or exiting time of the vehicle in the adjacent lane is less than or equal to a preset time-difference threshold, where the adjacent lane is the lane among the plurality of lanes that is adjacent to the target lane and on the side toward the first camera.
After a new vehicle is determined to have entered, its lane number can be judged from the position and area of the vehicle head frame. Whether the vehicle is occluded can be judged from the entering and exiting times of vehicles in the adjacent lane, or determined directly from the detected video frames of the target vehicle by checking whether a vehicle is present in the adjacent lane, and the two modes can also be combined. For the target vehicle, when its vehicle head is identified from the reference video frame, the target lane in which it is located may be determined from the position area of the vehicle head in the reference video frame (i.e., the head frame position). Alternatively, due to the installation position and angle of the camera, when head recognition is performed on the video frame acquired by the first camera, the head frame position may span multiple lanes of the target road; in addition, when the target vehicle straddles lanes, its head occupies the positions of multiple lanes at the same time. In this regard, the target lane may be determined according to the ratio of the lane width occupied by the head frame position in each lane to the total width of that lane.
For example, as shown in fig. 5, the head frame position of the target vehicle includes 1, 2, and 3 lanes on the target road, but the head frame position is different in the width ratio of 3 lanes, the width ratio of 1 lane is smaller, and the width of 3 occupied lanes is likely to be caused by the acquisition angle of the camera, so that the target lane where the target vehicle is located is 2 lanes.
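The lane-assignment rule described above can be sketched as follows. This is a minimal Python illustration assuming lanes are given as left/right pixel boundaries in the image; the function and variable names (assign_target_lane, head_box, lanes) are not from the patent.

```python
# Assign the target lane as the lane whose width is covered by the head frame
# in the largest proportion. Names and lane representation are illustrative.

def assign_target_lane(head_box, lanes):
    """head_box: (x_left, x_right) of the detected vehicle head in pixels.
    lanes: dict mapping lane_id -> (lane_left, lane_right) pixel boundaries."""
    best_lane, best_ratio = None, 0.0
    for lane_id, (lane_left, lane_right) in lanes.items():
        overlap = min(head_box[1], lane_right) - max(head_box[0], lane_left)
        if overlap <= 0:
            continue
        ratio = overlap / (lane_right - lane_left)  # share of this lane's width
        if ratio > best_ratio:
            best_lane, best_ratio = lane_id, ratio
    return best_lane

# Example: a head frame spanning lanes 1-3 but mostly covering lane 2
lanes = {1: (0, 400), 2: (400, 800), 3: (800, 1200)}
print(assign_target_lane((350, 900), lanes))  # -> 2
```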
In addition, during vehicle driving, multiple vehicles often travel side by side, so there is a possibility that the target vehicle is blocked by other vehicles in the acquired vehicle video frames. In this embodiment, a preset shielding condition may be set in advance to determine whether the target vehicle is in a shielded state. Here, the preset shielding condition may include at least one of the following: detecting that a vehicle exists in the adjacent lane from the plurality of vehicle frames corresponding to the reference video frame; the time difference between the acquisition time of the reference video frame and the vehicle entering time and/or the vehicle exiting time of the vehicle in the adjacent lane is smaller than or equal to a preset time difference threshold value, where the adjacent lane is the lane, among the multiple lanes, adjacent to the target lane and on the side close to the first camera. Here, entering may refer to the head of the vehicle reaching the laser section, and exiting may refer to the tail of the vehicle leaving the laser section.
In the case where the target lane is not the lane closest to the first camera among the plurality of lanes and the above-described preset blocking condition is satisfied, it may be determined that the target vehicle is in the blocked state.
Alternatively, in the case where the time of entry of the target vehicle (which may be the acquisition time of the reference video frame) is located after the time of entry and before the time of exit of the vehicle in the adjacent lane, the target vehicle may be considered to have a possibility of being blocked by the vehicle on the adjacent lane.
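A possible form of the preset shielding check is sketched below; the 0.5 s threshold, the record structure, and the function name is_occluded are assumptions for illustration and not values given in the patent.

```python
# Decide whether the target vehicle may be blocked by a vehicle in the
# adjacent lane, based on the timing rules described above.

def is_occluded(target_lane, nearest_lane, adjacent_records, ref_time,
                time_diff_threshold=0.5):
    """adjacent_records: list of (enter_time, exit_time) for vehicles seen in
    the adjacent lane (exit_time may be None if the vehicle has not left yet).
    ref_time: acquisition time of the reference video frame (seconds)."""
    if target_lane == nearest_lane:
        return False  # nothing lies between this vehicle and the first camera
    for enter_t, exit_t in adjacent_records:
        # adjacent-lane vehicle entered earlier and has not exited yet
        if enter_t <= ref_time and (exit_t is None or ref_time <= exit_t):
            return True
        # acquisition time close to the adjacent vehicle's enter/exit times
        if abs(ref_time - enter_t) <= time_diff_threshold:
            return True
        if exit_t is not None and abs(ref_time - exit_t) <= time_diff_threshold:
            return True
    return False
```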
Under the condition that the target vehicle is blocked by a vehicle on the adjacent lane, the camera shooting the target vehicle can be switched from the first camera to the second camera of the two cameras to obtain the updated target camera, and vehicle information identification is performed using the video frames acquired by the second camera, so that the target vehicle is no longer blocked by vehicles in the adjacent lane; the camera used for vehicle ending detection of the vehicles on the adjacent lane may still be the first camera.
According to the embodiment, whether the vehicle is shielded or not can be determined through the position of the head of the vehicle, the acquisition time and the vehicle entering and exiting time of the adjacent lanes, and the accuracy of vehicle information identification can be improved.
In one exemplary embodiment, the mounting positions of the target camera and the laser sensor may be matched with each other, the target camera is mounted on one side of a lane where the target vehicle is located in a flip-chip 90 degree manner, the scanning section of the laser sensor is perpendicular to the road surface, and the intersection line with the road surface is located at a middle position (midstream position) of the camera field of view of the target camera.
At present, with the conventional installation of the camera in a free-flow vehicle type recognition system, the field of view of the camera in the driving direction is relatively large and the field of view in the lane width direction is relatively small; the field-of-view angle of a camera installed in this way is shown on the left side of fig. 6. With this installation, because the field of view in the driving direction is large, objects captured in the driving direction are strongly distorted, the restoration fidelity of the recognized picture is poor, and on roads with more lanes multiple cameras must be installed to accurately detect vehicles on all lanes.
In this embodiment, the target camera is mounted on one side of the lane where the target vehicle is located and may be mounted in a flip-chip 90-degree manner (i.e., inverted and rotated by 90 degrees), with the mounting direction of the camera rotated by 90 degrees (as shown in fig. 7). The viewing angle of the camera mounted in this way, as shown on the right side of fig. 6, is narrowed in the driving direction and widened in the lane width direction, so that, contrary to the previous mounting manner, the driving direction has a narrow field of view and the lane width direction has a wide field of view.
Here, since the field of view of the camera in the driving direction becomes smaller, the distortion in the driving direction becomes smaller and the restoration fidelity of the recognized picture is better. Meanwhile, considering that in a highway environment only one vehicle appears in a given lane in each frame, reducing the field of view in the driving direction also helps to filter out multi-vehicle situations. Widening the field of view in the lane width direction allows one camera to cover multiple lanes, which reduces the number of cameras required and saves cost.
Since the laser sensor is mainly used for triggering in the free-flow vehicle type recognition system, the scanning section of the laser sensor is perpendicular to the road surface and its intersection line with the road surface is located at the middle position (midstream position) of the camera field of view of the target camera, as shown in fig. 8. This facilitates the calculation of the optimal framing position by the vehicle type recognition algorithm and gives the algorithm a certain fault tolerance.
For example, taking a 2+1 lane road as an example, the overall layout of the system is shown in fig. 4: a laser sensor is installed right above the middle of the two lanes with its scanning section perpendicular to the road surface, an evidence-obtaining camera is mounted in the flip-chip 90-degree manner on each side of the portal frame, and the intersection line of the scanning section of the laser sensor with the road surface falls at the middle position of the camera field of view.
According to this embodiment, mounting the camera in the flip-chip 90-degree manner reduces the distortion of images in the video frames, improves the restoration fidelity of the recognized pictures, and reduces the number of cameras to be installed; meanwhile, having the intersection line of the scanning section of the laser sensor with the road surface fall at the midstream position of the camera field of view can improve the accuracy of vehicle information identification.
In one exemplary embodiment, identifying a target vehicle model matching a target vehicle from a set of preset vehicle models based on a set of vehicle frames includes:
S71, determining, according to the reference video frame, a vehicle head model corresponding to the vehicle head in the group of preset vehicle models, wherein the group of vehicle frames includes the reference video frame;
S72, performing frame splicing processing on the group of vehicle frames according to the acquisition time to obtain a vehicle frame-spliced image corresponding to the target vehicle;
S73, performing vehicle type recognition on the vehicle frame-spliced image to obtain a frame-spliced vehicle type, among the group of preset vehicle types, matched with the vehicle frame-spliced image;
S74, performing axle tracking on each axle of the target vehicle according to the group of vehicle frames to obtain axle tracking information of each axle;
S75, fusing the vehicle head model, the frame-spliced vehicle type and the axle tracking information of each axle to obtain the target vehicle type.
In this embodiment, the vehicle type of the target vehicle may be identified in various ways. For example, a vehicle head model corresponding to the vehicle head in the group of preset vehicle models may be determined according to the reference video frame (the group of vehicle frames includes the reference video frame), and the determined vehicle head model may be used directly as the target vehicle type. For another example, the group of vehicle frames may be subjected to frame splicing processing to obtain a vehicle frame-spliced image corresponding to the target vehicle; vehicle type recognition is then performed on the frame-spliced image to obtain the frame-spliced vehicle type, among the group of preset vehicle types, matched with the image, and the obtained frame-spliced vehicle type may be used directly as the target vehicle type. Optionally, in order to improve the accuracy of vehicle type recognition, the vehicle head model and the frame-spliced vehicle type may be fused to obtain the target vehicle type.
Considering that the toll standard of most trucks is related to the number of axles, axle tracking information of the target vehicle can be added during vehicle type recognition, so that the vehicle type recognition result is corrected according to the axle count of the target vehicle; this avoids vehicle type recognition errors caused by errors in the video frames due to occlusion of the vehicle and similar situations. Here, the axle tracking information may be information, such as the number of axles of the target vehicle, determined by tracking the axles of the target vehicle. Optionally, the vehicle head model, the frame-spliced vehicle type and the axle tracking information of each axle can be fused to obtain the target vehicle type. Here, the vehicle head model, the frame-spliced vehicle type and the axle tracking information can be checked against one another to ensure higher accuracy of target vehicle type recognition.
Here, the vehicle head model may be a vehicle type determined based on the vehicle head; according to the vehicle head model, the range of possible vehicle types for the target vehicle may be narrowed. Frame splicing may be applied to the group of complete vehicle frames, or the corresponding video frames may first be cropped according to the target lane or the head position of the target vehicle and then spliced; correspondingly, cropping a video frame may mean removing, according to the lane, the portion of each frame image below the lane edge. As shown in fig. 5, in the case where it is determined that the target vehicle is located in lane 2, cropping starts from the boundary between lane 2 and lane 1, and only the image in the direction from lane 2 toward lane 3 is retained.
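The frame-splicing step might look like the following sketch, assuming OpenCV-style numpy image arrays; cropping by image rows and concatenating along the horizontal axis are illustrative choices rather than the patent's exact procedure.

```python
# Build a panorama of one vehicle by cropping each frame to the target lane
# (optional) and concatenating the crops in acquisition order.
import numpy as np

def splice_vehicle_frames(frames, lane_top=None, lane_bottom=None):
    """frames: list of (acquisition_time, image) pairs for one vehicle.
    lane_top/lane_bottom: optional pixel rows bounding the target lane."""
    frames = sorted(frames, key=lambda f: f[0])  # order by acquisition time
    crops = []
    for _, img in frames:
        if lane_top is not None and lane_bottom is not None:
            img = img[lane_top:lane_bottom, :]  # keep only the target lane rows
        crops.append(img)
    # side-by-side concatenation approximates the vehicle seen along its length
    return np.concatenate(crops, axis=1)
```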
The axle tracking information of the target vehicle may be the axle tracking information of each axle obtained by performing axle tracking on each axle of the target vehicle according to the group of vehicle frames. Alternatively, the flow of the vehicle axle tracking algorithm for determining the axle tracking information may be: identify the axle position of each axle and the acquisition time of the corresponding vehicle frame from one or more vehicle frames in the group of vehicle frames; predict the position of an identified axle in subsequent vehicle frames; determine the axle tracking information of the identified axle by comparing its predicted position with its actual position; and determine whether a new axle is detected, for which axle tracking information may be determined in a similar manner.
According to this embodiment, determining the vehicle type by fusing the vehicle head model, the frame-spliced vehicle type and the axle tracking information of each axle can improve the accuracy of vehicle type recognition.
In one exemplary embodiment, axle tracking is performed on each axle of a target vehicle according to a set of vehicle frames to obtain axle tracking information for each axle, including:
and S81, sequentially executing axle identification operation by taking each vehicle frame in the group of vehicle frames as the current vehicle frame to obtain the vehicle frame where each axle is located and the position in the located vehicle frame. The axle identifying operation for a group of vehicle frames may be performed in order of the acquisition time of the vehicle frames from first to last, that is, the axle identifying operation is performed for the vehicle frame whose acquisition time is earliest first, and the axle identifying operation is performed sequentially for the vehicle frames whose acquisition time is later.
When the axle identification operation is performed on the current vehicle frame, axle recognition may be carried out on that frame to obtain the candidate axles contained in the current vehicle frame and the axle position of each candidate axle. A candidate axle may be a new axle or an existing axle, and the axle position of a candidate axle may be its position in the current vehicle frame: it may be the lane position where the candidate axle is located, the area occupied by the axle, or the axle position obtained after converting the occupied area into the world coordinate system.
To facilitate tracking of the same axle in different vehicle frames, the position of the axle in a subsequent vehicle frame may be predicted based on the position of the axle in one vehicle frame and matched to the identified axle position in the subsequent vehicle frame based on the predicted position of the axle to determine the actual position of the axle in the subsequent vehicle frame. For axles that do not have a matching predicted position, it can be considered a new axle, either an axle that was not identified in the previous vehicle frame or a new axle that entered the camera field of view.
For a candidate shaft, if a set of existing shafts of the target vehicle already exists and the shaft position of the candidate shaft matches a predicted shaft position of a first existing shaft of the set of existing shafts, the candidate shaft is marked as the first existing shaft and the shaft position of the candidate shaft is marked as the shaft position of the first existing shaft in the current vehicle frame. Here, the predicted axis position of the first existing axis is an axis position of the first existing axis in the current vehicle frame predicted from an axis position of the first existing axis in the first vehicle frame preceding the current vehicle frame. The first vehicle frame may be a previous vehicle frame to the current vehicle frame, or may collect a vehicle frame earlier in time. Position matching may refer to the IOU (Intersection over Union, overlap) being greater than a set threshold for two positions, where position refers to the area position occupied by the axle.
In the event that the shaft position of the candidate shaft does not match the predicted shaft position of any of the set of existing shafts, the candidate shaft may be marked as the second existing shaft newly created by the target vehicle and the shaft position of the candidate shaft may be marked as the shaft position of the second existing shaft in the current vehicle frame. Here, the predicted shaft position of any existing shaft may be a shaft position of any existing shaft in the current vehicle frame predicted from a shaft position of any existing shaft in a vehicle frame preceding the current vehicle frame.
For example, a new axle is detected at time t, a new axle ID is created, and the axle position at time t is recorded. From the acquisition times of any two frames and the positions of the same axle in both frames, the speed of that axle (i.e., a prior value) can be estimated. The position of the axle at time t+1 is predicted from the prior value, the IOU of the actual axle position at time t+1 and the predicted axle position is calculated, and it is judged whether the axle at time t+1 is associated with the existing axle or is a newly created axle. In addition, it is possible to determine whether the axle disappears from the picture, and after the axle disappears, the probability that it is a true axle can be calculated.
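The association step of the axle tracking described above can be illustrated as follows; the constant-velocity prediction, the IOU threshold of 0.3, and the track data structure are assumptions made for this sketch.

```python
# Match a candidate axle box against the predicted boxes of existing axles
# using IOU; an unmatched candidate becomes a new axle track.

def iou(box_a, box_b):
    """Axis-aligned IOU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def predict_box(box, vx, dt):
    """Shift the box by vx * dt along the driving (x) direction."""
    dx = vx * dt
    return (box[0] + dx, box[1], box[2] + dx, box[3])

def associate_axle(candidate_box, t, tracks, iou_threshold=0.3):
    """tracks: dict axle_id -> {'box': last box, 'time': last time, 'vx': speed}.
    Returns the matched axle id, or None when a new axle should be created."""
    for axle_id, tr in tracks.items():
        predicted = predict_box(tr['box'], tr['vx'], t - tr['time'])
        if iou(candidate_box, predicted) >= iou_threshold:
            return axle_id
    return None
```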
According to the embodiment, the axle of the vehicle is tracked through the predicted position of the axle and the actual position of the axle in the vehicle frame, so that the axle tracking information of each axle is determined, and the accuracy of axle tracking can be improved.
In one exemplary embodiment, the axle identification operation is performed sequentially with each vehicle frame in the set of vehicle frames as the current frame, further comprising:
S91, in the case where the axle position of the candidate axle matches the predicted axle position of the first existing axle in the group of existing axles and the first vehicle frame is not the previous vehicle frame of the current vehicle frame, determining the axle position of the first existing axle in a second vehicle frame, predicted from its axle position in the first vehicle frame, as the axle position of the first existing axle in the second vehicle frame, wherein the second vehicle frame is any vehicle frame between the first vehicle frame and the current vehicle frame;
s92, predicting the shaft position of the first existing shaft in at least one vehicle frame after the current vehicle frame according to the shaft position of the candidate shaft, and obtaining the predicted shaft position of the first existing shaft in the at least one vehicle frame.
An axle may not be continuously detected in a set of consecutive vehicle frames due to recognition accuracy, object occlusion, etc., where a set of consecutive vehicle frames may include a vehicle frame that detected the axle for the first time, a vehicle frame that detected the axle for the last time, and a vehicle frame in between. For a set of consecutive vehicle frames, vehicle frames in which an axle is not identified may be combined with axle positions in vehicle frames preceding and following the vehicle frame, and the axle positions in the vehicle frames may be checked to determine whether the axle is not present in the vehicle frame or whether the axle is not identified due to a problem such as a shadow or an identification error.
In the present embodiment, for the case where the axis position of the candidate axis matches the predicted axis position of the first existing axis in the set of existing axes, and the first vehicle frame is not the previous vehicle frame of the current vehicle frame, there is a second vehicle frame (the number of second vehicle frames may be one or more) between the first vehicle frame and the current vehicle frame, the position thereof in the second vehicle frame predicted from the actual position of the first existing axis in the first vehicle frame may be determined as the actual position of the first existing axis in the second vehicle frame, that is, the axis position of the first existing axis in the second vehicle frame predicted from the axis position of the first existing axis in the first vehicle frame may be determined as the axis position of the first existing axis in the second vehicle frame.
Here, since in the current vehicle frame the predicted position of the first existing axis can be matched with the actual position, meaning that the predicted axis position of the first existing axis in the current vehicle frame based on the axis position of the first existing axis in the first vehicle frame is accurate, the predicted axis position of the first existing axis in the second vehicle frame based on the axis position of the first existing axis in the first vehicle frame should also be accurate, which can be determined as the axis position of the first existing axis in the second vehicle frame.
For example, no axle is identified in video frame X (the axle may be blocked by water mist, etc.), but calculation from the prior value determines that an axle should be present at the corresponding position in frame X; at this time, if the axle position in frame X+1 is successfully matched with the predicted position, the axle predicted for frame X is a true axle.
In addition, in the event that the shaft position of the candidate shaft matches a predicted shaft position of a first existing shaft of the set of existing shafts, the shaft position of the first existing shaft in at least one vehicle frame subsequent to the current vehicle frame may be predicted from the shaft position of the candidate shaft to obtain a predicted shaft position of the first existing shaft in the at least one vehicle frame. Based on the obtained predicted axle position, an axle position of the first existing axle in at least one vehicle frame may be verified.
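The back-filling of skipped frames can be sketched as follows; the track record layout and the constant-velocity shift are assumptions carried over from the tracking sketch above, not the patent's prescribed data structures.

```python
# When a candidate matches a predicted position after one or more frames in
# which the axle was not detected, adopt the predicted positions for those
# skipped frames as the axle's actual positions.

def backfill_missing_positions(track, skipped_times):
    """track: {'box': last confirmed box, 'time': last confirmed time,
               'vx': estimated speed, 'history': {time: box}}.
    skipped_times: acquisition times of the frames where the axle was missed."""
    for t in skipped_times:
        dx = track['vx'] * (t - track['time'])
        box = track['box']
        # adopt the prediction as the actual position for the skipped frame
        track['history'][t] = (box[0] + dx, box[1], box[2] + dx, box[3])
    return track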
According to this embodiment, the position of an axle in vehicle frames where no axle was identified is supplemented based on the matching result between the actual position and the predicted position of the axle in the vehicle frames, so that the accuracy of axle tracking can be improved.
In one exemplary embodiment, the axle identification operation is performed sequentially with each vehicle frame in the set of vehicle frames as the current frame, further comprising:
s101, under the condition that the shaft positions of the candidate shafts are not matched with the shaft positions of any existing shaft in a group of existing shafts, predicting the shaft positions of the second existing shaft in at least one vehicle frame after the current vehicle frame according to the shaft positions of the candidate shafts, and obtaining the predicted shaft positions of the second existing shaft in the at least one vehicle frame.
In this embodiment, for the case where the shaft position of the candidate shaft does not match the shaft position of any one of the existing shafts in the set of existing shafts, while determining that the candidate shaft is a new shaft and marking as the second existing shaft, the corresponding shaft position in the subsequent vehicle frame may be predicted from the shaft position of the candidate shaft, that is, the shaft position of the second existing shaft in at least one vehicle frame subsequent to the current vehicle frame may be predicted from the shaft position of the candidate shaft, to obtain the predicted shaft position of the second existing shaft in at least one vehicle frame.
By means of the method and the device, when the new shaft is identified, the corresponding shaft position of the new shaft in the subsequent vehicle frame is predicted according to the shaft position of the new shaft in the current vehicle frame, and therefore accuracy of shaft tracking can be improved.
The vehicle information identification method in the embodiments of the present application is explained below in conjunction with an alternative example. In this alternative example, the vehicle type is a toll vehicle type, and the target camera is an evidence-obtaining camera.
In order to solve the problem that the accuracy of vehicle type recognition is low due to inaccurate laser ranging in rainy weather, this alternative example provides a recognition method for a free-flow system that performs vehicle type recognition by combining laser triggering, vehicle head model recognition and a de-duplication operation. This weakens the problem of inaccurate quantitative laser ranging in rainy weather while retaining the advantage of qualitative laser triggering, so that traffic-flow accuracy in rainy weather can be guaranteed. As shown in fig. 9, the flow of the vehicle information identification method in this alternative example may include the following steps:
Step 1, the laser detects in real time whether a vehicle passes under the laser section, and sends trigger information to the evidence-obtaining camera if a vehicle is detected. Here, the evidence-obtaining camera can also perform a self-detection once every several frames in order to detect a vehicle head. When the laser triggering algorithm processes the data, as many triggers as possible are issued (as long as an object exists, a trigger can be given) so that triggers are not missed. In the detection process of the laser sensor, interference from water mist may cause multiple detections, with the water mist mistakenly triggering as a vehicle, so the same vehicle may produce multiple triggers; in the laser triggering algorithm the same vehicle is first triggered as a normal vehicle, and the vehicle head model is subsequently used to identify the vehicle head and filter out the false triggers.
Step 2, after a laser trigger, the video frame closest to the laser triggering time is taken from the video frames acquired by the evidence-obtaining camera, and the vehicle head model detects whether a vehicle head exists in the current frame to prevent false triggering. If no vehicle head exists, the M frames before and the M frames after the current frame are examined; if no vehicle head exists in them either, the trigger is considered false and no vehicle processing is performed. If a vehicle head exists, step 3 is performed.
Step 3, a vehicle de-duplication operation is executed to prevent multiple triggers from producing multiple vehicle records. If the time difference between the detection times of the front and rear vehicle heads in the same lane exceeds 300 ms and the time between the ending time of the front vehicle and the detection time of the rear vehicle head exceeds 120 ms, a new vehicle is considered to have entered, and a new vehicle cache can be created. If the time between the front and rear vehicle heads is less than 300 ms, the rear vehicle is directly deleted. If the time difference between the two head detection times exceeds 300 ms but the time difference between the ending time of the front vehicle and the detection time of the rear vehicle head is less than 120 ms, detection needs to continue for 2 more frames to determine whether the vehicle head still exists; if it does, a new vehicle is considered to have entered and a new vehicle cache is created.
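The de-duplication rule of step 3 can be expressed roughly as below; the 300 ms and 120 ms thresholds come from the text above, while the function name and return labels are illustrative assumptions.

```python
# Classify a newly detected vehicle head on one lane as a new vehicle, a
# duplicate trigger, or a case that needs re-checking in the next 2 frames.

def handle_head_detection(t_head, last_head_time, prev_vehicle_end_time):
    """All times in seconds. last_head_time: previous head detection on this
    lane; prev_vehicle_end_time: ending time of the preceding vehicle."""
    if last_head_time is not None and (t_head - last_head_time) <= 0.300:
        return 'duplicate'      # heads too close together: drop the later one
    if prev_vehicle_end_time is not None and (t_head - prev_vehicle_end_time) < 0.120:
        return 'recheck'        # keep detecting for 2 more frames before deciding
    return 'new_vehicle'        # create a new vehicle cache
```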
Step 4, after a new vehicle is determined to have entered, the lane number of the vehicle is judged according to the position and area of the vehicle head frame, and whether the vehicle is shielded is judged according to the entry and exit times of vehicles in the adjacent lane. If the vehicle is shielded, the evidence-obtaining camera is switched, and the evidence-obtaining camera on the opposite side is used for vehicle ending detection.
Step 5, ending detection is performed using the video, to prevent vehicle ending errors caused by the laser being unable to distinguish water mist. When judging whether the vehicle has ended, the difference degree between two adjacent frames can be measured by a frame difference method; when N consecutive frames are identical, the vehicle can be judged to have ended. Alternatively, feature point matching can be performed between the current frame and the background (at the position corresponding to the vehicle head) to check whether the vehicle has ended.
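A minimal sketch of the frame-difference ending check in step 5 is given below, assuming grayscale numpy frames; the difference threshold and the choice of N are assumptions.

```python
# Declare the vehicle ended when N consecutive adjacent frames are nearly
# identical (mean absolute pixel difference below a threshold).
import numpy as np

def vehicle_has_ended(frames, n_static=3, diff_threshold=2.0):
    """frames: list of consecutive grayscale images collected after the
    reference frame, in acquisition order."""
    static_run = 0
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        if float(np.mean(diff)) <= diff_threshold:
            static_run += 1
            if static_run >= n_static:
                return True
        else:
            static_run = 0  # motion seen again, restart the static count
    return False
```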
Step 6, the toll vehicle type is identified by fusing the vehicle head model, the axle tracking information and the frame-spliced picture identification information.
To determine the toll vehicle type of the vehicle, the vehicle head model and the frame-spliced vehicle type (i.e., the frame-spliced picture identification information) can be determined based on the vehicle frames of the vehicle, and the axle tracking information of the vehicle can also be determined through the axle tracking algorithm; the toll vehicle type of the vehicle is then determined by fusing the vehicle head model, the frame-spliced vehicle type and the axle tracking information.
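One possible fusion rule for step 6 is sketched below; the priority scheme, in which the tracked axle count cross-checks and, if necessary, corrects the image-based result, is an assumed illustration rather than the patent's exact rule.

```python
# Fuse the head-based type, the frame-spliced type, and the tracked axle count
# into a single toll vehicle type.

def fuse_vehicle_type(head_type, splice_type, axle_count, axle_table):
    """axle_table: mapping from preset toll vehicle type -> expected axle count."""
    # prefer the case where the two image-based results agree
    candidate = splice_type if splice_type == head_type else head_type
    if axle_table.get(candidate) == axle_count:
        return candidate  # image result is consistent with the tracked axles
    # otherwise fall back to any preset type consistent with the axle count
    for vtype, n_axles in axle_table.items():
        if n_axles == axle_count:
            return vtype
    return candidate
```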
With this alternative example, the vehicle type is identified by combining laser triggering, vehicle head model recognition and the de-duplication operation. When the laser sensor detects a vehicle passing through the laser section, trigger information is sent to the evidence-obtaining camera, so that vehicles are not missed; whether a vehicle actually corresponds to the laser trigger information is confirmed through the vehicle head model, and if so, the de-duplication operation is carried out to prevent multiple vehicle records caused by multiple laser triggers, so the accuracy of vehicle information identification can be improved.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM (Read-Only Memory)/RAM (Random Access Memory), magnetic disk, optical disk), including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided a vehicle information identification system for implementing the above-described vehicle information identification method. The vehicle information identification system may include:
a laser sensor for detecting a vehicle passing through a laser section of the laser sensor;
the target camera is used for collecting video frames of the visual field range of the target camera, wherein the laser section of the laser sensor is positioned in the visual field range of the target camera;
the data processing component is used for selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by the target camera under the condition that the laser sensor is triggered; under the condition that the vehicle head of the target vehicle is identified from the reference video frame, carrying out vehicle ending detection on the video frame acquired by the target camera to obtain an ending detection result of the target vehicle; under the condition that the ending detection result is used for indicating that the target vehicle is ended, determining a group of vehicle frames of the target vehicle from video frames collected by the target camera; and identifying a target vehicle type matched with the target vehicle from a set of preset vehicle types according to a set of vehicle frames.
It should be noted that the data processing means may be a means, such as a processor, a controller, or the like, for executing the foregoing determination of the reference video frame, the ending detection result, the set of vehicle frames, and the determination of the vehicle model based on the set of vehicle frames on the server or some processing device. The manner of determining the reference video frame, the ending detection result, the set of vehicle frames, and the vehicle model based on the set of vehicle frames is similar to that in the foregoing embodiment, and is already described and will not be repeated here.
Through the vehicle information identification system, under the condition that the laser sensor is triggered, selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by the target camera, wherein the laser section of the laser sensor is positioned in the visual field range of the target camera; under the condition that the vehicle head of the target vehicle is identified from the reference video frame, carrying out vehicle ending detection on the video frame acquired by the target camera to obtain an ending detection result of the target vehicle; under the condition that the ending detection result is used for indicating that the target vehicle is ended, determining a group of vehicle frames of the target vehicle from video frames collected by the target camera; according to a group of vehicle frames, a target vehicle type matched with a target vehicle is identified from a group of preset vehicle types, the problem that the accuracy of vehicle information identification is low due to the fact that laser ranging is easily affected by weather in a vehicle information identification method in the related art is solved, and the accuracy of vehicle information identification is improved.
In one exemplary embodiment, the target camera is flip-chip mounted on one side of the lane where the target vehicle is located, and the intersection line of the scanning section of the laser sensor and the road surface is located at the middle position of the camera field of view of the target camera.
In an exemplary embodiment, the data processing unit is further configured to select a target video frame with a minimum time interval between the acquisition time and the trigger time from the video frames acquired by the target camera, where the reference video frame includes the target video frame; and under the condition that the vehicle head of the vehicle is not identified in the target video frames, selecting front M frames and/or rear M frames of the target video frames from the video frames acquired by the target camera, wherein M is a positive integer greater than or equal to 1, and the reference video frames further comprise the front M frames and/or the rear M frames of the target video frames.
In one exemplary embodiment, the data processing component is further configured to determine a first detection time corresponding to a vehicle head of the candidate vehicle in a case where the vehicle head of the candidate vehicle is identified from the reference video frame; and performing de-duplication processing on the candidate vehicle in a case where a time difference between the first detection time and a second detection time, which is a time at which the vehicle head was last detected before the first detection time, is smaller than a first time threshold.
In one exemplary embodiment, the data processing means is further configured to determine the candidate vehicle as the target vehicle in a case where a time difference between the first detection time and the second detection time is greater than or equal to a first time threshold and a time difference between the first detection time and a vehicle ending time of a preceding vehicle is greater than or equal to a second time threshold; under the condition that the time difference between the first detection time and the second detection time is larger than or equal to a first time threshold value and the time difference between the first detection time and the vehicle ending time of the previous vehicle is smaller than a second time threshold value, vehicle head recognition is carried out on the video frames after the reference video frames, and a vehicle head recognition result is obtained; in a case where the vehicle head recognition result is used to indicate that the vehicle head is recognized from the video frame subsequent to the reference video frame, determining the candidate vehicle as the target vehicle; in a case where the vehicle head recognition result is used to indicate that the vehicle head is not recognized from the video frame subsequent to the reference video frame, the candidate vehicle is subjected to the deduplication process.
In an exemplary embodiment, the data processing unit is further configured to perform feature extraction on each video frame in a set of video frames acquired by the target camera after the reference video frame, so as to obtain a video feature of each video frame; determining feature difference degrees of adjacent video frames in a group of video frames according to the video features of each video frame, wherein the adjacent video frames comprise two adjacent video frames in the group of video frames, and the feature difference degrees of the adjacent video frames are used for representing differences between the video features of the two video frames in the adjacent video frames; and under the condition that the feature difference degree of any adjacent video frame in the N continuous video frames in the group of video frames is smaller than or equal to the difference degree threshold value according to the feature difference degree of the adjacent video frames, determining the previous video frame of the N video frames in the group of video frames as a vehicle ending frame of the target vehicle, wherein N is a positive integer larger than or equal to 2.
In an exemplary embodiment, the data processing component is further configured to determine a background area corresponding to a head position of the target vehicle, where the target road on which the target vehicle is located includes a plurality of lanes, the head position is a position where a vehicle head is located in a reference video frame, and the background area is an area selected from non-vehicle frames acquired by the target camera according to the head position; performing feature point matching on each video frame in the N video frames and the background area to obtain a feature point matching result corresponding to each video frame; under the condition that N video frames are matched with the background area according to the characteristic point matching result corresponding to each video frame, determining that the vehicle ending frame is verified; under the condition that the fact that the video frames are not matched with the background area is determined in the N video frames according to the feature point matching result, the fact that the verification of the vehicle ending frame is not passed is determined, and vehicle ending detection is conducted on the video frames collected by the target camera again.
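The background feature-point check used to verify the vehicle ending frame could be implemented, for example, with OpenCV ORB features as below; the match-count and distance thresholds are assumptions, and the patent does not prescribe a particular feature descriptor.

```python
# Verify that the road background is visible again at the head position,
# i.e. the vehicle has left the field of view.
import cv2

def matches_background(frame_roi, background_roi, min_matches=20):
    """frame_roi / background_roi: grayscale crops at the head position taken
    from a candidate ending frame and from a non-vehicle (background) frame."""
    orb = cv2.ORB_create()
    _kp1, des1 = orb.detectAndCompute(frame_roi, None)
    _kp2, des2 = orb.detectAndCompute(background_roi, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]  # assumed distance threshold
    return len(good) >= min_matches
```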
In an exemplary embodiment, the data processing component is further configured to determine, when the vehicle head of the target vehicle is identified from the reference video frame, a target lane where the target vehicle is located according to a location area where the vehicle head is located in the reference video frame, where a target road where the target vehicle is located includes a plurality of lanes, two cameras are oppositely disposed on two sides of the road of the target road, and the target camera is a first camera where the two cameras currently photograph the target vehicle; determining that the target vehicle is in a shielding state when the target lane is not the lane closest to the first camera among the plurality of lanes and the preset shielding condition is met; switching a camera shooting a target vehicle from a first camera to a second camera in the two cameras to obtain an updated target camera; wherein the preset shielding condition comprises at least one of the following: detecting that a vehicle exists in an adjacent lane from a plurality of vehicle frames corresponding to the reference video frame; the time difference between the acquisition time of the reference video frame and the vehicle entering time and/or the vehicle exiting time of the vehicle in the adjacent lane is smaller than or equal to a preset time difference threshold value, and the adjacent lane is a lane adjacent to the target lane and close to one side of the first camera in the multiple lanes.
In an exemplary embodiment, the data processing component is further configured to determine, according to the reference video frame, a vehicle head model corresponding to the vehicle head in the set of preset vehicle models, where the set of vehicle frames includes the reference video frame; perform frame splicing processing on the set of vehicle frames according to the acquisition time to obtain a vehicle frame-spliced image corresponding to the target vehicle; perform vehicle type recognition on the vehicle frame-spliced image to obtain a frame-spliced vehicle type, among the set of preset vehicle types, matched with the vehicle frame-spliced image; perform axle tracking on each axle of the target vehicle according to the set of vehicle frames to obtain axle tracking information of each axle; and fuse the vehicle head model, the frame-spliced vehicle type and the axle tracking information of each axle to obtain the target vehicle type.
In an exemplary embodiment, the data processing unit is further configured to sequentially perform the axle identification operation with each vehicle frame in the set of vehicle frames as a current vehicle frame, to obtain a vehicle frame in which each axle is located and a position in the located vehicle frame: carrying out axle identification on the current vehicle frame to obtain candidate shafts and shaft positions of the candidate shafts contained in the current vehicle frame; marking the candidate shaft as a first existing shaft and marking the shaft position of the candidate shaft as the shaft position of the first existing shaft in the current vehicle frame in the case that the shaft position of the candidate shaft matches the predicted shaft position of the first existing shaft in the set of existing shafts, wherein the predicted shaft position of the first existing shaft is the shaft position of the first existing shaft in the current vehicle frame predicted from the shaft position of the first existing shaft in the first vehicle frame preceding the current vehicle frame; and marking the candidate shaft as a second existing shaft newly built by the target vehicle and marking the shaft position of the candidate shaft as the shaft position of the second existing shaft in the current vehicle frame under the condition that the shaft position of the candidate shaft is not matched with the shaft position of any existing shaft in a group of existing shafts, wherein the predicted shaft position of any existing shaft is predicted according to the shaft position of any existing shaft in a vehicle frame before the current vehicle frame.
In one exemplary embodiment, the data processing component is further configured to, where the axle position of the candidate axle matches a predicted axle position of the first existing axle in the set of existing axles and the first vehicle frame is not the previous vehicle frame of the current vehicle frame, determine the axle position of the first existing axle in the second vehicle frame, predicted from the axle position of the first existing axle in the first vehicle frame, as the axle position of the first existing axle in the second vehicle frame, wherein the second vehicle frame is any vehicle frame between the first vehicle frame and the current vehicle frame; and predict the axle position of the first existing axle in at least one vehicle frame after the current vehicle frame according to the axle position of the candidate axle to obtain the predicted axle position of the first existing axle in the at least one vehicle frame.
In an exemplary embodiment, the data processing means is further adapted to predict an axle position of the second existing axle in at least one vehicle frame subsequent to the current vehicle frame based on the axle position of the candidate axle if the axle position of the candidate axle does not match an axle position of any of the set of existing axles, resulting in a predicted axle position of the second existing axle in the at least one vehicle frame.
It should be noted that the above modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in the above embodiments. It should also be noted that the above modules may be implemented in software or in hardware as part of the apparatus shown in fig. 1, where the hardware environment includes a network environment.
According to yet another aspect of embodiments of the present application, there is also provided a storage medium. Alternatively, in the present embodiment, the above-described storage medium may be used to execute the program code of any one of the above-described vehicle information identification methods in the embodiments of the present application.
Alternatively, in this embodiment, the storage medium may be located on at least one network device of the plurality of network devices in the network shown in the above embodiment.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of:
s1, under the condition that a laser sensor is triggered, selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by a target camera, wherein the laser section of the laser sensor is positioned in the visual field range of the target camera;
S2, under the condition that the vehicle head of the target vehicle is identified from the reference video frame, carrying out vehicle ending detection on the video frame acquired by the target camera to obtain an ending detection result of the target vehicle;
s3, under the condition that the ending detection result is used for indicating that the target vehicle is ended, determining a group of vehicle frames of the target vehicle from video frames collected by the target camera;
s4, identifying a target vehicle type matched with the target vehicle from a group of preset vehicle types according to a group of vehicle frames.
Alternatively, specific examples in the present embodiment may refer to examples described in the above embodiments, which are not described in detail in the present embodiment.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a U disk, ROM, RAM, a mobile hard disk, a magnetic disk or an optical disk.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device for implementing the above-mentioned vehicle information identification method, where the electronic device may be a server, a terminal, or a combination thereof.
Fig. 10 is a block diagram of an alternative electronic device, according to an embodiment of the present application, including a processor 1002, a communication interface 1004, a memory 1006, and a communication bus 1008, as shown in fig. 10, wherein the processor 1002, the communication interface 1004, and the memory 1006 communicate with each other via the communication bus 1008, wherein,
A memory 1006 for storing a computer program;
processor 1002, when executing computer programs stored on memory 1006, performs the following steps:
s1, under the condition that a laser sensor is triggered, selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by a target camera, wherein the laser section of the laser sensor is positioned in the visual field range of the target camera;
s2, under the condition that the vehicle head of the target vehicle is identified from the reference video frame, carrying out vehicle ending detection on the video frame acquired by the target camera to obtain an ending detection result of the target vehicle;
s3, under the condition that the ending detection result is used for indicating that the target vehicle is ended, determining a group of vehicle frames of the target vehicle from video frames collected by the target camera;
s4, identifying a target vehicle type matched with the target vehicle from a group of preset vehicle types according to a group of vehicle frames.
Alternatively, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean that there is only one bus or one type of bus. The communication interface is used for communication between the electronic device and other equipment.
The memory may include RAM or may include non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general purpose processor and may include, but is not limited to: CPU (Central Processing Unit ), NP (Network Processor, network processor), etc.; but also DSP (Digital Signal Processing, digital signal processor), ASIC (Application Specific Integrated Circuit ), FPGA (Field-Programmable Gate Array, field programmable gate array) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments, and this embodiment is not described herein.
It will be understood by those skilled in the art that the structure shown in fig. 10 is only illustrative, and the device implementing the vehicle information identification method may be a terminal device, such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. Fig. 10 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., a network interface, a display device, etc.) than shown in fig. 10, or have a different configuration than shown in fig. 10.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing a terminal device to execute in association with hardware, the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, ROM, RAM, magnetic or optical disk, etc.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause one or more computer devices (which may be personal computers, servers or network devices, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; for example, the division of the units is merely a logical function division, and in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution provided in the present embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or at least two units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application and are intended to be comprehended within the scope of the present application.

Claims (15)

1. A vehicle information identification method, characterized by comprising:
under the condition that a laser sensor is triggered, selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by a target camera, wherein the laser section of the laser sensor is positioned in the visual field range of the target camera;
under the condition that the vehicle head of the target vehicle is identified from the reference video frames, carrying out vehicle ending detection on the video frames collected by the target camera to obtain an ending detection result of the target vehicle; determining a group of vehicle frames of the target vehicle from video frames collected by the target camera under the condition that the ending detection result is used for indicating that the target vehicle has ended;
and identifying a target vehicle type matched with the target vehicle from a group of preset vehicle types according to the group of vehicle frames.
2. The method according to claim 1, wherein selecting a reference video frame whose acquisition time matches a trigger time of the laser sensor from among the video frames acquired from the target camera, comprises:
selecting a target video frame with the smallest time interval between the acquisition time and the trigger time from video frames acquired by the target camera, wherein the reference video frame comprises the target video frame;
and under the condition that the vehicle head of the vehicle is not identified in the target video frames, selecting front M frames and/or rear M frames of the target video frames from video frames acquired by the target camera, wherein M is a positive integer greater than or equal to 1, and the reference video frames further comprise the front M frames and/or the rear M frames of the target video frames.
3. The method of claim 1, wherein after selecting a reference video frame whose acquisition time matches a trigger time of the laser sensor from among the video frames acquired from the target camera, the method further comprises:
determining a first detection time corresponding to a vehicle head of a candidate vehicle in a case where the vehicle head of the candidate vehicle is identified from the reference video frame;
And performing de-duplication processing on the candidate vehicle in a case where a time difference between the first detection time and a second detection time is smaller than a first time threshold, wherein the second detection time is a time at which the vehicle head was last detected before the first detection time.
4. A method according to claim 3, wherein after said determining a first detection instant corresponding to a vehicle head of the candidate vehicle, the method further comprises:
determining the candidate vehicle as the target vehicle in a case where a time difference between the first detection time and the second detection time is greater than or equal to the first time threshold and a time difference between the first detection time and a vehicle ending time of the preceding vehicle is greater than or equal to a second time threshold;
when the time difference between the first detection time and the second detection time is greater than or equal to the first time threshold and the time difference between the first detection time and the vehicle ending time of the preceding vehicle is smaller than the second time threshold, performing vehicle head recognition on the video frames after the reference video frame to obtain a vehicle head recognition result;
determining the candidate vehicle as the target vehicle in the case that the vehicle head recognition result is used for indicating that a vehicle head is recognized from a video frame subsequent to the reference video frame;
and in the case that the vehicle head identification result is used for indicating that the vehicle head is not identified in the video frames after the reference video frame, performing de-duplication processing on the candidate vehicle.
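The de-duplication decision of claims 3 and 4 reduces to two time comparisons. The sketch below assumes all times are given in seconds and returns a symbolic decision; the threshold names and the return labels are illustrative assumptions.

```python
def classify_head_detection(t_first: float, t_prev_head: float, t_prev_end: float,
                            dup_threshold: float, end_threshold: float) -> str:
    """Decide how to treat a newly detected vehicle head (claims 3 and 4).

    t_first     -- first detection time of the candidate vehicle's head
    t_prev_head -- time the last vehicle head was detected before t_first
    t_prev_end  -- vehicle ending time of the preceding vehicle
    """
    if t_first - t_prev_head < dup_threshold:
        return "duplicate"            # same head seen again too soon: de-duplicate
    if t_first - t_prev_end >= end_threshold:
        return "new_target"           # clearly separated from the preceding vehicle
    # Ambiguous case of claim 4: keep looking for a head in the frames after the
    # reference frame; confirm as a new target only if one is found there.
    return "recheck_later_frames"
```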
5. The method of claim 1, wherein the performing vehicle ending detection on the video frames acquired by the target camera to obtain the ending detection result of the target vehicle comprises:
performing feature extraction on each video frame in a group of video frames acquired by the target camera after the reference video frame, to obtain the video features of each video frame;
determining feature difference degrees of adjacent video frames in the group of video frames according to the video features of each video frame, wherein the adjacent video frames comprise two adjacent video frames in the group of video frames, and the feature difference degrees of the adjacent video frames are used for representing differences between video features of the two video frames in the adjacent video frames;
and in the case that it is determined, according to the feature difference degrees of the adjacent video frames, that the feature difference degree of every pair of adjacent video frames among N consecutive video frames in the group of video frames is smaller than or equal to a difference degree threshold, determining the video frame preceding the N video frames in the group of video frames as a vehicle ending frame of the target vehicle, wherein N is a positive integer greater than or equal to 2.
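The ending detection of claim 5 only needs a per-frame feature vector and a difference metric. The sketch below uses the L2 distance between adjacent feature vectors and N = 3; both choices are assumptions, since the disclosure fixes neither.

```python
from typing import List, Optional

import numpy as np


def find_ending_frame(features: List[np.ndarray], diff_threshold: float, n: int = 3) -> Optional[int]:
    """Index of the vehicle ending frame per claim 5, or None if the vehicle has not ended.

    `features` holds one feature vector for each video frame collected after the
    reference frame; the L2 distance between adjacent frames is used as the
    feature difference degree.
    """
    consecutive = 0
    for i in range(1, len(features)):
        diff = float(np.linalg.norm(features[i] - features[i - 1]))
        consecutive = consecutive + 1 if diff <= diff_threshold else 0
        if consecutive >= n:
            # The frame just before this run of n near-identical frames is the ending frame.
            return i - n
    return None
```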
6. The method of claim 5, wherein after the determining the video frame preceding the N video frames in the group of video frames as the vehicle ending frame of the target vehicle, the method further comprises:
determining a background area corresponding to a head position of the target vehicle, wherein a target road where the target vehicle is located comprises a plurality of lanes, the head position is a position where the vehicle head is located in the reference video frame, and the background area is an area selected from non-vehicle frames acquired by the target camera according to the head position;
performing feature point matching between each video frame in the N video frames and the background area to obtain a feature point matching result corresponding to each video frame;
determining that the vehicle ending frame passes verification in the case that it is determined, according to the feature point matching result corresponding to each video frame, that each of the N video frames matches the background area;
and determining that the vehicle ending frame fails verification, and performing vehicle ending detection on the video frames collected by the target camera again, in the case that it is determined, according to the feature point matching results, that a video frame in the N video frames does not match the background area.
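One way to realize the background verification of claim 6 is ORB keypoint matching with OpenCV. The descriptor choice, the brute-force Hamming matcher and the min_matches threshold are assumptions; the disclosure does not fix a particular feature type.

```python
import cv2


def matches_background(frame_roi, background_roi, min_matches: int = 30) -> bool:
    """Check whether a frame region matches the vehicle-free background region."""
    orb = cv2.ORB_create()
    _, des_frame = orb.detectAndCompute(frame_roi, None)
    _, des_bg = orb.detectAndCompute(background_roi, None)
    if des_frame is None or des_bg is None:
        return False                       # not enough texture to match
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_frame, des_bg)
    return len(matches) >= min_matches


def ending_frame_verified(last_n_rois, background_roi) -> bool:
    """Claim 6: verification passes only if every one of the N frames matches the background."""
    return all(matches_background(roi, background_roi) for roi in last_n_rois)
```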
7. The method according to claim 1, wherein the method further comprises:
in the case that a vehicle head of the target vehicle is identified from the reference video frame, determining a target lane in which the target vehicle is located according to the position area of the vehicle head in the reference video frame, wherein a target road on which the target vehicle is located comprises a plurality of lanes, two cameras are arranged opposite each other on the two sides of the target road, and the target camera is a first camera, of the two cameras, that is currently shooting the target vehicle;
determining that the target vehicle is in an occluded state in the case that the target lane is not the lane, among the plurality of lanes, closest to the first camera and a preset occlusion condition is met;
switching the camera shooting the target vehicle from the first camera to a second camera of the two cameras to obtain the updated target camera;
wherein the preset occlusion condition comprises at least one of the following: a vehicle is detected in an adjacent lane from a plurality of vehicle frames corresponding to the reference video frame; the time difference between the acquisition time of the reference video frame and the vehicle entering time and/or the vehicle exiting time of the vehicle in the adjacent lane is smaller than or equal to a preset time difference threshold, wherein the adjacent lane is the lane, among the plurality of lanes, that is adjacent to the target lane and on the side close to the first camera.
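A compact sketch of the occlusion test and camera-switch decision of claim 7; the argument names and the threshold are introduced here for illustration and are not taken from the disclosure.

```python
from typing import List


def should_switch_camera(target_lane: int, nearest_lane: int,
                         vehicle_in_adjacent_lane: bool,
                         ref_frame_time: float,
                         adjacent_entry_exit_times: List[float],
                         time_diff_threshold: float) -> bool:
    """Switch from the first camera to the opposite camera when the target vehicle
    is not in the lane nearest to the first camera and an occlusion condition holds."""
    if target_lane == nearest_lane:
        return False
    # Second occlusion condition: a vehicle entered or left the adjacent lane
    # close in time to the reference frame's acquisition time.
    near_in_time = any(abs(ref_frame_time - t) <= time_diff_threshold
                       for t in adjacent_entry_exit_times)
    return vehicle_in_adjacent_lane or near_in_time
```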
8. The method according to any one of claims 1 to 7, wherein the identifying a target vehicle type matched with the target vehicle from a group of preset vehicle types according to the group of vehicle frames comprises:
determining, according to the reference video frame, a vehicle head type corresponding to the vehicle head in the group of preset vehicle types, wherein the group of vehicle frames comprises the reference video frame;
performing frame stitching processing on the group of vehicle frames according to the acquisition time to obtain a stitched vehicle frame image corresponding to the target vehicle;
performing vehicle type recognition on the stitched vehicle frame image to obtain a stitched-frame vehicle type matched with the stitched vehicle frame image in the group of preset vehicle types;
performing axle tracking on each axle of the target vehicle according to the group of vehicle frames to obtain axle tracking information of each axle;
and fusing the vehicle head type, the stitched-frame vehicle type and the axle tracking information of each axle to obtain the target vehicle type.
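The disclosure leaves the fusion rule of claim 8 unspecified. The toy arbitration below, which lets the tracked axle count break a disagreement between the head-based and stitched-frame-based results, is an assumption for illustration only.

```python
def fuse_vehicle_type(head_type: str, stitched_type: str, axle_count: int) -> str:
    """Toy fusion of the three cues of claim 8 (illustrative rule, not the disclosed one)."""
    if head_type == stitched_type:
        return head_type
    # When the two image-based results disagree, let the tracked axle count arbitrate:
    # e.g. three or more tracked axles suggests the longer/heavier of the two candidates.
    if axle_count >= 3:
        return stitched_type
    return head_type
```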
9. The method of claim 8, wherein the performing axle tracking on each axle of the target vehicle according to the group of vehicle frames to obtain the axle tracking information of each axle comprises:
sequentially performing the following axle identification operation with each vehicle frame in the group of vehicle frames as a current vehicle frame, to obtain the vehicle frame in which each axle is located and the axle position in that vehicle frame:
performing axle identification on the current vehicle frame to obtain a candidate axle contained in the current vehicle frame and an axle position of the candidate axle;
marking the candidate axle as a first existing axle and marking the axle position of the candidate axle as the axle position of the first existing axle in the current vehicle frame in the case that the axle position of the candidate axle matches a predicted axle position of the first existing axle in a group of existing axles, wherein the predicted axle position of the first existing axle is an axle position in the current vehicle frame predicted from the axle position of the first existing axle in a first vehicle frame preceding the current vehicle frame;
and marking the candidate axle as a second existing axle newly created for the target vehicle and marking the axle position of the candidate axle as the axle position of the second existing axle in the current vehicle frame in the case that the axle position of the candidate axle does not match the predicted axle position of any existing axle in the group of existing axles, wherein the predicted axle position of any existing axle is predicted from the axle position of that existing axle in a vehicle frame preceding the current vehicle frame.
10. The method of claim 9, wherein the sequentially performing the axle identification operation with each vehicle frame in the group of vehicle frames as the current vehicle frame further comprises:
in the case that the axle position of the candidate axle matches the predicted axle position of the first existing axle in the group of existing axles and the first vehicle frame is not the vehicle frame immediately preceding the current vehicle frame, determining the axle position of the first existing axle in a second vehicle frame, predicted from the axle position of the first existing axle in the first vehicle frame, as the axle position of the first existing axle in the second vehicle frame;
and predicting the axle position of the first existing axle in at least one vehicle frame after the current vehicle frame according to the axle position of the candidate axle, to obtain the predicted axle position of the first existing axle in the at least one vehicle frame.
11. The method of claim 9, wherein the sequentially performing the axle identification operation with each vehicle frame in the group of vehicle frames as the current vehicle frame further comprises:
predicting the axle position of the second existing axle in at least one vehicle frame after the current vehicle frame according to the axle position of the candidate axle, in the case that the axle position of the candidate axle does not match the axle position of any existing axle in the group of existing axles, to obtain the predicted axle position of the second existing axle in the at least one vehicle frame.
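A greedy, constant-displacement tracker is enough to illustrate the matching and prediction steps of claims 9 to 11. The one-dimensional (horizontal) axle position, the per-frame displacement and the match tolerance are all illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TrackedAxle:
    positions: Dict[int, float] = field(default_factory=dict)   # frame index -> detected x position
    predicted: Dict[int, float] = field(default_factory=dict)   # frame index -> predicted x position


def track_axles(detections_per_frame: List[List[float]],
                match_tolerance: float = 25.0,
                displacement_per_frame: float = 12.0) -> List[TrackedAxle]:
    """Greedy axle tracker sketching the matching logic of claims 9 to 11."""
    axles: List[TrackedAxle] = []
    for frame_idx, detections in enumerate(detections_per_frame):
        for x in detections:
            # Match the candidate axle against the predicted position of an existing axle.
            matched = next((a for a in axles
                            if frame_idx in a.predicted
                            and abs(x - a.predicted[frame_idx]) <= match_tolerance), None)
            if matched is None:
                matched = TrackedAxle()       # no match: create a new existing axle (claim 9)
                axles.append(matched)
            matched.positions[frame_idx] = x
            # Predict where this axle should appear in the next frame (claims 10 and 11).
            matched.predicted[frame_idx + 1] = x + displacement_per_frame
    return axles
```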
12. A vehicle information identification system, characterized by comprising:
a laser sensor for detecting a vehicle passing through a laser section of the laser sensor;
the target camera is used for collecting video frames of the visual field range of the target camera, wherein the laser section of the laser sensor is positioned in the visual field range of the target camera;
The data processing component is used for selecting a reference video frame with the acquisition time matched with the triggering time of the laser sensor from video frames acquired by the target camera under the condition that the laser sensor is triggered; under the condition that the vehicle head of the target vehicle is identified from the reference video frames, carrying out vehicle ending detection on the video frames collected by the target camera to obtain an ending detection result of the target vehicle;
determining a group of vehicle frames of the target vehicle from video frames collected by the target camera under the condition that the ending detection result is used for indicating that the target vehicle has ended; and identifying a target vehicle type matched with the target vehicle from a group of preset vehicle types according to the group of vehicle frames.
13. The system of claim 12, wherein the target camera is mounted in an inverted orientation on a side of the lane in which the target vehicle is located, and the intersection line of the scanning section of the laser sensor with the road surface is located at a middle position of the field of view of the target camera.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program when run performs the method of any one of claims 1 to 11.
15. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of claims 1 to 11 by means of the computer program.
CN202211711893.1A 2022-12-29 2022-12-29 Vehicle information identification method and system, storage medium and electronic device Pending CN116152753A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211711893.1A CN116152753A (en) 2022-12-29 2022-12-29 Vehicle information identification method and system, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211711893.1A CN116152753A (en) 2022-12-29 2022-12-29 Vehicle information identification method and system, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN116152753A true CN116152753A (en) 2023-05-23

Family

ID=86338227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211711893.1A Pending CN116152753A (en) 2022-12-29 2022-12-29 Vehicle information identification method and system, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116152753A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116740662A (en) * 2023-08-15 2023-09-12 贵州中南锦天科技有限责任公司 Axle recognition method and system based on laser radar
CN116740662B (en) * 2023-08-15 2023-11-21 贵州中南锦天科技有限责任公司 Axle recognition method and system based on laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20231213
Address after: 430074 room 01-04, 6-7 / F, building B5, phase II construction project of financial back office service center base, 77 Guanggu Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province
Applicant after: Wuhan Wanji Photoelectric Technology Co.,Ltd.
Address before: Wanji space, building 12, Zhongguancun Software Park, yard 8, Dongbei Wangxi Road, Haidian District, Beijing 100193
Applicant before: BEIJING WANJI TECHNOLOGY Co.,Ltd.