CN115331181A - Vehicle image fusion method and device, computer equipment and storage medium - Google Patents

Vehicle image fusion method and device, computer equipment and storage medium

Info

Publication number
CN115331181A
Authority
CN
China
Prior art keywords
vehicle
image
images
detection
detection positions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210961734.0A
Other languages
Chinese (zh)
Inventor
胡中华
刘园
刘鸣
陈琰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Signalway Technologies Co ltd
Original Assignee
Beijing Signalway Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Signalway Technologies Co ltd filed Critical Beijing Signalway Technologies Co ltd
Priority to CN202210961734.0A
Publication of CN115331181A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 - Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a vehicle image fusion method, a vehicle image fusion apparatus, a computer device, a storage medium, and a computer program product. The method comprises the following steps: acquiring a plurality of vehicle images from at least two detection positions, the plurality of vehicle images being obtained by cameras at the at least two detection positions respectively photographing a running vehicle; matching the plurality of vehicle images from the different detection positions according to the time difference values between them, to obtain the vehicle image associated with the vehicle at each detection position; and fusing the vehicle images associated with the vehicle at each detection position. With this method, vehicle information captured at different positions can be matched according to the time difference values, the matching result of vehicle images taken from different angles can be determined, and the vehicle images associated with the vehicle at each detection position can be fused accurately.

Description

Vehicle image fusion method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a vehicle image fusion method, apparatus, computer device, storage medium, and computer program product.
Background
In existing vehicle type recognition technology, methods based on laser or infrared gratings lack effective and intuitive detection evidence, require complex construction, are costly, and are easily affected by rain, snow, heavy fog, and other adverse weather. Video-based image stitching and recognition overcomes the drawbacks of laser and infrared gratings and has become the current mainstream method, but it also has problems: it places high demands on the environment and is prone to poor recognition caused by external interference such as lighting and non-motor vehicles. Vehicle type recognition based on multi-sensor fusion faces the problem of how to coordinate sensors of different types and different working modes so that the whole system reaches an optimal state, and it cannot effectively fuse the images and structured information of a vehicle captured at different times and in different scenes.
Disclosure of Invention
In view of the above, it is necessary to provide a vehicle image fusion method, apparatus, computer device, computer-readable storage medium, and computer program product that address the above technical problems and improve fusion efficiency.
In a first aspect, the present application provides a vehicle image fusion method, including:
acquiring a plurality of vehicle images from at least two detection positions, the plurality of vehicle images being obtained by the cameras at the at least two detection positions respectively photographing a running vehicle;
matching the plurality of vehicle images from the different detection positions according to the time difference values between them, to obtain the vehicle image associated with the vehicle at each detection position; and
fusing the vehicle images associated with the vehicle at each detection position.
In one embodiment, the matching the plurality of vehicle images at different detection positions according to the time difference values between the plurality of vehicle images at different detection positions respectively to obtain the vehicle image associated with the vehicle at each detection position includes:
respectively obtaining the image detection time of each vehicle image according to the plurality of vehicle images at different detection positions and identifying the vehicle speed;
calculating estimated running time of the vehicle running between the adjacent detection positions according to the distance between the adjacent detection positions and the vehicle speed;
generating time difference values between a plurality of vehicle images of the adjacent detection positions according to the estimated driving time and the detection time of each vehicle image of the adjacent detection positions;
according to the time difference value, determining a vehicle image associated with the vehicle at each detection position in a plurality of vehicle images of the adjacent detection positions; the vehicle images at the detection positions of the vehicle correspond to the adjacent detection positions one by one.
In one embodiment, the determining, according to the time difference value, a vehicle image associated with the vehicle at each of the detection positions in the plurality of vehicle images at the adjacent detection positions includes:
selecting a target time difference value according to the time difference value;
when the target time difference value is judged to be smaller than the interval abnormal threshold value, respectively determining the vehicle images corresponding to the target time difference value as the vehicle images of the vehicle at each detection position;
when the target time difference value is judged to be larger than the interval abnormal threshold value, calculating the matching degree between the vehicle images corresponding to the target time difference value, and determining whether the vehicle image corresponding to the target time difference value is the vehicle image of the vehicle at each detection position according to the matching degree.
In one embodiment, when license plate information is absent from one of the first to-be-fused vehicle image and the second to-be-fused vehicle image, the matching degree comprises a color matching degree; when both the first to-be-fused vehicle image and the second to-be-fused vehicle image contain license plate information, the matching degree comprises the color matching degree and a license plate matching degree, and the priority of the license plate matching degree is higher than that of the color matching degree.
In one embodiment, the fusing the vehicle images associated with the vehicle at the detection positions further comprises:
when the vehicle speed is smaller than a vehicle speed threshold, acquiring the corresponding predicted shooting time sequence according to the order of the detection positions, judging whether the predicted shooting time sequence matches the shooting times of the vehicle images at the different detection positions to obtain a shooting time matching result, and judging, based on the shooting time matching result, whether to fuse the vehicle images associated with the vehicle at each detection position;
and when the vehicle speed is greater than the vehicle speed threshold, judging whether the shooting time intervals corresponding to the order of the detection positions correspond to the time interval threshold parameter, and accordingly whether to fuse the vehicle images associated with the vehicle at each detection position.
In one embodiment, the fusing the plurality of vehicle images associated with the vehicle at each of the detection positions further comprises:
acquiring a position identification sequence when the vehicle is shot according to different detection positions;
matching the identifiers carried by the vehicle images at different detection positions according to the position identifier sequence to obtain an identifier matching result;
and judging whether to fuse the vehicle images associated with the vehicle at each detection position or not based on the identification matching result.
In one embodiment, the cameras at the at least two detection positions respectively photographing the running vehicle comprises the following steps:
calculating a vehicle speed when the camera of the first detection position detects a running vehicle;
the camera at the first detection position predicts the predicted shooting time for the running vehicle to reach the second detection position according to the vehicle speed;
and the camera at the first detection position sends the estimated shooting time to the camera at the second detection position, so that the camera at the second detection position shoots.
In a second aspect, the application further provides a vehicle image fusion device. The device comprises:
the system comprises an image acquisition module, a detection module and a display module, wherein the image acquisition module is used for acquiring a plurality of vehicle images of at least two detection positions; the plurality of vehicle images are obtained by shooting running vehicles by the cameras at the at least two detection positions respectively;
the image matching module is used for matching the plurality of vehicle images at different detection positions according to time difference values among the plurality of vehicle images at different detection positions respectively to obtain vehicle images associated with the vehicle at each detection position;
and the image fusion module is used for fusing vehicle images associated with the vehicle at each detection position.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the vehicle image fusion steps in any embodiment when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of vehicle image fusion in any of the embodiments described above.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of vehicle image fusion in any of the embodiments described above.
According to the vehicle image fusion method, apparatus, computer device, storage medium, and computer program product described above, a plurality of vehicle images from at least two detection positions are acquired, i.e., vehicle images captured from different angles; the vehicle images from the different detection positions are matched according to the time difference values between them, so that vehicle information captured at different positions is matched by time difference and the matching result of vehicle images taken from different angles is determined without directly calculating image similarity, directly yielding the vehicle image associated with the vehicle at each detection position; the vehicle images associated with the vehicle at each detection position are then fused, so that the head, side, and tail images, the license plate number, and the vehicle type of the same vehicle are accurately fused, adding side images, vehicle type, and other information to the existing toll systems at logistics park entrances and exits and providing a basis and guarantee for charging by vehicle type.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of a vehicle image fusion method;
FIG. 2 is a schematic flow chart diagram of a vehicle image fusion method according to one embodiment;
FIG. 3 is a schematic flow chart diagram illustrating a vehicle image fusion method according to another embodiment;
FIG. 4 is a schematic flow chart diagram illustrating a method for fusing images of a vehicle according to one embodiment;
FIG. 5 is a block diagram showing the construction of a vehicle image fusion apparatus according to an embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The vehicle image fusion method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on the cloud or other network server.
The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart car-mounted devices, and the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In one embodiment, as shown in fig. 2, a vehicle image fusion method is provided, which is described by taking the method as an example applied to the terminal 102 in fig. 1, and includes the following steps:
Step 202, acquiring a plurality of vehicle images from at least two detection positions; the plurality of vehicle images are obtained by photographing the running vehicle with the cameras at the at least two detection positions.
The detection positions are the positions where the cameras are located; they determine the angles from which the vehicle is photographed, and the different angles capture different areas of the vehicle, yielding a vehicle head image, a vehicle side image, and a vehicle tail image respectively. As an example of one detection position: the camera is mounted at the roadside of the entrance/exit at a height of 1.2 m to 1.8 m; the center line of the field of view of the vehicle-side recognition unit is perpendicular to the direction of travel of vehicles at the entrance/exit; the downward viewing angle is 0 to 15 degrees; and the distance from the road edge is 0.5 m to 1.2 m. The camera uses a 1.4 mm to 2 mm fisheye lens. The horizontal viewing angle is adjusted so that the driving direction of the road surface is horizontal in the image, ensuring that the vehicle moves horizontally in the image.
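To make the mounting parameters above easy to check during deployment, they can be captured in a small configuration structure. The following Python sketch is purely illustrative; the class name, field names, and validation helper are assumptions of this description, not part of the patent.

    from dataclasses import dataclass

    @dataclass
    class SideCameraConfig:
        """Mounting parameters of the vehicle-side recognition camera (illustrative)."""
        mount_height_m: float       # 1.2 m to 1.8 m above the road surface
        look_down_angle_deg: float  # 0 to 15 degrees downward tilt
        road_edge_offset_m: float   # 0.5 m to 1.2 m from the road edge
        lens_mm: float              # 1.4 mm to 2 mm fisheye lens

        def is_within_spec(self) -> bool:
            # Check each parameter against the ranges given above.
            return (1.2 <= self.mount_height_m <= 1.8
                    and 0.0 <= self.look_down_angle_deg <= 15.0
                    and 0.5 <= self.road_edge_offset_m <= 1.2
                    and 1.4 <= self.lens_mm <= 2.0)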
The plurality of vehicle images at a detection position are obtained by photographing the travelling vehicle from a certain angle. The travelling process means that the vehicle's position on the road changes over time, and the images taken as the position changes form the plurality of vehicle images. A vehicle image can be recognized; the recognition result is the information about the vehicle, photographed from the corresponding angle, extracted from the vehicle image. The information extracted from the vehicle images may concern one or more vehicles, and for that one or more vehicles the associated vehicle images can be determined by the scheme of the present application. The associated vehicle images are therefore determined via the detection positions so that image stitching can be performed.
When the plurality of vehicle images are each image in the video, the detection position can determine the associated vehicle image and perform recognition, so that a recognition result with high reliability is obtained.
When dealing with video anomalies, the terminal realizes the recognition function by caching single frames of the large image: the vehicle-side recognition unit detects in real time the time at which the vehicle passes, and notifies the head and tail recognition units to perform single-frame recognition using the cached large image.
In one embodiment, at least two cameras for detecting positions, respectively photographing a running vehicle, comprise: when the camera of the first detection position detects a running vehicle, calculating the speed of the vehicle; the camera of the first detection position predicts the predicted shooting time when the running vehicle reaches the second detection position according to the vehicle speed; and the camera at the first detection position sends the estimated shooting time to the camera at the second detection position, so that the camera at the second detection position shoots.
The first detection position and the second detection position are corresponding detection positions. The information extracted from the vehicle image shot by the camera at the first detection position is used for determining the associated vehicle image and determining the shooting time of the camera at the second detection position so as to control the camera at the second detection position to shoot the vehicle according to the estimated shooting time.
Illustratively, the first detection position is a position of a vehicle-side recognition unit, and the second detection position is a position of at least one of a vehicle head recognition unit and a vehicle tail recognition unit. When the vehicle side identification unit shoots the vehicle head, calculating the vehicle speed, estimating estimated shooting time when the running vehicle reaches the position corresponding to the vehicle head identification unit according to the vehicle speed, and sending the estimated shooting time to the vehicle head identification unit so that the vehicle head identification unit shoots the vehicle at the estimated shooting time; and then, when the vehicle side identification unit shoots the vehicle tail, calculating the speed of the vehicle, estimating the estimated shooting time of the running vehicle reaching the position corresponding to the vehicle tail identification unit according to the speed of the vehicle, and sending the estimated shooting time of the vehicle reaching the position corresponding to the vehicle tail identification unit so that the vehicle tail identification unit shoots the vehicle at the estimated shooting time. The vehicle head identification unit is a camera for shooting the vehicle head and corresponding equipment, and the vehicle tail identification unit is a camera for shooting the vehicle tail and corresponding equipment.
Controlling the camera at the second detection position to photograph the vehicle according to the estimated shooting time includes: the vehicle head recognition unit photographs the head of the vehicle for the first time at the corresponding estimated shooting time; during shooting, each time the vehicle-side recognition unit detects that the vehicle has travelled a preset distance (for example, 6 meters), the head recognition unit is triggered to shoot once. Similarly, the tail recognition unit photographs the tail of the vehicle at its corresponding estimated shooting time. Whether or not it is the first shot, a timed cached picture can be obtained and used to output a single-frame recognition result, the single-frame recognition result being the information extracted from the vehicle image. The cached-large-image single-frame recognition result is the fallback result for handling abnormal situations: the vehicle-side recognition unit detects in real time the time at which the vehicle passes it and notifies the head and tail recognition units to perform single-frame recognition using the cached large image.
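The trigger flow just described can be sketched as follows, assuming the estimated shooting time is simply the detection time plus the travel time over the known distance at the measured speed. The function and method names (for example schedule_capture) are illustrative placeholders, not identifiers from the patent.

    import time

    def estimate_shooting_time(detect_time_s: float, distance_m: float, speed_mps: float) -> float:
        """Estimated time at which the vehicle reaches the next detection position."""
        return detect_time_s + distance_m / speed_mps

    def on_head_detected_by_side_unit(speed_mps: float, dist_to_head_unit_m: float,
                                      trigger_id: int, head_unit) -> None:
        # The vehicle-side unit predicts when the vehicle will reach the head
        # recognition unit and sends the estimated shooting time with the trigger ID.
        t_est = estimate_shooting_time(time.time(), dist_to_head_unit_m, speed_mps)
        head_unit.schedule_capture(estimated_time=t_est, trigger_id=trigger_id)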
Specifically, a plurality of vehicle images of each detection position respectively form a cache queue, and a video shooting mechanism and a cache mechanism are as follows:
when the vehicle side identification unit detects that a vehicle passes through, the vehicle side identification unit estimates the first estimated shooting time of the vehicle arriving at the vehicle head identification unit by calculating the vehicle speed, and sends the estimated shooting time and the ID of the vehicle head to the vehicle head identification unit when a trigger signal is sent. And after the estimated shooting time of the vehicle head, the vehicle side identification unit sends out the cache instruction again when detecting that the vehicle moves for 6 meters, and triggers the vehicle head identification unit to shoot. When the vehicle is detected to completely pass through the vehicle side identification unit, the estimated shooting time of the vehicle reaching the vehicle tail identification unit is estimated by calculating the vehicle speed, and a signal is sent to the vehicle tail identification unit with the estimated shooting time and the ID.
Correspondingly, if the vehicle head identification unit or the vehicle tail identification unit can output the identification result of the video stream, the corresponding vehicle information is extracted by preferentially adopting the identification result of the video stream according to a video shooting mechanism so as to perform image fusion; if one identification unit of the head identification unit and the tail identification unit can not output the identification result of the video stream, the identification unit which can not output the identification result of the video stream starts a cache mechanism, and one large image is cached every 500ms according to the cache mechanism and put into a cache queue; when a signal triggered by a cache instruction is received, a cache large image with the closest time is searched in a cache queue through the attached pre-estimated shooting time, single-frame identification is carried out, and a result is output. And a trigger ID field is attached to the identification result, the ID of the video stream result is 0, and the ID of the single-frame identification result is carried when the vehicle side triggers, so that the two results are distinguished.
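A minimal sketch of the caching mechanism described above: one large image is cached roughly every 500 ms, and when a trigger signal arrives the cached frame closest to the attached estimated shooting time is used for single-frame recognition. The class name and queue length are assumptions for illustration only.

    from collections import deque

    class FrameCache:
        """Holds recently captured large images for on-demand single-frame recognition."""

        def __init__(self, maxlen: int = 200):
            self.frames = deque(maxlen=maxlen)  # (timestamp_s, image) pairs

        def cache(self, timestamp_s: float, image) -> None:
            # Called roughly every 500 ms while no video-stream result is available.
            self.frames.append((timestamp_s, image))

        def closest_to(self, estimated_time_s: float):
            # On a trigger signal, return the cached large image whose timestamp
            # is closest to the estimated shooting time sent by the side unit.
            if not self.frames:
                return None
            return min(self.frames, key=lambda item: abs(item[0] - estimated_time_s))[1]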
When the vehicle-side recognition result allows a difference value in at least one dimension to be calculated by the fusion algorithm, the head or tail recognition result is searched for according to the difference value of the corresponding dimension; if such a recognition result is found, the single-frame recognition result is not used; otherwise, the single-frame recognition result is used so that the fusion method can select a suitable vehicle image to be fused.
And 204, respectively matching the plurality of vehicle images at different detection positions according to the time difference values among the plurality of vehicle images at different detection positions to obtain vehicle images associated with the vehicle at each detection position.
The time difference value is the difference, along the time axis, between vehicle images at different detection positions. When the plurality of vehicle images at a detection position form an image set such as a cache queue or a stack, each image in that set is paired with the images in the set at another detection position to calculate time difference values, yielding the vehicle images of the two image sets that are associated at each detection position. The vehicle images associated at the detection positions are the to-be-fused vehicle images of the same vehicle, i.e., the to-be-fused vehicle images determined by calculation based on the time difference values.
For example: the camera at the first detection position sequentially shoots an A1 image and an A2 image, and the camera at the second detection position sequentially shoots a B image, so that a time difference value between the A1 image and the B image is calculated, and then a time difference value between the A2 image and the B image is calculated; comparing the two time difference values to obtain a time difference value matching result of the B image; and the time difference value matching result of the B image is used for selecting the vehicle image associated with the B image of the second detection position from the A1 image and the A2 image.
In one embodiment, NTP is used as the time synchronization tool for the multiple cameras, ensuring that the cameras at the detection positions lie on the same time axis, so that when several vehicles are photographed in sequence by the cameras at the detection positions, their recognition results are also output in sequence. Furthermore, when environmental or human interference causes a discontinuous shooting time sequence, or one that does not conform to the predicted shooting time sequence, the fusion algorithm and the structured information in the results can be used to correct the abnormal interference and filter out wrong results, ensuring the correctness of the fusion result.
In one embodiment, matching the plurality of vehicle images at different detection positions according to time difference values between the plurality of vehicle images at different detection positions respectively to obtain vehicle images associated with the vehicle at each detection position includes:
obtaining the image detection time of each vehicle image and recognizing the vehicle speed from the plurality of vehicle images at the different detection positions; calculating the estimated travel time of the vehicle between adjacent detection positions from the distance between the adjacent detection positions and the vehicle speed; generating the time difference values between the plurality of vehicle images at the adjacent detection positions from the estimated travel time and the detection time of each vehicle image at the adjacent detection positions; and determining, according to the time difference values, the vehicle image associated with the vehicle at each detection position among the plurality of vehicle images at the adjacent detection positions, the vehicle images at the detection positions corresponding one-to-one with the adjacent detection positions.
The image detection time is the time when a camera at a certain detection position shoots a certain vehicle image, and each vehicle image has one image detection time. For a plurality of image detection times of one detection position, the respective image detection times should be sequentially arranged.
Unlike the image detection time directly acquired, the vehicle speed is recognized based on at least two vehicle images of one detected position. The identified vehicle speed is the speed at which the vehicle is traveling between adjacent detection locations. If the images shot by the cameras at the adjacent detection positions are different, matching can be carried out according to the vehicle images.
The estimated travel time of the vehicle between adjacent detection positions is calculated; the calculation may simply be the ratio of the distance between the adjacent detection positions to the vehicle speed. The calculated estimated travel time may serve as the estimated shooting time, and may also be used to determine the estimated shooting time of the first shot.
In one embodiment, determining the vehicle image associated with the vehicle at each detection position among the plurality of vehicle images at adjacent detection positions according to the time difference value includes: when the time difference value is a plain difference of times, searching the plurality of vehicle images at the adjacent detection positions for the minimum time difference value, and determining the vehicle image at each detection position corresponding to that minimum value as the vehicle image associated with the vehicle at each detection position. The vehicle images at the detection positions corresponding to the minimum value correspond one-to-one with the detection positions. In this way, under the normal condition in which the cameras of all detection units shoot, the selection of the target time difference value is realized. It should be understood that the time difference value is not necessarily a plain time difference or a value calculated from one; it may also be a difference calculated by means of a ratio, a variance, or the like.
Further, the vehicle image associated with a detection position can be determined whether shooting is normal or abnormal. The interval anomaly threshold is a multiple of the time interval threshold parameter between adjacent detection positions, for example twice it. A typical abnormal situation is the following: after a vehicle A passes the equipment, for various abnormal reasons (for example, a license plate cannot be recognized when the vehicle is too close), there is no recognition result for vehicle A in the head result queue of the head recognition unit, but the head result of a previous vehicle B (recognized one hour before vehicle A) is still in the queue; at this moment the head result belongs to vehicle B while the body result belongs to vehicle A, and the time difference between the two is large.
In order to solve the problem, the method for determining the vehicle image associated with the vehicle at each detection position in a plurality of vehicle images of adjacent detection positions according to the time difference value comprises the following steps:
and selecting a target time difference value according to the time difference value.
And when the target time difference value is judged to be smaller than the interval abnormal threshold value, respectively determining the vehicle images corresponding to the target time difference value as the vehicle images of the vehicle at each detection position. When the target time difference value is smaller than the interval abnormal threshold value, the adjacent detection positions can normally shoot the vehicle, the target time difference value is a selected time difference value, and the selected time difference value can be the minimum value between vehicle images of different detection positions and can also be selected in other modes.
When the target time difference value is judged to be larger than the interval anomaly threshold, the matching degree between the vehicle images corresponding to the target time difference value is calculated, and whether the vehicle image corresponding to the target time difference value is the vehicle image of the vehicle at each detection position is determined according to the matching degree. Thus, when the target time difference value is smaller than the interval anomaly threshold, the matched images are obtained more cheaply, without extracting information from the images; when the target time difference value exceeds the interval anomaly threshold, the matching degree between the vehicle images is calculated to judge whether the vehicle image corresponding to the target time difference value is the vehicle image of the vehicle at each detection position.
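The decision just described (trust the time difference alone when it is below the interval anomaly threshold, otherwise fall back to an image matching degree) might look like the following sketch; the matching_degree callable and the minimum score are assumptions, not values given in the patent.

    def associate_pair(target_time_diff_s: float,
                       interval_anomaly_threshold_s: float,
                       image_a, image_b,
                       matching_degree,          # callable returning a score in [0, 1]; assumed helper
                       min_match: float = 0.8) -> bool:
        """Decide whether two candidate images belong to the same vehicle."""
        if target_time_diff_s < interval_anomaly_threshold_s:
            # Normal case: the time difference alone is trusted; no image features are extracted.
            return True
        # Abnormal case (e.g. a stale head result from an hour earlier): verify with image content.
        return matching_degree(image_a, image_b) >= min_match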
When license plate information is absent from one of the first to-be-fused vehicle image and the second to-be-fused vehicle image, the matching degree comprises a color matching degree; when both the first and the second to-be-fused vehicle images contain license plate information, the matching degree comprises the color matching degree and a license plate matching degree, and the priority of the license plate matching degree is higher than that of the color matching degree.
Specifically, the color matching degree is the similarity of at least two colors; it may be obtained by calculating the Euclidean distance between the two colors, or by another measure of similarity between them. When the vehicle images of the head area and the side area are fused, this fusion can be the first fusion, and only the color matching degree may be calculated.
The license plate matching degree is calculated based on license plate information extracted from the vehicle image; which can be identified by any method of image identification, for determining whether the license plate information of different vehicle images match. When the first to-be-fused vehicle image is obtained by fusing the vehicle images of the vehicle head area and the vehicle side area, the license plate information exists in the first to-be-fused vehicle image, and therefore the first to-be-fused vehicle image can be fused with the vehicle image of the vehicle tail area, which is the second to-be-fused vehicle image.
In one embodiment, the priority of the license plate matching degree and the color matching degree is mainly discussed, and the priority is used for determining the calculation order of the matching degree. When the license plate matching degree represents that the license plate information between the first to-be-fused vehicle image and the second to-be-fused vehicle image is consistent, determining that the first to-be-fused vehicle image and the second to-be-fused vehicle image are fused; and when the license plate matching degree represents that the license plate information between the first vehicle image to be fused and the second vehicle image to be fused is inconsistent, and the first vehicle image to be fused and the second vehicle image to be fused are inconsistent with the vehicle information of other vehicle images, determining whether the first vehicle image to be fused and the second vehicle image to be fused are fused or not according to the calculated color matching degree.
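A sketch of the two matching degrees in Python, assuming RGB colors for the color matching degree and treating two plates as matching when at least five characters coincide (the default mentioned later in the description); the normalization and threshold values are assumptions.

    import math

    def color_similarity(rgb_a, rgb_b) -> float:
        """Similarity of two colors: Euclidean distance mapped into [0, 1] (1 = identical)."""
        max_dist = math.dist((0, 0, 0), (255, 255, 255))
        return 1.0 - math.dist(rgb_a, rgb_b) / max_dist

    def plate_same_chars(plate_a: str, plate_b: str) -> int:
        """Number of positions at which the two plate strings carry the same character."""
        return sum(1 for a, b in zip(plate_a, plate_b) if a == b)

    def plates_match(plate_a: str, plate_b: str, min_same_chars: int = 5) -> bool:
        # The description defaults to treating plates as similar when 5 characters are the same.
        return plate_same_chars(plate_a, plate_b) >= min_same_chars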
In one embodiment, before fusing the vehicle images associated with the vehicle at the respective detection positions, the method further comprises:
when the vehicle speed is less than the vehicle speed threshold value, acquiring a corresponding predicted shooting time sequence according to the sequence of each detection position, and judging whether the predicted shooting time sequence is matched with the shooting time of the vehicle images at different detection positions to obtain a shooting time matching result; and judging whether to fuse the vehicle images associated with the vehicles at the detection positions or not based on the shooting time matching result.
When the vehicle speed is greater than the vehicle speed threshold, it is judged whether the shooting time intervals corresponding to the order of the detection positions correspond to the time interval threshold parameter, and accordingly whether the vehicle images associated with the vehicle at each detection position are fused.
The vehicle speed threshold is the maximum recognizable vehicle speed. When the vehicle speed calculated by the vehicle head recognition unit is greater than the vehicle speed threshold preset by the fusion algorithm, the current vehicle is considered to have passed the vehicle-side recognition unit or the corresponding terminal abnormally, which may be one of the following situations: a. the vehicle speed is not the actual moving speed; b. the current vehicle speed is too high and the equipment must be given processing time; c. other abnormal situations. Therefore, it is judged whether the shooting time intervals corresponding to the order of the detection positions correspond to the time interval threshold parameter, and this relatively loose criterion improves the robustness of the fusion algorithm.
When the vehicle speed is smaller than the vehicle speed threshold value, acquiring the corresponding predicted shooting time sequence according to the sequence of the detection positions, and sequentially judging whether the associated vehicle images are abnormal or not. For example: when the arrangement sequence of the detection positions is sequentially the positions of the vehicle head identification unit and the positions of the vehicle side identification unit, the expected shooting time sequence is the shooting time of the vehicle head identification unit and the shooting time of the vehicle side identification unit, and when the shooting time sequence is matched with the shooting time of the vehicle images of different detection positions, the vehicle images associated with the vehicles at the detection positions are fused; otherwise, the vehicle images associated with the vehicle at the detection positions are not fused.
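The two branches above can be summarised as a single check: a strict comparison against the predicted shooting time sequence below the speed threshold, and a looser interval check above it. The tolerance value and parameter names in this Python sketch are illustrative assumptions.

    def should_fuse(vehicle_speed: float,
                    speed_threshold: float,
                    shot_times: list,        # shooting times, ordered by detection position
                    expected_times: list,    # predicted shooting times in the same order
                    interval_threshold_s: float,
                    tolerance_s: float = 1.0) -> bool:
        if vehicle_speed < speed_threshold:
            # Strict check: every shot must match its predicted shooting time.
            return all(abs(t - e) <= tolerance_s for t, e in zip(shot_times, expected_times))
        # Above the speed threshold the state may be abnormal, so only the looser
        # criterion on the shooting time intervals is applied.
        intervals = [b - a for a, b in zip(shot_times, shot_times[1:])]
        return all(i <= interval_threshold_s for i in intervals)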
In one embodiment, prior to fusing the plurality of vehicle images associated with the vehicle at the respective detection locations, the method further comprises: acquiring a position identification sequence when the vehicle is shot according to different detection positions; matching the identifiers carried by the vehicle images at different detection positions according to the position identifier sequence to obtain an identifier matching result; and judging whether to fuse the vehicle images associated with the vehicles at the detection positions or not based on the identification matching result.
The position identification sequence is the sequence of the identification generated by each detection position in the shooting process in sequence and is used for determining whether the images shot by different detection positions are matched.
For example, the position identifier sequence of the vehicle images photographed by the head detection unit is identifier 1 followed by identifier 2, and correspondingly the position identifier sequence of the vehicle images photographed by the vehicle-side recognition unit is identifier A followed by identifier B.
When the identifiers carried by the vehicle images at the head recognition unit's detection position are, in order, identifier 1 and identifier 2, and the identifiers carried by the vehicle images at the vehicle-side recognition unit's detection position are, in order, identifier A and identifier B,
then the vehicle image carrying identifier 1 and the vehicle image carrying identifier A are determined to be the vehicle images associated with one vehicle at the respective detection positions, and the vehicle image carrying identifier 2 and the vehicle image carrying identifier B are determined to be the vehicle images associated with another vehicle at the respective detection positions.
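Under the assumption that each unit emits its identifiers in capture order, the identifier matching above amounts to pairing the sequences position by position, as in this small sketch (names are illustrative):

    def match_by_position_order(head_ids: list, side_ids: list) -> list:
        """Pair identifiers emitted by two detection positions in capture order.

        Example: ["id1", "id2"] and ["idA", "idB"] -> [("id1", "idA"), ("id2", "idB")],
        i.e. the first vehicle is associated with (id1, idA), the second with (id2, idB).
        """
        if len(head_ids) != len(side_ids):
            # Mismatched counts indicate an abnormal capture; no fusion decision here.
            return []
        return list(zip(head_ids, side_ids))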
And step 206, fusing a plurality of vehicle images associated with the vehicle at each detection position.
In one embodiment, fusing the plurality of vehicle images associated with the vehicle at the detection positions means fusing the vehicle images associated with the same vehicle at the detection positions to obtain a fused image of the vehicle. For example: the passing vehicle is accurately captured, the captured images including a head image, a side image, and a tail image. Structured vehicle type information is recognized from the head image, the side image, and the tail image respectively, and the vehicle type information and the images are accurately fused together to obtain the corresponding vehicle image.
In this vehicle image fusion method, a plurality of vehicle images from at least two detection positions are acquired, i.e., vehicle images captured from different angles; the vehicle images from the different detection positions are matched according to the time difference values between them, so that vehicle information captured at different positions is matched by time difference and the matching result of vehicle images from different angles is determined without directly calculating image similarity, directly yielding the vehicle image associated with the vehicle at each detection position; the vehicle images associated with the vehicle at each detection position are then fused, so that the head, side, and tail images, the license plate number, and the vehicle type of the same vehicle are accurately fused, adding side images, vehicle type, and other information to the existing toll systems at logistics park entrances and exits and providing a basis and guarantee for charging by vehicle type.
In one embodiment, the terminal uses a time axis as a main basis, and completes information fusion of multi-angle snap shots of a vehicle through a fusion algorithm in cooperation with the structural information (license plate number, snap shot image, vehicle head color, detection time, trigger ID and the like) output by the vehicle head identification unit, the structural information (vehicle side splicing large image, vehicle head detection time, vehicle tail detection time, vehicle speed, wheel start detection time, wheel end detection time, trigger ID and the like) output by the vehicle side identification unit and the structural information (license plate number, snap shot image, vehicle side color, detection time, trigger ID and the like) output by the vehicle tail identification unit.
In one embodiment, the first detection position is a position of the vehicle-side identifying unit, and the second detection position is a position of the vehicle-head identifying unit.
Let the recognition result of the vehicle head detection unit be H_n, where n denotes the n-th recognition result of head recognition; the shooting time of the head detection unit is H_n.T_0, the head license plate is H_n.P, the head color is H_n.C, the head speed is H_n.V, the trigger position ID is H_n.ID, and S is the distance from the recognition position of the head recognition unit to the position of the vehicle-side detection unit.
Let the vehicle-side result be B_n, where n denotes the n-th recognition result of vehicle-side recognition; the head detection time is B_n.T_0, the wheel start detection time is B_n.T_1, the wheel end detection time is B_n.T_2, the tail detection time is B_n.T_3, the time at which the vehicle has passed the vehicle side is B_n.T_4, the vehicle side color is B_n.C, the vehicle length is B_n.L, the passing speed of the vehicle is B_n.V, and the trigger position ID is B_n.ID.
Formula (1) [shown as an image in the original: BDA0003793553660000131]
Formula (2) [shown as an image in the original: BDA0003793553660000132]
Formula (3) [shown as an image in the original: BDA0003793553660000133]
Here n1 and n2 are independent parameters. In this way, the results of the vehicle head recognition unit and the vehicle-side recognition unit can be combined under normal conditions. Formula (1) is used to select the target time difference value; formula (2) acquires the corresponding predicted shooting time sequence according to the order of the detection positions and determines that the predicted shooting time sequence matches the shooting times of the vehicle images at the different detection positions; and formula (3) is used to determine, according to the position identifier sequence, that the identifiers carried by the vehicle images at the different detection positions match.
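The formula images themselves are not reproduced in this text. Purely as an illustration of what the target time difference selected by formula (1) could look like given the symbols defined above (this is an assumption based on the surrounding description, not the patent's actual expression), one plausible form is:

    % Illustrative only: a plausible reading of the quantity minimised in formula (1),
    % using the symbols H_n, B_n and the distance S defined above.
    \Delta t(n_1, n_2) = \left| H_{n_1}.T_0 - \left( B_{n_2}.T_0 + \frac{S}{B_{n_2}.V} \right) \right|

with the matched pair of results being the one that minimises \Delta t, subject to a time interval threshold; formulas (2) and (3) would then add the shooting-time order and trigger ID consistency checks described above.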
Further, when the vehicle speed is higher than the vehicle speed threshold, it is judged whether the shooting time intervals corresponding to the order of the detection positions correspond to the time interval threshold parameter, and whether the vehicle images associated with the vehicle at each detection position are fused. That is, when H_n.V > P_2, formula (2) is replaced by the following alternative:
Formula (4) [shown as an image in the original: BDA0003793553660000141]
The purpose is that, when the H_n.V calculated by the vehicle head recognition unit is greater than the threshold P_2 preset by the fusion algorithm, the fusion algorithm considers the current vehicle to have passed the equipment abnormally, which may be one of the following situations: a. the vehicle speed H_n.V is not the actual moving speed; b. the current vehicle speed is too high and the equipment must be given processing time; c. other abnormal situations. A relatively loose judgment threshold, formula (4), is therefore required to replace formula (2) in the fusion algorithm, improving the robustness of the fusion algorithm.
Further, if the interval exception threshold is twice the time interval threshold parameter and the target time difference value exceeds the interval exception threshold, equation (5) is satisfied, as follows:
Formula (5) [shown as an image in the original: BDA0003793553660000142]
then, when calculating the matching degree between the vehicle images corresponding to the target time difference value, using formula (6), as follows:
Formula (6) [shown as an image in the original: BDA0003793553660000143]
Formula (6) calculates the similarity between two colors: similar1 represents the similarity of two colors, obtained by calculating the Euclidean distance between them. If the two recognition results of the vehicle images to be fused satisfy formula (5), the system is considered to be in a special state, such as tailgating vehicles or vehicles without license plates; formula (6) is then added to ensure the correctness of the fusion algorithm, and if formula (6) is not satisfied, the system continues to wait for a new recognition result before running the fusion algorithm.
In one embodiment, the first detection position is a position of a vehicle-side recognition unit, and the second detection position is a position of a vehicle-rear recognition unit.
Let the vehicle tail result be E_n, where n denotes the n-th recognition result of tail recognition; the tail detection time is E_n.T_0, the tail license plate is E_n.P, the tail color is E_n.C, the tail speed is E_n.V, the trigger position ID is E_n.ID, and S is the distance from the recognition position of the tail recognition unit to the vehicle-side detection unit. A tail-to-vehicle-side result time interval threshold parameter P_3 and a vehicle speed threshold P_4 are set.
When E_n.P == H_n.P, the license plate matching degree is used directly to complete matching and fusion. Otherwise, image fusion is completed using formulas (7) to (12), which correspond one-to-one with formulas (1) to (6), together with formula (13).
Formula (7) [shown as an image in the original: BDA0003793553660000151]
Formula (8) [shown as an image in the original: BDA0003793553660000152]
Formula (9) [shown as an image in the original: BDA0003793553660000153]
In particular, if
[condition shown as an image in the original: BDA0003793553660000154]
then the alternative to formula (8) is as follows:
[formula shown as an image in the original: BDA0003793553660000155]
In particular, if
[condition shown as an image in the original: BDA0003793553660000156]
then it is also necessary to satisfy:
[formula shown as an image in the original: BDA0003793553660000157]
[formula shown as an image in the original: BDA0003793553660000158]
Here, similarity1 is the color similarity calculation described above; similarity2 calculates the similarity of two license plates by comparing the two plate strings and counting the number of identical characters, and the current fusion algorithm by default regards two plates as similar when 5 characters are the same. When the result finally found by the fusion algorithm also satisfies formula (11), the selected result may not be the most suitable one, so judgment conditions (12) and (13) are added to tighten the criteria and improve the accuracy of the algorithm. If formulas (12) and (13) are not satisfied, the algorithm waits for a new recognition result before continuing the fusion.
In this way, the head, side, and tail images, the license plate number, and the vehicle type of the same vehicle can be accurately fused into multidimensional vehicle data, adding side images, vehicle type, and other information to existing toll systems at logistics park entrances and exits and providing a basis and guarantee for charging by vehicle type. In actual tests, with or without congestion and with non-motor vehicles and pedestrians moving and interfering in the background, the fusion rate reaches more than 99.5%.
In one embodiment, as shown in fig. 3, the corresponding detection process is determined from the shooting of the vehicle head identification unit, the vehicle side identification unit and the vehicle tail identification unit from three angles.
In the process of video stream identification, a head identification unit and a tail identification unit respectively output corresponding video stream results at regular time, a head result queue and a tail result queue are generated based on the video stream results, then the head result queue and the tail result queue are respectively filtered, and image fusion is carried out according to the filtered results.
After the vehicle-side recognition unit detects the vehicle head, it triggers the head recognition unit to cache pictures at regular intervals, and each time the vehicle-side unit detects that the vehicle has moved 6 meters it triggers another timed shot, yielding a single-frame recognition result for the head; a single-frame recognition result for the tail can be obtained in a similar way. The single-frame recognition result is the fallback result for handling abnormal situations: the vehicle-side recognition unit detects in real time the time at which the vehicle passes it and notifies the head and tail recognition units to perform single-frame recognition using the cached large image.
Then, the result queues of the head, the side, and the tail are obtained and filtered respectively, and fusion is performed according to the scheme of the above embodiments: the head and side images are fused first to obtain the first to-be-fused vehicle image, which is then fused with the second to-be-fused vehicle image (the tail image) to output the complete fusion information of the vehicle.
In a more complete embodiment, as shown in FIG. 4, this patent provides a multi-camera multi-angle image fusion technique: for a passing vehicle, the captured head image, tail image, and side profile image are accurately recognized into structured vehicle type information, and the vehicle images from each angle are accurately fused. Compared with the prior art, multiple sensors of the same type at different angles are used, interference factors are filtered out in a complex environment, effective vehicle type information is extracted, and recognition and fusion are completed. The specific steps are as follows:
S401: the camera is installed on the side of the entrance/exit road.
The specific installation parameters are as follows: the installation height is 1.2 m to 1.8 m, the center line of the field of view of the vehicle side identification unit is perpendicular to the direction of travel of vehicles on the entrance/exit road, the downward viewing angle is 0 to 15 degrees, and the distance from the road edge is 0.5 m to 1.2 m. A fisheye lens with a focal length of 1.4 mm to 2 mm is selected. The horizontal viewing angle is adjusted so that the driving direction of the road surface is horizontal in the image, ensuring that the vehicle moves horizontally in the image.
S402: when the vehicle passes the vehicle head identification unit, an AI algorithm recognizes and outputs the structured information of the vehicle, including the license plate, shooting time, and head color, together with a panoramic large image of the vehicle head.
S403: when the vehicle passes the vehicle side recognition unit, the vehicle side stitching technique detects and recognizes structured information such as vehicle speed, vehicle type, and axle count, as well as the stitched panoramic image of the vehicle side.
S404: when the vehicle passes the vehicle tail identification unit, an AI algorithm recognizes and outputs the structured information of the vehicle, including the license plate, shooting time, and tail color, together with a panoramic large image of the vehicle tail.
S405: the vehicle head identification unit and the vehicle tail identification unit each transmit the recognized vehicle information to the vehicle side identification unit over a network protocol.
S406, fusion timing: the head and tail results are found through the vehicle side result; once a vehicle side result exists, the head and tail results that satisfy the conditions are searched for respectively.
S407: the vehicle head identification unit and the vehicle tail identification unit output two kinds of results: video stream recognition results and cached large-image single-frame recognition results.
Two result types are output in step S407 to cope with situations where head or tail results are lost due to abnormal factors such as human interference, environmental conditions, or a missing license plate, ensuring that each vehicle has complete head, side, and tail data.
a. Video stream recognition result: this type has the highest credibility and is used preferentially.
b. Cached large-image single-frame recognition result: this type serves as the fallback for abnormal situations. By detecting in real time when the vehicle passes it, the vehicle side identification unit notifies the head and tail identification units to perform single-frame recognition on the cached large image.
For the result output mechanism of step S407, the specific caching mechanism of the vehicle head identification unit and the vehicle tail identification unit is as follows:
Vehicle side identification unit: a. When a passing vehicle is detected, the vehicle speed is calculated to estimate the time at which the vehicle will reach the head identification unit, and the trigger signal sent to the head identification unit carries this estimated time and an estimated ID. b. Thereafter, the head identification unit is triggered again each time the vehicle side detects that the vehicle has moved another 6 meters. c. After the vehicle is detected to have completely passed the vehicle side identification unit, the time of reaching the tail identification unit is estimated from the vehicle speed, and a signal carrying the estimated time and the ID is sent to the tail identification unit.
Head and tail identification units: a. Output the video stream recognition result. b. Buffer one large image into a cache queue every 500 ms. When a trigger signal is received, the cached large image closest in time to the attached estimated time is looked up in the cache queue, single-frame recognition is performed, and the result is output. A trigger ID field is attached to each recognition result: the ID of a video stream result is 0, while a single-frame result carries the ID sent with the vehicle side trigger, so the two kinds of results can be distinguished.
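As an illustration of this caching mechanism, the sketch below buffers one large image every 500 ms and, when triggered, returns the cached frame closest to the estimated arrival time attached to the trigger signal. The class names, the queue length, and the lookup strategy are assumptions for illustration, not taken from the patent.

```python
from collections import deque
from dataclasses import dataclass
from typing import Any, Deque, Optional

@dataclass
class CachedFrame:
    timestamp: float   # capture time of the buffered large image, in seconds
    image: Any         # the buffered large image itself

class FrameCache:
    """Buffer of large images captured roughly every 500 ms by a head/tail identification unit."""

    def __init__(self, max_frames: int = 60):
        # 60 frames at 500 ms each keeps about 30 seconds of history (assumed value).
        self.frames: Deque[CachedFrame] = deque(maxlen=max_frames)

    def push(self, timestamp: float, image: Any) -> None:
        """Called every 500 ms with the latest large image."""
        self.frames.append(CachedFrame(timestamp, image))

    def closest(self, estimated_time: float) -> Optional[CachedFrame]:
        """Return the cached frame whose capture time is nearest the estimated arrival time."""
        if not self.frames:
            return None
        return min(self.frames, key=lambda f: abs(f.timestamp - estimated_time))
```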
When the vehicle side result can be matched to corresponding head and tail results by the fusion algorithm, the single-frame recognition results are not used. When no head or tail result satisfying the conditions can be found, the fusion method selects the most suitable single-frame recognition result as the final result.
S408, fusion method: the vehicle side identification unit fuses the head, side, and tail results in the queues according to the fusion algorithm and outputs the result.
Fusion algorithm for the vehicle side and vehicle head results. Condition 1: the head shooting time is earlier than the vehicle side shooting start time, which filters out the head result of the previous vehicle. The formula is as follows:
t_head &lt; t_side_start
Condition 2: subtract from the wheel shooting time the head shooting time and the travel time from the head identification unit to the vehicle side identification unit to obtain a time difference, then take the minimum of these differences. The purpose is to ensure that the current head result matches the vehicle side result and, when congestion produces several head results, to filter out the head result of the preceding or following vehicle. The formula is as follows:
min | t_wheel − t_head − t_travel(head→side) |
Condition 3: the head recognition time is later than the vehicle side shooting start time of the previous vehicle.
Condition 4: the vehicle head ID is 0, or the vehicle head ID is equal to the vehicle side ID.
Condition 5: when the vehicle speed reaches the threshold P2, matching Condition 1 is appropriately relaxed and the following condition must be satisfied:
[Formula image: relaxed form of the Condition 1 timing constraint, applied when the vehicle speed reaches P2]
This addresses the problem that, at higher vehicle speeds, the device may otherwise fail to output a result under abnormal conditions such as recognition delays.
Condition 6: if the minimum value in Condition 2 is greater than a threshold of 2·P1, the following color similarity condition must also be satisfied to complete the matching and fusion:
[Formula image: color similarity condition between the head result and the vehicle side result]
The head result is selected by combining the above conditions; if they are not satisfied, the algorithm continues to wait for a new recognition result. If a timeout occurs or the vehicle side result of the next vehicle is received first, the single-frame recognition result is used directly as a substitute, ensuring that every vehicle side result can be matched with a head result.
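To make the head-side selection logic concrete, the following sketch evaluates candidate head results against a vehicle side result using the timing, ID, and color conditions above. The field names, the values of P1 and P2, and the exact form of the Condition 5 relaxation are illustrative assumptions rather than the patent's formulas.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HeadResult:
    shot_time: float        # head capture time (s)
    recog_time: float       # head recognition time (s)
    color: str
    trigger_id: int         # 0 for video-stream results; side-assigned ID for single-frame results

@dataclass
class SideResult:
    start_time: float       # vehicle side stitching start time
    wheel_time: float       # wheel capture time
    travel_time: float      # estimated travel time from the head unit to the side unit
    prev_start_time: float  # stitching start time of the previous vehicle
    speed: float            # recognized vehicle speed
    color: str
    trigger_id: int

P1 = 1.0   # assumed timing threshold (s)
P2 = 8.0   # assumed speed threshold (m/s)

def color_similar(a: str, b: str) -> bool:
    # Stand-in for the color similarity measure (Similarity 1) described earlier.
    return a == b

def pick_head(side: SideResult, candidates: List[HeadResult]) -> Optional[HeadResult]:
    """Select the head result that best matches the given vehicle side result."""
    best, best_diff = None, float("inf")
    for head in candidates:
        # Condition 1, relaxed per Condition 5 at high speed: head shot before side stitching starts.
        slack = P1 if side.speed >= P2 else 0.0
        if not head.shot_time < side.start_time + slack:
            continue
        # Condition 3: head recognized after the previous vehicle's side capture started.
        if not head.recog_time > side.prev_start_time:
            continue
        # Condition 4: video-stream result (ID 0) or a trigger ID equal to the side ID.
        if head.trigger_id not in (0, side.trigger_id):
            continue
        # Condition 2: minimise |wheel time - head time - head-to-side travel time|.
        diff = abs(side.wheel_time - head.shot_time - side.travel_time)
        if diff < best_diff:
            best, best_diff = head, diff
    # Condition 6: a large residual time difference additionally requires colour similarity.
    if best is not None and best_diff > 2 * P1 and not color_similar(best.color, side.color):
        return None
    return best
```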
Fusion algorithm for the vehicle side and vehicle tail results:
Condition 1: the tail license plate is used to look up the license plate of the already-fused head-side result; if the license plate numbers are consistent, the head-side result and the tail result of the same vehicle are fused directly.
Condition 2: the tail recognition time is later than the vehicle side stitching end time, which filters out the tail result of the previous vehicle.
Condition 3: from the tail shooting time, subtract the time at which the vehicle side unit captured the vehicle tail and the travel time from the vehicle side identification unit to the tail identification unit, and search for the minimum value, thereby filtering out false detections or missed detections of the tail. The formula is as follows:
min | t_tail − t_side_tail − t_travel(side→tail) |
condition 4: the vehicle tail ID is 0, or the vehicle tail ID is less than or equal to the vehicle side ID.
The satisfying tail result is selected according to the above conditions; if a timeout occurs or the vehicle side result of the next vehicle is received first, the single-frame recognition result is used as a substitute so that every vehicle side result can be matched with a tail result.
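Similarly, a hedged sketch of the vehicle-side / vehicle-tail matching conditions follows; all names and thresholds are assumptions, and the 5-character plate rule is reused from the earlier description of Similarity 2.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TailResult:
    shot_time: float     # tail capture time
    recog_time: float    # tail recognition time
    plate: str
    trigger_id: int      # 0 for video-stream results; side-assigned ID otherwise

@dataclass
class FusedHeadSide:
    plate: str
    side_tail_time: float     # time at which the side unit captured the vehicle tail
    stitch_end_time: float    # vehicle side stitching end time
    travel_time: float        # estimated travel time from the side unit to the tail unit
    trigger_id: int

def _plates_match(a: str, b: str, min_same: int = 5) -> bool:
    # Same default 5-character rule as described for Similarity 2.
    return sum(1 for x, y in zip(a, b) if x == y) >= min_same

def pick_tail(fused: FusedHeadSide, candidates: List[TailResult]) -> Optional[TailResult]:
    """Select the tail result that best matches the fused head-side result."""
    # Condition 1: a consistent license plate fuses directly.
    for tail in candidates:
        if _plates_match(tail.plate, fused.plate):
            return tail
    best, best_diff = None, float("inf")
    for tail in candidates:
        # Condition 2: tail recognized after the side stitching ended (filters the previous vehicle).
        if not tail.recog_time > fused.stitch_end_time:
            continue
        # Condition 4: video-stream result (ID 0) or an ID no greater than the side ID.
        if not (tail.trigger_id == 0 or tail.trigger_id <= fused.trigger_id):
            continue
        # Condition 3: minimise |tail time - side tail capture time - side-to-tail travel time|.
        diff = abs(tail.shot_time - fused.side_tail_time - fused.travel_time)
        if diff < best_diff:
            best, best_diff = tail, diff
    return best
```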
S409: the three camera identification units use the same NTP server as their time source and synchronize once every 5 minutes, preventing the clocks of the three cameras from drifting out of synchronization.
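For illustration only, a minimal sketch of such a periodic NTP check using the third-party ntplib package; the server address is hypothetical, and applying the measured offset to the system clock is left to the platform (for example ntpd or chronyd).

```python
import time
import ntplib

NTP_SERVER = "ntp.local"      # hypothetical address of the shared NTP time source
SYNC_INTERVAL_S = 5 * 60      # each camera re-checks its clock every 5 minutes

def sync_once(client: ntplib.NTPClient) -> float:
    """Query the shared server and return the local clock offset in seconds."""
    response = client.request(NTP_SERVER, version=3)
    return response.offset     # positive offset means the local clock is behind

if __name__ == "__main__":
    client = ntplib.NTPClient()
    while True:
        offset = sync_once(client)
        # In a real deployment the offset would be applied via the system NTP daemon;
        # here it is only logged.
        print(f"clock offset vs. {NTP_SERVER}: {offset:+.3f} s")
        time.sleep(SYNC_INTERVAL_S)
```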
S410: through the above steps, the fused result of the head, side, and tail images of the same vehicle is obtained, completing the fusion work and providing a clear and reliable basis for vehicle-type-based charging at the logistics park entrances and exits.
In one embodiment, the application is applied to a method for fusing images of the same vehicle captured by multiple cameras from multiple angles at the entrance and exit of a logistics park, and the method comprises the following steps:
the installation step: the camera is arranged on the side of the road at the entrance and the exit, the installation height is 1.2 m-1.8 m, the center line of the visual field is vertical to the advancing direction of vehicles at the entrance and the exit, the downward overlooking angle is 0-15 degrees, and the distance from the edge of the road is 0.5-1.2 m. A2 mm fisheye lens is selected. And adjusting the horizontal visual angle to enable the driving direction of the road surface to be horizontal in the image, and ensuring the horizontal movement of the vehicle in the image. Specifically, the camera is arranged on the side of the road at the entrance and the exit, the installation height is 1.2 m-1.8 m, the visual field center line is vertical to the advancing direction of vehicles at the entrance and the exit, the downward overlooking angle is 0-15 degrees, and the distance from the road edge is 0.5-1.2 m. A fisheye lens with the diameter of 1.4-2 mm is selected. And adjusting the horizontal visual angle to enable the driving direction of the road surface to be horizontal in the image, and ensuring the horizontal movement of the vehicle in the image.
Identification step of the vehicle head and tail identification units: the head identification unit and the tail identification unit each output structured information (license plate number, snapshot image, color, detection time, and the like).
Identification step of the vehicle side identification unit: the vehicle side identification unit outputs structured information (the stitched large image of the vehicle side, head detection time, tail detection time, vehicle speed, wheel detection time, and the like); it calculates the vehicle speed, predicts when the vehicle will pass the head and tail identification units, and triggers them to perform single-frame recognition. By computing the vehicle speed and the estimated time, the vehicle side unit predicts when the vehicle passes the head identification unit and triggers the head unit to output a single-frame recognition result, providing a reliable fallback head recognition result for the fusion algorithm.
Fusing the head and side images: the head and side images are fused using the detection times, license plate numbers, colors, head start detection time, and the like. False detection results are filtered and other abnormal conditions are judged before the fusion of the head and side images is considered complete.
Fusing the side and tail images: the side and tail images are fused using the detection times, license plate numbers, colors, and tail end detection time. False detection results are filtered and other abnormal conditions are judged before the fusion of the tail and side images is considered complete.
Therefore, as an important node in the logistics system, the vehicle type identification device is an important component of the informatization and unmanned construction of a logistics park. The multi-angle vehicle images, passing-vehicle video, and structured vehicle identification information output by the video-based vehicle type identification equipment can be presented to the user clearly, effectively, and intuitively, providing a basis for charging in the logistics park and a guarantee for traffic efficiency at the park entrances and exits.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict restriction on the execution order, and the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their execution order is likewise not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a vehicle image fusion device for realizing the vehicle image fusion method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme recorded in the method, so the specific limitations in one or more embodiments of the vehicle image fusion device provided below can be referred to the limitations in the vehicle image fusion method in the above, and details are not repeated here.
In one embodiment, as shown in FIG. 5, a vehicle image fusion apparatus is provided. The device comprises:
an image acquisition module 502 for acquiring a plurality of vehicle images of at least two detection locations; the plurality of vehicle images are obtained by shooting running vehicles by the cameras at the at least two detection positions respectively;
an image matching module 504, configured to match the plurality of vehicle images at different detection positions according to time difference values between the plurality of vehicle images at different detection positions, respectively, so as to obtain vehicle images associated with the vehicle at each detection position;
and an image fusion module 506, configured to fuse vehicle images associated with the vehicle at each of the detection positions.
In one embodiment, the image matching module 504 is configured to:
respectively obtaining the image detection time of each vehicle image according to the plurality of vehicle images at different detection positions and identifying the vehicle speed;
calculating estimated running time of the vehicle running between the adjacent detection positions according to the distance between the adjacent detection positions and the vehicle speed;
generating time difference values between a plurality of vehicle images of the adjacent detection positions according to the estimated driving time and the detection time of each vehicle image of the adjacent detection positions;
according to the time difference value, determining a vehicle image associated with the vehicle at each detection position in a plurality of vehicle images of the adjacent detection positions; the vehicle images at the detection positions of the vehicle correspond to the adjacent detection positions one by one.
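A minimal sketch of how such a time difference could be formed from the detection times, the distance between adjacent detection positions, and the recognized vehicle speed; the function name and the zero-speed guard are assumptions for illustration.

```python
def time_difference(detect_time_a: float, detect_time_b: float,
                    distance_m: float, speed_mps: float) -> float:
    """|t_B - t_A - estimated travel time| between two adjacent detection positions."""
    estimated_travel_s = distance_m / max(speed_mps, 0.1)  # guard against near-zero speed
    return abs(detect_time_b - detect_time_a - estimated_travel_s)

# Example: the second image is taken 3.0 s after the first, positions 20 m apart, speed 7 m/s.
print(time_difference(10.0, 13.0, 20.0, 7.0))  # roughly 0.14 s
```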
In one embodiment, the image matching module 504 is specifically configured to:
selecting a target time difference value according to the time difference value;
when the target time difference value is judged to be smaller than the interval abnormal threshold value, respectively determining the vehicle images corresponding to the target time difference value as the vehicle images of the vehicle at each detection position;
when the target time difference value is judged to be larger than the interval abnormal threshold value, calculating the matching degree between the vehicle images corresponding to the target time difference value, and determining whether the vehicle image corresponding to the target time difference value is the vehicle image of the vehicle at each detection position according to the matching degree.
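As an illustration of this selection step, the sketch below accepts the pair with the smallest time difference directly when it falls below the interval abnormal threshold, and otherwise falls back to an appearance matching degree; the threshold values and the function signatures are assumptions.

```python
from typing import Callable, List, Optional, Tuple

INTERVAL_ABNORMAL_THRESHOLD_S = 2.0   # assumed value of the interval abnormal threshold

def select_associated_pair(
    diffs: List[Tuple[float, int, int]],            # (time difference, image index A, image index B)
    matching_degree: Callable[[int, int], float],   # e.g. combined colour / plate similarity
    min_matching_degree: float = 0.8,
) -> Optional[Tuple[int, int]]:
    """Pick the image pair with the smallest time difference, falling back to appearance matching."""
    if not diffs:
        return None
    target_diff, idx_a, idx_b = min(diffs, key=lambda d: d[0])
    if target_diff < INTERVAL_ABNORMAL_THRESHOLD_S:
        # Small time difference: treat the pair as the same vehicle directly.
        return idx_a, idx_b
    # Large time difference: only accept when the appearance match is strong enough.
    if matching_degree(idx_a, idx_b) >= min_matching_degree:
        return idx_a, idx_b
    return None
```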
In one embodiment, license plate information does not exist in one to-be-fused vehicle image of the first to-be-fused vehicle image and the second to-be-fused vehicle image, and the matching degree comprises color matching degree; when the first to-be-fused vehicle image and the second to-be-fused vehicle image both have license plate information, the matching degree comprises the color matching degree and the license plate matching degree, and the priority of the license plate matching degree is higher than the priority of the color matching degree.
In one embodiment, the image matching module 504 is further configured to:
when the vehicle speed is smaller than a vehicle speed threshold value, acquiring a corresponding predicted shooting time sequence according to the sequence of each detection position, and judging whether the predicted shooting time sequence is matched with the shooting time of the vehicle images at different detection positions to obtain a shooting time matching result; judging whether to fuse vehicle images associated with the vehicle at each detection position based on the shooting time matching result;
and when the vehicle speed is greater than the vehicle speed threshold, judging whether the shooting time interval corresponding to the sequence of each detection position corresponds to the interval threshold parameter or not, and fusing vehicle images associated with the vehicle at each detection position.
In one embodiment, the image matching module 504 is further configured to:
acquiring a position identification sequence when the vehicle is shot according to different detection positions;
matching the identifiers carried by the vehicle images at different detection positions according to the position identifier sequence to obtain an identifier matching result;
and judging whether to fuse vehicle images associated with the vehicle at each detection position based on the identification matching result.
In one embodiment, the image acquisition module 502 is configured to:
when the camera of the first detection position detects a running vehicle, calculating the speed of the vehicle;
the camera of the first detection position predicts the predicted shooting time when the running vehicle reaches the second detection position according to the vehicle speed;
and the camera at the first detection position sends the estimated shooting time to the camera at the second detection position, so that the camera at the second detection position shoots.
The various modules in the vehicle image fusion apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 6. The computer apparatus includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected by a system bus, and the communication interface, the display unit and the input device are connected by the input/output interface to the system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a vehicle image fusion method. The display unit of the computer equipment is used for forming a visual and visible picture, and can be a display screen, a projection device or a virtual reality imaging device, the display screen can be a liquid crystal display screen or an electronic ink display screen, the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), magnetic Random Access Memory (MRAM), ferroelectric Random Access Memory (FRAM), phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases involved in the embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the various embodiments provided herein may be, without limitation, general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing-based data processing logic devices, or the like.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A vehicle image fusion method, characterized in that the method comprises:
acquiring a plurality of vehicle images of at least two detection positions; the plurality of vehicle images are obtained by shooting running vehicles by the cameras at the at least two detection positions respectively;
matching the plurality of vehicle images at different detection positions according to the time difference values between the plurality of vehicle images at different detection positions respectively to obtain vehicle images associated with the vehicle at each detection position;
and fusing vehicle images associated with the vehicle at each detection position.
2. The method according to claim 1, wherein the matching the plurality of vehicle images at different detection positions according to the time difference values between the plurality of vehicle images at different detection positions respectively to obtain the vehicle image associated with the vehicle at each detection position comprises:
respectively obtaining the image detection time of each vehicle image according to the plurality of vehicle images at different detection positions and identifying the vehicle speed;
calculating estimated running time of the vehicle running between the adjacent detection positions according to the distance between the adjacent detection positions and the vehicle speed;
generating time difference values between a plurality of vehicle images of the adjacent detection positions according to the estimated driving time and the detection time of each vehicle image of the adjacent detection positions;
according to the time difference value, determining a vehicle image associated with the vehicle at each detection position in a plurality of vehicle images of the adjacent detection positions; the vehicle images at the detection positions of the vehicle correspond to the adjacent detection positions one by one.
3. The method of claim 2, wherein determining the vehicle image associated with the vehicle at each of the detection locations among the plurality of vehicle images at the adjacent detection locations based on the time-difference values comprises:
selecting a target time difference value according to the time difference value;
when the target time difference value is judged to be smaller than the interval abnormal threshold value, respectively determining the vehicle images corresponding to the target time difference value as the vehicle images of the vehicle at each detection position;
when the target time difference value is judged to be larger than the interval abnormal threshold value, calculating the matching degree between the vehicle images corresponding to the target time difference value, and determining whether the vehicle image corresponding to the target time difference value is the vehicle image of the vehicle at each detection position according to the matching degree.
4. The method according to claim 3, wherein the license plate information does not exist in one of the first vehicle image to be fused and the second vehicle image to be fused, and the matching degree comprises a color matching degree; when the first to-be-fused vehicle image and the second to-be-fused vehicle image both have license plate information, the matching degree comprises the color matching degree and the license plate matching degree, and the priority of the license plate matching degree is higher than the priority of the color matching degree.
5. The method of claim 1, wherein said fusing the vehicle images associated with the vehicle at each of the detected locations is preceded by:
when the vehicle speed is smaller than a vehicle speed threshold value, acquiring a corresponding estimated shooting time sequence according to the sequence of each detection position, and judging whether the estimated shooting time sequence is matched with the shooting time of the vehicle images at different detection positions to obtain a shooting time matching result; judging whether to fuse vehicle images associated with the vehicle at each detection position based on the shooting time matching result;
and when the vehicle speed is greater than the vehicle speed threshold, judging whether the shooting time interval corresponding to the sequence of each detection position corresponds to the interval threshold parameter or not, and fusing vehicle images associated with the vehicle at each detection position.
6. The method of claim 1, wherein, before said fusing the vehicle images associated with the vehicle at each of the detection positions, the method further comprises:
acquiring a position identification sequence when the vehicle is shot according to different detection positions;
matching the identifiers carried by the vehicle images at different detection positions according to the position identifier sequence to obtain an identifier matching result;
and judging whether to fuse vehicle images associated with the vehicle at each detection position based on the identification matching result.
7. The method according to any one of claims 1 to 6, wherein the photographing of the running vehicle by the cameras at the at least two detection positions respectively comprises:
when the camera of the first detection position detects a running vehicle, calculating the speed of the vehicle;
the camera of the first detection position predicts the predicted shooting time when the running vehicle reaches the second detection position according to the vehicle speed;
and the camera at the first detection position sends the estimated shooting time to the camera at the second detection position, so that the camera at the second detection position shoots.
8. A vehicle image fusion apparatus, characterized in that the apparatus comprises:
the system comprises an image acquisition module, a detection module and a display module, wherein the image acquisition module is used for acquiring a plurality of vehicle images of at least two detection positions; the plurality of vehicle images are obtained by shooting running vehicles by the cameras at the at least two detection positions respectively;
the image matching module is used for matching the plurality of vehicle images at different detection positions according to time difference values among the plurality of vehicle images at different detection positions respectively to obtain vehicle images associated with the vehicle at each detection position;
and the image fusion module is used for fusing vehicle images associated with the vehicle at each detection position.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202210961734.0A 2022-08-11 2022-08-11 Vehicle image fusion method and device, computer equipment and storage medium Pending CN115331181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210961734.0A CN115331181A (en) 2022-08-11 2022-08-11 Vehicle image fusion method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210961734.0A CN115331181A (en) 2022-08-11 2022-08-11 Vehicle image fusion method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115331181A true CN115331181A (en) 2022-11-11

Family

ID=83921220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210961734.0A Pending CN115331181A (en) 2022-08-11 2022-08-11 Vehicle image fusion method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115331181A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523505A (en) * 2023-10-30 2024-02-06 深圳市大道至简信息技术有限公司 Method for judging real vehicle based on video
CN117455792A (en) * 2023-12-25 2024-01-26 武汉车凌智联科技有限公司 Method for synthesizing and processing 360-degree panoramic image built-in vehicle
CN117455792B (en) * 2023-12-25 2024-03-22 武汉车凌智联科技有限公司 Method for synthesizing and processing 360-degree panoramic image built-in vehicle


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination