CN115965636A - Vehicle side view generating method and device and terminal equipment

Info

Publication number: CN115965636A
Authority: CN (China)
Prior art keywords: target, image, frame, vehicle, lane
Legal status: Pending
Application number: CN202211562710.4A
Other languages: Chinese (zh)
Inventors: 罗烨, 沈峰, 朱胜超, 刘梁梁
Current Assignee: Beijing Wanji Technology Co Ltd
Original Assignee: Beijing Wanji Technology Co Ltd
Application filed by Beijing Wanji Technology Co Ltd
Priority application: CN202211562710.4A


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of image processing, and provides a method, an apparatus and a terminal device for generating a vehicle side view. The method includes: acquiring target images corresponding to a target vehicle; cropping each frame of target image to determine the target lane image, i.e. the image of the lane in which the target vehicle is located, included in each frame; performing moving object detection on each frame of target lane image to determine the target vehicle region it contains; determining a pixel offset value queue for the target vehicle according to the pixel position differences between the target vehicle regions of adjacent target lane images; and stitching the frames of target images according to the pixel offset value queue to generate a side view of the target vehicle. The acquired images of the target vehicle are thus screened, and the frames are stitched according to the pixel offset of the target vehicle between adjacent images to generate a complete side view of the vehicle, improving the accuracy of vehicle side view generation.

Description

Vehicle side view generating method and device and terminal equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method and an apparatus for generating a vehicle side view, a terminal device, and a computer-readable storage medium.
Background
With the increasing standardization and intelligence of traffic management, tolling for large buses, trucks, and special work vehicles is difficult to charge and to audit because their toll standards differ. Under new regulations of the Ministry of Transport, the vehicle type is calculated from features such as vehicle length, axle type, and passenger capacity, and the toll standard is determined by vehicle type. However, identifying features such as the length, axle type, and passenger capacity requires first acquiring an accurate side view of the vehicle.
In the related art, a side view of a passing vehicle is generally captured by a close-range side-view camera installed near the lane. However, because the field of view of such a camera is limited and vehicle bodies vary in size, a complete side view of the vehicle often cannot be acquired, so vehicle side features such as the length, axle type, and passenger capacity cannot be accurately identified.
Disclosure of Invention
Embodiments of the present application provide a method and an apparatus for generating a vehicle side view, a terminal device, and a storage medium. They address the following problem: when a close-range side-view camera installed near the lane is used to capture the side view of passing vehicles, the camera's limited field of view and the varying sizes of vehicle bodies make it easy to miss a complete side view of a vehicle, so that vehicle side features such as the length, axle type, and passenger capacity cannot be accurately identified.
In a first aspect, an embodiment of the present application provides a method for generating a vehicle side view, including: acquiring a target image queue corresponding to a target vehicle, where the target image queue includes multiple frames of target images acquired by an image acquisition device at a fixed position while the target vehicle passes; cropping each frame of target image according to the target lane in which the target vehicle is located and a preset lane configuration file, to determine the target lane image included in each frame of target image; performing moving object detection on each frame of target lane image to determine the target vehicle region contained in each frame of target lane image; determining a pixel offset value queue corresponding to the target vehicle according to the pixel position differences between the target vehicle regions contained in adjacent target lane images; and stitching the frames of target images according to the pixel offset value queue and the preset lane configuration file to generate the side view corresponding to the target vehicle.
In a possible implementation manner of the first aspect, the number of target lane images is N, where N is a positive integer; correspondingly, determining the pixel offset value queue corresponding to the target vehicle according to the position differences between the target vehicle regions contained in adjacent target lane images includes:
determining the vehicle feature positions in the target vehicle regions contained in the i-th and (i+1)-th frames of target lane image according to a preset feature position extraction rule, where i is a positive integer greater than or equal to 1 and smaller than N;
determining the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane image according to the pixel position difference between the vehicle feature positions contained in those frames;
and writing the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane image into the pixel offset value queue as the i-th element of the queue.
Optionally, in another possible implementation manner of the first aspect, before writing the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane image into the pixel offset value queue, the method further includes:
determining that the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane image is greater than a first threshold.
Optionally, in another possible implementation manner of the first aspect, after writing the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane image into the pixel offset value queue, the method further includes:
when the queue length of the pixel offset value queue is greater than a second threshold, acquiring a first preset number of first reference pixel offset values from the head of the queue;
determining the average of the first reference pixel offset values;
and removing from the queue each first abnormal pixel offset value whose difference from that average is greater than a third threshold.
Optionally, in another possible implementation manner of the first aspect, after writing the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane image into the pixel offset value queue, the method further includes:
determining the current queue length of the pixel offset value queue;
when the current queue length is greater than a fourth threshold, acquiring a second preset number of second reference pixel offset values from the tail of the queue;
and determining that the pixel offset value queue is complete when the difference between the maximum and minimum of the second reference pixel offset values is smaller than a fifth threshold.
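Again purely as an illustration (parameter names and defaults are assumptions), the completion check described above might look like:

```python
def queue_is_complete(offsets, fourth_threshold=12,
                      second_preset_number=4, fifth_threshold=2.0):
    """Report whether the pixel offset value queue can be considered
    complete: the queue is long enough and the spread of the last few
    reference values has stabilised. Defaults are illustrative."""
    if len(offsets) <= fourth_threshold:
        return False
    tail = offsets[-second_preset_number:]
    return (max(tail) - min(tail)) < fifth_threshold
```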
Optionally, in another possible implementation manner of the first aspect, the preset feature position extraction rule includes a feature position start position determination rule and the size of a feature position template; correspondingly, determining the vehicle feature positions in the target vehicle regions contained in the i-th and (i+1)-th frames of target lane image according to the preset feature position extraction rule includes:
determining the start position of the vehicle feature position in each target vehicle region according to the feature position start position determination rule;
and determining the vehicle feature position in each target vehicle region according to that start position and the size of the feature position template.
Optionally, in another possible implementation manner of the first aspect, stitching the frames of target images according to the pixel offset value queue and the preset lane configuration file to generate the side view corresponding to the target vehicle includes:
determining the average of the pixel offset values in the pixel offset value queue, cyclically removing the second abnormal pixel offset value with the largest difference from that average until the differences between the remaining pixel offset values in the queue are smaller than a sixth threshold, and generating a filtered pixel offset value queue from the remaining values;
determining the average of the pixel offset values in the filtered queue as the average displacement value corresponding to the target vehicle;
and stitching the frames of target images according to the average displacement value and the preset lane configuration file to generate the side view corresponding to the target vehicle.
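A minimal sketch of this filtering step, assuming that the differences between the remaining values are measured against the running average (one reasonable reading of the text) and with an illustrative sixth threshold:

```python
def average_displacement(offsets, sixth_threshold=3.0):
    """Cyclically remove the offset farthest from the current average
    until every remaining offset is within sixth_threshold of it, then
    return the average of the filtered queue as the displacement."""
    values = list(offsets)
    if not values:
        raise ValueError("empty pixel offset value queue")
    while len(values) > 1:
        avg = sum(values) / len(values)
        worst = max(values, key=lambda v: abs(v - avg))
        if abs(worst - avg) < sixth_threshold:
            break
        values.remove(worst)                # drop the "second abnormal" value
    return sum(values) / len(values)
```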
Optionally, in another possible implementation manner of the first aspect, the preset lane configuration file includes a preset splicing range for each lane of the current road, where the current road is the road at which the image acquisition device is located, and the preset splicing range is the image range corresponding to a lane's splicing area in images acquired by the device; correspondingly, stitching the frames of target images according to the average displacement value and the preset lane configuration file to generate the side view corresponding to the target vehicle includes:
determining the preset splicing range corresponding to the target lane according to the target lane and the preset splicing ranges of the lanes;
determining the image area to be spliced in each frame of target image according to the preset splicing range of the target lane;
and stitching the image areas to be spliced of the frames of target images according to the average displacement value to generate the side view corresponding to the target vehicle.
Optionally, in another possible implementation manner of the first aspect, the preset lane configuration file includes a preset image range for each lane of the current road, where the current road is the road at which the image acquisition device is located, and the preset image range is the image range corresponding to a lane in images acquired by the device; correspondingly, cropping each frame of target image according to the target lane in which the target vehicle is located and the preset lane configuration file to determine the target lane image included in each frame of target image includes:
determining the image range of the target lane in each frame of target image according to the target lane and the preset image ranges of the lanes;
and cropping each frame of target image according to that image range to determine the target lane image included in each frame of target image.
Optionally, in another possible implementation manner of the first aspect, after cropping each frame of target image according to the target lane in which the target vehicle is located and the preset lane configuration file to determine the target lane image included in each frame of target image, the method further includes:
determining a lane line cropping amplitude according to the number of pixels contained in a unit area of the images acquired by the image acquisition device;
and performing lane line cropping on each frame of target lane image according to that amplitude to remove the lane lines contained in each frame of target lane image.
Optionally, in another possible implementation manner of the first aspect, acquiring the target image queue corresponding to the target vehicle includes:
acquiring the vehicle entry time at which the target vehicle enters the monitoring range corresponding to the image acquisition device and the vehicle exit time at which it leaves that range;
and determining each image acquired by the image acquisition device whose acquisition time lies between the vehicle entry time and the vehicle exit time as a target image and adding it to the target image queue.
Optionally, in another possible implementation manner of the first aspect, after determining the images acquired between the vehicle entry time and the vehicle exit time as target images and adding them to the target image queue, the method further includes:
determining a third preset number of images acquired before the vehicle entry time and a fourth preset number of images acquired after the vehicle exit time as target images and adding them to the target image queue.
In a second aspect, an embodiment of the present application provides an apparatus for generating a vehicle side view, including: a first acquisition module, configured to acquire a target image queue corresponding to a target vehicle, where the target image queue includes multiple frames of target images acquired by an image acquisition device at a fixed position while the target vehicle passes; a first cropping module, configured to crop each frame of target image according to the target lane in which the target vehicle is located and a preset lane configuration file, to determine the target lane image included in each frame of target image; a first determining module, configured to perform moving object detection on each frame of target lane image to determine the target vehicle region contained in each frame of target lane image; a second determining module, configured to determine a pixel offset value queue corresponding to the target vehicle according to the pixel position differences between the target vehicle regions contained in adjacent target lane images; and a stitching module, configured to stitch the frames of target images according to the pixel offset value queue and the preset lane configuration file to generate the side view corresponding to the target vehicle.
In a possible implementation manner of the second aspect, the number of target lane images is N, where N is a positive integer; correspondingly, the second determining module includes:
a first determining unit, configured to determine the vehicle feature positions in the target vehicle regions contained in the i-th and (i+1)-th frames of target lane image according to a preset feature position extraction rule, where i is a positive integer greater than or equal to 1 and smaller than N;
a second determining unit, configured to determine the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane image according to the pixel position difference between the vehicle feature positions contained in those frames;
and a writing unit, configured to write the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane image into the pixel offset value queue.
Optionally, in another possible implementation manner of the second aspect, the second determining module further includes:
a third determining unit, configured to determine that the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane image is greater than the first threshold.
Optionally, in another possible implementation manner of the second aspect, the second determining module further includes:
a first acquiring unit, configured to acquire a first preset number of first reference pixel offset values from the head of the pixel offset value queue when the queue length is greater than the second threshold;
a fourth determining unit, configured to determine the average of the first reference pixel offset values;
and a removing unit, configured to remove from the queue each first abnormal pixel offset value whose difference from that average is greater than the third threshold.
Optionally, in another possible implementation manner of the second aspect, the second determining module further includes:
a fifth determining unit, configured to determine the current queue length of the pixel offset value queue;
a second acquiring unit, configured to acquire a second preset number of second reference pixel offset values from the tail of the queue when the current queue length is greater than the fourth threshold;
and a sixth determining unit, configured to determine that the pixel offset value queue is complete when the difference between the maximum and minimum of the second reference pixel offset values is smaller than the fifth threshold.
Optionally, in another possible implementation manner of the second aspect, the preset feature position extraction rule includes a feature position start position determination rule and the size of a feature position template; correspondingly, the first determining unit is specifically configured to:
determine the start position of the vehicle feature position in each target vehicle region according to the feature position start position determination rule;
and determine the vehicle feature position in each target vehicle region according to that start position and the size of the feature position template.
Optionally, in another possible implementation manner of the second aspect, the stitching module includes:
a seventh determining unit, configured to determine the average of the pixel offset values in the pixel offset value queue, cyclically remove the second abnormal pixel offset value with the largest difference from that average until the differences between the remaining pixel offset values in the queue are smaller than the sixth threshold, and generate a filtered pixel offset value queue from the remaining values;
an eighth determining unit, configured to determine the average of the pixel offset values in the filtered queue as the average displacement value corresponding to the target vehicle;
and a stitching unit, configured to stitch the frames of target images according to the average displacement value and the preset lane configuration file to generate the side view corresponding to the target vehicle.
Optionally, in another possible implementation manner of the second aspect, the preset lane configuration file includes a preset splicing range for each lane of the current road, where the current road is the road at which the image acquisition device is located, and the preset splicing range is the image range corresponding to a lane's splicing area in images acquired by the device; correspondingly, the stitching unit is specifically configured to:
determine the preset splicing range corresponding to the target lane according to the target lane and the preset splicing ranges of the lanes;
determine the image area to be spliced in each frame of target image according to the preset splicing range of the target lane;
and stitch the image areas to be spliced of the frames of target images according to the average displacement value to generate the side view corresponding to the target vehicle.
Optionally, in another possible implementation manner of the second aspect, the preset lane configuration file includes a preset image range for each lane of the current road, where the current road is the road at which the image acquisition device is located, and the preset image range is the image range corresponding to a lane in images acquired by the device; correspondingly, the first cropping module includes:
a ninth determining unit, configured to determine the image range of the target lane in each frame of target image according to the target lane and the preset image ranges of the lanes;
and a tenth determining unit, configured to crop each frame of target image according to that image range to determine the target lane image included in each frame of target image.
Optionally, in another possible implementation manner of the second aspect, the apparatus further includes:
a third determining module, configured to determine a lane line cropping amplitude according to the number of pixels contained in a unit area of the images acquired by the image acquisition device;
and a second cropping module, configured to perform lane line cropping on each frame of target lane image according to that amplitude to remove the lane lines contained in each frame of target lane image.
Optionally, in another possible implementation manner of the second aspect, the first acquisition module includes:
a third acquiring unit, configured to acquire the vehicle entry time at which the target vehicle enters the monitoring range corresponding to the image acquisition device and the vehicle exit time at which it leaves that range;
and an eleventh determining unit, configured to determine each image acquired by the image acquisition device whose acquisition time lies between the vehicle entry time and the vehicle exit time as a target image and add it to the target image queue.
Optionally, in another possible implementation manner of the second aspect, the first acquisition module further includes:
a twelfth determining unit, configured to determine a third preset number of images acquired before the vehicle entry time and a fourth preset number of images acquired after the vehicle exit time as target images and add them to the target image queue.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method for generating a vehicle side view described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method for generating a vehicle side view described above.
In a fifth aspect, an embodiment of the present application provides a computer program product that, when run on a terminal device, causes the terminal device to execute the method for generating a vehicle side view described above.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: the target images of the target vehicle acquired by the image acquisition device are screened, and the frames of target images are stitched according to the pixel offset of the target vehicle between adjacent target images to generate a complete side view of the target vehicle. This improves the accuracy of side view generation and, in turn, the accuracy of extracting vehicle features from the vehicle side view.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a method for generating a vehicle side view according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a method for generating a vehicle side view according to another embodiment of the present application;
FIG. 3 is a schematic flowchart of a method for generating a vehicle side view according to yet another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a device for generating a side view of a vehicle according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [described condition or event]" or "in response to detecting [described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless otherwise specifically stated.
A method, an apparatus, a terminal device, a storage medium, and a computer program product for generating a vehicle side view provided by the present application are described in detail below with reference to the accompanying drawings.
FIG. 1 shows a schematic flowchart of a method for generating a vehicle side view according to an embodiment of the present application.
Step 101, a target image queue corresponding to a target vehicle is acquired, where the target image queue includes multiple frames of target images acquired by an image acquisition device at a fixed position while the target vehicle passes.
It should be noted that the method for generating a vehicle side view according to the embodiments of the present application may be executed by the apparatus for generating a vehicle side view according to the embodiments of the present application. The apparatus may be configured in any terminal device to execute the method. For example, it may be configured in a terminal device that calculates fees at a toll gate, so as to obtain a complete side view of each passing vehicle, providing an accurate data basis for subsequently determining the vehicle type from the side view and calculating the toll from the vehicle type.
The image acquisition device may be any type of image acquisition device, such as a camera, installed at a fixed position to capture side images of vehicles passing through a specific road segment; the embodiments of the present application do not limit this. For example, the image acquisition device may be installed on a gantry over an expressway to identify vehicles running on it, i.e., the method for generating a vehicle side view of the embodiments of the present application may be applied in a free-flow scene. Alternatively, the image acquisition device may be installed on the gantry of a toll station on any kind of road to capture side views of passing vehicles for subsequent processing such as vehicle identification and charging, i.e., the image acquisition device of the embodiments of the present application may also be used in a checkpoint scene.
The target vehicle may be any vehicle that passes through the field of view of the image acquisition device and whose side view currently needs to be acquired.
For example, when the method of the embodiments of the present application is applied at a toll gate, one image acquisition device may be arranged at a fixed position beside the checkpoint in each of the two directions of the toll gate, so that the devices can capture side images of vehicles passing through, and the target vehicle may be any vehicle passing through the toll gate.
In the embodiments of the present application, the images acquired by the image acquisition device may be stored sequentially into an image queue in order of acquisition time. When the side view of a target vehicle is to be determined, the images related to the target vehicle may be extracted from that image queue as target images, and the target image queue is formed from those frames of target images.
Further, the target images corresponding to the target vehicle can be extracted from the image queue of the image acquisition device according to the times at which the target vehicle enters and leaves the device's field of view. That is, in a possible implementation manner of the embodiments of the present application, step 101 may include:
acquiring the vehicle entry time at which the target vehicle enters the monitoring range corresponding to the image acquisition device and the vehicle exit time at which it leaves that range;
and determining each image acquired by the image acquisition device whose acquisition time lies between the vehicle entry time and the vehicle exit time as a target image and adding it to the target image queue.
As a possible implementation manner, passing vehicles may be monitored simultaneously by the image acquisition device and a lidar. The lidar can detect and identify vehicles entering and leaving the checkpoint, record the time at which each vehicle enters and leaves, and associate those times with a vehicle identifier to generate the vehicle data acquired by the lidar. In actual use, the vehicle identifier may be any information that can identify the vehicle, such as a license plate number; the embodiments of the present application do not limit this.
Then, when it is determined from the vehicle data acquired by the lidar that a vehicle has left the checkpoint, that vehicle is determined to be a target vehicle. Since the image acquisition device acquires information on every vehicle entering the checkpoint, a vehicle can be considered to enter the device's monitoring range when it enters the checkpoint and to leave that range when it leaves the checkpoint. The time at which the target vehicle enters the checkpoint can therefore be taken as its vehicle entry time, and the time at which it leaves as its vehicle exit time. Each frame acquired between these two times can then be fetched from the image queue of the image acquisition device as a target image, and each fetched frame is added to the target image queue.
Furthermore, because the vehicle images and the entry and exit times are acquired by the image acquisition device and the lidar respectively, several extra frames may be taken as target images before the entry time and after the exit time of the target vehicle, to reduce the influence of the time delay between the two devices and prevent that delay from making some monitoring images of the target vehicle unavailable. That is, in a possible implementation manner of the embodiments of the present application, after determining the images acquired between the vehicle entry time and the vehicle exit time as target images and adding them to the target image queue, the method may further include:
determining a third preset number of images acquired before the vehicle entry time and a fourth preset number of images acquired after the vehicle exit time as target images and adding them to the target image queue.
The third preset number is the number of extra target images taken before the entry time of the target vehicle; the fourth preset number is the number of extra target images taken after its exit time. In actual use, the values of the third and fourth preset numbers, such as 3 frames and 5 frames, can be determined from the actual application scene and experience; the two numbers may be the same or different, which is not limited in the embodiments of the present application.
In the embodiments of the present application, a third preset number of images before the vehicle entry time and a fourth preset number of images after the vehicle exit time can thus be taken as target images, and these frames, together with those acquired between the entry and exit times, are added to the target image queue.
As a possible implementation manner, the third preset number should not be set too large, to avoid taking too many invalid image frames or images of other vehicles, which would increase computation time and easily degrade the accuracy of the generated vehicle side view.
As an example, a certain safety distance is usually kept between front and rear vehicles to ensure driving safety, and a driver needs a certain reaction time to control the vehicle; to ensure safety, the safety distance between vehicles is therefore generally greater than the distance a vehicle can travel within the driver's reaction time. Assuming the reaction time is T seconds, the vehicle ahead of the target vehicle has generally left the field of view of the image acquisition device within the T seconds before the target vehicle enters it, i.e., the device captures no image of the leading vehicle in that interval. Likewise, when the target vehicle leaves the field of view, its distance to the following vehicle is generally greater than the distance that vehicle can travel in T seconds, so the device captures no image of the following vehicle within the T seconds after the target vehicle leaves. Therefore, in the embodiments of the present application, the upper bounds for the third and fourth preset numbers may be determined from the driver's reaction time and the frame rate of the image acquisition device, to ensure that the target images of the target vehicle contain no images of other vehicles. For example, the product of the reaction time and the frame rate may be taken as the upper bound for both numbers.
For example, if the driver's reaction time is 0.3 seconds and the frame rate of the image acquisition device is 25 Hz, the bound for the third and fourth preset numbers is 0.3 × 25 = 7.5, so each may be any positive integer smaller than 7.5, such as 5 frames.
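As a small sketch of this bound (values from the example above; the function name is illustrative):

```python
def extra_frame_limit(reaction_time_s: float, frame_rate_hz: float) -> int:
    """Largest whole number of extra frames strictly below reaction
    time x frame rate, e.g. 0.3 s x 25 Hz = 7.5 -> at most 7 frames."""
    bound = reaction_time_s * frame_rate_hz
    n = int(bound)
    return n - 1 if n == bound else n

# Example: any value up to extra_frame_limit(0.3, 25) == 7 is admissible;
# the text picks 5 frames in practice.
```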
Step 102, each frame of target image is cropped according to the target lane in which the target vehicle is located and a preset lane configuration file, to determine the target lane image included in each frame of target image.
The target lane may be the lane in which the target vehicle passes through the monitoring range of the image acquisition device.
The preset lane configuration file may include configuration information, within the images captured by the image acquisition device, of each lane of the road it monitors.
In the embodiments of the present application, the monitoring range of the image acquisition device may cover several lanes, and vehicles in those lanes may travel side by side; processing the target image directly is therefore easily affected by vehicles in other lanes, which reduces the reliability of the generated vehicle side view. Hence, when the preset lane configuration file indicates that the monitoring range covers several lanes, i.e., that the captured images may contain several lanes, the position of the target lane within the target images can be determined from the target lane and the configuration file, and each frame of target image is then cropped accordingly, so that the image area of the target lane in each frame is cut out as that frame's target lane image. Subsequent processing then operates only on the target lane images, reducing interference from other lanes.
Further, a lane frame for each lane in the images of the monitored road may be preconfigured and stored in the configuration file, so that the target images can be cropped according to the lane frames to generate images containing only one lane. In a possible implementation manner of the embodiments of the present application, the preset lane configuration file includes the preset image range corresponding to each lane of the current road, where the current road is the road at which the image acquisition device is located; accordingly, step 102 may include:
determining the image range of the target lane in each frame of target image according to the target lane and the preset image ranges of the lanes;
and cropping each frame of target image according to that image range to determine the target lane image included in each frame of target image.
The preset image range corresponding to a lane is the range of the image area corresponding to that lane in images acquired by the image acquisition device. In actual use it can be represented by several pixel coordinates; for example, when the range is rectangular, 4 pixel coordinates can represent it, and the rectangle they enclose in the image serves as the lane's preset image range.
It should be noted that, because the position of the image acquisition device is fixed, its distance to each lane is also fixed, and so is the image area of each lane in the captured images. Therefore, in the embodiments of the present application, a preset image range can be configured for each lane so that the target lane image can be cropped out of each frame of target image.
In the embodiments of the present application, the target lane of the target vehicle can be obtained from the monitoring data of the lidar. The preset lane configuration file is then queried to obtain the preset image range of each lane, and the preset image range matching the target lane is taken as the image range of the target lane in each frame of target image. For one frame of target image, the image area within that range is cropped out as the target lane image included in the frame; cropping every frame in the same way yields the target lane image included in each frame.
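A minimal sketch of this cropping step in Python with NumPy, assuming the preset image range is stored as a rectangle (x_min, y_min, x_max, y_max) in pixel coordinates, as in the 4-coordinate example above; lane_cfg and target_lane in the usage line are hypothetical names:

```python
import numpy as np

def crop_target_lane(frame: np.ndarray, preset_range) -> np.ndarray:
    """Cut the target lane's image area out of one full camera frame.

    preset_range -- the lane's preset image range from the lane
                    configuration file, assumed rectangular:
                    (x_min, y_min, x_max, y_max).
    """
    x_min, y_min, x_max, y_max = preset_range
    return frame[y_min:y_max, x_min:x_max]

# e.g. lane_images = [crop_target_lane(f, lane_cfg[target_lane]) for f in frames]
```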
Furthermore, when a vehicle passes the checkpoint, the image acquisition device is prone to shaking, focal length changes, and the like, so the positions of static objects such as lane lines change between frames and are easily misidentified as moving objects, which affects vehicle recognition. That is, in a possible implementation manner of the embodiments of the present application, after step 102, the method may further include:
determining a lane line cropping amplitude according to the number of pixels contained in a unit area of the images acquired by the image acquisition device;
and performing lane line cropping on each frame of target lane image according to that amplitude to remove the lane lines contained in each frame of target lane image.
The lane line cropping amplitude may be the ratio between the image area removed when cropping the lane lines from the target lane image and the whole target lane image.
In the embodiments of the present application, the image acquisition device tends to shake as the target vehicle passes, so the positions of fixed objects such as lane lines change between frames and are easily misidentified as moving objects, affecting the accuracy of subsequent vehicle detection. Therefore, after the target lane image contained in a target image is acquired, it can be cropped further to remove the lane lines it contains, reducing detection errors caused by shaking of the image acquisition device.
It can be understood that image acquisition devices with different viewing angles have different fields of view. When the field of view is small, a vehicle in an adjacent lane may occupy a large proportion of the image relative to the target vehicle and thus interfere with its recognition. The lane line cropping amplitude can therefore be determined according to the device's viewing angle, so that while the lane lines of the target lane are removed completely, image areas that may contain vehicles from adjacent lanes are removed as far as possible.
As a possible implementation manner, since images captured at different viewing angles contain different numbers of pixels per unit area, the lane line cropping amplitude may be determined from the number of pixels contained in a unit area of the images acquired by the device.
As an example, when the image acquisition device is a far-view camera, a unit area of its images contains few pixels; when it is a close-view camera, a unit area contains many pixels. Therefore, when the number of pixels per unit area is greater than a pixel count threshold, the device is determined to be a close-view camera and the lane line cropping amplitude is set to a larger value, so as to remove image areas that may contain vehicles from adjacent lanes. When the number of pixels per unit area is smaller than the threshold, the device is determined to be a far-view camera and the amplitude is set to a smaller value, so that while the lane lines and possible adjacent-lane vehicle areas are removed, more of the target lane image is preserved and over-cropping does not prevent the target vehicle from being recognized.
In the embodiments of the present application, once the lane line cropping amplitude is determined, each frame of target lane image can be cropped according to it to remove the lane lines it contains.
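Illustratively (all numeric values are assumptions), the amplitude selection and trim might look like:

```python
def trim_lane_lines(lane_img, pixels_per_unit_area,
                    close_view_threshold=150, wide_cut=0.12, narrow_cut=0.05):
    """Trim a fraction of the lane image's height from top and bottom
    to remove lane lines; close-view cameras (denser images) get the
    larger cropping amplitude. All numbers are illustrative."""
    cut = wide_cut if pixels_per_unit_area > close_view_threshold else narrow_cut
    height = lane_img.shape[0]
    margin = int(height * cut)
    return lane_img[margin:height - margin, :]
```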
Step 103, moving object detection is performed on each frame of target lane image to determine the target vehicle region contained in each frame of target lane image.
The target vehicle region may be the region of the target lane image that contains the target vehicle.
In the embodiments of the present application, because the image acquisition device is fixed and other objects in the road are also fixed while the vehicle keeps moving, any moving object detection algorithm can be used to detect moving objects in each frame of target lane image, and the image area of the moving object detected in each frame is determined as the target vehicle region.
For example, in the embodiments of the present application, an image background difference algorithm may be used for moving object detection on the target lane image: the vehicle contour contained in the image is determined through graying, background subtraction, binarization, median filtering, and contour detection. When a vehicle contour is found, the target lane image is determined to contain the target vehicle, and the image area within the detected contour is determined as the target vehicle region.
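A minimal OpenCV sketch of this background-difference pipeline, assuming a reference frame of the empty lane is available as the background (the threshold value and minimum contour area are assumptions):

```python
import cv2

def detect_target_vehicle_region(lane_img, background_img,
                                 binary_threshold=30, min_area=500):
    """Graying, background subtraction, binarization, median filtering
    and contour detection; returns the bounding box (x, y, w, h) of the
    largest moving contour, or None if no vehicle is present."""
    gray = cv2.cvtColor(lane_img, cv2.COLOR_BGR2GRAY)
    bg = cv2.cvtColor(background_img, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, bg)
    _, binary = cv2.threshold(diff, binary_threshold, 255, cv2.THRESH_BINARY)
    binary = cv2.medianBlur(binary, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < min_area:    # ignore small noise blobs
        return None
    return cv2.boundingRect(largest)
```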
Step 104, a pixel offset value queue corresponding to the target vehicle is determined according to the pixel position differences between the target vehicle regions contained in adjacent target lane images.
Adjacent target lane images are the target lane images contained in two adjacent frames of target image in the target image queue.
The pixel position difference between target vehicle regions may be the difference between the pixel coordinates of the target vehicle regions of adjacent lane images along the direction of motion of the target vehicle.
For example, the target lane image has a height direction and a width direction, and the width direction can represent the direction of motion of the target vehicle; the difference between the pixel coordinates of the target vehicle regions along the width direction of adjacent target lane images can therefore be taken as the pixel position difference of the target vehicle region between adjacent lane images. The following description of the embodiments of the present application takes the width direction of the target lane image as the direction of motion of the target vehicle as an example.
In the embodiments of the present application, after the target vehicle region contained in each frame of target lane image is determined, for any two adjacent frames the pixel coordinates of the same position of the target vehicle region can be determined in both frames. The pixel offset value of the target vehicle between the two frames is then determined from the difference of those coordinates along the width direction and is added to the pixel offset value queue; once the pixel offset values between all adjacent target lane images have been determined, the pixel offset value queue is determined to be complete.
For example, let target lane images a and b be two adjacent target lane images, with the target image corresponding to a acquired before the one corresponding to b. If the pixel coordinate range of the target vehicle region along the width direction is 0-100 in image a and 0-120 in image b, the pixel offset value of the target vehicle between images a and b can be determined to be 20.
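With bounding boxes such as those returned by the detection sketch above, the offset of this example reduces to a coordinate difference; using the leading edge of the box as the reference position is an illustrative choice, not the only possible feature position:

```python
def pixel_offset(region_a, region_b):
    """Pixel offset of the target vehicle between two adjacent target
    lane images, measured along the width direction as the difference
    of the regions' leading edges; regions are (x, y, w, h) boxes."""
    xa, _, wa, _ = region_a
    xb, _, wb, _ = region_b
    return (xb + wb) - (xa + wa)     # e.g. 120 - 100 = 20 pixels
```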
When moving object detection finds no target vehicle region in a target lane image, i.e., the image does not contain the target vehicle, that image can be skipped when determining the pixel offset value queue, and the pixel offset values between it and its adjacent target lane images are left undetermined.
Step 105, each frame of target image is stitched according to the pixel offset value queue and the preset lane configuration file to generate the side view corresponding to the target vehicle.
In the embodiments of the present application, since the pixel offset value queue contains the pixel offset values of the target vehicle between each pair of adjacent target images, the corresponding adjacent target images can be stitched according to each pixel offset value to generate the complete side view corresponding to the target vehicle.
For example, let target images A and B be two adjacent frames, with A acquired before B; A contains target lane image a and B contains target lane image b. The pixel coordinate range of the target vehicle region along the width direction is 0-100 in image a and 0-120 in image b, i.e., the pixel offset value of the target vehicle between A and B is 20 pixels. The image strip of target image B whose width coordinates lie in 0-20 can therefore be cut out, and the 20-pixel edge of that strip is stitched to the 0-pixel position of target image A. In the same manner, the remaining target images are stitched in turn onto the image generated so far, producing the complete side view corresponding to the target vehicle.
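A minimal stitching sketch following the worked example, assuming equal-height frames and a strip taken from column strip_start of each new frame; strip_start and the use of a single (average) offset per frame are assumptions drawn from the implementations described above:

```python
import numpy as np

def stitch_side_view(frames, offset, strip_start=0):
    """Stitch the target images into one side view: from every frame
    after the first, cut a strip `offset` columns wide and splice it at
    the 0-pixel position of the image generated so far."""
    side_view = frames[0]
    for frame in frames[1:]:
        strip = frame[:, strip_start:strip_start + offset]
        side_view = np.concatenate([strip, side_view], axis=1)
    return side_view
```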
With the method for generating a vehicle side view provided above, each frame of target image of the target vehicle is cropped according to the target lane where the target vehicle is located and a preset lane configuration file, so as to determine the target lane image contained in each frame; moving object detection is performed on each frame of target lane image to determine the target vehicle region it contains; a pixel offset value queue corresponding to the target vehicle is then determined from the pixel position differences between the target vehicle regions in adjacent target lane images; and finally each frame of target image is stitched according to the pixel offset value queue and the preset lane configuration file to generate the side view corresponding to the target vehicle. In this way, the target images of the target vehicle collected by the image acquisition device are screened, and the frames are stitched according to the pixel offset value of the target vehicle between adjacent target images to generate a complete side view, which improves the accuracy of side view generation and, in turn, the accuracy of vehicle feature extraction from the vehicle side view.
In a possible implementation form of the present application, when the pixel offset value queue corresponding to the target vehicle is constructed, the pixel offset value between adjacent target lane images is determined from the position difference of a feature position of the target vehicle across those images, and abnormal values are removed while the queue is being built, so as to improve the accuracy of the pixel offset value queue and, in turn, of the generated vehicle side view.
The method for generating the vehicle side view provided by the embodiment of the present application is further described below with reference to fig. 2.
Fig. 2 is a schematic flowchart illustrating another method for generating a vehicle side view according to an embodiment of the present application.
As shown in fig. 2, the method for generating the vehicle side view includes the following steps:
step 201, a target image queue corresponding to a target vehicle is obtained, wherein the target image queue comprises a plurality of frames of target images acquired by image acquisition equipment in a fixed position when the target vehicle passes through.
Step 202, according to a target lane where the target vehicle is located and a preset lane configuration file, each frame of target image is cut to determine a target lane image included in each frame of target image.
Step 203, performing moving object detection on each frame of target lane image to determine a target vehicle region contained in each frame of target lane image.
The detailed implementation process and principle of the steps 201-203 may refer to the detailed description of the above embodiments, and are not described herein again.
Step 204, determining, according to a preset feature position extraction rule, the vehicle feature positions in the target vehicle regions contained in the i-th frame and the (i+1)-th frame of target lane images, where i is a positive integer greater than or equal to 1 and smaller than N.
Wherein N is the number of target lane images, and N is a positive integer.
The feature position refers to a position on the target vehicle that clearly marks where the target vehicle is in the target image, for example a distinctive location such as a wheel or the vehicle head.
In the embodiment of the application, for N frames of target lane images, vehicle feature positions may be sequentially extracted from target vehicle regions included in the N frames of target lane images according to a preset feature position extraction rule.
As a possible implementation manner, the preset feature position extraction rule may be a preset target detection algorithm capable of identifying a feature position of a vehicle, so that each frame of target lane image may be input into the preset target detection algorithm to determine a vehicle feature position included in each frame of target lane image.
For example, if the characteristic position of the vehicle is a wheel, the preset target detection algorithm may be a pre-trained target detection algorithm that can identify the wheel of the vehicle, so that after the target lane image is input into the preset target detection algorithm, the wheel position included in the target lane image may be output; if the characteristic position of the vehicle is the vehicle head, the preset target detection algorithm can be a pre-trained target detection algorithm capable of identifying the vehicle head, so that after the target lane image is input into the preset target detection algorithm, the vehicle head position contained in the target lane image can be output.
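For illustration, any off-the-shelf detector that returns bounding boxes for the chosen feature can play this role. The sketch below uses OpenCV's cascade-classifier API as one plausible choice; the cascade file name is a hypothetical placeholder for a model trained on vehicle wheels, not something provided by this application:

```python
import cv2

# "wheel_cascade.xml" is a hypothetical, pre-trained cascade for wheels.
wheel_detector = cv2.CascadeClassifier("wheel_cascade.xml")

def detect_feature_start(lane_image):
    # Return the width-direction start position of the leftmost wheel,
    # or None if no wheel is found in this target lane image.
    gray = cv2.cvtColor(lane_image, cv2.COLOR_BGR2GRAY)
    boxes = wheel_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    if len(boxes) == 0:
        return None
    x, y, w, h = min(boxes, key=lambda b: b[0])
    return x
```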
As a possible implementation manner, a vehicle feature position template may be preset, and the vehicle feature position in the target lane image may be framed out by this template, so as to reduce the computational complexity of vehicle feature position recognition. That is, in a possible implementation manner of the embodiment of the present application, the preset feature position extraction rule may include a feature position start position determination rule and the size of a feature position template; accordingly, the step 204 may include:
determining the initial positions of the vehicle characteristic positions in the target vehicle areas of the ith frame and the (i + 1) th frame according to the characteristic position initial position determination rule;
and determining the vehicle characteristic positions in the target vehicle areas of the ith frame and the (i + 1) th frame according to the starting positions of the vehicle characteristic positions in the target vehicle areas of the ith frame and the (i + 1) th frame and the sizes of the characteristic position templates.
The characteristic position initial position determination rule may be a rule for determining an initial position of a vehicle characteristic position according to a relative position relationship between an initial position of the vehicle characteristic position and a target vehicle region in a target vehicle image. For example, the feature position start position determination rule may be: the proportion of the distance between the starting position of the vehicle characteristic position and the starting position of the target vehicle area in the total width of the target vehicle area is A, wherein the value of A is in the interval of (0, 1). For example, a may be 0.3.
The size of the feature position template may include the width and height of the feature position template. As an example, the height of the characteristic position template may be preset as a fixed value, and a ratio B of the width of the characteristic position template to the width of the target vehicle area may be preset, where the value of B is in the (0, 1) interval. For example, B may be 0.5.
As a possible implementation manner, after moving object detection is performed on the i-th frame of target lane image and the target vehicle region contained in it is determined, the start position of the vehicle feature position in the i-th frame may be determined from the start position of the target vehicle region and the preset feature position start position determination rule; a feature position template may then be generated from that start position according to the size of the feature position template, and the image region inside the template determined as the vehicle feature position in the i-th frame of target lane image. The vehicle feature position in the (i+1)-th frame of target lane image can be determined in the same manner.
For example, suppose the ratio of the distance between the start position of the vehicle feature position and the start position of the target vehicle region to the total width of the target vehicle region is 0.3, the height of the feature position template is 100 pixels, and the ratio of the width of the feature position template to the width of the target vehicle region is 0.5. In the i-th frame of target lane image, if the start position of the target vehicle region is the 0th pixel in the width direction and the width of the target vehicle region is 200 pixels, the start position of the vehicle feature position is the 60th pixel in the width direction, and the width of the feature position template is 100 pixels. A rectangular feature position template 100 pixels wide and 100 pixels high can therefore be generated from that start position, and the image region inside the template determined as the vehicle feature position in the i-th frame of target lane image.
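Under the example values above (A = 0.3, B = 0.5, template height 100 pixels), this template placement reduces to a small amount of arithmetic. The following sketch is illustrative only, with made-up names:

```python
def feature_template_box(region_start_x, region_width,
                         start_ratio=0.3,      # A in the text
                         width_ratio=0.5,      # B in the text
                         template_height=100):
    # Start position of the feature: offset into the vehicle region by A
    # times the region width; template width: B times the region width.
    x0 = region_start_x + int(start_ratio * region_width)
    width = int(width_ratio * region_width)
    return x0, width, template_height

# Worked example from the text: region starts at pixel 0 and is 200 wide,
# so the template starts at pixel 60 and is 100 pixels wide and high.
assert feature_template_box(0, 200) == (60, 100, 100)
```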
It should be noted that, to improve the accuracy of determining the vehicle feature position, a plurality of feature position templates may be preset, so that several image regions are framed out of the target lane image as vehicle feature positions, preventing a single template from failing to capture an effective feature of the target vehicle. For example, two feature position templates of the same width and height may be used, one located above the other, so that vehicle feature positions are framed from the middle and the bottom of the target vehicle region respectively, improving the accuracy of the selection.
Step 205, determining the pixel offset value of the target vehicle between the i-th frame and the (i+1)-th frame of target lane images according to the pixel position difference between the vehicle feature positions contained in those two frames.
In the embodiment of the application, after the vehicle characteristic positions included in the target lane images of the i-th frame and the i + 1-th frame are determined, the pixel offset value of the target vehicle between the target lane images of the i-th frame and the i + 1-th frame can be determined according to the pixel position difference between the vehicle characteristic positions.
As one possible implementation, a pixel value of a start position of the vehicle feature position in the ith frame target lane image in the width direction of the ith frame target lane image and a pixel value of a start position of the vehicle feature position in the (i + 1) th frame target lane image in the width direction of the (i + 1) th frame target lane image may be determined, and then a difference value of the two pixel values may be determined as a pixel offset value of the target vehicle between the ith frame and the (i + 1) th frame target lane image.
For example, assume the pixel coordinate of the start position of the vehicle feature position in the i-th frame of target lane image is (30, 1), that is, its width-direction pixel value is 30; and assume the pixel coordinate of the start position of the vehicle feature position in the (i+1)-th frame of target lane image is (80, 1), that is, its width-direction pixel value is 80. It may then be determined that the pixel offset value of the target vehicle between the i-th and (i+1)-th frames of target lane images is 50.
Step 206, writing the pixel offset value of the target vehicle between the i-th frame and the (i+1)-th frame of target lane images into the pixel offset value queue.
In the embodiment of the application, after the pixel offset value of the target vehicle between two frames of target lane images is determined, the pixel offset value can be written into the pixel offset value queue. For example, after determining the pixel offset value of the target vehicle between the i-th frame and the i + 1-th frame of the target lane image, the pixel offset value may be written into the pixel offset value queue.
Furthermore, before a pixel offset value is added to the pixel offset value queue, it can be given a preliminary screening to remove obviously unreliable values, thereby improving the accuracy of the generated vehicle side view. That is, in a possible implementation manner of the embodiment of the present application, before the step 206, the method may further include:
determining that a pixel offset value of the target vehicle between the i frame and the i +1 frame target lane image is greater than a first threshold.
As a possible implementation manner, since a vehicle's speed is relatively steady when it passes the gate, the pixel offset values of the target vehicle between adjacent target lane images are also close to one another and stable. If the vehicle feature position is extracted incorrectly (no distinctive vehicle feature is extracted, for example a highly self-similar region such as a position on the vehicle body), the offset of the target vehicle between adjacent target lane images may fail to be recognized, that is, the offset value between two adjacent frames comes out too small. Whether a pixel offset value is obviously inaccurate can therefore be judged against the set first threshold.
As an example, if the pixel offset value is greater than the first threshold value, the pixel offset value may be determined to be a relatively accurate pixel offset value and written to a pixel offset value queue; if the pixel offset value is less than or equal to the first threshold, it may be determined that the pixel offset value is too small and there is a significant error, and therefore the pixel offset value may be discarded and not written to the pixel offset value queue.
It should be noted that, in actual use, a specific value of the first threshold may be determined according to actual needs and a specific application scenario, which is not limited in the embodiment of the present application.
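Combining step 205, step 206, and this preliminary screening, the enqueue logic might look as follows; the threshold value is an assumed placeholder, since the application leaves it to the deployment:

```python
FIRST_THRESHOLD = 5  # assumed placeholder; tune per scene

def try_enqueue_offset(offset_queue, start_x_i, start_x_next):
    # Offset = difference of the feature start positions in the width
    # direction (step 205); enqueue only if it passes the screening.
    offset = start_x_next - start_x_i
    if offset > FIRST_THRESHOLD:
        offset_queue.append(offset)  # step 206
        return True
    return False  # too small: likely a bad feature extraction, discard
```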
Further, for a large vehicle with a very long body, the body is visually uniform along its length, so feature positions extracted on the body cannot yield reliable pixel offset values; an overly long body therefore not only increases computation time but also degrades the accuracy of the determined pixel offset values and, in turn, of the generated vehicle side view. That is, in a possible implementation manner of the embodiment of the present application, after the step 206, the method may further include:
determining a current queue length of a pixel offset value queue;
when the length of the current queue is larger than a fourth threshold value, a second preset number of second reference pixel deviation values are obtained from the tail of the pixel deviation value queue;
determining that construction of the pixel offset value queue is complete when the difference between the maximum value and the minimum value of the second reference pixel offset values is smaller than a fifth threshold.
The current queue length may refer to the number of pixel offset values currently included in the pixel offset value queue.
The fourth threshold and the second preset number may be set according to actual needs and specific application scenarios, which are not limited in the embodiments of the present application. For example, the fourth threshold may be 5, and the second predetermined number may be 4, 5, etc.
In the embodiment of the application, when the vehicle body is very long, or rainy weather makes the captured images unclear, the number of target images in the target image queue becomes very large; such sequences take a long time to process, and because the vehicle body position lacks distinctive features, the accuracy of the determined pixel offset values is easily affected.
As a possible implementation manner, each time a pixel offset value is generated and written into the pixel offset value queue, it may be judged whether the current queue length is greater than the fourth threshold; if not, the queue may be deemed still short. If the current queue length is greater than the fourth threshold, the target vehicle may be a large vehicle with a long body, or the scene may have poor shooting conditions such as rain, so whether the completion condition is met can be determined according to whether the pixel offset values at the tail of the queue have stabilized.
As an example, when the current queue length is greater than the fourth threshold, a second preset number of second reference pixel offset values, that is, the most recently generated pixel offset values, may be taken from the tail of the queue. If the difference between the maximum and minimum of these second reference pixel offset values is smaller than the fifth threshold, the pixel offset values in the queue may be deemed to have stabilized, so the pixel offset values of the target vehicle in subsequent target lane images need not be computed, and construction of the pixel offset value queue is determined to be complete. If that difference is greater than or equal to the fifth threshold, the pixel offset values in the queue are still unstable, that is, the target lane images currently being processed do not belong to a featureless vehicle-body section, or the target vehicle is not a large vehicle with an overly long body; subsequent target lane images may therefore continue to be processed to determine further pixel offset values, until either the queue is determined to satisfy the completion condition or all target lane images have been processed, at which point construction of the queue stops.
It should be noted that, in actual use, a specific value of the fifth threshold may be determined according to actual needs and specific application scenarios, which is not limited in the embodiment of the present application.
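A sketch of this completion check follows; the fourth threshold and the second preset number use the example values from the text (5 and 4), while the fifth threshold is an assumed placeholder:

```python
FOURTH_THRESHOLD = 5  # example value from the text
SECOND_REF_COUNT = 4  # second preset number, example value from the text
FIFTH_THRESHOLD = 3   # assumed placeholder

def queue_is_complete(offset_queue):
    # Once the newest offsets have stabilised, stop computing offsets for
    # the remaining frames of a long vehicle or a rainy-day sequence.
    if len(offset_queue) <= FOURTH_THRESHOLD:
        return False
    tail = offset_queue[-SECOND_REF_COUNT:]
    return max(tail) - min(tail) < FIFTH_THRESHOLD
```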
It should be noted that, in the embodiment of the present application, each frame of target lane image is processed through steps 204 to 206 before the next frame is processed, so once the pixel offset value queue is determined to satisfy the completion condition, all processing of the remaining target lane images can be skipped, greatly reducing processing time.
Furthermore, an overly long vehicle body also makes the pixel offset value queue long; since the bodies of large vehicles are largely uniform, no effective feature positions can be extracted from them to compute offsets, so when the queue is long, the offsets at its tail are generally less accurate than those at its head. The weight of the pixel offset values at the head of the queue can therefore be increased appropriately to improve the accuracy of the generated vehicle side view. That is, in a possible implementation manner of the embodiment of the present application, after the step 206, the method may further include:
when the queue length of the pixel deviation value queue is larger than a second threshold value, acquiring a first preset number of first reference pixel deviation values from the head of the pixel deviation value queue;
determining an average value of the first reference pixel offset values;
the first abnormal pixel offset value having a difference greater than a third threshold value from the average value of the first reference pixel offset value is removed from the pixel offset value queue.
As a possible implementation manner, a small vehicle passes quickly and has a short body, so the number of corresponding target images is small and the corresponding pixel offset value queue is short; a large vehicle passes slowly and has a long body, so the number of target images is large and the queue is long. Therefore, after construction of the pixel offset value queue is completed, it can be judged whether the queue length is greater than the second threshold; if the queue length is smaller than or equal to the second threshold, the queue is short, that is, the target vehicle is not a large vehicle with an overly long body, and further processing of the queue can be omitted.
Correspondingly, if the queue length is greater than the second threshold, the queue is long, that is, the target vehicle is a large vehicle with an overly long body. A first preset number of first reference pixel offset values can then be taken from the head of the queue and their average computed; each pixel offset value whose difference from this average is greater than the third threshold is determined to be a first abnormal pixel offset value and removed from the queue. This increases the weight of the head elements of the pixel offset value queue and improves the accuracy of the generated vehicle side view.
It should be noted that, in actual use, specific values of the first preset number and the third threshold may be determined according to actual needs and specific application scenarios, which is not limited in the embodiment of the present application. For example, the first preset number may be 3.
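The head-weighting step can be sketched as follows; the first preset number uses the example value 3 from the text, while the second and third thresholds are assumed placeholders:

```python
SECOND_THRESHOLD = 10  # assumed placeholder: "long queue" cut-off
FIRST_REF_COUNT = 3    # first preset number, example value from the text
THIRD_THRESHOLD = 20   # assumed placeholder

def weight_queue_head(offset_queue):
    # For long-bodied vehicles, trust the head of the queue: drop offsets
    # that deviate too far from the average of the first few entries.
    if len(offset_queue) <= SECOND_THRESHOLD:
        return offset_queue
    head_avg = sum(offset_queue[:FIRST_REF_COUNT]) / FIRST_REF_COUNT
    return [v for v in offset_queue
            if abs(v - head_avg) <= THIRD_THRESHOLD]
```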
Step 207, splicing processing is performed on each frame of target image according to the pixel offset value queue and the preset lane configuration file to generate a side view corresponding to the target vehicle.
The detailed implementation process and principle of step 207 may refer to the detailed description of the above embodiments, and are not described herein again.
With the method for generating a vehicle side view provided above, each frame of target image of the target vehicle is cropped according to the target lane where the target vehicle is located and a preset lane configuration file to determine the target lane image contained in each frame; moving object detection is performed on each frame of target lane image to determine the target vehicle region it contains; the pixel offset value between adjacent target lane images is determined from the position difference of the feature position of the target vehicle across those images, with abnormal values removed while the pixel offset value queue is being built; and each frame of target image is then stitched according to the pixel offset value queue and the preset lane configuration file to generate the side view corresponding to the target vehicle. Determining the offsets from feature positions and removing abnormal values during construction improves the accuracy of the pixel offset value queue, which in turn improves the accuracy of the generated vehicle side view and of the vehicle features extracted from it.
In a possible implementation form of the method, the average displacement value of the target vehicle as it passes the gate can be determined from the pixel offset value queue, and each frame of target image can be stitched according to this average displacement value, further improving the accuracy of the generated vehicle side view.
The method for generating the vehicle side view provided by the embodiment of the present application is further described below with reference to fig. 3.
Fig. 3 is a schematic flowchart illustrating a method for generating a vehicle side view according to an embodiment of the present application.
As shown in fig. 3, the method for generating the side view of the vehicle includes the following steps:
step 301, a target image queue corresponding to a target vehicle is obtained, wherein the target image queue comprises multiple frames of target images acquired by an image acquisition device in a fixed position when the target vehicle passes through.
Step 302, according to a target lane where the target vehicle is located and a preset lane configuration file, each frame of target image is cut to determine a target lane image included in each frame of target image.
Step 303, performing moving object detection on each frame of target lane image to determine a target vehicle region contained in each frame of target lane image.
Step 304, determining a pixel offset value queue corresponding to the target vehicle according to the pixel position difference between the target vehicle areas contained in the adjacent target lane images.
The detailed implementation process and principle of the steps 301 to 304 may refer to the detailed description of the above embodiments, and are not described herein again.
Step 305, determining the average value of the pixel offset values in the pixel offset value queue, and cyclically removing the second abnormal pixel offset value that differs most from the average, until the differences among the remaining pixel offset values in the queue are all smaller than a sixth threshold; the filtered pixel offset value queue is then generated from the remaining pixel offset values.
The second abnormal pixel offset value may be a pixel offset value having a larger difference from an average value of the pixel offset value queue.
In the embodiment of the present application, since the speed of a vehicle passing the gate is generally fairly constant, the pixel offset values of the target vehicle between adjacent target lane images are also close to one another. If a pixel offset value in the queue differs greatly from the queue's average value, it may be determined to be an inaccurate second abnormal pixel offset value, and each such value may be removed from the pixel offset value queue.
As a possible implementation manner, the average of the pixel offset values in the queue may be computed, the difference between each value and the average determined, and the value with the largest difference taken as a second abnormal pixel offset value and removed, generating an updated queue. The maximum and minimum pixel offset values in the updated queue are then determined; if their difference is smaller than the sixth threshold, the remaining pixel offset values may be deemed sufficiently accurate, and the updated queue may be determined to be the filtered pixel offset value queue.
Correspondingly, if the difference between the maximum and minimum pixel offset values in the updated queue is greater than or equal to the sixth threshold, an inaccurate second abnormal pixel offset value may still be present. The average of the updated queue is then recomputed, the value differing most from it is removed to produce a further-updated queue, and the maximum-minimum check is repeated. This cycle continues until the difference between the maximum and minimum values in the latest queue is smaller than the sixth threshold, at which point the filtered pixel offset value queue is generated from the remaining pixel offset values.
For example, if the pixel offset value queue corresponding to the target vehicle is [131, 130, 143, 139, 131, 144, 145, 64, 137, 156, 137, 135, 136, 60, 74], then after cyclically removing second abnormal pixel offset values in the manner above, the values 64, 156, 60 and 74 are determined to be second abnormal pixel offset values and removed, giving the filtered pixel offset value queue [131, 130, 143, 139, 131, 144, 145, 137, 137, 135, 136].
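The screening loop of step 305 can be sketched as below; the sixth threshold is an assumed value, chosen here so that the sketch reproduces the removals in the worked example:

```python
SIXTH_THRESHOLD = 20  # assumed placeholder

def filter_offsets(offsets):
    # Repeatedly drop the offset farthest from the current mean until the
    # spread of the remaining values is within the sixth threshold.
    values = list(offsets)
    while len(values) > 1 and max(values) - min(values) >= SIXTH_THRESHOLD:
        mean = sum(values) / len(values)
        values.remove(max(values, key=lambda v: abs(v - mean)))
    return values

queue = [131, 130, 143, 139, 131, 144, 145, 64, 137, 156, 137, 135, 136, 60, 74]
print(filter_offsets(queue))  # drops 60, 64, 74 and 156, as in the example
```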
Step 306, determining the average value of the pixel offset values in the filtered pixel offset value queue as the average displacement value corresponding to the target vehicle.
In the embodiment of the application, the average value of each pixel offset value in the filtered pixel offset value queue is determined as the average displacement value corresponding to the target vehicle, and each frame of target image is spliced according to the average displacement value to generate the side map corresponding to the target vehicle.
Continuing the above example, with the filtered pixel offset value queue generated as above, the average of its pixel offset values, 136, may be determined as the average displacement value corresponding to the target vehicle.
Step 307, splicing processing is performed on each frame of target image according to the average displacement value and the preset lane configuration file to generate a side view corresponding to the target vehicle.
In this embodiment of the application, the average displacement value corresponding to the target vehicle may be used as the pixel offset value of the target vehicle between each pair of adjacent target images, and the frames of target images may be stitched in sequence to generate the side view corresponding to the target vehicle.
As a possible implementation manner, a cutout area corresponding to each lane in the current road can be preset, and the cutout areas contained in the target images stitched according to the average displacement value to generate the side view corresponding to the target vehicle, further improving its accuracy. In a possible implementation manner of the embodiment of the present application, the preset lane configuration file includes a preset splicing range corresponding to each lane in the current road, where the current road is the road where the image acquisition device is currently located, and the preset splicing range is the image range corresponding to a lane's splicing area in the images acquired by the image acquisition device; accordingly, the step 307 may include:
determining a preset splicing range corresponding to the target lane according to the target lane and the preset splicing ranges corresponding to the lanes;
determining an image area to be spliced corresponding to each frame of target image according to a preset splicing range corresponding to the target lane;
and splicing the image areas to be spliced corresponding to each frame of target image according to the average displacement value to generate a side view corresponding to the target vehicle.
The preset splicing range corresponding to the lane can be a corresponding cutout area of the lane in an image acquired by image acquisition equipment; and for one lane, the position and the size of the preset splicing range corresponding to the lane in each frame image are fixed.
In the embodiment of the application, in order to obtain a complete side view of the vehicle and to remove the interference that vehicles in other lanes would cause to the stitching, a fixed cutout area can be set for each lane; when the target images are stitched, only the cutout areas corresponding to the target lane are stitched, so that an accurate and complete side view corresponding to the target vehicle is generated.
As a possible implementation manner, the preset splicing range corresponding to each lane may be preset; the preset splicing range corresponding to the target lane is determined, the image region of each frame of target image lying within that range is determined as the frame's image area to be spliced, the average displacement value corresponding to the target vehicle is taken as the pixel offset value between adjacent frames, and the image areas to be spliced are stitched to generate the side view corresponding to the target vehicle.
For example, assume the average displacement value corresponding to the target vehicle is 136, the target image queue contains 10 frames of target images, and the target vehicle region moves gradually from left to right from the 1st frame to the 10th frame. The leftmost image region, 136 pixels wide, of the image area to be spliced in the 2nd frame may be stitched to the left side of the image area to be spliced in the 1st frame to generate the 1st partial side view; the leftmost 136-pixel-wide region of the image area to be spliced in the 3rd frame is then stitched to the left side of the 1st partial side view to generate the 2nd partial side view; and so on, until the leftmost 136-pixel-wide region of the image area to be spliced in the 10th frame has been stitched, at which point the side view corresponding to the target vehicle is complete.
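Putting step 307 together, a sketch of the final stitching might look as follows; the splicing-range bounds stand in for values read from the (assumed) lane configuration file, and all names are illustrative:

```python
import numpy as np

def build_side_view_fixed_shift(target_images, avg_shift, splice_x0, splice_x1):
    # Crop every frame to the target lane's preset splicing range, then
    # prepend the leftmost `avg_shift`-pixel strip of each later frame,
    # as in the 10-frame worked example above.
    regions = [img[:, splice_x0:splice_x1] for img in target_images]
    side_view = regions[0]
    for region in regions[1:]:
        side_view = np.hstack([region[:, :avg_shift], side_view])
    return side_view
```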
With the method for generating a vehicle side view provided above, each frame of target image of the target vehicle is cropped according to the target lane and a preset lane configuration file to determine the target lane image it contains; moving object detection determines the target vehicle region in each frame; the pixel offset value queue corresponding to the target vehicle is determined from the pixel position differences between target vehicle regions in adjacent target lane images; abnormal pixel offset values are cyclically removed according to the queue's average value; the average displacement value of the target vehicle is determined from the average of the filtered queue; and the image areas to be spliced in each frame are stitched according to the average displacement value and the preset lane configuration file to generate the side view corresponding to the target vehicle. Abnormal offsets that differ greatly from the majority are thus removed on the basis of the numerical differences within the queue, and only the cutout areas corresponding to the target lane are stitched, according to the target vehicle's average displacement value. This improves the accuracy both of the average displacement value and of the selection of the spliced areas, which in turn improves the accuracy of the generated vehicle side view and of the vehicle features extracted from it.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by functions and internal logic of the process, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 4 shows a block diagram of a vehicle side view generation device provided in an embodiment of the present application, corresponding to the vehicle side view generation method described in the above embodiment, and only shows portions related to the embodiment of the present application for convenience of explanation.
Referring to fig. 4, the apparatus 40 includes:
the first obtaining module 41 is configured to obtain a target image queue corresponding to a target vehicle, where the target image queue includes multiple frames of target images that are collected by an image collecting device in a fixed position when the target vehicle passes through;
the first cutting module 42 is configured to cut each frame of target image according to a target lane where the target vehicle is located and a preset lane configuration file, so as to determine a target lane image included in each frame of target image;
a first determining module 43, configured to perform moving object detection on each frame of target lane image to determine a target vehicle region included in each frame of target lane image;
a second determining module 44, configured to determine a pixel offset value queue corresponding to the target vehicle according to a pixel position difference between target vehicle regions included in each adjacent target lane image;
and the splicing module 45 is configured to splice each frame of target image according to the pixel offset value queue and a preset lane configuration file, so as to generate a side view corresponding to the target vehicle.
In practical use, the vehicle side map generation device provided in the embodiment of the present application may be configured in any terminal device to execute the vehicle side map generation method.
With the device for generating a vehicle side view provided above, each frame of target image of the target vehicle is cropped according to the target lane where the target vehicle is located and a preset lane configuration file to determine the target lane image contained in each frame; moving object detection is performed on each frame of target lane image to determine the target vehicle region it contains; the pixel offset value queue corresponding to the target vehicle is determined from the pixel position differences between the target vehicle regions in adjacent target lane images; and each frame of target image is stitched according to the pixel offset value queue and the preset lane configuration file to generate the side view corresponding to the target vehicle. The target images collected by the image acquisition device are thus screened and stitched according to the pixel offset values of the target vehicle between adjacent frames to generate a complete side view, improving the accuracy of side view generation and of vehicle feature extraction from the vehicle side view.
In one possible implementation form of the present application, the number of the target lane images is N, where N is a positive integer; accordingly, the second determining module 44 includes:
the first determining unit is used for determining vehicle characteristic positions in target vehicle areas contained in the ith frame and the (i + 1) th frame of target lane images according to a preset characteristic position extraction rule, wherein i is a positive integer which is greater than or equal to 1 and less than N;
a second determination unit for determining a pixel offset value of the target vehicle between the i-th frame and the i + 1-th frame of the target lane image according to a pixel position difference between the vehicle feature positions contained in the i-th frame and the i + 1-th frame of the target lane image;
and the writing unit is used for writing the pixel offset value of the target vehicle between the ith frame and the (i + 1) th frame of target lane image into the pixel offset value queue.
Further, in another possible implementation form of the present application, the second determining module 44 further includes:
a third determination unit for determining that a pixel offset value of the target vehicle between the i-th frame and the i + 1-th frame of the target lane image is greater than a first threshold value.
Further, in another possible implementation form of the present application, the second determining module 44 further includes:
a first obtaining unit, configured to obtain a first preset number of first reference pixel offset values from a head of a pixel offset value queue when a queue length of the pixel offset value queue is greater than a second threshold;
a fourth determining unit for determining an average value of the first reference pixel offset values;
a removing unit, configured to remove from the pixel offset value queue each first abnormal pixel offset value whose difference from the average value of the first reference pixel offset values is greater than a third threshold.
Further, in another possible implementation form of the present application, the second determining module 44 further includes:
a fifth determining unit for determining a current queue length of the pixel offset value queue;
the second obtaining unit is used for obtaining a second preset number of second reference pixel deviation values from the tail part of the pixel deviation value queue when the length of the current queue is larger than a fourth threshold value;
a sixth determining unit, configured to determine that the pixel offset value queue is completely constructed when a difference between a maximum value and a minimum value of the second reference pixel offset value is smaller than a fifth threshold value.
Further, in another possible implementation form of the present application, the preset feature position extraction rule includes a feature position initial position determination rule and a size of a feature position template; correspondingly, the first determining unit is specifically configured to:
determining the initial position of the vehicle characteristic position in each target vehicle area according to the characteristic position initial position determination rule;
and determining the vehicle characteristic position in each target vehicle area according to the starting position of the vehicle characteristic position in each target vehicle area and the size of the characteristic position template.
Further, in another possible implementation form of the present application, the splicing module 45 includes:
a seventh determining unit, configured to determine an average value of each pixel deviation value in the pixel deviation value queue, and cyclically remove each second abnormal pixel deviation value having a largest difference value with the average value until the difference value between each remaining pixel deviation value in the pixel deviation value queue is smaller than a sixth threshold value, and then generate a filtered pixel deviation value queue using each remaining pixel deviation value;
an eighth determining unit, configured to determine an average value of each pixel offset value in the filtered pixel offset value queue as an average displacement value corresponding to the target vehicle;
and the splicing unit is used for splicing each frame of target image according to the average displacement value and a preset lane configuration file so as to generate a side view corresponding to the target vehicle.
Further, in another possible implementation form of the present application, the preset lane configuration file includes a preset splicing range corresponding to each lane in a current road, where the current road is a road where the image acquisition device is currently located, and the preset splicing range is an image range corresponding to a splicing area of each lane in an image acquired by the image acquisition device; correspondingly, the splicing unit is specifically configured to:
determining a preset splicing range corresponding to the target lane according to the target lane and the preset splicing ranges corresponding to the lanes;
determining an image area to be spliced corresponding to each frame of target image according to a preset splicing range corresponding to the target lane;
and splicing image areas to be spliced corresponding to each frame of target image according to the average displacement value to generate a side view corresponding to the target vehicle.
Further, in another possible implementation form of the present application, the preset lane configuration file includes preset image ranges corresponding to lanes in a current road, where the current road is a road where the image capture device is currently located, and the preset image ranges are image ranges corresponding to the lanes in an image captured by the image capture device; accordingly, the first cropping module 42 includes:
the ninth determining unit is used for determining the corresponding image range of the target lane in each frame of target image according to the target lane and the preset image range corresponding to each lane;
and the tenth determining unit is used for performing cutting processing on each frame of target image according to the corresponding image range of the target lane in each frame of target image so as to determine the target lane image included in each frame of target image.
Further, in another possible implementation form of the present application, the apparatus 40 further includes:
the third determining module is used for determining lane line cutting amplitude according to the number of pixels contained in a unit area of an image acquired by the image acquisition equipment;
and the second cutting module is used for carrying out lane line cutting processing on each frame of target lane image according to the lane line cutting amplitude so as to remove the lane lines contained in each frame of target lane image.
Further, in another possible implementation form of the present application, the first obtaining module 41 includes:
the third acquisition unit is used for acquiring the vehicle entering time when the target vehicle enters the monitoring range corresponding to the image acquisition equipment and the vehicle leaving time when the target vehicle leaves the monitoring range;
and the eleventh determining unit is used for determining the images which are acquired by the image acquisition equipment and have the acquisition time between the vehicle entering time and the vehicle leaving time as target images and adding the target images into the target image queue.
Further, in another possible implementation form of the present application, the first obtaining module 41 further includes:
and the twelfth determining unit is used for determining a third preset number of images before the vehicle entering moment at the collecting moment and a fourth preset number of images after the vehicle leaving moment at the collecting moment as target images and adding the target images into the target image queue.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In order to implement the above embodiment, the present application further provides a terminal device.
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
As shown in fig. 5, the terminal device 200 includes:
a memory 210 and at least one processor 220, a bus 230 connecting the different components (including the memory 210 and the processor 220), the memory 210 storing a computer program, which when executed by the processor 220, implements the method for generating a vehicle profile according to the embodiments of the present application.
Bus 230 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Terminal device 200 typically includes a variety of electronic device readable media. Such media can be any available media that is accessible by terminal device 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 240 and/or cache memory 250. The terminal device 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 230 by one or more data media interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 280 having a set (at least one) of program modules 270 may be stored, for example, in the memory 210, such program modules 270 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which or some combination of which may comprise an implementation of a network environment. The program modules 270 generally perform the functions and/or methodologies of the embodiments described herein.
Terminal device 200 may also communicate with one or more external devices 290 (e.g., a keyboard, a pointing device, a display 291, etc.), with one or more devices that enable a user to interact with the terminal device 200, and/or with any device (e.g., a network card, a modem, etc.) that enables the terminal device 200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 292. Also, the terminal device 200 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 293. As shown, the network adapter 293 communicates with the other modules of the terminal device 200 via the bus 230. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with terminal device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 220 executes various functional applications and data processing by executing programs stored in the memory 210.
It should be noted that, for the implementation process and the technical principle of the terminal device of this embodiment, reference is made to the foregoing explanation of the method for generating the vehicle side view in the embodiment of the present application, and details are not repeated here.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a terminal device, enables the terminal device to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the embodiments described above may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into modules or units is only one logical division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (15)

1. A method of generating a vehicle side view, comprising:
acquiring a target image queue corresponding to a target vehicle, wherein the target image queue comprises a plurality of frames of target images acquired by an image acquisition device at a fixed position when the target vehicle passes by;
cropping each frame of the target image according to a target lane in which the target vehicle is located and a preset lane configuration file, to determine a target lane image included in each frame of the target image;
performing moving-object detection on each frame of the target lane image to determine a target vehicle region contained in each frame of the target lane image;
determining a pixel offset value queue corresponding to the target vehicle according to the pixel position differences between the target vehicle regions contained in adjacent frames of the target lane image;
and splicing each frame of the target image according to the pixel offset value queue and the preset lane configuration file to generate a side view corresponding to the target vehicle.
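For illustration only (not part of the claims), a minimal Python sketch of the claimed flow follows. The OpenCV background-subtraction step, the lane-config keys "image_range" and "splice_range", and all helper names are assumptions standing in for the moving-object detection and configuration file the claim leaves open.

```python
# Minimal illustrative sketch of the claimed pipeline (assumptions: OpenCV
# and numpy are available; config keys and helpers are placeholders).
import cv2
import numpy as np

def crop_to_lane(frame, lane_cfg):
    # Crop the full frame to the target lane's preset image range.
    x0, x1 = lane_cfg["image_range"]
    return frame[:, x0:x1]

def detect_vehicle_region(lane_img, bg):
    # Moving-object detection via background subtraction; returns the
    # bounding box (x, y, w, h) of the largest moving blob, or None.
    mask = cv2.medianBlur(bg.apply(lane_img), 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return cv2.boundingRect(max(contours, key=cv2.contourArea)) if contours else None

def generate_side_view(frames, lane_cfg):
    bg = cv2.createBackgroundSubtractorMOG2()
    lane_imgs = [crop_to_lane(f, lane_cfg) for f in frames]
    boxes = [detect_vehicle_region(img, bg) for img in lane_imgs]
    # Horizontal pixel offsets of the vehicle between adjacent frames.
    offsets = [b2[0] - b1[0] for b1, b2 in zip(boxes, boxes[1:])
               if b1 is not None and b2 is not None]
    shift = int(round(abs(np.mean(offsets))))
    # Splice a strip of width `shift` from each frame at the lane's
    # preset splicing range.
    x0, x1 = lane_cfg["splice_range"]
    strips = [img[:, x0:min(x0 + shift, x1)] for img in lane_imgs]
    return cv2.hconcat(strips)
```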
2. The method of claim 1, wherein the number of frames of the target lane image is N, N being a positive integer, and wherein determining the pixel offset value queue corresponding to the target vehicle according to the pixel position differences between the target vehicle regions contained in adjacent frames of the target lane image comprises:
determining, according to a preset feature position extraction rule, the vehicle feature positions in the target vehicle regions contained in the i-th frame and the (i+1)-th frame of the target lane image, wherein i is a positive integer greater than or equal to 1 and less than N;
determining a pixel offset value of the target vehicle between the i-th frame and the (i+1)-th frame of the target lane image according to the pixel position difference between the vehicle feature positions contained in the i-th frame and the (i+1)-th frame of the target lane image;
and writing the pixel offset value of the target vehicle between the i-th frame and the (i+1)-th frame of the target lane image into the pixel offset value queue.
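For illustration, one way to measure this per-frame offset is sketched below, assuming the feature position is re-located with OpenCV template matching; the function name and choice of matcher are assumptions, since the claim does not mandate a specific matching method.

```python
import cv2

def pixel_offset(prev_img, next_img, feat_box):
    # feat_box = (x, y, w, h): the vehicle feature position picked in the
    # previous frame by the preset extraction rule (see claim 6).
    x, y, w, h = feat_box
    template = prev_img[y:y + h, x:x + w]
    # Find where the same feature patch lands in the next frame.
    scores = cv2.matchTemplate(next_img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (best_x, _) = cv2.minMaxLoc(scores)
    return best_x - x  # horizontal pixel offset between the two frames
```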
3. The method of claim 2, wherein, before writing the pixel offset value of the target vehicle between the i-th frame and the (i+1)-th frame of the target lane image into the pixel offset value queue, the method further comprises:
determining that the pixel offset value of the target vehicle between the i-th frame and the (i+1)-th frame of the target lane image is greater than a first threshold.
4. The method of claim 2, wherein, after writing the pixel offset value of the target vehicle between the i-th frame and the (i+1)-th frame of the target lane image into the pixel offset value queue, the method further comprises:
when the queue length of the pixel offset value queue is greater than a second threshold, acquiring a first preset number of first reference pixel offset values from the head of the pixel offset value queue;
determining an average of the first reference pixel offset values;
and removing, from the pixel offset value queue, any first abnormal pixel offset value whose difference from the average of the first reference pixel offset values is greater than a third threshold.
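For illustration, a minimal sketch of this head-referenced outlier filter, assuming the queue holds plain numeric offsets; the parameter names mirror the claim's thresholds and are otherwise placeholders.

```python
from collections import deque

def drop_head_outliers(offsets, second_thr, first_n, third_thr):
    # Once the queue grows past `second_thr`, average the first `first_n`
    # offsets (the first reference pixel offset values) and drop any value
    # deviating from that average by more than `third_thr`.
    if len(offsets) <= second_thr:
        return offsets
    ref = sum(list(offsets)[:first_n]) / first_n
    return deque(v for v in offsets if abs(v - ref) <= third_thr)
```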
5. The method of claim 2, wherein, after writing the pixel offset value of the target vehicle between the i-th frame and the (i+1)-th frame of the target lane image into the pixel offset value queue, the method further comprises:
determining a current queue length of the pixel offset value queue;
when the current queue length is greater than a fourth threshold, acquiring a second preset number of second reference pixel offset values from the tail of the pixel offset value queue;
and determining that the pixel offset value queue is complete when the difference between the maximum value and the minimum value of the second reference pixel offset values is less than a fifth threshold.
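For illustration, a sketch of the completeness test under the same assumptions: the queue is numeric, and `tail_n` stands in for the second preset number.

```python
def queue_complete(offsets, fourth_thr, tail_n, fifth_thr):
    # The queue is deemed complete once it is long enough and the last
    # `tail_n` offsets (the second reference pixel offset values) have
    # stabilized, i.e. their max-min spread is below `fifth_thr`.
    if len(offsets) <= fourth_thr:
        return False
    tail = list(offsets)[-tail_n:]
    return max(tail) - min(tail) < fifth_thr
```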
6. The method of claim 2, wherein the preset feature position extraction rule includes a feature position start-position determination rule and a feature position template size, and wherein determining the vehicle feature positions in the target vehicle regions contained in the i-th frame and the (i+1)-th frame of the target lane image according to the preset feature position extraction rule comprises:
determining the start positions of the vehicle feature positions in the target vehicle regions of the i-th frame and the (i+1)-th frame according to the feature position start-position determination rule;
and determining the vehicle feature positions in the target vehicle regions of the i-th frame and the (i+1)-th frame according to those start positions and the feature position template size.
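For illustration, one possible start-position rule is sketched below, under the assumption that the rule anchors the feature patch to the leading edge of the detected vehicle region; the claim itself leaves the rule's content open, so this anchoring is an invented example.

```python
def feature_box(vehicle_box, template_size):
    # Assumed start-position rule: place the feature patch at the leading
    # edge of the vehicle region, vertically centered, then size it by
    # the feature position template.
    x, y, w, h = vehicle_box
    tw, th = template_size
    return (x, y + max(0, (h - th) // 2), tw, th)
```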
7. The method of claim 1, wherein splicing each frame of the target image according to the pixel offset value queue and the preset lane configuration file to generate the side view corresponding to the target vehicle comprises:
determining an average of all pixel offset values in the pixel offset value queue, cyclically removing the second abnormal pixel offset value that differs most from the average until the differences between all remaining pixel offset values in the queue are less than a sixth threshold, and generating a filtered pixel offset value queue from the remaining pixel offset values;
determining the average of the pixel offset values in the filtered pixel offset value queue as the average displacement value corresponding to the target vehicle;
and splicing each frame of the target image according to the average displacement value and the preset lane configuration file to generate the side view corresponding to the target vehicle.
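For illustration, a sketch of the cyclic filtering and averaging, assuming numeric offsets; interpreting "differences between all the remaining values" as the max-min spread is an assumption.

```python
import numpy as np

def average_displacement(offsets, sixth_thr):
    # Cyclically discard the offset farthest from the current mean until
    # all remaining offsets differ from one another by less than
    # `sixth_thr`; the mean of the survivors is the average displacement.
    vals = list(offsets)
    while len(vals) > 1 and max(vals) - min(vals) >= sixth_thr:
        mean = np.mean(vals)
        vals.remove(max(vals, key=lambda v: abs(v - mean)))
    return float(np.mean(vals))
```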
8. The method of claim 7, wherein the preset lane configuration file includes a preset splicing range corresponding to each lane in a current road, the current road being the road where the image acquisition device is currently located, and the preset splicing range being the image range corresponding to the splicing area of each lane in images captured by the image acquisition device, and wherein splicing each frame of the target image according to the average displacement value and the preset lane configuration file to generate the side view corresponding to the target vehicle comprises:
determining the preset splicing range corresponding to the target lane according to the target lane and the preset splicing ranges corresponding to the lanes;
determining the image area to be spliced in each frame of the target image according to the preset splicing range corresponding to the target lane;
and splicing the image areas to be spliced of each frame of the target image according to the average displacement value to generate the side view corresponding to the target vehicle.
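For illustration, a sketch of the splicing step, under the assumption that the splicing area is a vertical strip and the vehicle moves horizontally in the image; the strip width is tied to the average displacement value.

```python
import cv2

def splice_frames(frames, splice_range, avg_shift):
    # Take, from every frame, the strip inside the lane's preset splicing
    # range whose width equals the average displacement, then concatenate
    # the strips horizontally into the side view.
    x0, x1 = splice_range
    width = min(int(round(abs(avg_shift))), x1 - x0)
    return cv2.hconcat([f[:, x0:x0 + width] for f in frames])
```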
9. The method of claim 1, wherein the preset lane configuration file includes a preset image range corresponding to each lane in a current road, the current road being the road where the image acquisition device is currently located, and the preset image range being the image range corresponding to each lane in images captured by the image acquisition device, and wherein cropping each frame of the target image according to the target lane in which the target vehicle is located and the preset lane configuration file to determine the target lane image included in each frame of the target image comprises:
determining the image range corresponding to the target lane in each frame of the target image according to the target lane and the preset image ranges corresponding to the lanes;
and cropping each frame of the target image according to the image range corresponding to the target lane in that frame, to determine the target lane image included in each frame of the target image.
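For illustration, a sketch of configuration-driven lane cropping; the file name "lanes.json" and its schema are invented for this example, as the claim does not fix a file format.

```python
import json

def crop_to_target_lane(frame, lane_id, cfg_path="lanes.json"):
    # Hypothetical lane configuration file, e.g.:
    # {"lane_3": {"image_range": [420, 860]}}
    with open(cfg_path) as f:
        cfg = json.load(f)
    x0, x1 = cfg[lane_id]["image_range"]
    return frame[:, x0:x1]
```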
10. The method of claim 1, wherein, after cropping each frame of the target image according to the target lane in which the target vehicle is located and the preset lane configuration file to determine the target lane image included in each frame of the target image, the method further comprises:
determining a lane line cropping width according to the number of pixels contained in a unit area of images acquired by the image acquisition device;
and performing lane line cropping on each frame of the target lane image according to the lane line cropping width, to remove the lane lines contained in each frame of the target lane image.
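For illustration, a sketch of the lane line removal, assuming the lane lines sit at the left and right edges of the cropped lane image and that the cropping width has already been derived from the pixel density as the claim describes.

```python
def remove_lane_lines(lane_img, crop_width):
    # Trim `crop_width` pixel columns from each side of the lane image,
    # discarding the edge regions assumed to contain the lane lines.
    return lane_img[:, crop_width:lane_img.shape[1] - crop_width]
```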
11. The method of any one of claims 1-10, wherein acquiring the target image queue corresponding to the target vehicle comprises:
acquiring the entry time at which the target vehicle enters the monitoring range corresponding to the image acquisition device and the exit time at which the target vehicle leaves the monitoring range;
and determining, as target images, the images acquired by the image acquisition device whose acquisition times fall between the entry time and the exit time, and adding the target images to the target image queue.
12. The method of claim 11, wherein, after determining, as target images, the images acquired by the image acquisition device whose acquisition times fall between the entry time and the exit time and adding the target images to the target image queue, the method further comprises:
determining, as additional target images, a third preset number of images whose acquisition times precede the entry time and a fourth preset number of images whose acquisition times follow the exit time, and adding them to the target image queue.
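For illustration, a combined sketch of claims 11 and 12, assuming timestamped frames; the default padding counts are placeholders for the third and fourth preset numbers.

```python
def build_target_queue(stamped_frames, t_in, t_out, pre_n=3, post_n=3):
    # stamped_frames: list of (timestamp, frame) in acquisition order.
    # Keep frames captured between entry and exit, plus `pre_n` frames
    # before entry and `post_n` after exit (placeholder values).
    idx = [i for i, (t, _) in enumerate(stamped_frames) if t_in <= t <= t_out]
    if not idx:
        return []
    lo = max(0, idx[0] - pre_n)
    hi = min(len(stamped_frames), idx[-1] + 1 + post_n)
    return [frame for _, frame in stamped_frames[lo:hi]]
```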
13. An apparatus for generating a vehicle side view, comprising:
a first acquisition module, configured to acquire a target image queue corresponding to a target vehicle, wherein the target image queue comprises a plurality of frames of target images acquired by an image acquisition device at a fixed position when the target vehicle passes by;
a first cropping module, configured to crop each frame of the target image according to a target lane in which the target vehicle is located and a preset lane configuration file, to determine a target lane image included in each frame of the target image;
a first determining module, configured to perform moving-object detection on each frame of the target lane image to determine a target vehicle region contained in each frame of the target lane image;
a second determining module, configured to determine a pixel offset value queue corresponding to the target vehicle according to the pixel position differences between the target vehicle regions contained in adjacent frames of the target lane image;
and a splicing module, configured to splice each frame of the target image according to the pixel offset value queue and the preset lane configuration file to generate a side view corresponding to the target vehicle.
14. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1-12 when executing the computer program.
15. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1-12.
CN202211562710.4A 2022-12-07 2022-12-07 Vehicle side view generating method and device and terminal equipment Pending CN115965636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211562710.4A CN115965636A (en) 2022-12-07 2022-12-07 Vehicle side view generating method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211562710.4A CN115965636A (en) 2022-12-07 2022-12-07 Vehicle side view generating method and device and terminal equipment

Publications (1)

Publication Number Publication Date
CN115965636A true CN115965636A (en) 2023-04-14

Family

ID=87359274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211562710.4A Pending CN115965636A (en) 2022-12-07 2022-12-07 Vehicle side view generating method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN115965636A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079219A (en) * 2023-10-08 2023-11-17 山东车拖车网络科技有限公司 Vehicle running condition monitoring method and device applied to trailer service
CN117079219B (en) * 2023-10-08 2024-01-09 山东车拖车网络科技有限公司 Vehicle running condition monitoring method and device applied to trailer service

Similar Documents

Publication Publication Date Title
CN106611512B (en) Method, device and system for processing starting of front vehicle
CN112677977B (en) Driving state identification method and device, electronic equipment and steering lamp control method
CN110008891B (en) Pedestrian detection positioning method and device, vehicle-mounted computing equipment and storage medium
CN115331191B (en) Vehicle type recognition method, device, system and storage medium
CN111976601B (en) Automatic parking method, device, equipment and storage medium
CN111915883A (en) Road traffic condition detection method based on vehicle-mounted camera shooting
CN111368612A (en) Overman detection system, personnel detection method and electronic equipment
CN113255444A (en) Training method of image recognition model, image recognition method and device
CN112434657A (en) Drift carrier detection method, device, program, and computer-readable medium
CN115965636A (en) Vehicle side view generating method and device and terminal equipment
CN112455465B (en) Driving environment sensing method and device, electronic equipment and storage medium
CN114841910A (en) Vehicle-mounted lens shielding identification method and device
CN114264310A (en) Positioning and navigation method, device, electronic equipment and computer storage medium
CN113689493A (en) Lens attachment detection method, lens attachment detection device, electronic equipment and storage medium
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
CN116543047A (en) Position estimation self-diagnosis method, device and storage medium for multi-camera system
CN107452230B (en) Obstacle detection method and device, terminal equipment and storage medium
CN116152753A (en) Vehicle information identification method and system, storage medium and electronic device
CN115761668A (en) Camera stain recognition method and device, vehicle and storage medium
CN110539748A (en) congestion car following system and terminal based on look around
CN115359438A (en) Vehicle jam detection method, system and device based on computer vision
CN111191603B (en) Method and device for identifying people in vehicle, terminal equipment and medium
JP5957182B2 (en) Road surface pattern recognition method and vehicle information recording apparatus
CN115762153A (en) Method and device for detecting backing up
CN114973157A (en) Vehicle separation method, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination