CN116109830B - Vehicle separation method and device based on machine vision and computer equipment - Google Patents

Vehicle separation method and device based on machine vision and computer equipment

Info

Publication number
CN116109830B
Authority
CN (China)
Prior art keywords
vehicle, image, tail, time information, blank
Legal status
Active
Application number
CN202310407550.4A
Other languages
Chinese (zh)
Other versions
CN116109830A (en)
Inventor
徐欢
陈海杰
汪庆
Current Assignee
Shenzhen Innoview Technology Co ltd
Original Assignee
Shenzhen Innoview Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Innoview Technology Co ltd
Priority to CN202310407550.4A
Publication of CN116109830A
Application granted
Publication of CN116109830B


Classifications

    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/422: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation, for representing the structure of the pattern or shape of an object
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/752: Contour matching
    • G06V 2201/08: Detecting or categorising vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle separation method, apparatus and computer device based on machine vision, comprising the following steps: acquiring a vehicle image; preprocessing the vehicle image and extracting features to obtain a vehicle characteristic value; inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle, and marking the boundaries of the head and the tail of the vehicle; outputting vehicle head arrival time information when the head of the vehicle is detected arriving at a defined vehicle separation line; outputting vehicle tail departure time information when the tail of the vehicle is detected leaving the defined vehicle separation line; and separating the vehicles in the vehicle image according to the vehicle head arrival time information and the vehicle tail departure time information. By obtaining the boundary between the head and the tail of the vehicle through image processing, the invention improves the accuracy of vehicle separation and avoids the misjudgment or inaccuracy caused by external interference.

Description

Vehicle separation method and device based on machine vision and computer equipment
Technical Field
The present invention relates to the field of image detection technologies, and in particular, to a vehicle separation method, apparatus, and computer device based on machine vision.
Background
The vehicle separation methods commonly used in the market include the ground-induction-coil method, the laser method and the infrared-grating method, all of which are easily disturbed by external factors and thus prone to misjudgment or inaccurate separation. For example: 1. the ground-induction-coil method requires a certain gap between front and rear vehicles in order to separate them; 2. the laser and infrared-grating methods can misjudge when interfered with by passing objects other than vehicles.
Disclosure of Invention
The main object of the invention is to provide a vehicle separation method, apparatus and computer device based on machine vision, aiming to overcome the misjudgment or inaccuracy caused by external interference during vehicle separation.
In order to achieve the above object, the present invention provides a vehicle separation method based on machine vision, comprising the following steps:
acquiring a vehicle image;
preprocessing the vehicle image, and extracting features from the preprocessed vehicle image to obtain a vehicle characteristic value;
inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle from the vehicle image, and marking the boundaries of the head and the tail of the vehicle;
outputting vehicle head arrival time information when the head of the vehicle is detected arriving at a defined vehicle separation line; outputting vehicle tail departure time information when the tail of the vehicle is detected leaving the defined vehicle separation line;
and separating the vehicles in the vehicle image according to the vehicle head arrival time information and the vehicle tail departure time information.
Further, the step of acquiring the vehicle image includes:
a background image of the vehicle passing by is acquired, and a side view image of the vehicle passing by the background is acquired.
Further, the step of preprocessing the vehicle image includes:
respectively carrying out graying treatment on the background image and the side view image;
performing differential processing on the background image and the side view image after graying processing by adopting a differential method, and removing the background in the side view image to obtain a differential image;
performing space domain smoothing and frequency domain smoothing on the differential image, and removing interference noise in the differential image to obtain a smooth image;
performing image segmentation on the smooth image, and converting the smooth image to obtain a binary image;
and carrying out median filtering on the binary image, removing noise points, and obtaining a filtered image as a preprocessed vehicle image.
Further, the step of extracting the features of the preprocessed vehicle image to obtain the vehicle feature value includes:
scanning the preprocessed vehicle image, extracting corresponding vehicle outline shape features, and calculating the vehicle outline size;
the step of inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle from the vehicle image, and marking the boundaries of the head and the tail of the vehicle comprises the following steps:
according to the vehicle outline shape characteristics and the vehicle outline size, matching the vehicle type in the vehicle image, extracting a vehicle head boundary and a vehicle tail boundary, and marking the vehicle head boundary and the vehicle tail boundary.
Further, before the step of outputting the vehicle head arrival time information when the head of the vehicle is detected arriving at the defined vehicle separation line, the method comprises the following steps:
detecting the motion trail of the vehicle by a motion detection technology, so as to detect whether the head and the tail of the vehicle reach the defined vehicle separation line.
Further, after the steps of outputting the vehicle head arrival time information when the head of the vehicle is detected arriving at the defined vehicle separation line, and outputting the vehicle tail departure time information when the tail of the vehicle is detected leaving the defined vehicle separation line, the method further comprises the following steps:
obtaining the time difference between the vehicle head arrival time information and the vehicle tail departure time information;
acquiring length information of the vehicle from the vehicle characteristic value;
calculating the speed of the vehicle according to the length information and the time difference, and judging whether the speed exceeds a threshold;
if the threshold is exceeded, intercepting a running image of the vehicle when the tail of the vehicle is detected leaving the defined vehicle separation line, wherein the running image is a side view of the vehicle;
generating a first blank layer, and merging the running image into the first blank layer, with the bottom of the running image aligned with the bottom of the first blank layer; the first blank layer extends beyond the left side of the running image to form a first blank area, beyond the right side of the running image to form a second blank area, and beyond the top of the running image to form a third blank area;
adding the vehicle head arrival time information in the second blank area, the vehicle tail departure time information in the first blank area, and the speed of the vehicle in the third blank area, to form a comprehensive driving image;
controlling a camera arranged a preset distance in front of the defined vehicle separation line to continuously acquire a plurality of whole vehicle images of the vehicle, and recording the time at which each whole vehicle image is acquired, wherein each whole vehicle image includes the vehicle body and the license plate number on the vehicle body;
adding the vehicle speed and the acquisition time to each corresponding whole vehicle image;
and packaging and storing the comprehensive driving image and each whole vehicle image in a designated folder, using the license plate number on the vehicle body as the name of the designated folder.
Further, the step of adding the vehicle speed and the acquisition time to each corresponding whole vehicle image includes:
acquiring the width and the height of the whole vehicle image;
generating a second blank layer, the width of which is consistent with the width of the whole vehicle image and the height of which is greater than the height of the whole vehicle image;
merging each whole vehicle image into a second blank layer to form a merged image, with the top, left side and right side of the whole vehicle image aligned with the top, left side and right side of the second blank layer; the second blank layer extends beyond the bottom of the whole vehicle image to form a fourth blank area;
and adding the vehicle speed and the acquisition time of the whole vehicle image to the fourth blank area.
The invention also provides a vehicle separation device based on machine vision, which comprises:
an acquisition unit configured to acquire a vehicle image;
a processing unit configured to preprocess the vehicle image and extract features from the preprocessed vehicle image to obtain a vehicle characteristic value;
an analysis unit configured to input the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identify the vehicle from the vehicle image, and mark the boundaries of the head and the tail of the vehicle;
a detection unit configured to output vehicle head arrival time information when the head of the vehicle is detected arriving at the defined vehicle separation line, and to output vehicle tail departure time information when the tail of the vehicle is detected leaving the defined vehicle separation line;
and a separation unit configured to separate the vehicles in the vehicle image according to the vehicle head arrival time information and the vehicle tail departure time information.
Further, the acquiring unit is specifically configured to:
a background image of the vehicle passing by is acquired, and a side view image of the vehicle passing by the background is acquired.
The invention also provides a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of any of the methods described above when the computer program is executed.
The invention provides a vehicle separation method, apparatus and computer device based on machine vision, comprising the following steps: acquiring a vehicle image; preprocessing the vehicle image, and extracting features from the preprocessed vehicle image to obtain a vehicle characteristic value; inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle from the vehicle image, and marking the boundaries of the head and the tail of the vehicle; outputting vehicle head arrival time information when the head of the vehicle is detected arriving at a defined vehicle separation line; outputting vehicle tail departure time information when the tail of the vehicle is detected leaving the defined vehicle separation line; and separating the vehicles in the vehicle image according to the vehicle head arrival time information and the vehicle tail departure time information. By obtaining the boundaries of the head and the tail of the vehicle through image processing, and separating vehicles according to when the head reaches and the tail leaves the defined vehicle separation line, the invention improves the accuracy of vehicle separation and avoids the misjudgment or inaccuracy caused by external interference.
Drawings
FIG. 1 is a schematic diagram of steps of a vehicle separation method based on machine vision in accordance with an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a machine vision-based vehicle separation apparatus in accordance with an embodiment of the present invention;
fig. 3 is a block diagram schematically illustrating a structure of a computer device according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, in one embodiment of the present invention, a vehicle separation method based on machine vision is provided, including the following steps:
step S1, acquiring a vehicle image;
step S2, preprocessing the vehicle image, and extracting features from the preprocessed vehicle image to obtain a vehicle characteristic value;
step S3, inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle from the vehicle image, and marking the boundaries of the head and the tail of the vehicle;
step S4, outputting vehicle head arrival time information when the head of the vehicle is detected arriving at a defined vehicle separation line; outputting vehicle tail departure time information when the tail of the vehicle is detected leaving the defined vehicle separation line;
step S5, separating the vehicles in the vehicle image according to the vehicle head arrival time information and the vehicle tail departure time information.
In this embodiment, a camera is preset to collect images of passing vehicles. As described in step S1 above, when a vehicle is detected driving into the camera's field of view, the camera captures it in real time to obtain the vehicle image. The vehicle image includes a background image and a side view of the vehicle: the background image is the background through which the vehicle passes, and the side view is the image of the vehicle as it passes in front of that background.
As described in step S2, the collected vehicle image is subjected to image preprocessing operations such as graying, image differencing and image segmentation to remove interference and noise, so that the image meets the requirements of feature extraction and the amount of data to be processed later is reduced. Effective vehicle characteristic values are then extracted from the preprocessed image.
As described in steps S3 to S5 above, the vehicle characteristic value is input into a vehicle characteristic analyzer for analysis, so as to identify the moving object in the vehicle image as a motor vehicle and mark the boundaries of its head and tail. In this embodiment, a vehicle separation line is also defined in advance: when the head of the vehicle is detected arriving at the defined vehicle separation line, the vehicle head arrival time information is output; when the tail of the vehicle leaves the defined vehicle separation line, the vehicle tail departure time information is output. The vehicles in the vehicle image are then separated according to the vehicle head arrival time information and the vehicle tail departure time information, achieving the purpose of vehicle separation. This machine-vision-based image processing approach improves the precision and accuracy of vehicle separation.
In one embodiment, the step S1 of acquiring a vehicle image includes:
acquiring a background image of the scene the vehicle passes through, and acquiring a side view image of the vehicle passing in front of the background. In this embodiment, the collected image includes a background image; since the camera mounting position is fixed, the field of view of the collected image is also fixed. The background image and the side view image are therefore acquired together, which facilitates the subsequent differential identification of the vehicle.
In an embodiment, the step of preprocessing the vehicle image includes:
respectively carrying out graying processing on the background image and the side view image; graying effectively reduces the amount of data in the background image and the side view image, which in turn reduces the computation required in the subsequent feature extraction.
performing differential processing on the grayed background image and side view image by a difference method, and removing the background in the side view image to obtain a differential image; since the camera mounting position and detection range are fixed, the background changes little while the vehicle is in motion, so the background-foreground difference method removes the redundant background and leaves the vehicle as the target.
performing spatial domain smoothing and frequency domain smoothing on the differential image to obtain a smooth image; these two smoothing processes effectively remove the interference noise in the differential image.
performing image segmentation on the smooth image and converting it into a binary image; the smooth image still contains a large amount of gray information from which features are not easily extracted, so it is segmented and converted into a binary image, making the target clearer and feature extraction easier.
and carrying out median filtering on the binary image to remove noise points, the filtered image being the preprocessed vehicle image. After segmentation the gray image becomes a binary image, and median filtering this binary image removes small noise points. If relatively large white areas, black gaps or holes remain, these interference factors need to be further removed.
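The preprocessing chain above (graying difference, thresholding into a binary image, median filtering) can be sketched in a few lines. The following is a minimal illustration using plain Python lists in place of real image buffers; all function names and the sample values are our own, not part of the claimed method.

```python
def grayscale_diff(background, frame):
    """Absolute per-pixel difference between the grayed background and frame."""
    return [[abs(f - b) for f, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame, background)]

def binarize(image, t):
    """Image segmentation by thresholding: 1 = foreground (vehicle), 0 = background."""
    return [[1 if px > t else 0 for px in row] for row in image]

def median_filter3(image):
    """3x3 median filter on a binary image; border pixels are kept as-is."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 neighbourhood values
    return out

# A 5x5 background and a frame containing a 3x3 "vehicle" blob.
background = [[10] * 5 for _ in range(5)]
vehicle_frame = [row[:] for row in background]
for y in range(1, 4):
    for x in range(1, 4):
        vehicle_frame[y][x] = 200

binary = binarize(grayscale_diff(background, vehicle_frame), 50)
clean = median_filter3(binary)

# A frame containing only a single-pixel noise spot: median filtering removes it.
noise_frame = [row[:] for row in background]
noise_frame[2][2] = 200
noisy = binarize(grayscale_diff(background, noise_frame), 50)
denoised = median_filter3(noisy)
```

A production system would use an image-processing library for these operations; the sketch only makes the order of the steps concrete.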
In an embodiment, the step of extracting features from the preprocessed vehicle image to obtain the vehicle characteristic value includes:
scanning the preprocessed vehicle image, extracting the corresponding vehicle outline shape features, and calculating the vehicle outline size; the preprocessing yields a binary image, which is scanned to extract the outline shape features of the vehicle and to calculate its outline size, so that the vehicle type can be determined from the outline sizes and shapes of the vehicle types defined in national standards.
The step of inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle from the vehicle image, and marking the boundaries of the head and the tail of the vehicle includes:
matching the vehicle type in the vehicle image according to the vehicle outline shape features and the vehicle outline size, extracting the vehicle head boundary and the vehicle tail boundary, and marking them.
In an embodiment, before the step of outputting the vehicle head arrival time information when the head of the vehicle is detected arriving at the defined vehicle separation line, the method includes:
detecting the motion trail of the vehicle by a motion detection technology, so as to detect whether the head and the tail of the vehicle reach the defined vehicle separation line.
In another embodiment, after step S4 of outputting the vehicle head arrival time information when the head of the vehicle is detected arriving at the defined vehicle separation line, and outputting the vehicle tail departure time information when the tail of the vehicle is detected leaving the defined vehicle separation line, the method further includes:
obtaining the time difference between the vehicle head arrival time information and the vehicle tail departure time information;
acquiring length information of the vehicle from the vehicle characteristic value;
calculating the speed of the vehicle according to the length information and the time difference, and judging whether the speed exceeds a threshold. In this embodiment, the speed of the vehicle can be calculated directly from the vehicle length and the time difference between the head arrival time and the tail departure time, so as to determine whether the vehicle is speeding: for example, vehicle speed = vehicle length / time difference. If the speed does not exceed the threshold, no processing is needed.
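The speed formula stated above (vehicle speed = vehicle length / time difference) can be written directly; the helper names below are illustrative only.

```python
def vehicle_speed(vehicle_length, head_arrival, tail_departure):
    """Speed per the formula in the text: the vehicle length divided by the
    time the vehicle takes to pass the separation line."""
    dt = tail_departure - head_arrival
    if dt <= 0:
        raise ValueError("tail must leave after the head arrives")
    return vehicle_length / dt

def is_overspeed(speed, threshold):
    """True when the computed speed exceeds the configured threshold."""
    return speed > threshold
```

For example, a 12 m vehicle whose body takes 1.5 s to pass the line is travelling at 8 m/s.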
If the threshold is exceeded, a running image of the vehicle is intercepted when the tail of the vehicle is detected leaving the defined vehicle separation line, the running image being a side view of the vehicle; in order to retain proof of the overspeed, the running image must be captured immediately when the tail of the vehicle leaves the defined vehicle separation line.
Generating a first blank layer, and merging the running image into the first blank layer, with the bottom of the running image aligned with the bottom of the first blank layer; the first blank layer extends beyond the left side of the running image to form a first blank area, beyond the right side to form a second blank area, and beyond the top to form a third blank area. In this embodiment, the speed information, the head arrival time information and the tail departure time information must be added to the running image without blocking it; the running image is therefore processed as above, avoiding the loss of clarity that would result if the image were blocked.
Adding the head arrival time information in the second blank area, adding the tail departure time information in the first blank area, and adding the speed of the vehicle in the third blank area to form a comprehensive driving image;
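The first-blank-layer composition can be sketched as placing the running image bottom-aligned on a larger canvas, with the margins corresponding to the first (left), second (right) and third (top) blank areas. The padding parameters and fill value below are illustrative assumptions.

```python
def compose_annotated(image, pad_left, pad_right, pad_top, fill=255):
    """Merge `image` bottom-aligned onto a larger blank layer, leaving a
    left margin (first blank area), a right margin (second blank area)
    and a top margin (third blank area) for the annotation text."""
    h, w = len(image), len(image[0])
    canvas = [[fill] * (w + pad_left + pad_right) for _ in range(h + pad_top)]
    for y in range(h):
        for x in range(w):
            canvas[pad_top + y][pad_left + x] = image[y][x]
    return canvas

# Tiny 2x2 "running image" with one-pixel margins on three sides.
layer = compose_annotated([[1, 2], [3, 4]], pad_left=1, pad_right=1, pad_top=1)
```

The time and speed text would then be drawn into the blank margins rather than over the vehicle itself.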
Controlling a camera arranged a preset distance in front of the defined vehicle separation line to continuously acquire a plurality of whole vehicle images of the vehicle, and recording the time at which each whole vehicle image is acquired; each whole vehicle image includes the vehicle body and the license plate number on the vehicle body. In this embodiment, a plurality of whole vehicle images also need to be captured, each of which should include the license plate number; therefore, the camera arranged a preset distance in front of the defined vehicle separation line is controlled to continuously capture them.
Adding the vehicle speed and the time for acquiring the whole vehicle image each time into the corresponding whole vehicle image;
and packaging and storing the comprehensive driving image and each whole vehicle image in a designated folder, and naming the license plate number on the vehicle body as the file name of the designated folder. In order to separately store overspeed behaviors of different vehicles, the comprehensive driving image and each whole vehicle image can be packaged and stored in a specified folder, and the license plate number on the vehicle body is named as the file name of the specified folder.
In this embodiment, the step of adding the vehicle speed and the acquisition time to each corresponding whole vehicle image includes:
acquiring the width and the height of the whole vehicle image;
generating a second blank layer, the width of which is consistent with the width of the whole vehicle image and the height of which is greater than the height of the whole vehicle image; in an embodiment, the height of the second blank layer may be N times the height of the whole vehicle image, where N is a fixed value greater than 1.
merging each whole vehicle image into a second blank layer to form a merged image, with the top, left side and right side of the whole vehicle image aligned with the top, left side and right side of the second blank layer; the second blank layer extends beyond the bottom of the whole vehicle image to form a fourth blank area;
and adding the vehicle speed and the acquisition time of the whole vehicle image to the fourth blank area.
In this embodiment, the vehicle speed and the acquisition time must be added to each whole vehicle image without blocking it; the whole vehicle image is therefore processed as described above, avoiding the loss of clarity that would result if the image were blocked.
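The second-blank-layer composition differs from the first only in its geometry: same width as the image, N times its height, image top-aligned so the strip below it (the fourth blank area) holds the annotations. A minimal sketch, with the fill value as an assumption:

```python
def compose_below(image, n=2, fill=255):
    """Second blank layer: same width as the image, height n times the image
    height (n > 1), with the image top-aligned so the strip below it (the
    fourth blank area) can hold the speed and the capture time."""
    h, w = len(image), len(image[0])
    canvas = [[fill] * w for _ in range(int(h * n))]
    for y in range(h):
        canvas[y] = list(image[y])  # top-aligned copy of each image row
    return canvas

# Tiny 2x2 "whole vehicle image"; n=2 leaves a 2-row blank strip below it.
merged = compose_below([[1, 2], [3, 4]], n=2)
```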
Referring to fig. 2, in an embodiment of the present invention, there is further provided a machine vision-based vehicle separation apparatus, including:
an acquisition unit configured to acquire a vehicle image;
a processing unit configured to preprocess the vehicle image and extract features from the preprocessed vehicle image to obtain a vehicle characteristic value;
an analysis unit configured to input the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identify the vehicle from the vehicle image, and mark the boundaries of the head and the tail of the vehicle;
a detection unit configured to output vehicle head arrival time information when the head of the vehicle is detected arriving at the defined vehicle separation line, and to output vehicle tail departure time information when the tail of the vehicle is detected leaving the defined vehicle separation line;
and a separation unit configured to separate the vehicles in the vehicle image according to the vehicle head arrival time information and the vehicle tail departure time information.
In an embodiment, the acquiring unit is specifically configured to:
acquire a background image of the scene the vehicle passes, and acquire a side view image of the vehicle passing in front of that background.
In this embodiment, for the specific implementation of each unit in the above apparatus embodiment, refer to the description of the corresponding method embodiment; details are not repeated here.
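The preprocessing chain that the processing unit applies (graying, background differencing, binarization and median filtering, as described in the method embodiment) can be roughly illustrated in NumPy. The function name and threshold value are assumptions, and the spatial/frequency-domain smoothing step is omitted for brevity:

```python
import numpy as np

def preprocess(side_view: np.ndarray, background: np.ndarray,
               thresh: int = 30) -> np.ndarray:
    """Sketch of the preprocessing chain: gray both images, take the
    absolute difference to remove the static background, threshold the
    difference to a binary image, then apply a 3x3 median filter to
    remove isolated noise points."""
    def gray(img: np.ndarray) -> np.ndarray:
        return img.mean(axis=2).astype(np.int16)   # simple graying

    diff = np.abs(gray(side_view) - gray(background))  # difference image
    binary = (diff > thresh).astype(np.uint8)          # binarization
    # 3x3 median filter built from the nine shifted views of the image
    padded = np.pad(binary, 1, mode="edge")
    stack = np.stack([padded[i:i + binary.shape[0], j:j + binary.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0).astype(np.uint8)
```

Since the background is subtracted before thresholding, only the moving vehicle survives in the binary image, which is what the later contour-feature extraction operates on.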
Referring to fig. 3, in an embodiment of the present invention, there is further provided a computer device, which may be a server, and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus, wherein the processor is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the data involved in this embodiment. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a machine vision based vehicle separation method.
It will be appreciated by those skilled in the art that the architecture shown in fig. 3 is merely a block diagram of the part of the architecture related to the present solution and does not limit the computer devices to which the present solution is applicable.
An embodiment of the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a machine vision-based vehicle separation method. It is understood that the computer readable storage medium in this embodiment may be a volatile readable storage medium or a nonvolatile readable storage medium.
In summary, the machine vision based vehicle separation method, apparatus and computer device provided in the embodiments of the present invention comprise: acquiring a vehicle image; preprocessing the vehicle image, and extracting features from the preprocessed vehicle image to obtain a vehicle characteristic value; inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle from the vehicle image, and marking the boundaries of the head and the tail of the vehicle; outputting head arrival time information when detecting that the head of the vehicle arrives at a defined vehicle separation line; outputting tail departure time information when detecting that the tail of the vehicle leaves the defined vehicle separation line; and separating the vehicles in the vehicle image according to the head arrival time information and the tail departure time information. In the present invention, the boundaries of the head and the tail of the vehicle are obtained by image processing, and vehicles are separated according to when the head arrives at and the tail leaves the defined vehicle separation line; this improves the accuracy of vehicle separation and avoids the misjudgment or inaccurate separation caused by external interference.
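The speed computation summarized above rests on a simple observation: between the moment the head arrives at the separation line and the moment the tail leaves it, the vehicle travels exactly its own length. A minimal sketch under that assumption (the class and function names, the km/h unit and the example threshold are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class VehicleEvent:
    head_arrival: float    # seconds: head reaches the separation line
    tail_departure: float  # seconds: tail leaves the separation line
    length_m: float        # vehicle length from the vehicle characteristic value

def vehicle_speed_kmh(ev: VehicleEvent) -> float:
    """Speed = vehicle length / (tail departure - head arrival),
    converted from m/s to km/h."""
    dt = ev.tail_departure - ev.head_arrival
    if dt <= 0:
        raise ValueError("tail departure must follow head arrival")
    return ev.length_m / dt * 3.6

def exceeds_threshold(ev: VehicleEvent, limit_kmh: float = 60.0) -> bool:
    """Threshold test that triggers capture of the driving image."""
    return vehicle_speed_kmh(ev) > limit_kmh
```

When the computed speed exceeds the threshold, the method proceeds to capture the driving image and the whole vehicle images for evidence packaging.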
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium provided by the present invention and used in the embodiments may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, apparatus, article, or method that comprises the element.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes using the descriptions and drawings of the present invention or direct or indirect application in other related technical fields are included in the scope of the present invention.

Claims (9)

1. A machine vision based vehicle separation method comprising the steps of:
acquiring a vehicle image;
preprocessing the vehicle image, and extracting the characteristics of the preprocessed vehicle image to obtain a vehicle characteristic value;
inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle from the vehicle image, and marking the boundaries of the head and the tail of the vehicle;
outputting head arrival time information when detecting that the head of the vehicle arrives at a defined vehicle separation line; outputting tail departure time information when detecting that the tail of the vehicle leaves the defined vehicle separation line;
separating vehicles in the vehicle image according to the vehicle head arrival time information and the vehicle tail departure time information;
wherein after the steps of outputting head arrival time information when detecting that the head of the vehicle arrives at the defined vehicle separation line, and outputting tail departure time information when detecting that the tail of the vehicle leaves the defined vehicle separation line, the method further comprises the following steps:
calculating a time difference between the head arrival time information and the tail departure time information, and acquiring length information of the vehicle from the vehicle characteristic value;
calculating the speed of the vehicle according to the length information and the time difference, and judging whether the speed exceeds a threshold value;
if the threshold value is exceeded, capturing a driving image of the vehicle when detecting that the tail of the vehicle leaves the defined vehicle separation line; wherein the driving image is a side view of the vehicle;
generating a first blank layer, and merging the driving image into the first blank layer; wherein the bottom of the driving image is aligned with the bottom of the first blank layer; the left side of the first blank layer extends beyond the left side of the driving image, forming a first blank area; the right side of the first blank layer extends beyond the right side of the driving image, forming a second blank area; and the top of the first blank layer extends beyond the top of the driving image, forming a third blank area;
adding the head arrival time information in the second blank area, adding the tail departure time information in the first blank area, and adding the speed of the vehicle in the third blank area to form a comprehensive driving image;
controlling a camera arranged a preset distance in front of the defined vehicle separation line to continuously acquire a plurality of whole vehicle images of the vehicle, and recording the time at which each whole vehicle image is acquired; wherein each whole vehicle image comprises the vehicle body and the license plate number on the vehicle body;
adding the vehicle speed and the time for acquiring the whole vehicle image each time into the corresponding whole vehicle image;
and packaging and storing the comprehensive driving image and each whole vehicle image in a designated folder, using the license plate number on the vehicle body as the name of the designated folder.
2. The machine vision based vehicle separation method of claim 1, wherein the step of acquiring a vehicle image comprises:
acquiring a background image of the scene the vehicle passes, and acquiring a side view image of the vehicle passing in front of that background.
3. The machine vision based vehicle separation method of claim 2, wherein the step of preprocessing the vehicle image comprises:
respectively carrying out graying treatment on the background image and the side view image;
performing differential processing on the background image and the side view image after graying processing by adopting a differential method, and removing the background in the side view image to obtain a differential image;
performing space domain smoothing and frequency domain smoothing on the differential image, and removing interference noise in the differential image to obtain a smooth image;
performing image segmentation on the smooth image, and converting the smooth image to obtain a binary image;
and carrying out median filtering on the binary image, removing noise points, and obtaining a filtered image as a preprocessed vehicle image.
4. The machine vision-based vehicle separation method according to claim 1, wherein the step of extracting features of the preprocessed vehicle image to obtain the vehicle feature value comprises:
scanning the preprocessed vehicle image, extracting corresponding vehicle outline shape features, and calculating the vehicle outline size;
the step of inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle from the vehicle image, and marking the boundaries of the head and the tail of the vehicle comprises the following steps:
according to the vehicle outline shape characteristics and the vehicle outline size, matching the vehicle type of the vehicle in the vehicle image, extracting the vehicle head boundary and the vehicle tail boundary, and marking the vehicle head boundary and the vehicle tail boundary.
5. The machine vision-based vehicle separation method according to claim 1, wherein the step of outputting the head arrival time information upon detecting that the head of the vehicle arrives at the defined vehicle separation line comprises:
detecting the motion trajectory of the vehicle by a motion detection technique, so as to detect whether the head and the tail of the vehicle reach the defined vehicle separation line.
6. The machine vision based vehicle separation method according to claim 1, wherein the step of adding the vehicle speed and the time of each acquisition of the whole vehicle image to the corresponding whole vehicle image includes:
acquiring the width and the height of the whole vehicle image;
generating a second blank layer; the width of the second blank layer is consistent with the width of the whole vehicle image, and the height of the second blank layer is larger than the height of the whole vehicle image;
merging each whole vehicle image into its second blank layer to form a merged image; the top, left side and right side of the whole vehicle image are aligned with the top, left side and right side of the second blank layer respectively, while the bottom of the second blank layer extends beyond the bottom of the whole vehicle image, forming a fourth blank area;
and adding the vehicle speed and the time for acquiring the whole vehicle image each time into the fourth blank area.
7. A machine vision-based vehicle separation apparatus, comprising:
an acquisition unit configured to acquire a vehicle image;
the processing unit is used for preprocessing the vehicle image and extracting the characteristics of the preprocessed vehicle image to obtain a vehicle characteristic value;
the analysis unit is used for inputting the vehicle characteristic value into a vehicle characteristic analyzer for analysis, identifying the vehicle from the vehicle image and marking the boundaries of the head and the tail of the vehicle;
the detection unit is used for outputting head arrival time information when detecting that the head of the vehicle arrives at the defined vehicle separation line, and outputting tail departure time information when detecting that the tail of the vehicle leaves the defined vehicle separation line;
the separation unit is used for separating the vehicles in the vehicle image according to the vehicle head arrival time information and the vehicle tail departure time information;
the detection unit is further used for:
calculating a time difference between the head arrival time information and the tail departure time information;
acquiring length information of the vehicle from the vehicle characteristic value;
calculating the speed of the vehicle according to the length information and the time difference, and judging whether the speed exceeds a threshold value;
if the threshold value is exceeded, capturing a driving image of the vehicle when detecting that the tail of the vehicle leaves the defined vehicle separation line; wherein the driving image is a side view of the vehicle;
generating a first blank layer, and merging the driving image into the first blank layer; wherein the bottom of the driving image is aligned with the bottom of the first blank layer; the left side of the first blank layer extends beyond the left side of the driving image, forming a first blank area; the right side of the first blank layer extends beyond the right side of the driving image, forming a second blank area; and the top of the first blank layer extends beyond the top of the driving image, forming a third blank area;
adding the head arrival time information in the second blank area, adding the tail departure time information in the first blank area, and adding the speed of the vehicle in the third blank area to form a comprehensive driving image;
controlling a camera arranged a preset distance in front of the defined vehicle separation line to continuously acquire a plurality of whole vehicle images of the vehicle, and recording the time at which each whole vehicle image is acquired; wherein each whole vehicle image comprises the vehicle body and the license plate number on the vehicle body;
adding the vehicle speed and the time for acquiring the whole vehicle image each time into the corresponding whole vehicle image;
and packaging and storing the comprehensive driving image and each whole vehicle image in a designated folder, using the license plate number on the vehicle body as the name of the designated folder.
8. The machine vision based vehicle separation device of claim 7, wherein the acquisition unit is specifically configured to:
acquire a background image of the scene the vehicle passes, and acquire a side view image of the vehicle passing in front of that background.
9. A computer device comprising a memory and a processor, the memory having stored therein a computer program, characterized in that the processor, when executing the computer program, carries out the steps of the method according to any one of claims 1 to 6.
CN202310407550.4A 2023-04-17 2023-04-17 Vehicle separation method and device based on machine vision and computer equipment Active CN116109830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310407550.4A CN116109830B (en) 2023-04-17 2023-04-17 Vehicle separation method and device based on machine vision and computer equipment


Publications (2)

Publication Number Publication Date
CN116109830A CN116109830A (en) 2023-05-12
CN116109830B true CN116109830B (en) 2023-06-16

Family

ID=86254764


Country Status (1)

Country Link
CN (1) CN116109830B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0935197A (en) * 1995-07-14 1997-02-07 Aisin Seiki Co Ltd Vehicle recognizing method
JP2004240636A (en) * 2003-02-05 2004-08-26 Toyota Central Res & Dev Lab Inc White line detection device

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN107992865A (en) * 2018-01-26 2018-05-04 重庆邮电大学 A kind of vehicle identification method and system based on video analysis
CN110765929A (en) * 2019-10-21 2020-02-07 东软睿驰汽车技术(沈阳)有限公司 Vehicle obstacle detection method and device
CN114396998B (en) * 2021-11-17 2024-04-09 西安航天三沃机电设备有限责任公司 Multi-source vehicle information matching method



Similar Documents

Publication Publication Date Title
CN111382704B (en) Vehicle line pressing violation judging method and device based on deep learning and storage medium
CN111489337B (en) Automatic optical detection pseudo defect removal method and system
US10867403B2 (en) Vehicle external recognition apparatus
CN112598922B (en) Parking space detection method, device, equipment and storage medium
EP2662827B1 (en) Video analysis
DE112009001686T5 (en) Object detecting device
EP2573709A2 (en) Method and apparatus for identifying motor vehicles for monitoring traffic
WO2014002692A1 (en) Stereo camera
CN113184707B (en) Method and system for preventing lifting of collection card based on laser vision fusion and deep learning
WO2021000948A1 (en) Counterweight weight detection method and system, and acquisition method and system, and crane
US10223587B2 (en) Pairing of images of postal articles with descriptors of singularities of the gradient field
CN116109830B (en) Vehicle separation method and device based on machine vision and computer equipment
I Abbas et al. Iraqi cars license plate detection and recognition system using edge detection and template matching correlation
CN115880481A (en) Curve positioning algorithm and system based on edge profile
WO2010113217A1 (en) Character recognition device and character recognition method
CN115331193A (en) Parking space identification method, parking space identification system, electronic equipment and storage medium
CN112016514B (en) Traffic sign recognition method, device, equipment and storage medium
CN114724119A (en) Lane line extraction method, lane line detection apparatus, and storage medium
CN114494355A (en) Trajectory analysis method and device based on artificial intelligence, terminal equipment and medium
CN109993761B (en) Ternary image acquisition method and device and vehicle
DE102011111856B4 (en) Method and device for detecting at least one lane in a vehicle environment
CN111161542B (en) Vehicle identification method and device
CN112766272A (en) Target detection method, device and electronic system
JP4096932B2 (en) Vehicle collision time estimation apparatus and vehicle collision time estimation method
KR102613465B1 (en) Method and device for classifying and quantity of empty bottles through multi-camera based sequential machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant