CN114332681A - Vehicle identification method and device - Google Patents


Info

Publication number
CN114332681A
Authority
CN
China
Prior art keywords
target
vehicle
tracking target
video frame
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111493952.8A
Other languages
Chinese (zh)
Inventor
林翠翠 (Lin Cuicui)
Current Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Original Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Goldway Intelligent Transportation System Co Ltd
Priority to CN202111493952.8A
Publication of CN114332681A
Legal status: Pending


Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide a vehicle identification method and device, relate to the technical field of video surveillance, and can acquire a complete image of a vehicle, solving the problem that a single video frame can rarely cover an entire vehicle. The scheme is as follows: an electronic device detects each video frame and, when at least one tracking target exists in a first video frame, binds the vehicle body and wheels of the tracking targets across the video frames to obtain at least one whole-vehicle target. For a first whole-vehicle target, the device determines its direction of motion; when the direction of motion and the target's position in the field of view indicate that it has arrived, the device begins stitching the video frames associated with the target, and when its position in the field of view indicates that it has driven away, the device stops stitching, obtaining a complete stitched image of the first whole-vehicle target. The embodiments of the present application are used in the process by which an electronic device acquires vehicle images.

Description

Vehicle identification method and device
Technical Field
The embodiments of the present application relate to the technical field of video surveillance, and in particular to a vehicle identification method and device.
Background
Highway toll gates are a primary application scenario for vehicle-type-based tolling: the vehicle type is determined, chiefly through axle identification, so that the fee corresponding to that type can be charged.
At present, axle identification schemes mainly use lasers and other sensor equipment to detect the outer contour and type of a vehicle and then determine the fee from the vehicle type, but such schemes are expensive. Video-based schemes are therefore widely used: for example, a body image at an optimal angle is captured with a fisheye wide-angle lens, a vehicle detection algorithm locates the head, tail and wheels in that image, and the wheels appearing between head and tail are counted to obtain the number of axles, from which the fee is determined. Because detection and the subsequent logic run on a single frame, such processing is fast. However, for a large vehicle with an overly long body, a single frame from a fisheye wide-angle lens cannot capture the complete body, the axle count may be wrong, and tolling efficiency suffers.
Disclosure of Invention
The embodiments of the present application provide a vehicle identification method and device, which can acquire a complete image of a vehicle, solving the problem that a single video frame can hardly cover an entire vehicle, so that the number and type of axles can be counted accurately from the complete image and tolling efficiency is improved.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect, a vehicle identification method is provided, the method comprising: an electronic device detects each captured video frame; when it detects that at least one tracking target exists in a first video frame and that the tracking target is a vehicle body or a wheel, it binds the vehicle bodies and wheels of the tracking targets in the first video frame and in each of a plurality of subsequently captured video frames, obtaining at least one whole-vehicle target. For a first whole-vehicle target among the at least one whole-vehicle target, the electronic device determines its direction of motion and, when the direction of motion and the target's position in the camera's field of view indicate that it has arrived, stitches the video frames associated with the first whole-vehicle target; when the target's position in the field of view indicates that it has driven away, the electronic device stops stitching, obtaining a complete stitched image of the first whole-vehicle target.
The electronic device in the present application may be, for example, a camera device capable of capturing video, a server, or the like. When the electronic device is a server, the server performs the stitching on video frames it receives from the camera device.
When the electronic device obtains the complete stitched image, it may send the image to a user device that includes a display screen, so that the image is shown on that screen.
Thus, by capturing multiple video frames of the same vehicle and tracking and binding its body and wheels across them, the frames between the vehicle's arrival and departure can be stitched into a complete image of the whole-vehicle target. Compared with the prior art, which captures only a single frame of the vehicle, stitching video frames solves the problem that a complete image of a long-bodied vehicle cannot be obtained.
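As an illustration, the frame-stitching step can be sketched as follows. This is a minimal sketch assuming a fixed camera and a vehicle crossing the field horizontally; the function name, the fixed-width centre-strip sampling, and the plain-list image representation are all assumptions, since the patent does not specify the stitching algorithm (a real system would align strips using the vehicle's measured displacement between frames).

```python
def stitch_vehicle_frames(frames, strip_width=4):
    """frames: list of 2-D row-major pixel grids (lists of rows) buffered
    between the vehicle's arrival and its departure.  A fixed-width strip
    is cut from the centre column of each frame, and the strips are
    concatenated horizontally to cover the full body length."""
    width = len(frames[0][0])
    lo = width // 2 - strip_width // 2   # left edge of the centre strip
    rows = len(frames[0])
    stitched = []
    for r in range(rows):
        row = []
        for frame in frames:
            row.extend(frame[r][lo:lo + strip_width])
        stitched.append(row)
    return stitched
```

The output is one long composite image whose horizontal extent grows with the number of buffered frames, which is what lets it cover a body longer than any single frame.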
In one possible design, the method further includes: when the complete stitched image is obtained, the electronic device determines the number of axles and the wheel types of the first whole-vehicle target; the wheel types include concave wheels and convex wheels. Optionally, the wheel detection is performed on images from a box (bullet) camera. The electronic device sends the number of axles and the wheel types of the first whole-vehicle target to the user device, and the billing platform can then determine the vehicle's fee from its axle count. Moreover, the wheel classification (concave/convex) information resolves the problem that single-side double wheels cannot be distinguished in the field-of-view image, improves the accuracy of the axle technology, enriches the wheel information the product outputs, and can assist applications such as overload detection.
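A sketch of how the axle count and wheel-type statistics might be derived from wheel detections in the complete composite image. The record format and the convention that a concave hub marks a dual-tyre (single-side double) axle are illustrative assumptions; the patent only states that concave/convex classification disambiguates single-side double wheels.

```python
def summarize_axles(wheels):
    """wheels: one detection per visible side wheel in the composite image,
    each a dict with an 'x' centre and a hub 'kind' ('concave'/'convex').
    Every visible side wheel corresponds to one axle; concave hubs are
    taken here to indicate dual-tyre axles (an assumed convention)."""
    wheels = sorted(wheels, key=lambda w: w["x"])  # order front to rear
    return {
        "axles": len(wheels),
        "dual_tyre_axles": sum(1 for w in wheels if w["kind"] == "concave"),
        "kinds": [w["kind"] for w in wheels],
    }
```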
In one possible design, before the binding of the vehicle bodies and wheels of the tracking targets in the first video frame and in each of the plurality of subsequently captured video frames, the method further includes: for each video frame, the electronic device determines the overlap between a first tracking target in the later video frame and a second tracking target in the earlier video frame. When the overlap is greater than or equal to a preset threshold, the electronic device determines that the first and second tracking targets are the same tracking target and records the identifier and type of the second tracking target for the first, i.e. the first tracking target keeps the same identifier and type as the second. The electronic device then determines whether the first tracking target lies within a neighborhood threshold range of the second tracking target's position, and if so records the first tracking target's position in the current video frame. When the overlap is below the preset threshold, the electronic device determines that the two are not the same tracking target and registers the first tracking target as a new one.
In this way, the electronic device can decide whether the tracking targets acquired in different video frames are the same target, so that during stitching it uses only the video frames belonging to the same tracking target.
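The overlap test described in this design is essentially intersection-over-union (IoU) matching between consecutive frames. A minimal sketch follows; the 0.5 threshold, the (x1, y1, x2, y2) box format and the dict-based target records are chosen for illustration, as the patent does not fix them:

```python
IOU_THRESHOLD = 0.5  # stand-in for the "preset threshold"

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(prev_target, new_box, new_id):
    """If the new detection overlaps the previous target enough, it is the
    same target and inherits its identifier and type; otherwise it is
    registered as a new tracking target."""
    if iou(prev_target["box"], new_box) >= IOU_THRESHOLD:
        return {"id": prev_target["id"], "type": prev_target["type"], "box": new_box}
    return {"id": new_id, "type": "unknown", "box": new_box}
```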
In one possible design, binding the vehicle bodies and wheels of the tracking targets in the first video frame and in each of the plurality of subsequently captured video frames includes: the electronic device binds body and wheels in each video frame according to the identifier and type of each tracking target, its position in the current frame, and the positional relation of the wheels on the vehicle body. That is, when the tracking targets in two video frames are detected to be the same target, they are marked with the same identifier and type; and the body and wheels of the same vehicle are bound according to their positional relation within that vehicle, for example by establishing an association between the body's identifier and the wheels' identifiers. Each tracking target within the same vehicle remains an independent target.
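One way to realise this body-wheel binding — associating each wheel with the body it sits on — might look like the following. The geometric rule (wheel centre within the body box's horizontal span and in or below its lower half) is an assumed simplification of the "positional relation of the wheels on the vehicle body" referred to above.

```python
def bind_wheels_to_body(body, wheels):
    """body / wheels: dicts with 'id' and 'box' = (x1, y1, x2, y2).
    Returns the body identifier mapped to the identifiers of the wheels
    bound to it.  A wheel is bound when its centre lies within the body's
    horizontal span and at or below the body box's vertical midpoint."""
    bx1, by1, bx2, by2 = body["box"]
    mid_y = (by1 + by2) / 2
    bound = []
    for w in wheels:
        wx1, wy1, wx2, wy2 = w["box"]
        cx, cy = (wx1 + wx2) / 2, (wy1 + wy2) / 2
        if bx1 <= cx <= bx2 and cy >= mid_y:
            bound.append(w["id"])
    return {body["id"]: bound}
```

A wheel whose centre falls outside the body's span (e.g. belonging to a neighbouring vehicle) is left unbound, which is what keeps a trailer and the vehicle it carries apart.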
In one possible design, the electronic device determining the movement direction of the first whole-vehicle target, and determining from the movement direction and the target's position in the camera's field of view that it has arrived, includes: the electronic device determines the movement direction from several historical positions of the tracking targets in the first whole-vehicle target, and determines that the target has arrived when, given that direction, the position of a tracking target in the first whole-vehicle target falls within a preset region of the field of view. For example, if a vehicle is moving to the left, it may be considered to have arrived once a sample point at its front crosses the quarter line on the right side of the field of view, and to be about to drive away once a sample point at its rear crosses the quarter line on the left side. Having the complete set of video frames for the vehicle, the frames between arrival and departure can then be stitched into the complete image of the vehicle.
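The quarter-field arrival/departure rule in the example can be sketched as below, for a vehicle moving leftwards across a field of view of width `field_width`; the function and parameter names are illustrative:

```python
def vehicle_state(front_x, rear_x, field_width):
    """For a vehicle moving left: it has 'arrived' once its front sample
    point has crossed the right-hand quarter line (x <= 3/4 * width), and
    it is 'driving away' once its rear sample point has crossed the
    left-hand quarter line (x <= 1/4 * width)."""
    arrived = front_x <= field_width * 3 / 4
    departing = rear_x <= field_width / 4
    return arrived, departing
```

Frames are buffered for stitching from the instant `arrived` becomes true until `departing` becomes true; a rightward-moving vehicle would use the mirrored thresholds.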
In one possible design, the method further includes: when the electronic device determines that a third tracking target, in the first video frame or any of the plurality of video frames, is a vehicle body, it performs character detection on the third tracking target and determines whether characters indicating a passenger capacity are present in its image; when such characters are found, the electronic device reads the number in the region adjacent to the capacity marking and sends the passenger-capacity information to the user device together with the complete stitched image. For example, the application can use OCR to read the approved-capacity marking on the body, which can support subdivided tolling of two-axle passenger vehicles.
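Once OCR has produced text from the body region, the passenger-capacity number could be pulled out with a simple pattern match. "核载 N 人" ("approved capacity N persons") is the standard wording of the marking on Chinese commercial vehicles; the function name and regex are illustrative:

```python
import re

def extract_passenger_limit(ocr_text):
    """Return the approved passenger capacity painted on the body,
    e.g. '核载7人' -> 7, or None when no such marking is recognized."""
    match = re.search(r"核载\s*(\d+)\s*人", ocr_text)
    return int(match.group(1)) if match else None
```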
In a second aspect, an electronic device is provided, comprising: a binding unit, configured to detect each captured video frame and, when at least one tracking target exists in a first video frame and the tracking target is a vehicle body or a wheel, bind the vehicle bodies and wheels of the tracking targets in the first video frame and in each of a plurality of subsequently captured video frames, obtaining at least one whole-vehicle target; and a stitching unit, configured to determine the movement direction of a first whole-vehicle target among the at least one whole-vehicle target and, when the movement direction and the target's position in the camera's field of view indicate that it has arrived, stitch the video frames associated with the first whole-vehicle target; the stitching unit is further configured to stop stitching when the target's position in the field of view indicates that it has driven away, obtaining a complete stitched image of the first whole-vehicle target. Optionally, a sending unit may also be included, configured to send the complete stitched image to the user device.
In one possible design, the device further includes a detection unit configured to: determine the number of axles and the wheel types of the first whole-vehicle target when the complete stitched image is obtained; the wheel types include concave wheels and convex wheels. Optionally, the wheel detection is performed on images from a box (bullet) camera. The sending unit is further configured to send the number of axles and the wheel types of the first whole-vehicle target to the user device.
In one possible design, the device further includes an attribute obtaining unit configured to: for each video frame, determine the overlap between a first tracking target in the later video frame and a second tracking target in the earlier video frame; when the overlap is greater than or equal to a preset threshold, determine that the first and second tracking targets are the same tracking target and record the identifier and type of the second tracking target, the first tracking target keeping that same identifier and type; determine whether the first tracking target lies within a neighborhood threshold range of the second tracking target's position and, if so, record the first tracking target's position in the current video frame; when the overlap is below the preset threshold, determine that the two are not the same tracking target and register the first tracking target as a new one.
In one possible design, the binding unit is configured to bind the body and wheels in each video frame according to the identifier and type of each tracking target, its position in the current frame, and the positional relation of the wheels on the vehicle body.
In one possible design, the stitching unit is configured to: determine the movement direction of the first whole-vehicle target from several historical positions of its tracking targets; and determine, given that direction, that the first whole-vehicle target has arrived when the position of a tracking target in it falls within the preset region of the field of view for that direction.
In one possible design, the detection unit is further configured to: when a third tracking target, in the first video frame or any of the plurality of video frames, is determined to be a vehicle body, perform character detection on it and determine whether characters indicating a passenger capacity are present in its image; when such characters are found, read the number in the region adjacent to the capacity marking. The sending unit is further configured to send the passenger-capacity information to the user device when the complete stitched image is obtained.
In a third aspect, there is provided a communication device comprising at least one processor coupled to a memory, the at least one processor being configured to read and execute a program stored in the memory to cause the device to perform the method of the first aspect or any one of the first aspects.
In a fourth aspect, there is provided a chip, coupled to a memory, for reading and executing program instructions stored in the memory to implement the method of the first aspect or any one of the first aspects.
In a fifth aspect, an electronic device is provided, comprising a memory, a processor and a transceiver. The memory is coupled to the processor and is configured to store computer program code comprising computer instructions; the transceiver is used to receive and transmit data. The computer instructions, when executed by the processor, cause the electronic device to perform any of the vehicle identification methods provided by the first aspect or its corresponding possible designs.
In a sixth aspect, the present application provides a chip system applied to an electronic device. The chip system includes one or more interface circuits and one or more processors, interconnected by lines; the interface circuit is configured to receive a signal from a memory of the electronic device and send it to the processor, the signal including computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device performs the vehicle identification method provided by the first aspect or its corresponding possible designs.
In a seventh aspect, an embodiment of the present application provides an electronic apparatus that has the function of implementing the behavior of the electronic device in any one of the foregoing aspects and any possible implementation. The function may be implemented in hardware, or in hardware executing corresponding software; the hardware or software includes one or more modules or units corresponding to the above functions, for example a binding module or unit, a stitching module or unit, a sending module or unit, a detection module or unit, an attribute obtaining module or unit, and so on.
In an eighth aspect, embodiments of the present application provide an electronic device that includes an antenna, one or more processors, and one or more memories. The one or more memories are coupled to the one or more processors, the one or more memories are configured to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the vehicle identification method of any of the above aspects and any possible implementation.
In a ninth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes computer instructions, and when the computer instructions are executed on an electronic device, the electronic device is caused to execute the vehicle identification method in any one of the above aspects and any one of the possible implementations.
In a tenth aspect, embodiments of the present application provide a computer program product, which when run on a computer or a processor, causes the computer or the processor to execute the vehicle identification method in any one of the above aspects and any one of the possible implementations.
In an eleventh aspect, an embodiment of the present application provides a system, where the system may include the electronic device, the user equipment, and the charging platform in any possible implementation manner of any one of the above aspects. The electronic device may perform the vehicle identification method in any one of the above aspects and any one of the possible implementations.
It is understood that any electronic device, electronic apparatus, chip system, computer-readable storage medium or computer program product provided above may be applied to the corresponding method provided above, and therefore, the beneficial effects achieved by the method may refer to the beneficial effects in the corresponding method, and are not described herein again.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
Fig. 1 is a schematic diagram of a single-frame image of an ultra-long vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of a vehicle carried on another vehicle according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a charging system according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a vehicle identification method according to an embodiment of the present disclosure;
FIG. 6 is a schematic flow chart illustrating a vehicle identification method according to an embodiment of the present disclosure;
FIG. 7 is a schematic view of a rectangular frame of a vehicle body and wheel according to an embodiment of the present disclosure;
FIG. 8 is a schematic view of a wheel classification provided in an embodiment of the present application;
FIG. 9 is a schematic illustration of marking a tracked target in a vehicle according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating the result of a vehicle body and wheel binding according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of arrival and departure times of a vehicle according to an embodiment of the present disclosure;
FIG. 12 is a schematic view of a vehicle target provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a camera according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features; a feature qualified by "first" or "second" may thus explicitly or implicitly include one or more of that feature.
Under the current implementation plan for deepening the reform of the toll-road system and removing provincial highway toll stations, a method for adjusting the tolling of trucks has been proposed: tolls are charged uniformly by vehicle (axle) type, and standards such as the "Classification of vehicle types for toll-road tolling" are being revised accordingly. It should be noted that "axle" and "wheel axle" have the same meaning in the present application.
In axle identification, a single frame is mostly used for detection: the number of axles is obtained from the frame, and the fee is charged accordingly. Typically a wide-angle lens captures a vehicle image at the optimal angle, and vehicle detection locates the head, tail and wheels so that the wheels on one side between head and tail can be counted, giving the number of axles. However, as shown in fig. 1, for a large vehicle with a body longer than 20 meters the wide-angle lens cannot capture the complete body: an axle at the very tail or at the head may be missed, and the axles between head and tail may be miscounted. The large truck in fig. 1 should have 5 axles, but because its body is too long, only 4 axles (marked 11, 12, 13 and 14) appear in the single frame captured by the wide-angle lens, and axle 15 is not captured. Although the wide-angle lens detects the head well, detection of the tail fluctuates because of the long body: the tail may not be detected at all, or, conversely, two short-bodied vehicles appearing in one frame may be merged into the detection result of a single vehicle, corrupting the axle statistics and lowering tolling efficiency.
Moreover, trailers often raise the "vehicle-on-vehicle" problem: a trailer and the vehicle it carries can be miscounted as a single vehicle, interfering with the axle count. For example, as shown in fig. 2, when a vehicle 21 is carried on a trailer 20 and the camera counts axles, the axles of vehicle 21 may be added to those of trailer 20, causing a billing error. In addition, because the wheels at the rear of a vehicle appear small, only their number can be detected, not their type; for billing platforms, wheel-type statistics help the platform analyze the types of passing vehicles.
In view of the above, the present application provides a vehicle identification method and apparatus that can be applied in a tolling system to identify vehicles for charging. Specifically, based on video image processing, the bodies and wheels belonging to the same vehicle are tracked across multiple video frames, and the associated frames are stitched into a complete image of the vehicle, so that the number of axles can be counted accurately from the stitched image and reliable billing information obtained. Identifying the body and wheels of the same vehicle also avoids the "vehicle-on-vehicle" problem and improves billing accuracy.
The method may be applied to a toll-road tolling system which, as shown in fig. 3, may include a camera device 31 located at a road toll station, a user device 32 and a billing platform 33. The camera device 31 captures video of vehicles passing the toll station and analyzes the video frames to produce a complete stitched image of each vehicle, from which the number of axles is obtained. The camera device 31 may also send the complete stitched image and the axle count to the user device 32, so that the user device 32 can store the image information of passing vehicles for later viewing. The user device 32 may in turn send the axle count to the billing platform 33, which determines the vehicle type from the axle count and hence the corresponding fee. The user device may be a terminal device such as a personal computer (PC).
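On the billing platform side, mapping the axle count to a toll fee could be as simple as a table lookup. The table values below are invented placeholders — the actual toll classes and rates are set by the relevant tolling standards, not by this patent:

```python
# Hypothetical toll classes: fee per axle count (placeholder values).
AXLE_FEE_TABLE = {2: 15.0, 3: 25.0, 4: 35.0, 5: 45.0, 6: 55.0}

def vehicle_fee(axle_count):
    """Look up the fee for a vehicle; six or more axles share the top class."""
    capped = max(2, min(axle_count, 6))
    return AXLE_FEE_TABLE[capped]
```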
In some embodiments of the present application, the camera device 31 may additionally obtain the vehicle's passenger-capacity information and send it to the user device along with the complete stitched image; the user device then forwards the passenger capacity together with the axle count to the billing platform, which can determine the fee for the vehicle type from both.
In some embodiments of the present application, the camera device 31 may be a box (bullet) camera. Box cameras have a wide range of applications: different lenses can be selected as needed for telephoto or wide-angle monitoring, and the visibility of the photographed object is better. Furthermore, a box camera can be used in poorly lit areas, and at night in areas where lighting cannot be installed, making it well suited to monitoring the position or movement of objects.
It should be noted that, although the image capturing apparatus 31 is taken as the example that executes the jigsaw process in the embodiments of the present application, the image capturing apparatus 31 may also send the video frames used for the jigsaw, the parameters of the tracking targets and other information to an electronic apparatus such as a server in communication with the image capturing apparatus 31, and the server then executes the jigsaw process. Thus the system shown in fig. 3 may also include a server. The specific execution body of the jigsaw process is not limited in this application.
Based on this, an embodiment of the present application provides an electronic device, which may be the image capturing device 40 in fig. 4; the image capturing device 40 may implement vehicle event detection, wheel axle identification and the like based on the extracted information about the vehicle. Fig. 4 is a schematic diagram of the functional modules of an image capturing apparatus provided in the present application. The image capturing apparatus includes a vehicle body and wheel detection module 401, a target tracking module 402, and a logic analysis module 403. The vehicle body and wheel detection module 401 may be configured to detect tracking targets frame by frame, using image processing techniques, in the video acquired by the image capturing device 40; when a tracking target is detected, the target tracking module 402 may be configured to track the tracking target and determine information such as its position and trajectory in the video frames. The logic analysis module 403 may be configured to determine the arrival time and departure time of the vehicle based on the position of the tracking target in the video frames, so as to stitch the vehicle's video frames when the vehicle leaves and obtain a complete jigsaw image. If passenger-number information needs to be acquired, the image capturing apparatus 40 may further include a character recognition module 404 for recognizing whether text such as a rated passenger number exists in a video frame, so as to read the number that follows it.
Based on the vehicle identification method and the electronic device provided by the application, the following further describes the embodiment of the application.
Example one
The present application provides a vehicle identification method, as shown in fig. 5, the method including:
501. The electronic device detects each video frame, and when it detects that at least one tracking target exists in a first video frame, the tracking target being a vehicle body or a wheel, it binds the vehicle bodies and wheels of the tracking targets in the first video frame and in each of a plurality of subsequently captured video frames, obtaining at least one whole vehicle target.
The embodiments of the present application are described taking the electronic device being an image pickup apparatus as an example; the image pickup apparatus may be a bullet camera device.
The camera device can capture video of the picture in its field of view, obtaining continuous video frames. Each time the image pickup device captures a video frame, it can immediately detect that frame and determine whether a target to be tracked exists in its image. The tracking targets in the present application include the body and the wheels of a vehicle. If a tracking target is detected in the first video frame, it can be considered that a vehicle has entered the field of view of the camera device. At this point, the image pickup apparatus may mark the vehicle body and wheels in each detected video frame, for example by adding an identifier to each, and may track the body and wheels of the same vehicle so that, across multiple video frames, the body of the same vehicle carries the same mark and each wheel carries its own consistent mark. The body and wheels of the same vehicle are then bound according to their positional relationship, for example by establishing an association relationship between them, obtaining at least one whole vehicle target.
In the present application, each vehicle body and each wheel in a video frame is marked independently; across multiple video frames, the body of the same vehicle is marked in the same way, and the same wheel is marked in the same way.
The field of view here represents the maximum range that can be monitored by the camera of the camera device, and is usually expressed in terms of angle, and the larger the field of view, the larger the monitoring range.
The detection of the tracking target in the video frame in step 501 may be performed by the vehicle body and wheel detection module 401, the tracking of the vehicle body and the wheel of the same vehicle may be performed by the target tracking module 402, and the binding of the vehicle body and the wheel of the same vehicle may be performed by the logic analysis module 403. The specific implementation process may refer to the description in the second embodiment corresponding to fig. 6.
502. For a first whole vehicle target of the at least one whole vehicle target, the electronic device determines the movement direction of the first whole vehicle target, and when it determines, according to the movement direction and the position of the first whole vehicle target in the field of view of the camera device, that the first whole vehicle target has arrived, it begins jigsaw processing of the video frames associated with the first whole vehicle target.
The reason there may be more than one whole vehicle target is that several vehicles can be involved in the continuously acquired video frames; for example, the rear half of a leading vehicle and the front half of a following vehicle may appear in the same video frame.
Taking the first whole vehicle target as an example, the image capturing apparatus may determine the movement direction of the first whole vehicle target, such as leftward or rightward, according to the historical positions of its vehicle body and wheels in the plurality of video frames. In this way, when the image capturing apparatus determines, from the position information of a tracking target (such as a wheel or the vehicle body) in the first whole vehicle target, that the target has reached a preset position of the field of view, it determines that the first whole vehicle target has arrived; a vehicle arrival signal can then be sent to the user device, which can record the vehicle arrival time and other information. For example, when the first whole vehicle target is traveling to the left, it is determined to have arrived when it reaches the right-side 1/4 of the field of view. From this point on, each time the image pickup apparatus detects a video frame whose tracking target bears the same mark as that of the first whole vehicle target, it stitches that video frame with the earlier video frames bearing the same mark.
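The arrival and departure decisions described above can be sketched as follows. This is an illustrative Python fragment with hypothetical function names, and the quarter-boundary rule is an interpretation of the example (head point crossing the right-side 1/4 boundary on arrival, tail point crossing the left-side 1/4 boundary on departure); the embodiment does not prescribe a particular implementation:

```python
def has_arrived(sample_x, frame_width, direction):
    # A leftward-moving target enters from the right edge; per the example,
    # it is considered to have arrived once its head sample point crosses
    # the right-side quarter boundary (x <= 3/4 of the frame width).
    if direction == "left":
        return sample_x <= 0.75 * frame_width
    # Symmetrically, a rightward mover arrives at the left-side quarter.
    return sample_x >= 0.25 * frame_width

def has_departed(sample_x, frame_width, direction):
    # The target is about to drive away when a tail sample point reaches
    # the opposite quarter of the field of view.
    if direction == "left":
        return sample_x <= 0.25 * frame_width
    return sample_x >= 0.75 * frame_width
```

Under this rule, jigsaw processing starts when `has_arrived` first returns true for a whole vehicle target and stops on the frame where `has_departed` first returns true.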
The determination of the movement direction and the jigsaw processing in step 502 may be performed by the logic analysis module 403 described above.
503. When the electronic device determines, according to the position of the first whole vehicle target in the field of view, that the first whole vehicle target is driving away, it stops the jigsaw processing of the first whole vehicle target, obtains the complete jigsaw image of the first whole vehicle target, and sends the complete jigsaw image to the user device.
Continuing with the example in step 502, when it is determined from the position information of the tracking target in the first whole vehicle target that the target has reached another preset position of the field of view, for example the left-side 1/4, the first whole vehicle target is considered to be about to drive away. At this time, the video frame in which the drive-away is determined is taken as the last frame of the jigsaw; after it is stitched onto the jigsaw result of the first whole vehicle target, the jigsaw is finished, and the complete jigsaw image of the first whole vehicle target is obtained.
At this time, the number of wheels and the wheel types of the first whole vehicle target may also be determined: the wheels in the video frames associated with the first whole vehicle target are counted, wheels bearing the same wheel mark being counted as one wheel, which yields the number of wheels of the first whole vehicle target together with the type of each wheel. The complete jigsaw image and the number of wheels of the first whole vehicle target can then be sent to the user device, so that the user device records the complete jigsaw image and forwards the number of wheels to the charging platform, which determines the fee according to the number of wheels.
It is to be understood that the number of wheels transmitted is the number of wheels on a single side of the vehicle. If a certain axle of the vehicle carries dual wheels on one side, they are still counted as one wheel, i.e. the number of wheels equals the number of axles.
The jigsaw processing in step 503 is also performed by the logic analysis module 403.
Of course, if the charging platform needs to acquire the passenger number of the vehicle, the camera device also needs to send the passenger-number information corresponding to the first whole vehicle target to the charging platform through the user device.
Therefore, in the present application, a plurality of video frames of a whole vehicle target are determined through video capture, rather than detecting and charging on a single-frame image of the vehicle; the complete jigsaw image of the whole vehicle target is obtained by jigsaw processing of the plurality of video frames, which solves the problem that, for an extra-long vehicle, a complete image cannot be obtained from a single frame. In addition, once the complete jigsaw image is obtained, the number of wheels of the vehicle, i.e. the number of axles, can be further obtained from it, thereby improving charging efficiency.
The following further describes the process in the first embodiment, and specifically, refer to the second embodiment.
Example two
An embodiment of the present application provides a vehicle identification method, as shown in fig. 6, the method includes:
601. the electronic device detects each captured video frame within the field of view, determines whether a tracking target to be detected is present in each video frame, and determines the location and type of the tracking target when it is determined that the tracking target is present.
The embodiments of the present application are described taking the electronic device being an image pickup apparatus as an example. The image pickup apparatus may be a bullet camera device.
The camera device can shoot pictures in a field of view in real time, namely the camera device can shoot continuous video frames, and when one video frame is obtained by shooting, the camera device can detect the video frame and determine whether a tracking target to be detected exists in the video frame. The tracking target to be detected is the vehicle body and the wheels of the vehicle, and the position and the type of the vehicle body and the position and the type of the wheels are determined.
In the present application, the position of the vehicle body may be understood as the position of a rectangular frame of the vehicle body in the current video frame, as shown in fig. 7, when the vehicle enters into the field of view of the camera, the camera device may acquire the position of the rectangular frame where the vehicle body is located in real time (for example, 71 when the vehicle starts to enter into the field of view in fig. 7). Similarly, the position of the wheel may be understood as the position of the rectangular box of the wheel in the current video frame (e.g., 72 in fig. 7 when the vehicle begins to enter the field of view). Since the vehicle is moving, the positions of the rectangular frame of the vehicle body and the rectangular frame of the wheel in the field of view may also be different in each video frame.
The position of the rectangular frame may be understood as the coordinate information, in the current video frame, of the pixel points on the four sides of the rectangular frame, or as the coordinate information of its four corner points.
It should be noted that the vehicle in the present application is understood to be composed of a vehicle body and wheels. The vehicle body includes components other than wheels, including, for example, a head, a car, a joint between the head and the car, and the like.
For example, the camera device can detect and classify the vehicle body and wheels in real time by means of deep learning. The deep learning approach can be understood as follows: a large number of video frames of vehicles passing through the field of view are used in advance for training, producing a trained model. When a video frame is input, the trained model computes over the pixel structure of the frame and outputs the positions of the rectangular frames of the vehicle body and wheels in the video frame, together with the type of the vehicle body and the type of each wheel. For example, the types of vehicle bodies include trucks, passenger cars, trailers, and the like. The types of wheels may include the concave wheel and the cam wheel, as shown in fig. 8. Of course, the concave wheel and cam wheel in fig. 8 are only schematically distinguished, and the actual wheel profile may differ from the details shown in fig. 8.
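As an illustration only, the per-frame output of such a detection model might be organized as below; the field names and values are hypothetical and not part of the disclosure:

```python
# Hypothetical per-frame detection result: each tracking target is a
# rectangular frame (x1, y1, x2, y2) plus a type label, as described above.
detections = [
    {"kind": "body",  "type": "truck",         "box": (120, 40, 560, 300)},
    {"kind": "wheel", "type": "concave_wheel", "box": (150, 250, 210, 310)},
    {"kind": "wheel", "type": "cam_wheel",     "box": (430, 250, 490, 310)},
]

# Split the detections into bodies and wheels for later binding.
bodies = [d for d in detections if d["kind"] == "body"]
wheels = [d for d in detections if d["kind"] == "wheel"]
```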
It should be noted that the axle on which a cam wheel sits usually has two wheels in total, i.e. a single wheel on each side, while the axle of a concave wheel has four wheels, i.e. dual wheels on each side. Since the present application images the vehicle side with a bullet camera, the visibility of the wheels is good, and concave/cam classification can be realized at the same time as wheel detection.
In some embodiments, when the image capturing apparatus acquires a plurality of video frames, for example, 25 video frames, within 1 second, the difference between two adjacent frames is small, and the present application may detect all the captured video frames or every other frame. For example, in the detection, the 1 st frame, the 3 rd frame, the 5 th frame, and the like are detected.
Step 601 may be performed by the body and wheel detection module 401 described above.
602. When detecting that at least one tracking target exists in the first video frame, the tracking target being a vehicle body or a wheel, the electronic device determines, for each subsequent video frame, the degree of coincidence between a first tracking target in the later video frame and a second tracking target in the earlier video frame.
When the camera device detects a vehicle body or wheel in the first video frame, it can record relevant attributes for the body and wheels, which may also be read as marking them. For example, as shown in fig. 9, the identifier of the vehicle body is recorded as 1, the identifier of a wheel as 2, and so on; the identifier of each wheel detected on the same vehicle is independent, and the position information (the coordinates of the four corner points of the rectangular frame, or of the pixel points on its four sides) and trajectory information of the body and wheels are recorded. The trajectory information can be understood, for example, as the coordinate information of the center point of the rectangular frame.
Then, the image pickup apparatus continues to detect each video frame captured later, and determines the coincidence degree of the first tracking target in the subsequent video frame and the second tracking target in the previous video frame to determine whether the tracking targets in the two frames are the same target. Thus, if the same target is determined, the identity and type of the first tracking target in the next video frame may be recorded as the identity and type of the second tracking target in the previous video frame (set forth in step 603), and the position information and trajectory information of the first tracking target in the current video frame may be recorded (set forth in step 604).
The degree of coincidence here can be understood as follows: taking the vehicle body as an example, since the vehicle is moving, if a sufficient proportion of the pixels in the rectangular frame of the body in the later video frame have the same positions and colour values as pixels in the rectangular frame of the body in the earlier video frame, the targets in the two video frames are considered the same target.
Step 602 may be performed by the target tracking module 402.
In some embodiments, the target tracking module 402 may include a target preprocessing subunit, a target matching subunit and a target tracking subunit. The target preprocessing subunit is used for filtering, in real time, the tracking targets detected by the vehicle body and wheel detection module 401 in the video frames: a tracking target is considered valid only after it has been detected in a certain accumulated number of frames; otherwise it is considered a false detection. For example, if a vehicle body is detected in a certain video frame but no target whose degree of coincidence with that body exceeds the preset threshold is detected in subsequent frames, the detected "body" may not actually be a vehicle body — it may be an environmental object or something else. The target matching subunit can be used to determine the degree of coincidence between the tracking targets in two video frames. The target tracking subunit can be used to track each detected tracking target, determine whether it is being stably tracked, and record its position and trajectory information while it is in a stable tracking state.
603. When the degree of coincidence is greater than or equal to a preset threshold, the electronic device determines that the first tracking target and the second tracking target are the same tracking target, and records the identifier and type of the first tracking target, which are the same as those of the second tracking target.
For example, the preset threshold is 80%, that is, when the coincidence degree of the pixel point of the first tracking target and the pixel point of the second tracking target is greater than or equal to 80%, it is determined that the first tracking target and the second tracking target are the same tracking target.
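The pixel-level degree of coincidence described above is commonly approximated at the box level by intersection-over-union (IoU). A minimal sketch under that assumption (the function names are illustrative, not from the disclosure):

```python
def box_iou(a, b):
    # Boxes as (x1, y1, x2, y2). Returns intersection-over-union in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def same_target(prev_box, next_box, threshold=0.8):
    # Mirrors the 80% preset threshold in the example above.
    return box_iou(prev_box, next_box) >= threshold
```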
If the second tracked object is identified as 1 and the type is a truck, then the first tracked object is also identified as 1 and the type is also a truck.
If the degree of coincidence is smaller than the preset threshold, the first tracking target is considered a new tracking target, and a new record of relevant attributes is created for it.
Step 603 may be performed by the target tracking module 402.
604. The electronic device determines whether the first tracking target is within the neighborhood threshold position range of the second tracking target, and when it determines that it is, records the position information of the first tracking target in the current video frame.
Considering that the algorithm program or network signal of the image pickup apparatus may be unstable, and that calculation errors may occur for an established tracking target during tracking, in order to confirm that a tracked target is still being stably tracked it is possible, besides detecting the target in the video frame and determining whether it is the same tracking target, to also determine whether the tracking state during detection is stable.
In some embodiments, the present application may employ a feature-point tracking method for this determination. When the second tracking target is detected in the earlier video frame, one pixel point in the second tracking target can be selected as a feature point. When the later video frame is captured, it can be determined, from the position of the rectangular frame of the second tracking target, whether rectangular frames of tracking targets exist within the neighborhood threshold position range of the second tracking target in the later frame. If such rectangular frames exist, the target frame closest to the feature point among them is determined, and it is checked whether the feature point lies within that target frame. If it does, the target frame overlaps the rectangular frame of the second tracking target, and the second tracking target is still in a stable tracking state. Since the first tracking target in the later video frame is the same as the second tracking target in the earlier one, it is thereby determined that the first tracking target is within the neighborhood threshold range of the second tracking target and that the target is still being stably tracked.
For example, suppose the picture of each video frame is divided into a plurality of grid cells, each occupying m units. If the rectangular frame of the second tracking target in the earlier video frame covers 3 × 3 cells, the neighborhood threshold position range of the second tracking target may be the area of 12 cells around those 3 × 3 cells. Among the frames within those 12 cells, the target frame 101 closest to the feature point of the second tracking target is found, and it is determined whether the feature point lies within the target frame 101; if so, the second tracking target is still being stably tracked.
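The feature-point check can be sketched as follows (illustrative Python with hypothetical names; `candidate_boxes` stands for the rectangular frames found within the neighborhood threshold position range):

```python
def point_in_box(pt, box):
    # Box as (x1, y1, x2, y2); point as (x, y).
    x, y = pt
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def stably_tracked(feature_pt, candidate_boxes):
    # Pick the candidate box whose centre is closest to the feature point,
    # then check that the feature point actually falls inside it.
    if not candidate_boxes:
        return False
    def centre_dist_sq(box):
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        return (cx - feature_pt[0]) ** 2 + (cy - feature_pt[1]) ** 2
    target = min(candidate_boxes, key=centre_dist_sq)
    return point_in_box(feature_pt, target)
```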
In some embodiments, when it is determined that a certain tracking target is still being stably tracked, position information and trajectory information of the first tracking target in the subsequent frame may be recorded. The position information of the first tracking target may be understood as coordinate information of four corner points of a rectangular frame in which the first tracking target is located. The trajectory of the first tracked target may be understood as coordinate information of a center point, a boundary point, and the like of a rectangular frame of the first tracked target.
Step 604 may be performed by the target tracking module 402.
605. The electronic device binds the vehicle bodies and wheels of the tracking targets in the first video frame and in each of the plurality of subsequently captured video frames, obtaining at least one whole vehicle target.
In some embodiments, the camera device binds the vehicle body and wheels of the tracking targets in each video frame according to the identifier and type of each tracking target, its position information in the current frame, and the positional relationship of the wheels to the vehicle body. Step 605 may be performed each time the camera device obtains a video frame.
It will be appreciated that, for a complete vehicle, the wheels are below the head and the trailer bed, or in other words below the vehicle body. Then, for a certain video frame, if a plurality of vehicle-body tracking targets and wheel tracking targets are obtained through detection, the camera device can bind bodies and wheels according to the positional relationship between the bodies and the wheels.
For example, as shown in fig. 10, assume that a certain video frame involves a following vehicle, and the tracking targets detected in the video frame include 2 vehicle-body targets and 4 wheel targets; the 2 body targets are identified as 1 and 3, the 4 wheel targets as 2, 4, 5 and 6, making 6 tracking targets in total. According to the positional relationship between wheels and body, wheels 2 and 5, located below body 1, can be bound with body 1, and wheels 4 and 6, located below body 3, can be bound with body 3.
Here, the binding may be understood as establishing an association relationship between the identifier of a vehicle body and the identifiers of its wheels; the association also records the type of the bound vehicle — for example, in the association of body 1 the vehicle type is a trailer, while in the association of body 3 the vehicle type is a car.
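The binding rule — a wheel belongs to the body directly above it — can be sketched as below. The containment test (wheel centre horizontally inside the body frame, wheel top at or below the body top) is a simplification of the positional relationship illustrated in fig. 10, and the function name is hypothetical:

```python
def bind_wheels_to_bodies(bodies, wheels):
    # bodies / wheels: {identifier: (x1, y1, x2, y2)} rectangular frames.
    bindings = {bid: [] for bid in bodies}
    for wid, (wx1, wy1, wx2, wy2) in wheels.items():
        cx = (wx1 + wx2) / 2  # horizontal centre of the wheel frame
        for bid, (bx1, by1, bx2, by2) in bodies.items():
            # Wheel centre within the body's horizontal span, below its top.
            if bx1 <= cx <= bx2 and wy1 >= by1:
                bindings[bid].append(wid)
                break
    return bindings
```

With the fig. 10 example (bodies 1 and 3, wheels 2, 4, 5, 6), this yields body 1 bound to wheels 2 and 5, and body 3 bound to wheels 4 and 6.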
Step 605 may be performed by the logic analysis module 403.
606. For a first whole vehicle target of the at least one whole vehicle target, the electronic device determines the movement direction of the first whole vehicle target, and when it determines, according to the movement direction and the position of the first whole vehicle target in the field of view of the camera device, that the first whole vehicle target has arrived, it begins jigsaw processing of the video frames associated with the first whole vehicle target.
Step 606 may be performed after the plurality of video frames are obtained by the camera device.
In some embodiments, the camera device determines the movement direction of the first whole vehicle target according to a plurality of pieces of historical position information of the tracking target in the first whole vehicle target; then, when it determines that the position information of the tracking target in the first whole vehicle target falls within the preset position range of the field of view for that movement direction, the camera device determines that the first whole vehicle target has arrived.
For example, if a body identified as 1 exists in all of the plurality of video frames, then starting from the m-th of those frames a sampling point in the body can be chosen. The change of the sampling point's coordinates across the video frames is then examined: when the vehicle moves left, the abscissa of the sampling point in an earlier frame minus that in a later frame is positive, which can be understood as leftward movement of the body. Conversely, if the difference is negative, the body is considered to be moving right. The moving direction of the body is the moving direction of the vehicle.
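The sign-of-difference rule above can be sketched as (illustrative function name; the history is the sampling point's abscissa per frame, oldest first):

```python
def movement_direction(x_history):
    # Per the text: earlier x minus later x positive in every step means
    # the abscissa is decreasing, i.e. the vehicle moves left.
    diffs = [prev - nxt for prev, nxt in zip(x_history, x_history[1:])]
    if diffs and all(d > 0 for d in diffs):
        return "left"
    if diffs and all(d < 0 for d in diffs):
        return "right"
    return "unknown"
```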
On the basis of the determined movement direction of the first whole vehicle target, as shown in (a) of fig. 11, assuming that the sampling point 122 (for example, at the head of the vehicle) in body 1 of the first whole vehicle target reaches the right-side 1/4 of the field of view, the image pickup device determines to start stitching the video frames associated with body 1 and sends a vehicle-arrival signal to the user device, which can record the vehicle arrival time, vehicle type, and so on. Assuming the video frames associated with body 1 include those containing the body identified as 1 and the wheels identified as 2 and 3, the image capturing apparatus performs jigsaw processing on the video frames whose tracking targets are 1, 2 and 3.
Step 606 may be performed by the logic analysis module 403.
607. When the electronic device determines, according to the position of the first whole vehicle target in the field of view, that the first whole vehicle target is driving away, it stops the jigsaw processing of the first whole vehicle target and obtains the complete jigsaw image of the first whole vehicle target.
Continuing with the example in step 606, as shown in fig. 11 (b), when another sampling point 123 of the vehicle body 1 is taken and it is determined that the sampling point 123 (e.g., at the tail of the vehicle) reaches the left side 1/4 of the field of view, the vehicle corresponding to the vehicle body 1 is considered to be about to drive away, and the video frame associated with the vehicle body 1 acquired at the time point of the drive-away may be considered as the last frame required for the puzzle. The camera device can then stitch the video frames associated with the body 1 and wheels 2, 3 into one complete stitched image. Such as the image shown in fig. 11 (c).
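The jigsaw step itself can be sketched in deliberately simplified form: assume each video frame contributes a strip of newly visible pixel columns of equal height, so the complete jigsaw image is the strips laid side by side. Real stitching, as in figs. 11(a)–(c), would additionally align overlapping regions between frames; the function name and pixel representation here are illustrative only:

```python
def stitch_strips(strips):
    # Each strip is a list of rows, each row a list of pixel values;
    # all strips share the same height. Concatenate row-wise so the
    # strips sit side by side in one wide image.
    height = len(strips[0])
    return [sum((strip[r] for strip in strips), []) for r in range(height)]
```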
Then, the camera device can send the complete jigsaw image to the user device for storage.
Step 607 may be performed by the logic analysis module 403.
608. The electronic device determines the number of axles and the wheel types of the first whole vehicle target, the wheel types including the concave wheel and the cam wheel; the detection of the wheels is realized based on bullet-camera imaging.
When the complete jigsaw image is obtained, the camera device determines the number of axles and the wheel types of the first whole vehicle target; the wheel types include concave wheels and cam wheels, and the wheel detection is realized based on bullet-camera imaging, i.e. imaging of the vehicle's side by the bullet camera.
When counting over the video frames of the first whole vehicle target, the camera device can collect the wheel identifiers appearing in those frames; wheels with the same identifier are counted as one wheel, so the number of wheels is equivalent to the number of axles, and the type of the wheels under each identifier is determined. For example, in fig. 12 the wheel types of the first whole vehicle target include both a concave wheel and a cam wheel. Alternatively, in other embodiments, the wheels of the first whole vehicle target may all be concave wheels, or all cam wheels.
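The axle count follows directly from the wheel identifiers: wheels sharing an identifier across frames are one physical (single-side) wheel, so the number of distinct identifiers equals the number of axles. A minimal sketch with a hypothetical function name:

```python
def count_axles(wheel_records):
    # wheel_records: (wheel_id, wheel_type) pairs accumulated across all
    # video frames of one whole vehicle target. Deduplicate by identifier.
    type_by_id = {}
    for wid, wtype in wheel_records:
        type_by_id[wid] = wtype
    return len(type_by_id), type_by_id
```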
609. The electronic device sends the complete jigsaw image, the number of axles and the wheel types of the first whole vehicle target to the user device.
Thus, when the user device receives the complete jigsaw image, the number of axles and the wheel types of the first whole vehicle target, it can record the complete jigsaw image of the first whole vehicle target and send the axle data and wheel types to the charging platform, and the charging platform charges according to the number of axles, or according to the number of axles together with the wheel types.
It should be noted that a vehicle charging system may include a plurality of camera devices for collecting vehicle information; steps 601 to 609 of the present application may be executed by one of them. The plurality of camera devices may further include a device for capturing the vehicle's license plate number and sending it to the user device, so that the user device can store the complete jigsaw image of the vehicle and at the same time establish a correspondence between the complete jigsaw image and the license plate number.
In addition, the charging platform in the present application can charge according to the number of wheel axles of the vehicle, and can also charge with reference to the passenger capacity of the vehicle.

Therefore, in some embodiments of the present application, when each video frame is obtained by the camera device, text recognition may be performed on the video frame to determine whether text indicating the passenger capacity exists in the video frame.
Accordingly, the method of the present application may further comprise:
when the camera device determines that a third tracking target whose type is a vehicle body exists in the first video frame or in any one of the plurality of video frames, performing text detection on the third tracking target, and determining whether text indicating the passenger capacity exists in the image of the third tracking target;

and when the camera device determines that text indicating the passenger capacity exists in the third tracking target, acquiring the number information within the region adjacent to the text region indicating the passenger capacity, and sending the number information to the user equipment.
The text recognition here may be performed by the text recognition module 404 described above.
For example, the present application may perform text detection on the third tracking target, which is a vehicle body, to identify whether the marking "approved passenger capacity" is printed on the third tracking target. When it is determined that this marking is printed on the third tracking target, the key information following it, that is, the number information, may be recorded in the related attributes of the third tracking target.
The character recognition technology here may be, for example, optical character recognition (OCR) or another technology capable of recognizing characters; the present application is not limited thereto. For example, for a two-axle passenger car, the obtained axle count, vehicle type and passenger capacity may assist the billing platform in fine-grained charging.
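The keyword-then-adjacent-region lookup described above can be sketched as follows; the OCR output format (a flat list of text boxes) and the keyword string are assumptions for illustration, since the application does not fix a particular OCR engine.

```python
import re

def extract_passenger_count(ocr_results, keyword="核载人数", max_gap=50):
    """Find the number printed next to the passenger-capacity marking.

    `ocr_results` is assumed to be a list of (text, (x, y, w, h)) pairs
    produced by an OCR engine run on the vehicle-body tracking target.
    The digits are looked for inside the keyword's own text box, or in
    boxes lying just to its right (within `max_gap` pixels, roughly on
    the same line).  Returns None when no marking is found.
    """
    for text, (x, y, w, h) in ocr_results:
        if keyword not in text:
            continue
        # Case 1: the digits were recognised inside the same text box.
        match = re.search(re.escape(keyword) + r"\D*(\d+)", text)
        if match:
            return int(match.group(1))
        # Case 2: scan boxes horizontally adjacent to the keyword region.
        for other, (ox, oy, _, _) in ocr_results:
            if 0 <= ox - (x + w) <= max_gap and abs(oy - y) <= h:
                digits = re.search(r"\d+", other)
                if digits:
                    return int(digits.group())
    return None
```

The returned number would then be recorded in the related attributes of the third tracking target and forwarded to the user equipment with the complete jigsaw image.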
Therefore, the embodiments of the present application can use a multi-video-frame scheme: detection and classification of the vehicle body and the wheels are performed on each video frame imaged by the box camera, and tracking and binding of the vehicle and the wheels are realized. Also, as the vehicle arrives and departs, the number of wheel axles, the tire types, the passenger capacity and the like of the vehicle may be given to the user equipment. This multi-frame video scheme can solve the widespread problem that single-frame imaging of various lenses can hardly cover a complete vehicle body. Moreover, by adopting the multi-frame video scheme, the capture of vehicle-arrival and vehicle-departure signals within the field of view can be realized without the traditional sensor scheme for detecting vehicle arrival and departure; in actual product tests, the accuracy of the present application can reach 99% to 100%, meeting the high-accuracy requirements of toll stations.

In addition, the present application performs wheel classification on box camera imaging, which can solve the problem that dual wheels on one side of the vehicle cannot be distinguished in the field-of-view image, improves axle-counting accuracy, and makes the wheel information output by the product richer, which can assist applications such as vehicle overload calculation. Moreover, the adopted tracking-target scheme can handle the problem of one vehicle closely following another, so that wheel axles are not bound to the wrong vehicle and the axle count of each vehicle is more accurate.

Moreover, by providing the passenger-capacity information to the charging platform, the charging platform can apply subdivided charging to two-axle passenger cars.

In a word, the present application has universality for entrances and exits that charge by vehicle type: it can obtain the complete jigsaw image of a vehicle, give the arrival and departure signals of the vehicle in real time, and synchronously give the number of wheel axles, thereby providing an effective guarantee for efficient charging and improving the efficient operation of a toll station. At the same time, based on the counting of wheel axles and the recognition of the passenger capacity, the present application can effectively support newly added functions of the camera device.
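The coincidence-degree matching that underlies the tracking (see claim 3) can be sketched as an intersection-over-union comparison between frames; the data layout (`target_id -> (type, box)`) and the 0.5 threshold are illustrative assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def update_tracks(prev_targets, detections, threshold=0.5):
    """Match current-frame detections to previous-frame tracking targets.

    A detection whose overlap (coincidence degree) with a previous target
    meets the threshold inherits that target's identifier and type;
    otherwise it is registered as a new tracking target.  Both
    `prev_targets` and the return value map target_id -> (type, box).
    """
    next_id = max(prev_targets, default=0) + 1
    updated = {}
    for det_type, det_box in detections:
        best_id, best_iou = None, 0.0
        for tid, (_, box) in prev_targets.items():
            overlap = iou(det_box, box)
            if overlap > best_iou:
                best_id, best_iou = tid, overlap
        if best_id is not None and best_iou >= threshold:
            # Same tracking target: keep the recorded identifier and type.
            updated[best_id] = (prev_targets[best_id][0], det_box)
        else:
            # Coincidence degree below threshold: establish a new target.
            updated[next_id] = (det_type, det_box)
            next_id += 1
    return updated
```

A vehicle-body box that shifts slightly between frames keeps its identifier, while a newly appearing wheel gets a fresh one; this is what prevents axles of a closely following vehicle from being bound to the wrong whole vehicle target.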
It is to be understood that, in order to implement the above-described functions, the image pickup apparatus includes hardware and/or software modules corresponding to the respective functions. The present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the exemplary algorithm steps described with the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The present embodiment may perform division of functional modules for the image capturing apparatus according to the above method example, for example, each functional module may be divided for each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of dividing each functional module by corresponding functions, fig. 13 shows a possible composition diagram of the electronic device 130 involved in the above embodiment. As shown in fig. 13, the electronic device 130 may include: a binding unit 1301, a jigsaw unit 1302, a sending unit 1303, a detection unit 1304, and an attribute acquisition unit 1305.

Among them, the binding unit 1301 may be used to support the electronic device 130 in performing the above-described steps 501, 605, etc., and/or other processes for the techniques described herein.

The jigsaw unit 1302 may be used to support the electronic device 130 in performing the above-described steps 502, 606, 607, etc., and/or other processes for the techniques described herein.
The sending unit 1303 may be used to support the electronic device 130 in performing the above-described steps 503, 609, etc., and/or other processes for the techniques described herein.
The detection unit 1304 may be used to support the electronic device 130 in performing the above-described steps 601, 602, etc., and/or other processes for the techniques described herein.
The attribute acquisition unit 1305 may be used to enable the electronic device 130 to perform the above-described steps 603, 604, 608, etc., and/or other processes for the techniques described herein.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The electronic device 130 provided by the embodiment is used for executing the vehicle identification method, so that the same effects as the implementation method can be achieved.
Where an integrated unit is employed, the electronic device 130 may include a processing module, a storage module, and a communication module. The processing module may be configured to control and manage the actions of the electronic device 130, and for example, may be configured to support the electronic device 130 in executing the steps executed by the binding unit 1301, the jigsaw unit 1302, the detection unit 1304, and the attribute acquisition unit 1305. The storage module may be used to support the electronic device 130 in storing program code, data, and the like. The communication module may be used to support communication of the electronic device 130 with other devices, such as with the user equipment.
The processing module may be a processor or a controller, which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
In an embodiment, when the processing module is a video signal processor, the storage module is a memory, and the communication module is a transceiver, the electronic device according to this embodiment may be a video camera having the structure shown in fig. 14, and the video camera further includes a camera for acquiring a video frame.
Embodiments of the present application also provide an electronic device including one or more processors and one or more memories. The one or more memories are coupled to the one or more processors for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the associated method steps described above to implement the vehicle identification method in the embodiments described above.
Embodiments of the present application also provide a computer storage medium, where computer instructions are stored, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the above related method steps to implement the vehicle identification method in the above embodiments.
Embodiments of the present application further provide a computer program product, which when running on a computer, causes the computer to execute the above related steps to implement the vehicle identification method executed by the electronic device in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the vehicle identification method executed by the electronic equipment in the above method embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Another embodiment of the present application provides a system, which may include the above electronic device, a user device, and a billing platform, and may be used to implement the above vehicle identification method.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A vehicle identification method, characterized in that the method comprises:
the electronic device detects each acquired video frame, and when it is detected that at least one tracking target whose type is a vehicle body or a wheel exists in a first video frame, performs vehicle body and wheel binding on the tracking targets in the first video frame and in each of a plurality of subsequently captured video frames to obtain at least one whole vehicle target;
for a first whole vehicle target of the at least one whole vehicle target, the electronic device determines the movement direction of the first whole vehicle target, and when determining, according to the movement direction and the position of the first whole vehicle target in the field of view of the camera device, that the first whole vehicle target arrives, performs jigsaw stitching on the video frames associated with the first whole vehicle target;

and when the electronic device determines, according to the position of the first whole vehicle target in the field of view, that the first whole vehicle target drives away, stopping the jigsaw stitching of the first whole vehicle target to obtain a complete jigsaw image of the first whole vehicle target.
2. The method of claim 1, further comprising:
when the complete jigsaw image is obtained, the electronic device determines the number of wheel axles and the wheel types of the first whole vehicle target, the wheel types comprising a concave wheel and a convex wheel;

the electronic device sends the number of wheel axles and the wheel types of the first whole vehicle target to the user equipment.
3. The method of claim 1 or 2, wherein before the binding of the vehicle body and the wheel of the tracking target in the first video frame and each video frame of the plurality of video frames obtained by subsequent shooting, the method further comprises:
for each video frame, the electronic device determines the coincidence degree of a first tracking target in a next video frame with a second tracking target in a previous video frame;

when the coincidence degree is greater than or equal to a preset threshold value, the electronic device determines that the first tracking target and the second tracking target are the same tracking target and records the identifier and the type of the second tracking target, the identifier and the type of the first tracking target being the same as the identifier and the type of the second tracking target;

the electronic device determines whether the first tracking target is within a neighborhood threshold position range of the second tracking target;

when determining that the first tracking target is within the neighborhood threshold position range of the second tracking target, the electronic device records the position information of the first tracking target in the current video frame;

when the coincidence degree is smaller than the preset threshold value, the electronic device determines that the first tracking target and the second tracking target are not the same tracking target, and establishes the first tracking target as a new tracking target.
4. The method of claim 3, wherein the binding of the vehicle body and the wheel to the tracking target in the first video frame and each video frame of the plurality of video frames obtained by subsequent shooting comprises:
and the electronic equipment binds the vehicle body and the wheels of the tracking target in each video frame according to the identification and the type of the tracking target in each video frame, the position information in the current frame and the position relation of the wheels on the vehicle body.
5. The method of claim 3, wherein the electronic device determining the movement direction of the first whole vehicle target, and determining that the first whole vehicle target arrives according to the movement direction and the position of the first whole vehicle target in the field of view of the camera device, comprises:

the electronic device determines the movement direction of the first whole vehicle target according to a plurality of pieces of historical position information of the tracking targets in the first whole vehicle target;

and the electronic device determines that the first whole vehicle target arrives when determining, according to the movement direction, that the position information of the tracking targets in the first whole vehicle target is within a preset position range of the field of view in the movement direction.
6. The method of claim 1, further comprising:
when the electronic device determines that a third tracking target whose type is a vehicle body exists in the first video frame or in any one of the plurality of video frames, performing text detection on the third tracking target, and determining whether text indicating the passenger capacity exists in the image of the third tracking target;

and when the electronic device determines that text indicating the passenger capacity exists in the third tracking target, acquiring the number information within the region adjacent to the text region indicating the passenger capacity, and sending the number information to the user equipment when the complete jigsaw image is obtained.
7. An electronic device, comprising:
the binding unit is used for detecting each captured video frame, and when it is detected that at least one tracking target whose type is a vehicle body or a wheel exists in a first video frame, performing vehicle body and wheel binding on the tracking targets in the first video frame and in each of a plurality of subsequently captured video frames to obtain at least one whole vehicle target;

the jigsaw unit is used for determining the movement direction of a first whole vehicle target of the at least one whole vehicle target, and when it is determined, according to the movement direction and the position of the first whole vehicle target in the field of view of the camera device, that the first whole vehicle target arrives, performing jigsaw stitching on the video frames associated with the first whole vehicle target;

the jigsaw unit is further used for stopping the jigsaw stitching of the first whole vehicle target when it is determined, according to the position of the first whole vehicle target in the field of view, that the first whole vehicle target drives away, to obtain a complete jigsaw image of the first whole vehicle target.
8. The electronic device of claim 7, further comprising a detection unit to:
determining the number of wheel axles and the wheel types of the first whole vehicle target when the complete jigsaw image is obtained, the wheel types comprising a concave wheel and a convex wheel;

the sending unit is further configured to send the number of wheel axles and the wheel types of the first whole vehicle target to the user equipment.
9. The electronic device according to claim 7 or 8, further comprising an attribute acquisition unit configured to:
for each video frame, determining the coincidence degree of a first tracking target in the next video frame and a second tracking target in the previous video frame;
when the coincidence degree is greater than or equal to a preset threshold value, determining that the first tracking target and the second tracking target are the same tracking target, and recording the identifier and the type of the second tracking target, the identifier and the type of the first tracking target being the same as the identifier and the type of the second tracking target;

determining whether the first tracking target is within a neighborhood threshold position range of the second tracking target;

when determining that the first tracking target is within the neighborhood threshold position range of the second tracking target, recording the position information of the first tracking target in the current video frame;

and when the coincidence degree is smaller than the preset threshold value, determining that the first tracking target and the second tracking target are not the same tracking target, and establishing the first tracking target as a new tracking target.
10. A computer-readable storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-6.
CN202111493952.8A 2021-12-08 2021-12-08 Vehicle identification method and device Pending CN114332681A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111493952.8A CN114332681A (en) 2021-12-08 2021-12-08 Vehicle identification method and device

Publications (1)

Publication Number Publication Date
CN114332681A true CN114332681A (en) 2022-04-12

Family

ID=81051435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111493952.8A Pending CN114332681A (en) 2021-12-08 2021-12-08 Vehicle identification method and device

Country Status (1)

Country Link
CN (1) CN114332681A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014241134A (en) * 2013-06-11 2014-12-25 ゼロックス コーポレイションXerox Corporation Methods and systems of classifying vehicles using motion vectors
CN111860384A (en) * 2020-07-27 2020-10-30 上海福赛特智能科技有限公司 Vehicle type recognition method
CN112836631A (en) * 2021-02-01 2021-05-25 南京云计趟信息技术有限公司 Vehicle axle number determining method and device, electronic equipment and storage medium
CN112966582A (en) * 2021-02-26 2021-06-15 北京卓视智通科技有限责任公司 Vehicle type three-dimensional recognition method, device and system, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439783A (en) * 2022-09-01 2022-12-06 苏州思卡信息系统有限公司 Vehicle identification and tracking detection method and device
CN115439783B (en) * 2022-09-01 2023-10-31 苏州思卡信息系统有限公司 Detection method and equipment for vehicle identification and tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination