CN111383460B - Vehicle state discrimination method and device and computer storage medium - Google Patents
- Publication number
- CN111383460B (application CN202010482543.7A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- target
- frame
- image group
- flag bit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G08G1/0175—Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules
- G08G1/056—Detecting movement of traffic to be counted or controlled, with provision for distinguishing direction of travel
- G06T7/70—Determining position or orientation of objects or cameras
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/63—Scene text, e.g. street names
- G06V20/625—License plates
- G06V2201/07—Target detection
- G06V2201/08—Detecting or categorising vehicles
- G06T2207/10016—Video; Image sequence
- G06T2207/30232—Surveillance
Abstract
The application discloses a vehicle state discrimination method, a device, and a computer storage medium. The vehicle state discrimination method comprises the following steps: continuously capturing a preset number of frames of monitoring video images to form a target monitoring image group; acquiring a category flag bit, a vehicle frame, and a license plate frame of the target monitoring image group through a target detection model; acquiring a vehicle displacement vector based on the target monitoring image group, and acquiring a displacement flag bit based on the vehicle displacement vector; when the category flag bit is consistent with the displacement flag bit, outputting the running state of the target vehicle; calculating a license plate offset vector based on the vehicle frame and the license plate frame of the target monitoring image group, and outputting the driving direction of the target vehicle from the license plate offset vector; and acquiring the vehicle state of the target vehicle based on the running state and driving direction of the target vehicle. By this method, the entry and exit states of vehicles can be judged rapidly and accurately from the flag bit information, the vehicle displacement vector, and the license plate offset vector.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for determining a vehicle state, and a computer storage medium.
Background
With the increasing deployment of Intelligent Traffic Systems (ITS) in traffic scenes such as electronic police, checkpoints, entrances and exits, and parking lots, determining the driving state of vehicles becomes more and more important, especially in entrance/exit charging scenes or traffic flow statistics scenes where entering and exiting vehicles are mixed. At present, most domestic entrance and exit scenes adopt a hardware implementation based on a ground induction coil. This technology is limited by hardware shortcomings such as the sensitivity of the coil sensor and its signal detection, so the driving state of a vehicle cannot be judged accurately and accurate charging or traffic flow information cannot be output. Therefore, against the background of the current era of artificial intelligence, a stable and reliable method for judging the driving direction and state of a vehicle is particularly important for improving the intelligence level of ITS.
Disclosure of Invention
The application provides a vehicle state judging method, a vehicle state judging device and a computer storage medium, which are used for solving the problem that in the prior art, the running state of a vehicle cannot be accurately judged, so that accurate charging information or traffic flow information of the vehicle cannot be output.
In order to solve the technical problem, the application adopts a technical scheme that: provided is a vehicle state determination method including:
continuously capturing monitoring video images with preset frame numbers to form a target monitoring image group;
acquiring a category flag bit, a vehicle frame and a license plate frame of the target monitoring image group through a target detection model;
acquiring a vehicle displacement vector based on the target monitoring image group, and acquiring a displacement flag bit based on the vehicle displacement vector;
when the category flag bit is consistent with the displacement flag bit, outputting the running state of the target vehicle;
calculating a license plate offset vector based on the vehicle frame and the license plate frame of the target monitoring image group, and outputting the driving direction of a target vehicle by the license plate offset vector;
and acquiring the vehicle state of the target vehicle based on the running state and the running direction of the target vehicle.
Wherein the step of obtaining a displacement flag based on the vehicle displacement vector comprises:
when the vehicle displacement vector is in the positive direction, judging that the running state of the target vehicle is driving in, and setting the displacement flag bit to be 0;
and when the vehicle displacement vector is in a negative direction, judging that the running state of the target vehicle is running out, and setting the displacement flag bit to be 1.
After the step of obtaining the displacement flag bit based on the vehicle displacement vector, the vehicle state discrimination method further comprises:
when the category flag bit is inconsistent with the displacement flag bit, calculating a tail lamp mark bit of the target monitoring image group through a tail lamp detection algorithm;
outputting a driving state of the target vehicle based on the tail lamp flag bit.
Wherein, the step of calculating the tail lamp flag bit of the target monitoring image group through the tail lamp detection algorithm comprises the following steps:
when no tail lamp detection frame exists in a monitoring video image in the target monitoring image group, setting the tail lamp flag bit of the monitoring video image to 0;
when a tail lamp detection frame exists in a monitoring video image in the target monitoring image group, setting the tail lamp flag bit of the monitoring video image to 1;
calculating the accumulated sum of the tail lamp flag bits of all monitoring video images in the target monitoring image group;
when the accumulated sum is more than half of the frame number of all the monitoring video images, setting the tail lamp flag bit of the target monitoring image group to 1;
and when the accumulated sum is less than or equal to half of the frame number of all the monitoring video images, setting the tail lamp flag bit of the target monitoring image group to 0.
Wherein the step of calculating a license plate offset vector based on the vehicle frame and the license plate frame of the target monitoring image group and outputting the driving direction of the target vehicle from the license plate offset vector comprises:
Acquiring a coordinate vector of the upper left corner and a coordinate vector of the lower right corner of the vehicle frame;
acquiring a coordinate vector of the upper left corner and a coordinate vector of the lower right corner of the license plate frame;
calculating a first license plate offset vector based on the upper left corner coordinate vector of the vehicle frame, the upper left corner coordinate vector of the license plate frame and the lower right corner coordinate vector;
calculating a second license plate offset vector based on the lower-right corner coordinate vector of the vehicle frame, the upper-left corner coordinate vector and the lower-right corner coordinate vector of the license plate frame;
outputting the driving direction of the target vehicle according to the magnitude relation of the first license plate offset vector and the second license plate offset vector.
The step of obtaining the category flag bit of the target monitoring image group through the target detection model comprises the following steps:
detecting monitoring video images with continuous preset frame numbers through the target detection model;
acquiring the category flag bit of each frame of monitoring video image, and accumulating the category flag bits of the monitoring video images with the preset frame number;
and outputting the category flag bit of the target monitoring image group according to the accumulation processing result of the category flag bit of the monitoring video image with the preset frame number.
Wherein the step of continuously capturing monitoring video images with a preset number of frames to form a target monitoring image group comprises:
presetting a rule line based on camera parameters;
acquiring a monitoring video image of the camera, and acquiring a vehicle frame in the monitoring video image through the target detection model;
when the vehicle frame of the monitoring video image touches the rule line, continuously capturing the monitoring video images with preset frame numbers to form the target monitoring image group.
Wherein the step of presetting a ruled line based on camera parameters comprises:
obtaining an image resolution based on the camera parameters, wherein the image resolution comprises a horizontal resolution and a vertical resolution;
and presetting the ruled line in the image based on a preset scale factor and the vertical resolution.
In order to solve the above technical problem, another technical solution adopted by the present application is: a vehicle state discrimination device is provided, the vehicle state discrimination device comprising a processor and a memory; the memory stores a computer program, and the processor executes the computer program to realize the steps of the above-described vehicle state discrimination method.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a computer storage medium having a computer program stored therein, the computer program when executed implementing the steps of the vehicle state discrimination method.
Different from the prior art, the beneficial effect of this application is: the vehicle state discrimination device continuously captures a preset number of frames of monitoring video images to form a target monitoring image group; acquires the category flag bit, vehicle frame, and license plate frame of the target monitoring image group through a target detection model; acquires a vehicle displacement vector based on the target monitoring image group, and a displacement flag bit based on the vehicle displacement vector; outputs the running state of the target vehicle when the category flag bit is consistent with the displacement flag bit; calculates a license plate offset vector based on the vehicle frame and license plate frame of the target monitoring image group, and outputs the driving direction of the target vehicle from the license plate offset vector; and acquires the vehicle state of the target vehicle based on the running state and driving direction of the target vehicle. In this way, the entry and exit states of vehicles can be judged rapidly and accurately from the flag bit information, the vehicle displacement vector, and the license plate offset vector.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a method for determining a vehicle state according to the present application;
FIG. 2 is a schematic flow chart of step 101 in FIG. 1;
FIG. 3 is a schematic illustration of ruled lines in an image of a vehicle exiting as provided herein;
FIG. 4 is a schematic illustration of a ruled line in a vehicle-entering image provided by the present application;
FIG. 5 is a schematic flow chart of step 105 of FIG. 1;
FIG. 6 is a schematic diagram of a deviation displacement of a license plate provided herein;
fig. 7 is a schematic structural diagram of an embodiment of a vehicle state determination device provided in the present application;
FIG. 8 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to solve the problem that the prior art cannot accurately judge the running state of a vehicle and cannot output accurate charging information or traffic flow information of the vehicle, the application provides a vehicle state judging method. Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a vehicle state determination method provided in the present application.
The vehicle state judging method is applied to a vehicle state judging device, wherein the vehicle state judging device can be a server, a terminal device, or a system formed by the server and the terminal device in a matched mode. Accordingly, each part, such as each unit, subunit, module, and submodule, included in the vehicle state determination device may be all provided in the server, may be all provided in the terminal device, and may be provided in the server and the terminal device, respectively.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein.
In the following description of the embodiments, the vehicle state discrimination device is used throughout as the execution subject of the vehicle state discrimination method.
As shown in fig. 1, the vehicle state determination method of the present embodiment specifically includes the following steps:
S101: Continuously capture a preset number of frames of monitoring video images to form a target monitoring image group.
Wherein, the state discrimination device presets a confidence threshold for the vehicle frame, a displacement-vector statistical frame number Num_CNT, and ruled lines in the monitoring video images.
The vehicle frame confidence threshold is used to select the start frame of the target monitoring image group. Specifically, the target detection model performs vehicle detection on the monitoring video images acquired by the camera in real time, and outputs the vehicle frame and its confidence in each image. The state discrimination device takes the first monitoring video image whose confidence is greater than the preset confidence threshold as the start frame of the target monitoring image group and, starting from that frame, continuously captures Num_CNT frames of monitoring video images to form the target monitoring image group.
In this embodiment, reference may be specifically made to fig. 2 for a step of setting a rule line in a monitoring video image, where fig. 2 is a specific implementation of step 101 in fig. 1, and the method in fig. 2 includes the following steps:
S201: Obtain the image resolution based on the camera parameters.
The state discrimination device acquires the camera parameters of the camera and determines from them that the image resolution of the captured monitoring video image is w × h, where w is the horizontal resolution and h is the vertical resolution.
S202: ruled lines are preset in the image based on a preset scale factor and vertical resolution.
The ruled lines preset in this embodiment mainly include a drive-in pre-judgment line, a drive-out pre-judgment line, and a snapshot start line; refer to fig. 3 and 4. In other embodiments, ruled lines may be added, removed, or modified as needed, and are not described in detail herein.
Specifically, when the image resolution of the monitoring video image is w × h, the positions of the ruled lines are calculated as:

snap_line = snap_ratio × h
in_line = in_ratio × h
out_line = out_ratio × h

where snap_line is the snapshot start line, in_line is the drive-in pre-judgment line, out_line is the drive-out pre-judgment line, snap_ratio is the proportionality coefficient of the snapshot line with respect to the image height, in_ratio is the proportionality coefficient of the drive-in pre-judgment line with respect to the image height, and out_ratio is the proportionality coefficient of the drive-out pre-judgment line with respect to the image height.
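The line placement above can be sketched directly. The default ratio values below are illustrative assumptions, not values disclosed in the patent:

```python
def ruled_lines(h, snap_ratio=0.75, in_ratio=0.5, out_ratio=0.25):
    """Return the row positions of the three ruled lines for an image
    of vertical resolution h (rows grow downward from the top edge)."""
    snap_line = int(snap_ratio * h)  # snapshot start line
    in_line = int(in_ratio * h)      # drive-in pre-judgment line
    out_line = int(out_ratio * h)    # drive-out pre-judgment line
    return snap_line, in_line, out_line
```

For a 1920 × 1080 camera this places the snapshot line at row 810; only the vertical resolution h enters the calculation, matching step S202.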
S203: Acquire a monitoring video image from the camera, and obtain the vehicle frame in the monitoring video image through the target detection model.
S204: When the vehicle frame of the monitoring video image touches the ruled line, continuously capture a preset number of frames of monitoring video images to form the target monitoring image group.
When the state discrimination device judges that the vehicle frame in the monitoring video image collected by the camera touches the ruled line of step 202, namely the snapshot start line in fig. 3 and 4, it continuously captures Num_CNT frames of monitoring video images to form the target monitoring image group.
Further, the state discrimination device may determine the start frame of the target monitoring image group by combining two factors, the ruled line and the vehicle frame confidence: when the confidence of a vehicle frame near the ruled line is greater than the preset confidence threshold, the current frame is set as the start frame of the target monitoring image group. This filters out unreliable vehicle frame detections in the initial state, fixes the start frame for the subsequent displacement vector calculation, and ensures the stability of the target vehicle's initial state.
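A minimal sketch of this combined trigger, assuming an (x1, y1, x2, y2) box format and a pixel tolerance for "near the ruled line" (both are assumptions; the patent does not specify them):

```python
def is_start_frame(box, score, snap_line, conf_thresh=0.6, tol=10):
    """Start the target monitoring image group only when the vehicle
    frame's lower boundary y2 touches the snapshot line (within tol
    pixels or past it) AND the detection confidence is high enough."""
    _, _, _, y2 = box
    touches = abs(y2 - snap_line) <= tol or y2 >= snap_line
    return touches and score > conf_thresh
```

Only a frame satisfying both conditions opens the Num_CNT-frame capture window, which is what keeps the displacement statistics anchored to a reliable initial detection.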
S102: Acquire the category flag bit, the vehicle frame, and the license plate frame of the target monitoring image group through the target detection model.
The state discrimination device inputs the monitoring video images continuously acquired in the target monitoring image group into the target detection model, and the model detects and localizes the vehicle in each monitoring video image to obtain the vehicle frame and category flag bit of the target vehicle. This embodiment sets two different category flags according to whether the head or the tail of the vehicle appears in the monitoring video image: for example, the category flag label_ID of a monitoring video image showing the head of the target vehicle is set to 0, and the label_ID of a monitoring video image showing the tail of the target vehicle is set to 1.
After the category flag bits of all the monitoring video images are obtained, the state discrimination device establishes a category queue and stores the category flag bits label_ID of the target vehicle from the start frame to the end frame. Then, the state discrimination device performs a sum operation over the category queue to calculate and output the category flag bit C_Flag of the target monitoring image group. Specifically, when the accumulated sum > Num_CNT/2, the category flag bit C_Flag is set to 1 and the vehicle position is marked as the tail of the vehicle; when the accumulated sum < Num_CNT/2, the category flag bit C_Flag is set to 0 and the vehicle position is marked as the head of the vehicle.
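The majority vote over the category queue can be sketched as follows (ties are mapped to 0 here; the patent leaves the sum = Num_CNT/2 case unspecified):

```python
def category_flag(label_queue):
    """Majority vote over per-frame category flags.

    label_queue: list of per-frame label_ID values,
    0 = head of vehicle, 1 = tail of vehicle."""
    total = sum(label_queue)      # accumulated sum of label_ID
    num_cnt = len(label_queue)    # number of frames in the group
    if total > num_cnt / 2:
        return 1                  # majority shows the tail -> C_Flag = 1
    return 0                      # otherwise the head      -> C_Flag = 0
```

The vote makes the group-level flag robust to a few misclassified frames, which is the point of buffering Num_CNT frames rather than trusting a single detection.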
In this application, the target detection model is obtained through a deep-learning-based YOLOv3 target detection training method: CNN (convolutional neural network) parameters are configured and the network is trained on the training set data.
S103: Acquire a vehicle displacement vector based on the target monitoring image group, and acquire a displacement flag bit based on the vehicle displacement vector.
Wherein, the state discrimination device counts the sign of the target vehicle displacement vector Δdy while the detected target vehicle moves from the start frame to the end frame, and sets the output of the displacement flag bit Δdy_Flag according to the sign of Δdy.
Specifically, when the target vehicle displacement vector Δdy is positive, the displacement flag bit Δdy_Flag of the target vehicle is set to 0 and the running state of the target vehicle is judged to be the drive-in state; when the target vehicle displacement vector Δdy is negative, the displacement flag bit Δdy_Flag is set to 1 and the running state of the target vehicle is judged to be the drive-out state.
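A sketch of the displacement flag, assuming (x1, y1, x2, y2) vehicle frames and image rows growing downward so that a vehicle approaching the camera has positive Δdy (the sign convention is an assumption):

```python
def displacement_flag(boxes):
    """boxes: list of (x1, y1, x2, y2) vehicle frames, one per frame
    of the target monitoring image group, start frame first.

    Returns 0 (positive net displacement -> driving in) or
    1 (negative net displacement -> driving out)."""
    centers = [(y1 + y2) / 2 for _, y1, _, y2 in boxes]
    dy = centers[-1] - centers[0]  # net vertical displacement, start -> end
    return 0 if dy > 0 else 1
```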
In step 102, the state discrimination device obtains the category flag label_ID of the target vehicle and, under the condition that normal traffic rules are followed, derives the running state of the target vehicle from label_ID. For example, when label_ID = 0, the running state of the target vehicle is the drive-in state; when label_ID = 1, the running state of the target vehicle is the drive-out state.
In summary, the state discrimination device obtains the running state of the target vehicle using both the displacement flag bit and the category flag bit. Before outputting the final running state, the state discrimination device needs to check whether the two flags indicate a consistent running state through a logic operation between the displacement flag bit and the category flag bit. Since both flags use the same encoding, the consistency check is an exclusive-NOR of the two bits: when the result is 1, the displacement flag bit and the category flag bit agree, and the process proceeds to step 104.
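Agreement of two one-bit flags is an exclusive-NOR, which can be sketched as:

```python
def flags_consistent(c_flag, d_flag):
    """XNOR of the category flag and the displacement flag:
    1 when the two bits are equal (consistent), 0 otherwise."""
    return 1 if c_flag == d_flag else 0
```

Both (0, 0) (both flags say drive-in) and (1, 1) (both say drive-out) yield 1; any disagreement yields 0 and triggers the tail lamp fallback below.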
When the logic processing result is 0, that is, the displacement flag bit and the category flag bit are not consistent, the state discrimination device may further calculate a tail light flag bit of a multi-frame surveillance video image in the target surveillance image group by a tail light detection algorithm, and discriminate the driving state of the target vehicle by combining the tail light flag bit of the multi-frame surveillance video image.
Specifically, the state discrimination device uses all the monitoring video images of the target monitoring image group as input images for tail lamp detection, establishes a tail lamp queue Tail_Queue, and traverses all the monitoring video images. When a tail lamp detection frame exists in the current frame, the tail lamp flag bit of that monitoring video image is set to 1 and the result is stored into the tail lamp queue; when no tail lamp detection frame exists in the current frame, the tail lamp flag bit of that monitoring video image is set to 0 and the result is stored into the tail lamp queue.
After the traversal, the state discrimination device sums the tail lamp queue to obtain sum. If sum > Num_CNT/2, the final tail lamp flag bit, i.e. the tail lamp flag bit of the target monitoring image group, is set to 1; if sum <= Num_CNT/2, the final tail lamp flag bit, i.e. the tail lamp flag bit of the target monitoring image group, is set to 0.
When the final tail lamp flag bit is 1, it is judged that the tail of the target vehicle is present, and the running state of the target vehicle is confirmed to be the drive-out state; when the final tail lamp flag bit is 0, it is judged that the head of the target vehicle is present, and the running state of the target vehicle is confirmed to be the drive-in state.
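The tail lamp fallback reduces to a majority vote over per-frame detections, sketched here with per-frame lists of tail lamp detection boxes (the input format is an assumption):

```python
def tail_light_flag(detections_per_frame):
    """detections_per_frame: one list of tail lamp detection boxes per
    monitoring video image; an empty list means no tail lamp was seen.

    Returns the group-level tail lamp flag: 1 (tail visible -> drive-out)
    only when more than half of the frames contain a tail lamp box."""
    queue = [1 if boxes else 0 for boxes in detections_per_frame]  # Tail_Queue
    return 1 if sum(queue) > len(queue) / 2 else 0
```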
S104: Output the running state of the target vehicle when the category flag bit is consistent with the displacement flag bit.
S105: Calculate a license plate offset vector based on the vehicle frame and the license plate frame of the target monitoring image group, and output the driving direction of the target vehicle from the license plate offset vector.
Wherein, in the above steps, when the lower boundary of the vehicle frame touches the snapshot line, the state discrimination device continuously captures and caches Num_CNT frames of monitoring video images, avoiding accidental results caused by a single frame image. Further, the state discrimination device splits the vehicle frame of every cached image into upper and lower halves, performs license plate detection on the lower half of the vehicle frame, and stores the license plate detection results.
The state discrimination device calculates a license plate offset vector according to the detection result of the license plate frame, and outputs the driving direction of the target vehicle according to the license plate offset vector. For the specific calculation steps, refer to fig. 5; fig. 5 is a specific implementation of step 105 in fig. 1, and the method in fig. 5 includes the following steps:
S301: Acquire the coordinate vector of the upper left corner and the coordinate vector of the lower right corner of the vehicle frame.
As shown in fig. 6, the state discrimination device obtains the coordinate vectors of the upper left corner and the lower right corner of the vehicle frame.
s302: and acquiring the coordinate vector of the upper left corner and the coordinate vector of the lower right corner of the license plate frame.
As shown in fig. 6, the state discrimination device obtains the coordinate vectors of the upper left corner and the lower right corner of the license plate frame.
S303: Calculate a first license plate offset vector based on the upper left corner coordinate vector of the vehicle frame and the upper left and lower right corner coordinate vectors of the license plate frame.
S304: calculating a second license plate offset vector based on the lower right corner coordinate vector of the vehicle frame and the upper left corner and lower right corner coordinate vectors of the license plate frame.
In the license plate offset vector calculation formulas, d1 denotes the first license plate offset vector, d2 the second license plate offset vector, N the number of frames in the target monitoring image group, and i the i-th frame of monitoring video image.
S305: and outputting the running direction of the target vehicle according to the magnitude relation of the first license plate offset vector and the second license plate offset vector.
The state discrimination device outputs the license plate offset flag bit dFlag of the target vehicle according to the magnitude relation between the first and second license plate offset vectors, thereby determining the traveling direction of the target vehicle. Specifically, when the first license plate offset vector is larger than the second, the license plate offset flag bit dFlag is set to 0, indicating that the target vehicle is traveling to the left; when the first license plate offset vector is smaller than the second, dFlag is set to 1, indicating that the target vehicle is traveling to the right.
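Steps S301–S305 can be sketched as follows. This is an illustrative reading of the embodiment: it assumes the two offset vectors are compared via frame-averaged Euclidean distances between the license plate corners and the corresponding vehicle frame corners (the exact formulas appear only as images in the original publication), and that boxes are given as (x1, y1, x2, y2) corner coordinates.

```python
from math import hypot

def plate_offset_flag(vehicle_boxes, plate_boxes):
    """Steps S301-S305 (sketch): derive the driving-direction flag dFlag from
    the position of the license plate frame inside the vehicle frame, averaged
    over the N frames of the target monitoring image group.
    Boxes are (x1, y1, x2, y2): upper-left and lower-right corners."""
    d1 = d2 = 0.0
    n = len(vehicle_boxes)
    for (vx1, vy1, vx2, vy2), (px1, py1, px2, py2) in zip(vehicle_boxes, plate_boxes):
        # first offset: plate corners measured from the vehicle upper-left corner
        d1 += hypot(px1 - vx1, py1 - vy1) + hypot(px2 - vx1, py2 - vy1)
        # second offset: plate corners measured from the vehicle lower-right corner
        d2 += hypot(px1 - vx2, py1 - vy2) + hypot(px2 - vx2, py2 - vy2)
    d1 /= n
    d2 /= n
    # S305: dFlag = 0 (traveling left) when the first offset dominates, else 1
    return 0 if d1 > d2 else 1
```

For a plate detected near the vehicle frame's left edge the first offset is small, so the function returns 1 (traveling right), matching the flag semantics above.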
S106: the vehicle state of the target vehicle is acquired based on the traveling state and the traveling direction of the target vehicle.
The state discrimination device combines the target vehicle running state output in the preceding steps with the license plate offset flag bit dFlag and outputs the vehicle state of the target vehicle, as shown in the following state table:
vehicle running state table
In this embodiment, the vehicle state discrimination device continuously captures a preset number of frames of monitoring video images to form a target monitoring image group; acquires the category flag bit, vehicle frame, and license plate frame of the target monitoring image group through a target detection model; acquires a vehicle displacement vector based on the target monitoring image group and derives a displacement flag bit from it; outputs the running state of the target vehicle when the category flag bit is consistent with the displacement flag bit; calculates license plate offset vectors based on the vehicle frame and license plate frame of the target monitoring image group and outputs the driving direction of the target vehicle according to those vectors; and acquires the vehicle state of the target vehicle based on the running state and driving direction. In this way, the state of vehicles driving in and out can be judged quickly and accurately from the flag bit information, the vehicle displacement vector, and the license plate offset vectors.
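Under the flag conventions of this embodiment (the displacement flag of claim 2 and the dFlag semantics above), step S106 reduces to a small lookup. The state names below are illustrative, since the vehicle running state table is reproduced only as an image in the original publication:

```python
def vehicle_state(displacement_flag, d_flag):
    """Step S106 (sketch): combine the running state (claim 2: 0 = driving in,
    1 = driving out) with the license plate offset flag bit dFlag
    (0 = traveling left, 1 = traveling right) into one vehicle state."""
    running = "driving in" if displacement_flag == 0 else "driving out"
    heading = "left" if d_flag == 0 else "right"
    return f"{running}, heading {heading}"
```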
In order to implement the vehicle state discrimination method of the foregoing embodiments, the present application further provides a vehicle state discrimination device; refer to fig. 7, which is a schematic structural diagram of an embodiment of the vehicle state discrimination device provided in the present application.
As shown in fig. 7, the vehicle state determination apparatus 500 of the present embodiment includes a processor 51, a memory 52, an input/output device 53, and a bus 54.
The processor 51, the memory 52, and the input/output device 53 are respectively connected to the bus 54, the memory 52 stores a computer program, and the processor 51 is configured to execute the computer program to implement the vehicle state determination method according to the above embodiment.
In the present embodiment, the processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip with signal processing capability. It may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The processor 51 may also be a GPU (Graphics Processing Unit), also known as a display core, visual processor, or display chip: a microprocessor dedicated to image computation on personal computers, workstations, game consoles, and some mobile devices (such as tablet computers and smart phones). The GPU converts and drives the display information required by the computer system, supplies the line scanning signal to the display, and controls its correct display; it is the key element connecting the display to the personal computer mainboard and an important part of human-machine interaction. The graphics card, an important component of the computer host responsible for outputting the displayed graphics, is essential for professional graphic design work. A general-purpose processor may be a microprocessor, or the processor 51 may be any conventional processor or the like.
The present application also provides a computer storage medium. As shown in fig. 8, the computer storage medium 600 stores a computer program 61 which, when executed by a processor, implements the method in the embodiments of the vehicle state discrimination method of the present application.
When the method in the embodiments of the vehicle state discrimination method of the present application is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a device such as a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, may be wholly or partly embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.
Claims (9)
1. A vehicle state discrimination method, characterized by comprising:
continuously capturing monitoring video images with preset frame numbers to form a target monitoring image group;
acquiring a vehicle frame and a license plate frame of the target monitoring image group through a target detection model;
generating a category flag bit according to the vehicle orientation displayed by the vehicle frame in the target monitoring image group;
acquiring a vehicle displacement vector based on the target monitoring image group, and generating a displacement flag bit according to the direction of the vehicle displacement vector;
when the category flag bit is consistent with the displacement flag bit, outputting the running state of the target vehicle;
acquiring a coordinate vector of the upper left corner and a coordinate vector of the lower right corner of the vehicle frame;
acquiring a coordinate vector of the upper left corner and a coordinate vector of the lower right corner of the license plate frame;
calculating a first license plate offset vector based on the upper left corner coordinate vector of the vehicle frame, the upper left corner coordinate vector of the license plate frame and the lower right corner coordinate vector;
calculating a second license plate offset vector based on the lower-right corner coordinate vector of the vehicle frame, and the upper-left corner coordinate vector and the lower-right corner coordinate vector of the license plate frame;
outputting a driving direction of the target vehicle from a magnitude relationship of the first and second license plate offset vectors;
acquiring a vehicle state of the target vehicle, wherein the vehicle state comprises a driving state and a driving direction of the target vehicle.
2. The vehicle state discrimination method according to claim 1,
the step of generating a displacement flag from the direction of the vehicle displacement vector comprises:
when the vehicle displacement vector is in the positive direction, judging that the running state of the target vehicle is driving in, and setting the displacement flag bit to be 0;
and when the vehicle displacement vector is in a negative direction, judging that the running state of the target vehicle is running out, and setting the displacement flag bit to be 1.
3. The vehicle state discrimination method according to claim 1,
after the step of generating the displacement flag from the direction of the vehicle displacement vector, the vehicle state discrimination method includes:
when the category flag bit is inconsistent with the displacement flag bit, calculating a tail lamp flag bit of the target monitoring image group through a tail lamp detection algorithm;
outputting a driving state of the target vehicle based on the tail lamp flag bit.
4. The vehicle state discrimination method according to claim 3,
the step of calculating the tail lamp flag bit of the target monitoring image group through the tail lamp detection algorithm comprises:
when no tail lamp detection frame exists in a monitoring video image of the target monitoring image group, setting the tail lamp flag bit of that monitoring video image to 0;
when one tail lamp detection frame exists in a monitoring video image of the target monitoring image group, setting the tail lamp flag bit of that monitoring video image to 1;
calculating the accumulated sum of the tail lamp flag bits of all monitoring video images in the target monitoring image group;
when the accumulated sum is more than half of the number of frames of all the monitoring video images, setting the tail lamp flag bit of the target monitoring image group to 1;
and when the accumulated sum is less than or equal to half of the number of frames of all the monitoring video images, setting the tail lamp flag bit of the target monitoring image group to 0.
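The majority vote described in claim 4 can be sketched as follows, with per-frame flags exactly as in the claim (0 when no tail lamp detection frame exists, 1 when one does):

```python
def taillight_flag(per_frame_flags):
    """Claim 4 (sketch): the group tail lamp flag bit is 1 only when the
    accumulated sum of per-frame flags exceeds half of the frame count;
    otherwise it is 0."""
    return 1 if sum(per_frame_flags) > len(per_frame_flags) / 2 else 0
```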
5. The vehicle state discrimination method according to claim 1,
the step of generating a category flag according to the vehicle orientation displayed by the vehicle frame in the target monitoring image group comprises the following steps:
detecting monitoring video images with continuous preset frame numbers through the target detection model;
acquiring the category flag bit of each frame of monitoring video image, and accumulating the category flag bits of the monitoring video images with the preset frame number;
and outputting the category flag bit of the target monitoring image group according to the accumulation processing result of the category flag bit of the monitoring video image with the preset frame number.
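The accumulation in claim 5 can be sketched as a vote over the per-frame category flag bits. Note the majority rule here is an assumption; the claim only states that the group flag is derived from the accumulated per-frame flags, without fixing the rule:

```python
from collections import Counter

def category_flag(per_frame_flags):
    """Claim 5 (sketch): accumulate the per-frame category flag bits over the
    preset number of frames and output the most frequent value as the group's
    category flag bit."""
    value, _count = Counter(per_frame_flags).most_common(1)[0]
    return value
```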
6. The vehicle state discrimination method according to claim 1,
the step of continuously capturing the monitoring video images with the preset frame number to form a target monitoring image group comprises the following steps:
presetting a rule line based on camera parameters;
acquiring a monitoring video image of the camera, and acquiring a vehicle frame in the monitoring video image through the target detection model;
when the vehicle frame of the monitoring video image touches the rule line, continuously capturing the monitoring video images with preset frame numbers to form the target monitoring image group.
7. The vehicle state discrimination method according to claim 6,
the step of presetting a rule line based on camera parameters includes:
obtaining an image resolution based on the camera parameters, wherein the image resolution comprises a horizontal resolution and a vertical resolution;
and presetting the ruled line in the image based on a preset scale factor and the vertical resolution.
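The rule-line preset of claim 7 can be sketched as placing a horizontal line at a fixed fraction of the image height. The 0.75 default scale factor below is illustrative only; the claims leave the preset scale factor unspecified:

```python
def rule_line_y(vertical_resolution, scale_factor=0.75):
    """Claim 7 (sketch): compute the y-coordinate of the horizontal rule
    (snap) line from the camera's vertical resolution and a preset scale
    factor. The 0.75 default is an assumption."""
    return int(vertical_resolution * scale_factor)
```

For a 1920x1080 camera this places the line at y = 810; the vehicle frame touching this line triggers the continuous snapshot of step S101.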
8. A vehicle state discrimination device characterized by comprising a processor and a memory; the memory stores a computer program, and the processor executes the computer program to realize the steps of the vehicle state determination method according to any one of claims 1 to 7.
9. A computer storage medium storing a computer program which, when executed, implements the steps of the vehicle state discrimination method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010482543.7A CN111383460B (en) | 2020-06-01 | 2020-06-01 | Vehicle state discrimination method and device and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111383460A CN111383460A (en) | 2020-07-07 |
CN111383460B true CN111383460B (en) | 2020-09-18 |
Family
ID=71219062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010482543.7A Active CN111383460B (en) | 2020-06-01 | 2020-06-01 | Vehicle state discrimination method and device and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111383460B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112712708A (en) * | 2020-12-28 | 2021-04-27 | 上海眼控科技股份有限公司 | Information detection method, device, equipment and storage medium |
CN112699781A (en) * | 2020-12-29 | 2021-04-23 | 上海眼控科技股份有限公司 | Vehicle lamp state detection method and device, computer equipment and readable storage medium |
CN112906647B (en) * | 2021-03-24 | 2023-12-19 | 杭州鲁尔物联科技有限公司 | Method and device for monitoring load of small-span bridge, computer equipment and storage medium |
CN113487649B (en) * | 2021-05-20 | 2022-12-06 | 浙江大华技术股份有限公司 | Vehicle detection method and device and computer storage medium |
CN114170810A (en) * | 2021-12-28 | 2022-03-11 | 深圳市捷顺科技实业股份有限公司 | Vehicle traveling direction identification method, system and device |
CN118129764B (en) * | 2024-05-08 | 2024-06-28 | 苏州楚亦捷科技有限公司 | Moving target coordinate iterative positioning method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3719315B2 (en) * | 1997-08-08 | 2005-11-24 | アイシン・エィ・ダブリュ株式会社 | Vehicle navigation device and storage medium |
CN104036640A (en) * | 2014-05-16 | 2014-09-10 | 北京卓视智通科技有限责任公司 | Panoramic image acquisition device and panoramic image acquisition method |
CN105989740A (en) * | 2015-02-10 | 2016-10-05 | 成都海存艾匹科技有限公司 | Night parking monitor |
CN204965751U (en) * | 2015-09-08 | 2016-01-13 | 苏州市世跃智能科技有限公司 | Bayonet type electronic police system |
CN207860066U (en) * | 2018-01-26 | 2018-09-14 | 深圳市领航者汽车智能技术开发有限公司 | A kind of panorama record supplementary controlled system |
- 2020-06-01: application CN202010482543.7A / patent CN111383460B (en), status: Active
Also Published As
Publication number | Publication date |
---|---|
CN111383460A (en) | 2020-07-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 20200707 Assignee: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd. Assignor: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd. Contract record no.: X2021330000117 Denomination of invention: A vehicle state discrimination method, device and computer storage medium Granted publication date: 20200918 License type: Common License Record date: 20210823 |
EE01 | Entry into force of recordation of patent licensing contract |