CN114140762A - Method for automatically identifying vehicle driving direction - Google Patents

Method for automatically identifying vehicle driving direction

Info

Publication number
CN114140762A
CN114140762A (application CN202111217204.7A)
Authority
CN
China
Prior art keywords
vehicle
picture
lane
recognition model
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111217204.7A
Other languages
Chinese (zh)
Inventor
乐曦
明志远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Zhongzhi Digital Technology Co ltd
Original Assignee
Wuhan Zhongzhi Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Zhongzhi Digital Technology Co ltd filed Critical Wuhan Zhongzhi Digital Technology Co ltd
Priority to CN202111217204.7A priority Critical patent/CN114140762A/en
Publication of CN114140762A publication Critical patent/CN114140762A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods

Abstract

A method for automatically identifying a vehicle's driving direction, comprising: collecting vehicle pictures captured by all vehicle access devices in an area and performing feature recognition on the pictures; determining vehicle driving tracks from the feature-recognized pictures to form a vehicle driving track database; inputting the track database into an initial lane recognition model and training it to generate a trained lane recognition model; and acquiring vehicle pictures captured in real time, inputting them into the trained lane recognition model, and outputting vehicle driving track data. The lane driving direction is thus recognized automatically from actual snapshot data, and bidirectional (reversible) lanes are adapted to automatically according to the captured data, giving stronger scene adaptability. This solves the problem that existing video-based vehicle recognition techniques identify the driving direction inaccurately on bidirectional lanes.

Description

Method for automatically identifying vehicle driving direction
Technical Field
The invention relates to the field of intelligent park management systems, and in particular to a method and a system for automatically identifying the driving direction of a vehicle.
Background
With the continuous development of video security, surveillance video data has grown explosively, and an intelligent monitoring system is needed to handle it. Most traditional intelligent traffic monitoring systems only record video data; they lack video target detection and tracking and video behavior analysis, and thus have many shortcomings: footage must be checked manually, which consumes substantial manpower and money at low efficiency, and not every abnormal situation can be observed, leading to inaccurate judgments and omissions. At present, most video surveillance data is collected by static cameras, so from a practical standpoint the target-structuring problem in surveillance video is studied for video data with a static background.
A core problem for an intelligent monitoring system is how to efficiently query and analyze massive video data. At present, applications mostly cover scenes such as license plate recognition and vehicle information analysis, while scenes such as vehicle detention and vehicle loitering are rarely addressed.
Disclosure of Invention
In view of the above, the present invention provides a method for automatically identifying a vehicle's driving direction that overcomes, or at least partially solves, the above problems.
To solve this technical problem, the embodiments of the present application disclose the following technical solution:
a method of automatic identification of a direction of travel of a vehicle, comprising:
s100, vehicle pictures captured by all vehicle access devices in an area are collected, and feature recognition is carried out on the vehicle pictures;
s200, determining a vehicle running track through the vehicle picture after the characteristic recognition to form a vehicle running track database;
s300, inputting a vehicle driving track database into an initial lane recognition model, training the lane recognition model, and generating a trained lane recognition model;
s400, acquiring a vehicle picture captured in real time, inputting the vehicle picture captured in real time into the trained lane recognition model, and outputting vehicle running track data.
Further, in S100, performing feature recognition on a vehicle picture specifically comprises: first preprocessing the picture, then extracting image edges, then locating the vehicle tail marker and segmenting its characters, and finally recognizing the characters.
Further, in S100, recognizing the characters specifically comprises: first correctly segmenting the character image region, then correctly separating the individual characters, and finally correctly recognizing each individual character.
Further, in S200, the method for determining a vehicle driving track comprises: through the vehicle feature recognition of S100, extracting at least the license plate number and the vehicle shooting angle, obtaining the picture capture time, and forming a vehicle driving track from the license plate number, the vehicle shooting angle, and the picture capture time.
Further, the vehicle shooting angle is either a head shot or a tail shot.
Further, the method for judging the vehicle shooting angle comprises: preprocessing the vehicle picture and locating the tail marker; if the picture contains tail-marker information, the shooting angle is judged to be a tail shot, otherwise a head shot.
Further, in S300, the method for generating the trained lane recognition model comprises: according to the vehicle driving tracks, the shooting device information, and the lane information, performing a weighting operation per video camera over the direction of the lane each vehicle belongs to, obtaining high-weight associations among lane, vehicle shooting angle, and driving direction in each camera's view, and thereby generating the trained lane recognition model.
Further, the method further comprises: S500, judging, according to the vehicle driving direction output by the lane recognition model, whether a vehicle exhibits abnormal behavior in the area.
The technical solutions provided by the embodiments of the invention have at least the following beneficial effects:
A method for automatically identifying a vehicle's driving direction, comprising: collecting vehicle pictures captured by all vehicle access devices in an area and performing feature recognition on the pictures; determining vehicle driving tracks from the feature-recognized pictures to form a vehicle driving track database; inputting the track database into an initial lane recognition model and training it to generate a trained lane recognition model; and acquiring vehicle pictures captured in real time, inputting them into the trained lane recognition model, and outputting vehicle driving track data. The lane driving direction is recognized automatically from actual snapshot data, and bidirectional (reversible) lanes are adapted to automatically according to the captured data, giving stronger scene adaptability. This solves the problem that existing video-based vehicle recognition techniques identify the driving direction inaccurately on bidirectional lanes.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a method for automatically identifying a driving direction of a vehicle according to embodiment 1 of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to solve the problems in the prior art, the embodiment of the invention provides a method for automatically identifying the driving direction of a vehicle.
Example 1
A method for automatically identifying a driving direction of a vehicle, as shown in fig. 1, includes:
s100, vehicle pictures captured by all vehicle access devices in the area are collected, and feature recognition is carried out on the vehicle pictures.
In this embodiment, the vehicle pictures are obtained by analyzing video-stream data with a static background captured by fixed cameras. Feature recognition on a vehicle picture proceeds as follows: first the picture is preprocessed, then image edges are extracted, then the vehicle tail marker is located and its characters are segmented, and finally the characters are recognized. Preferably, character recognition comprises: first correctly segmenting the character image region, then correctly separating the individual characters, and finally correctly recognizing each character.
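The preprocessing, edge extraction, and character segmentation steps above can be sketched as follows. This is an illustrative minimal pipeline, not the patent's actual implementation: the edge detector is a plain gradient threshold standing in for whatever operator the authors use, and characters are split by column projection of a binarized strip.

```python
import numpy as np

def preprocess(img):
    """Convert an RGB image (H, W, 3, uint8) to a normalized grayscale array in [0, 1]."""
    gray = img.mean(axis=2)
    return (gray - gray.min()) / (np.ptp(gray) + 1e-9)

def edge_map(gray, thresh=0.2):
    """Simple gradient-magnitude edge detector (a stand-in for a real operator such as Canny)."""
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    return (gx + gy) > thresh

def segment_characters(binary_strip):
    """Split a binarized text strip into per-character slices by column projection:
    each contiguous run of non-empty columns is one character candidate."""
    on = binary_strip.sum(axis=0) > 0
    runs, start = [], None
    for i, v in enumerate(on):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((start, i)); start = None
    if start is not None:
        runs.append((start, len(on)))
    return [binary_strip[:, a:b] for a, b in runs]
```

The recognition step that follows (classifying each character slice) is where the network described later in the embodiment would take over.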
S200, determining vehicle driving tracks from the feature-recognized vehicle pictures to form a vehicle driving track database.
In this embodiment, a vehicle driving track is determined as follows: through the vehicle feature recognition of S100, at least the license plate number and the vehicle shooting angle are extracted and the picture capture time is obtained; a vehicle driving track is then formed from the license plate number, the shooting angle, and the capture time. Specifically, the vehicle shooting angle is either a head shot or a tail shot.
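A track, as described above, is just the time-ordered sequence of a plate's captures. A minimal sketch of the track-database construction, with an illustrative `camera_id` field that the patent implies but does not name:

```python
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class Capture:
    plate: str             # recognized license plate number
    angle: str             # vehicle shooting angle: "head" or "tail"
    camera_id: str         # which access device took the snapshot (illustrative)
    captured_at: datetime  # picture capture time

def build_tracks(captures):
    """Group captures by plate and sort each group by capture time:
    one driving track per vehicle."""
    tracks = defaultdict(list)
    for c in captures:
        tracks[c.plate].append(c)
    for plate in tracks:
        tracks[plate].sort(key=lambda c: c.captured_at)
    return dict(tracks)
```

The resulting per-plate sequences are what S300 consumes as training data.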
In this embodiment, the vehicle shooting angle is judged as follows: the vehicle picture is preprocessed and the tail marker is located; if the picture contains tail-marker information, the shooting angle is judged to be a tail shot, otherwise a head shot. Concretely, the vehicle picture is segmented by an image segmentation algorithm to obtain a rectangular crop of the tail marker, which is further segmented into individual characters; the characters are fed into a convolutional-LSTM neural network for recognition, yielding the corresponding marker information. If the picture contains tail identification marker information it is judged to be a tail photo; otherwise it is classified as other.
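Once the recognizer has reported what markers it saw, the head/tail decision itself is a simple rule. The sketch below assumes a hypothetical set of tail-marker labels; the segmentation and convolutional-LSTM recognition that produce them are not reproduced here.

```python
# Hypothetical labels the marker recognizer might report; the real set
# depends on the trained network and is not specified in the patent.
KNOWN_TAIL_MARKERS = {"license_plate", "brand_badge", "model_badge"}

def judge_shooting_angle(recognized_markers):
    """Return "tail" if any recognized marker is a known tail marker, else "head".

    `recognized_markers` stands in for the marker information the network
    extracts from the segmented picture regions.
    """
    if any(m in KNOWN_TAIL_MARKERS for m in recognized_markers):
        return "tail"
    return "head"
```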
S300, inputting the vehicle driving track database into an initial lane recognition model and training it to generate a trained lane recognition model. Specifically, the trained lane recognition model is generated as follows: according to the vehicle driving tracks, the shooting device information, and the lane information, a weighting operation is performed per video camera over the direction of the lane each vehicle belongs to; high-weight associations among lane, vehicle shooting angle, and driving direction in each camera's view are obtained, and the trained lane recognition model is generated.
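The patent does not spell out the weighting operation, but its effect — keeping the highest-weight (lane, shooting angle) → direction association per camera — can be modeled as frequency counting over the track database. A sketch under that assumption:

```python
from collections import Counter, defaultdict

def train_lane_model(records):
    """records: (camera_id, lane, shooting_angle, driving_direction) tuples drawn
    from the track database. Counting serves as the weighting operation: for each
    (camera, lane, angle) key, the most frequent direction is the high-weight
    association the model retains."""
    counts = defaultdict(Counter)
    for camera, lane, angle, direction in records:
        counts[(camera, lane, angle)][direction] += 1
    return {key: ctr.most_common(1)[0][0] for key, ctr in counts.items()}

def predict_direction(model, camera, lane, angle):
    """Look up the learned driving direction; None if this key was never observed."""
    return model.get((camera, lane, angle))
```

A counting model like this adapts naturally to reversible bidirectional lanes: if the dominant direction in the snapshots changes, the highest-weight association changes with it.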
S400, acquiring vehicle pictures captured in real time, inputting them into the trained lane recognition model, and outputting vehicle driving track data. For subsequent vehicle data in lanes whose driving direction the model has learned, information such as the vehicle's driving direction and the video cameras at the entrance and exit positions can be identified automatically as learned output. Meanwhile, newly captured data can still serve as further training data for the model.
In some preferred embodiments, the method for automatically identifying a driving direction of a vehicle further includes: S500, judging, according to the vehicle driving direction output by the lane recognition model, whether a vehicle exhibits abnormal behavior in the area. Specifically, the abnormal vehicle behaviors at least include vehicle loitering and detention; such behaviors are analyzed so that the abnormal behavior can be managed and controlled.
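The patent names loitering and detention but leaves their criteria open. One plausible sketch — with thresholds that are purely illustrative, not taken from the patent — flags detention when a plate's first-to-last sighting span is too long and loitering when it passes cameras unusually often:

```python
from datetime import datetime, timedelta

def flag_abnormal(capture_times, dwell_limit=timedelta(hours=12), loiter_passes=5):
    """capture_times: sorted snapshot timestamps for one plate inside the area.

    Detention: the span from first to last sighting exceeds dwell_limit.
    Loitering: the vehicle is captured at least loiter_passes times.
    Both thresholds are illustrative assumptions.
    """
    flags = []
    if not capture_times:
        return flags
    if capture_times[-1] - capture_times[0] > dwell_limit:
        flags.append("detention")
    if len(capture_times) >= loiter_passes:
        flags.append("loitering")
    return flags
```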
The method for automatically identifying the vehicle driving direction disclosed in this embodiment comprises: collecting vehicle pictures captured by all vehicle access devices in an area and performing feature recognition on the pictures; determining vehicle driving tracks from the feature-recognized pictures to form a vehicle driving track database; inputting the track database into an initial lane recognition model and training it to generate a trained lane recognition model; and acquiring vehicle pictures captured in real time, inputting them into the trained lane recognition model, and outputting vehicle driving track data. The lane driving direction is recognized automatically from actual snapshot data, and bidirectional (reversible) lanes are adapted to automatically according to the captured data, giving stronger scene adaptability. This solves the problem that existing video-based vehicle recognition techniques identify the driving direction inaccurately on bidirectional lanes.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. Of course, the processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".

Claims (8)

1. A method for automatically identifying a driving direction of a vehicle, characterized by comprising:
S100, collecting vehicle pictures captured by all vehicle access devices in an area, and performing feature recognition on the vehicle pictures;
S200, determining vehicle driving tracks from the feature-recognized vehicle pictures to form a vehicle driving track database;
S300, inputting the vehicle driving track database into an initial lane recognition model and training it to generate a trained lane recognition model;
S400, acquiring vehicle pictures captured in real time, inputting them into the trained lane recognition model, and outputting vehicle driving track data.
2. The method according to claim 1, wherein in S100 performing feature recognition on a vehicle picture specifically comprises: first preprocessing the picture, then extracting image edges, then locating the vehicle tail marker and segmenting its characters, and finally recognizing the characters.
3. The method according to claim 2, wherein in S100 recognizing the characters specifically comprises: first correctly segmenting the character image region, then correctly separating the individual characters, and finally correctly recognizing each individual character.
4. The method according to claim 1, wherein in S200 determining a vehicle driving track comprises: through the vehicle feature recognition of S100, extracting at least the license plate number and the vehicle shooting angle, obtaining the picture capture time, and forming a vehicle driving track from the license plate number, the vehicle shooting angle, and the picture capture time.
5. The method according to claim 4, wherein the vehicle shooting angle is either a head shot or a tail shot.
6. The method according to claim 4, wherein judging the vehicle shooting angle comprises: preprocessing the vehicle picture and locating the tail marker; if the picture contains tail-marker information, the shooting angle is judged to be a tail shot, otherwise a head shot.
7. The method according to claim 1, wherein generating the trained lane recognition model comprises: according to the vehicle driving tracks, the shooting device information, and the lane information, performing a weighting operation per video camera over the direction of the lane each vehicle belongs to, obtaining high-weight associations among lane, vehicle shooting angle, and driving direction in each camera's view, and generating the trained lane recognition model.
8. The method according to claim 1, further comprising: S500, judging, according to the vehicle driving direction output by the lane recognition model, whether a vehicle exhibits abnormal behavior in the area.
CN202111217204.7A 2021-10-19 2021-10-19 Method for automatically identifying vehicle driving direction Pending CN114140762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111217204.7A CN114140762A (en) 2021-10-19 2021-10-19 Method for automatically identifying vehicle driving direction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111217204.7A CN114140762A (en) 2021-10-19 2021-10-19 Method for automatically identifying vehicle driving direction

Publications (1)

Publication Number Publication Date
CN114140762A true CN114140762A (en) 2022-03-04

Family

ID=80394419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111217204.7A Pending CN114140762A (en) 2021-10-19 2021-10-19 Method for automatically identifying vehicle driving direction

Country Status (1)

Country Link
CN (1) CN114140762A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116307619A (en) * 2023-03-29 2023-06-23 邦邦汽车销售服务(北京)有限公司 Rescue vehicle allocation method and system based on data identification
CN116307619B (en) * 2023-03-29 2023-09-26 邦邦汽车销售服务(北京)有限公司 Rescue vehicle allocation method and system based on data identification

Similar Documents

Publication Publication Date Title
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN106600977B (en) Multi-feature recognition-based illegal parking detection method and system
CN112907982B (en) Method, device and medium for detecting vehicle illegal parking behavior
CN107305627B (en) Vehicle video monitoring method, server and system
CN102007499B (en) Detecting facial expressions in digital images
CN109993056A (en) A kind of method, server and storage medium identifying vehicle violation behavior
US20060067562A1 (en) Detection of moving objects in a video
CN110580808B (en) Information processing method and device, electronic equipment and intelligent traffic system
Chang et al. Video analytics in smart transportation for the AIC'18 challenge
CN111078946A (en) Bayonet vehicle retrieval method and system based on multi-target regional characteristic aggregation
CN112836683A (en) License plate recognition method, device, equipment and medium for portable camera equipment
CN104537840A (en) System for detecting illegally operated taxis
CN112434566A (en) Passenger flow statistical method and device, electronic equipment and storage medium
CN112541432A (en) Video livestock identity authentication system and method based on deep learning
CN114140762A (en) Method for automatically identifying vehicle driving direction
CN115810134A (en) Image acquisition quality inspection method, system and device for preventing car insurance from cheating
CN111091041A (en) Vehicle law violation judging method and device, computer equipment and storage medium
CN111753610A (en) Weather identification method and device
KR102122853B1 (en) Monitoring system to control external devices
CN114445787A (en) Non-motor vehicle weight recognition method and related equipment
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
CN113283303A (en) License plate recognition method and device
CN113850112A (en) Road condition identification method and system based on twin neural network
CN117437792B (en) Real-time road traffic state monitoring method, device and system based on edge calculation
Li et al. On traffic density estimation with a boosted SVM classifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination