CN113688662A - Motor vehicle passing warning method and device, electronic device and computer equipment - Google Patents

Motor vehicle passing warning method and device, electronic device and computer equipment

Info

Publication number
CN113688662A
CN113688662A (application CN202110756947.5A)
Authority
CN
China
Prior art keywords
target
motor vehicle
vehicle
warning
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110756947.5A
Other languages
Chinese (zh)
Inventor
林骏
王亚运
王志庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110756947.5A priority Critical patent/CN113688662A/en
Publication of CN113688662A publication Critical patent/CN113688662A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The application relates to a motor vehicle passing warning method and device, an electronic device, and computer equipment. By associating radar data with video data, the accuracy of the vehicle information acquired for motor vehicles in the target lane can be improved, which in turn improves the accuracy of detecting the vehicle information to be warned about and enables timely warnings for motor vehicles in the target lane that meet the warning condition.

Description

Motor vehicle passing warning method and device, electronic device and computer equipment
Technical Field
The application relates to the field of image recognition, in particular to a motor vehicle passing warning method, a motor vehicle passing warning device, an electronic device and computer equipment.
Background
To meet the needs of urban infrastructure, logistics, and other travel, the number of motor vehicles of various types on urban roads is steadily increasing, and the associated safety hazards grow accordingly. For example, because a driver has blind spots, the safety of pedestrians and non-motor vehicles located in those blind spots may be threatened while the motor vehicle is moving. Motor vehicle information in a traffic scene therefore needs to be detected so that vehicles posing a hidden traffic safety risk can be identified and appropriate measures taken in time. Current vehicle detection methods often rely on video tracking to monitor a target vehicle in the captured video data; monitoring a target vehicle based only on video images is not very stable, and the accuracy of the acquired vehicle information, such as vehicle speed and vehicle distance, is low.
No effective solution has yet been proposed for the problem of low vehicle information recognition accuracy in the related art.
Disclosure of Invention
The present embodiment provides a motor vehicle passing warning method, a motor vehicle passing warning device, an electronic device, and computer equipment, so as to solve the problem of low accuracy in vehicle information recognition and lane determination in the related art.
In a first aspect, in this embodiment, a method for warning of vehicle passing is provided, including the steps of:
acquiring a target video under a target traffic scene, wherein the target video comprises at least one motor vehicle and a target lane;
acquiring a vehicle attribute of a target motor vehicle in the target video, wherein the target motor vehicle comprises a motor vehicle, among the at least one motor vehicle, that is running in the target lane;
acquiring radar data corresponding to the target motor vehicle, wherein the radar data comprises driving information of the target motor vehicle;
and determining whether the target motor vehicle meets vehicle warning conditions or not according to the vehicle attributes and the driving information.
In some embodiments, the obtaining the vehicle attribute of the target vehicle in the target video comprises:
and judging whether the motor vehicle in the target video runs in the target lane or not based on a target detection algorithm, if so, setting the motor vehicle as a target motor vehicle, and acquiring the vehicle attribute of the target motor vehicle.
In some embodiments, the target lane is determined based on lane lines pre-configured in the target video, and the determining, based on the target detection algorithm, whether the motor vehicle in the target video is running in the target lane includes:
inputting the target video into a preset target detection model to obtain key point information of the motor vehicle;
calculating the head projection central point of the motor vehicle according to the key point information;
and determining whether the motor vehicle runs in the target lane or not according to the position relation between the vehicle head projection center point and the lane line.
In some embodiments, the obtaining radar data corresponding to the target vehicle includes:
acquiring radar data in the target traffic scene;
and associating the radar data in the target traffic scene to the corresponding target motor vehicle in the target video through an image calibration algorithm to obtain the radar data corresponding to the target motor vehicle.
In some embodiments, the associating the radar data with the corresponding target vehicle in the target video through an image calibration algorithm includes:
and mapping image data in the target video into a bird's-eye view through an image calibration algorithm, and associating the radar data to the corresponding target motor vehicle according to the bird's-eye view.
In some embodiments, the obtaining the vehicle attribute of the target vehicle in the target video includes:
acquiring a target motor vehicle image of the target motor vehicle, and processing the target motor vehicle image by using a license plate detection method to obtain a license plate image of the target motor vehicle;
processing the license plate image by utilizing a license plate recognition algorithm to obtain the license plate attribute of the target motor vehicle;
and processing the target motor vehicle image by using a vehicle type recognition algorithm to obtain the vehicle type attribute of the target motor vehicle.
In some of these embodiments, in the case where it is determined that the target vehicle complies with a vehicle alert condition based on the vehicle attribute and the travel information, the method further comprises:
determining identification information of the target motor vehicle based on a video tracking algorithm;
and generating warning information for each target motor vehicle according to the identification information in the target traffic scene.
In a second aspect, there is provided in this embodiment a motor vehicle passing warning device, comprising an acquisition module, a judgment module, a radar-vision correlation module, and a warning module, wherein:
the acquisition module is used for acquiring a target video under a target traffic scene, wherein the target video comprises at least one motor vehicle and a target lane;
the judging module is used for acquiring the vehicle attribute of a target motor vehicle in the target video, wherein the target motor vehicle comprises a motor vehicle, among the at least one motor vehicle, that is running in the target lane;
the radar vision correlation module is used for acquiring radar data corresponding to the target motor vehicle, wherein the radar data comprises running information of the target motor vehicle;
and the warning module is used for determining whether the target motor vehicle meets vehicle warning conditions or not according to the vehicle attributes and the driving information.
In a third aspect, there is provided an electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, there is provided in this embodiment a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of the first aspect when executing the computer program.
According to the motor vehicle passing warning method and device, the electronic device, and the computer equipment, a target video in a target traffic scene is acquired, wherein the target video comprises at least one motor vehicle and a target lane; a vehicle attribute of a target motor vehicle in the target video is acquired, the target motor vehicle being a motor vehicle, among the at least one motor vehicle, that is running in the target lane; radar data corresponding to the target motor vehicle is acquired, the radar data comprising driving information of the target motor vehicle; and whether the target motor vehicle meets the vehicle warning condition is determined according to the vehicle attribute and the driving information. By associating the radar data with the video data, the accuracy of the vehicle information acquired for motor vehicles in the target lane can be improved, which in turn improves the accuracy of detecting the vehicle information to be warned about and enables timely warnings for motor vehicles in the target lane that meet the warning condition.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a view of an application scenario of a motor vehicle passing warning method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for warning of vehicle passing according to an embodiment of the present application;
FIG. 3 is a flow chart of another method for warning of a vehicle passing by according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a motor vehicle passing warning device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. In this application, the terms "including," "comprising," "having," and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The terms "first," "second," "third," and the like herein merely distinguish similar objects and do not denote a particular ordering of the objects.
Fig. 1 is a diagram illustrating an application scenario of the motor vehicle passing warning method in one embodiment. As shown in fig. 1, the scenario includes a monitoring device 101, a server 102, a radar 103, and a display screen 104. The monitoring device 101 is configured to film the target traffic scene to obtain a target video of vehicles driving in the target traffic scene, where the target video includes at least one motor vehicle and a target lane. The server 102 is configured to receive the target video acquired by the monitoring device 101 and to acquire the vehicle attribute of a target motor vehicle in the target lane of the target traffic scene in the target video. Additionally, the radar 103 is configured to acquire radar data in the target traffic scene and send the radar data to the server 102, and the server 102 obtains the radar data corresponding to the target motor vehicle. Finally, the server 102 judges whether the target motor vehicle meets the vehicle warning condition according to the vehicle attribute and the driving information in the radar data; if so, warning information is generated for the target motor vehicle and displayed on the display screen 104 to prompt other vehicles and pedestrians in the target traffic scene to keep clear. The monitoring device 101 and the server 102, the server 102 and the radar 103, and the server 102 and the display screen 104 may be connected through a network.
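The overall data flow of this scenario can be summarized by a short orchestration sketch (Python). It only illustrates how the four components might interact in one processing cycle; all class and method names here are hypothetical placeholders, not part of the patent.

def process_cycle(camera, radar, server, display):
    # One processing cycle of the scenario in Fig. 1 (illustrative only).
    frame = camera.read_frame()            # target video frame from monitoring device 101
    radar_targets = radar.read_targets()   # range / azimuth / speed per detected object

    # Server-side pipeline corresponding to steps S210 to S240 below.
    target_vehicles = server.detect_vehicles_in_target_lane(frame)
    server.associate_radar(target_vehicles, radar_targets)

    for vehicle in target_vehicles:
        if server.meets_warning_condition(vehicle):
            display.show(server.build_warning(vehicle))   # display screen 104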
In the present embodiment, a method for warning vehicle passing is provided, and fig. 2 is a flowchart of the method for warning vehicle passing of the present embodiment, as shown in fig. 2, the flowchart includes the following steps:
step S210, a target video under a target traffic scene is obtained, wherein the target video comprises at least one motor vehicle and a target lane.
The target traffic scene may be a scene of a traffic intersection. The target video can be video data obtained by shooting the vehicle running condition in the intersection by monitoring equipment installed at the intersection.
Step S220, a vehicle attribute of a target motor vehicle in the target video is acquired, wherein the target motor vehicle is a motor vehicle, among the at least one motor vehicle, that is running in the target lane.
Specifically, the vehicle attribute may include, but is not limited to, the vehicle type, the license plate, and the like of the target motor vehicle. The vehicle type information of the target motor vehicle mainly refers to the vehicle category, which may be any one of a multi-purpose vehicle (MPV), a sport utility vehicle (SUV), a taxi, a minivan, a medium van, a truck, a minibus, a medium coach, a bus, a car, a pickup truck, or another special vehicle. Additionally, the license plate information of the target motor vehicle specifically includes the license plate type, license plate color, and license plate number. The vehicle attribute may be obtained by performing image processing on a vehicle picture of the target motor vehicle. Furthermore, an anchor-based or anchor-free method can be used to segment the license plate region from the vehicle picture of the target motor vehicle, and a traditional Support Vector Machine (SVM) or a deep learning method can then be used to recognize the segmented license plate image to obtain the license plate number, license plate type, license plate color, and similar information. The vehicle picture of the target motor vehicle can also be processed with a Convolutional Neural Network (CNN) classification method to obtain the vehicle type information of the target motor vehicle. The vehicle picture of the target motor vehicle may be a frame of image data in the target video, or a picture captured in real time as the target motor vehicle passes a specific location in the target traffic scene.
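Putting these steps together, a minimal attribute-extraction pipeline might look like the following sketch. The detector, recognizer, and classifier arguments stand in for whichever concrete models are chosen (anchor-based or anchor-free plate detector, SVM- or deep-learning-based plate reader, CNN vehicle-type classifier); the function and field names are illustrative assumptions, not the patent's API.

from dataclasses import dataclass

@dataclass
class VehicleAttributes:
    plate_number: str
    plate_type: str       # e.g. "yellow plate", "double-layer yellow plate", "blue plate"
    plate_color: str
    vehicle_type: str     # e.g. "truck", "bus", "SUV"

def extract_vehicle_attributes(vehicle_image, plate_detector, plate_recognizer, type_classifier):
    # Detect the license plate region within the cropped vehicle picture.
    top, bottom, left, right = plate_detector(vehicle_image)
    plate_image = vehicle_image[top:bottom, left:right]
    # Read the plate, then classify the vehicle type from the full vehicle picture.
    plate_number, plate_type, plate_color = plate_recognizer(plate_image)
    vehicle_type = type_classifier(vehicle_image)
    return VehicleAttributes(plate_number, plate_type, plate_color, vehicle_type)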
Additionally, the target video may include multiple lanes and multiple motor vehicles. Before the vehicle attribute of the target motor vehicle in the target lane is obtained, the motor vehicles appearing in the target video need to undergo target detection, by an anchor-based or anchor-free method, to determine which motor vehicle is running in the target lane, and that motor vehicle is taken as the target motor vehicle. The target lane may be any lane in which traffic safety monitoring is required in practice. For example, a truck travelling in a right-turn lane may, because of the driver's blind spots, fail to notice nearby pedestrians or other vehicles when making a right turn, thereby posing a threat to traffic safety; the right-turn lane in the target video can then be set as the target lane. Specifically, after the target video in the target traffic scene is acquired, lane lines in the target video may be configured in advance to determine the target lane in the target video. The configuration can be performed manually or in real time using any lane line detection algorithm. Further, whether a motor vehicle is running in the target lane can be judged from the positional relationship between the motor vehicle and the target lane in the target video. For example, after several pieces of key point information of the motor vehicle are obtained, whether the motor vehicle is a target motor vehicle travelling in the target lane is determined based on the positional relationship between the key point information and the pre-configured lane lines, as sketched below.
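A minimal sketch of the lane-membership test follows. It assumes the two pre-configured lane lines are given as polylines of image coordinates and tests whether a point (for example, the head projection center point described later) falls inside the region they enclose; this ray-casting check is only one reasonable way to implement the positional relationship, not the patent's prescribed method.

def point_in_lane(point, left_line, right_line):
    # point: (x, y) image coordinate to test.
    # left_line / right_line: lists of (x, y) vertices of the two pre-configured
    # lane lines, each ordered from near to far. Close them into one polygon.
    polygon = list(left_line) + list(reversed(right_line))
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Ray casting: count crossings of a horizontal ray starting at (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside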
In addition, the target video can be processed with a Circulant Structure Kernel (CSK) algorithm, a Kernelized Correlation Filter (KCF), or the like, or with deep learning methods such as CenterTrack or SiamRPN, to realize video tracking of the vehicles in it, so that the identification information of the same vehicle is consistent between two adjacent frames of the target video. In particular, the vehicle center point can be detected with CenterNet, and the displacement of that center point between different frames can be used to track the motor vehicle. By tracking the motor vehicles in the target video, the same motor vehicle in the actual scene is bound to the same identification information from its appearance to its disappearance in the target video, so that the uniqueness of the motor vehicle in the target video is preserved and subsequent repeated processing of the same motor vehicle is avoided.
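The sketch below shows the simplest form of this idea: greedy nearest-center association between the previous frame's tracks and the current frame's detected center points. It is a deliberately simplified stand-in for CSK/KCF/CenterTrack-style trackers, meant only to illustrate how one real-world vehicle keeps a single identification from appearance to disappearance.

import math
from itertools import count

_new_track_id = count(1)

def update_tracks(tracks, detections, max_dist=80.0):
    # tracks: dict mapping track_id -> (x, y) center point from the previous frame.
    # detections: list of (x, y) center points detected in the current frame.
    new_tracks = {}
    unmatched = list(detections)
    for track_id, prev_center in tracks.items():
        if not unmatched:
            break
        nearest = min(unmatched, key=lambda c: math.dist(prev_center, c))
        if math.dist(prev_center, nearest) <= max_dist:
            new_tracks[track_id] = nearest       # same vehicle keeps the same id
            unmatched.remove(nearest)
    for detection in unmatched:                  # any leftover detection starts a new track
        new_tracks[next(_new_track_id)] = detection
    return new_tracks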
And step S230, acquiring radar data corresponding to the target motor vehicle, wherein the radar data comprises the running information of the target motor vehicle.
The radar data corresponding to the target motor vehicle can be obtained by radar equipment installed in the target traffic scene, which transmits electromagnetic waves towards the motor vehicles in the scene and receives their echoes. The radar device can thereby obtain information such as the distance, direction, and speed of each motor vehicle relative to the radar device in the target traffic scene, and this information can be used as the driving information of the motor vehicle. In addition, in order to obtain the driving information of the target motor vehicle from the radar data, the radar data and the video data in the target traffic scene need to be associated, so that the radar data is matched to the individual motor vehicles in the target video. Specifically, each frame of image data in the target video may be mapped to a bird's-eye view using a conventional affine transformation or a deep learning method, and the radar data may then be associated with the corresponding vehicle in the bird's-eye view, yielding the radar data corresponding to the target motor vehicle and hence its driving information, such as speed and distance.
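The association step itself can be as simple as nearest-neighbour matching in bird's-eye-view (ground-plane) coordinates, as in the sketch below. The vehicle positions are assumed to have already been projected into the ground plane (see the calibration sketch later in this section); the field names and the greedy matching rule are illustrative assumptions rather than the patent's exact procedure.

import math

def associate_radar_to_vehicles(vehicle_bev_points, radar_targets, max_dist_m=3.0):
    # vehicle_bev_points: dict mapping vehicle track_id -> (x, y) ground-plane position.
    # radar_targets: list of dicts with at least 'x', 'y', 'speed' and 'range' keys.
    driving_info = {}
    for target in radar_targets:
        best_id, best_dist = None, max_dist_m
        for vehicle_id, (vx, vy) in vehicle_bev_points.items():
            dist = math.hypot(target["x"] - vx, target["y"] - vy)
            if dist < best_dist:
                best_id, best_dist = vehicle_id, dist
        if best_id is not None:
            # The radar-measured speed and range become this vehicle's driving information.
            driving_info[best_id] = {"speed": target["speed"], "distance": target["range"]}
    return driving_info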
Because radar performs well in long-distance tracking and target detection, associating the radar data with the target video and obtaining the driving information of the target motor vehicle from the radar data offers higher accuracy and stability than obtaining that driving information through image processing alone.
And step S240, determining whether the target motor vehicle meets the vehicle warning condition or not according to the vehicle attribute and the driving information.
The vehicle warning condition is used to restrict which motor vehicles are to be warned about, and may specifically include restrictions on the vehicle type, vehicle speed, vehicle distance, license plate type, and the like. The vehicle type and the license plate type can be restricted to one or more categories, and the vehicle speed and the vehicle distance can be restricted to certain ranges. The vehicle warning condition may be configured in advance according to the requirements of the actual traffic scene and the traffic rules, and is not specifically limited in this embodiment. When both the vehicle attribute and the driving information of the target motor vehicle meet the vehicle warning condition, warning information may be generated for the target motor vehicle. The warning information can be determined according to the requirements of the actual traffic scene and is used to warn other motor vehicles, non-motor vehicles, and pedestrians in the target traffic scene, so that they can keep clear of the target motor vehicle and traffic accidents are avoided. Specifically, the warning information may be information about the target motor vehicle, such as its vehicle distance, vehicle speed, vehicle type, license plate type, and license plate number. In addition, after the warning information is generated, the warning may be displayed on a screen set up in the target traffic scene, announced by an audio output device as a voice warning, or given as a combination of picture and voice, which is not limited in this embodiment.
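A compact sketch of how such a condition might be evaluated is given below; the threshold values and category sets bundled in the rule are illustrative placeholders, since the patent leaves them to the deployment's traffic rules.

def meets_warning_condition(vehicle_attributes, driving_info, rule):
    # vehicle_attributes / driving_info: dicts describing the target motor vehicle.
    # rule: the pre-configured vehicle warning condition.
    return (
        driving_info["distance"] < rule["max_warning_distance_m"]
        and driving_info["speed"] > rule["min_warning_speed_kmh"]
        and vehicle_attributes["vehicle_type"] in rule["vehicle_types"]
        and vehicle_attributes["plate_type"] in rule["plate_types"]
    )

# Example rule for the right-turning truck scenario described below (all values illustrative).
truck_rule = {
    "max_warning_distance_m": 30.0,
    "min_warning_speed_kmh": 10.0,
    "vehicle_types": {"truck", "heavy truck"},
    "plate_types": {"yellow plate", "double-layer yellow plate"},
}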
For example, a motor vehicle A is detected in the target video; its key point information is obtained with a CenterNet-based method; the key point information is processed to obtain the head projection center point of motor vehicle A; the positional relationship between the head projection center point and the pre-configured lane lines is compared; and when the head projection center point lies between the two pre-configured lane lines, motor vehicle A is set as a target motor vehicle and its driving information is obtained through radar-vision association. In addition, after target motor vehicle A drives across a detection line set at a certain position of the target lane, it is captured in real time to obtain a vehicle image of the target motor vehicle. License plate detection and vehicle type detection are performed on the vehicle image to obtain the vehicle attribute of the target motor vehicle. When the vehicle distance of the target motor vehicle is smaller than the preset warning distance, the vehicle speed is greater than the preset warning speed, the vehicle type is a designated large-truck type, and the license plate type is a yellow plate or a double-layer yellow plate, the vehicle distance, vehicle speed, vehicle type, license plate type, and license plate number of target motor vehicle A are output to a warning device in the target traffic scene to remind surrounding vehicles and pedestrians to pay attention to safety.
In the above steps S210 to S240, a target video in a target traffic scene is acquired, wherein the target video includes at least one motor vehicle and a target lane; the vehicle attribute of the target motor vehicle in the target video is acquired, the target motor vehicle being a motor vehicle, among the at least one motor vehicle, that is running in the target lane; radar data corresponding to the target motor vehicle is acquired, the radar data including the driving information of the target motor vehicle; and whether the target motor vehicle meets the vehicle warning condition is determined according to the vehicle attribute and the driving information. By associating the radar data with the video data, the accuracy of the vehicle information acquired for motor vehicles in the target lane can be improved, which in turn improves the accuracy of detecting the vehicle information to be warned about and enables timely warnings for motor vehicles in the target lane that meet the warning condition.
Further, based on the step S220, obtaining the vehicle attribute of the target motor vehicle in the target video specifically includes the following steps:
step S221, judging whether the motor vehicle in the target video runs in the target lane or not based on the target detection algorithm, if so, setting the motor vehicle as the target motor vehicle, and acquiring the vehicle attribute of the target motor vehicle.
An anchor-based method, such as YOLO (You Only Look Once, a unified real-time object detection algorithm), the Single Shot MultiBox Detector (SSD), or a Region-based Convolutional Neural Network (R-CNN, regions with CNN features), or an anchor-free method, such as CenterNet or CornerNet, may be used to perform target detection on the target video so as to locate the motor vehicles in the target video and determine the positional relationship between each motor vehicle and the lane lines, thereby determining whether the motor vehicle is driving in the target lane.
Further, in one embodiment, based on the above step S221, the target lane is determined based on lane lines configured in advance in the target video, and determining, based on the target detection algorithm, whether the motor vehicle in the target video is running in the target lane includes the following steps:
and step S2211, inputting the target video into a preset target detection model to obtain key point information of the motor vehicle.
Specifically, the target detection model may be a CenterNet model. The target video is processed with the keypoint-based CenterNet to obtain the key point information of the motor vehicles in the target video. The key point information may include the vehicle center point T_c, the left headlight point L_l, the right headlight point L_r, the left tire ground point T_l, the right tire ground point T_r, the body frame width T_w, the body frame height T_h, the license plate center point P_c, the left rearview mirror point M_l, and the right rearview mirror point M_r. The key point information can be used to characterize the position of the motor vehicle in the target video.
And step S2212, calculating the head projection central point of the motor vehicle according to the key point information.
Specifically, among the detected key points, if the left tire ground point T_l and the right tire ground point T_r are both present, the head projection center point P-lane is the midpoint of the line connecting T_l and T_r. If T_l and T_r are not both detected, it is checked whether the left headlight point L_l and the right headlight point L_r are both present; if so, the head projection center point P-lane is calculated from L_l and L_r as follows:
[formula given as image BDA0003147502640000091 in the original]
Otherwise, if the left rearview mirror point M_l and the right rearview mirror point M_r are both present, the head projection center point P-lane is calculated from M_l and M_r as follows:
[formula given as image BDA0003147502640000092 in the original]
and step S2213, determining whether the motor vehicle runs in the target lane or not according to the position relation between the vehicle head projection center point and the lane line.
Specifically, if the vehicle head projection center point P-lane is located between two preset lane lines, the motor vehicle is determined to be running in the target lane.
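A small sketch of this fallback computation is given below. The exact headlight- and mirror-based formulas appear only as images in the original document, so the plain midpoints used for those two cases are an assumption made for illustration; the keypoint names are likewise illustrative. Combined with the point_in_lane sketch given earlier, the result yields the decision of step S2213.

def head_projection_center(keypoints):
    # keypoints: dict mapping names such as "tire_left", "tire_right",
    # "headlight_left", "headlight_right", "mirror_left", "mirror_right"
    # to (x, y) image coordinates, or None when the point was not detected.
    def midpoint(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    # Fallback order from the text: tire points, then headlights, then mirrors.
    for left_key, right_key in (("tire_left", "tire_right"),
                                ("headlight_left", "headlight_right"),
                                ("mirror_left", "mirror_right")):
        left, right = keypoints.get(left_key), keypoints.get(right_key)
        if left is not None and right is not None:
            return midpoint(left, right)
    return None  # not enough key points detected in this frame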
Additionally, in an embodiment, based on the step S230, acquiring radar data corresponding to the target vehicle further includes the following steps:
and step S231, radar data under the target traffic scene is acquired.
Specifically, the radar data in the target traffic scene may be provided by the radar device installed in the target traffic scene by transmitting an electromagnetic wave to the motor vehicle in the target traffic scene and receiving an echo of the electromagnetic wave. The radar device may be installed in a target traffic scene adjacent to the monitoring device, or may be installed in a target traffic scene separately from the monitoring device, and the installation manner is not specifically limited in this embodiment.
And step S232, associating the radar data in the target traffic scene with the corresponding target motor vehicle in the target video through an image calibration algorithm to obtain the radar data corresponding to the target motor vehicle.
Additionally, in one embodiment, based on the step S232, the associating the radar data to the corresponding target vehicle in the target video by the image calibration algorithm further includes the following steps:
Step S2321, image data in the target video is mapped into a bird's-eye view through an image calibration algorithm, and the radar data is associated with the corresponding target motor vehicle according to the bird's-eye view.
Specifically, the video image is calibrated using an affine transformation or a deep learning method, and a single frame of the video image is mapped into a bird's-eye view. The bird's-eye view is then associated with the radar data according to the coordinates in the bird's-eye view, yielding the radar data corresponding to the target motor vehicle.
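As one concrete (and assumed) way to perform this calibration, the sketch below uses a planar perspective (homography) transform computed from four calibrated point pairs with OpenCV; the text only mentions affine transformation or deep learning, so the homography and the coordinate values are illustrative choices, not the patent's prescription.

import cv2
import numpy as np

# Four image points (pixels) and their surveyed ground-plane positions (metres),
# obtained once when calibrating the monitoring camera. Values are placeholders.
image_points = np.float32([[420, 880], [1500, 880], [1180, 420], [740, 420]])
ground_points = np.float32([[-1.75, 5.0], [1.75, 5.0], [1.75, 40.0], [-1.75, 40.0]])

H = cv2.getPerspectiveTransform(image_points, ground_points)   # image -> bird's-eye view

def to_birds_eye(image_point, homography=H):
    # Map a single (x, y) pixel coordinate into bird's-eye-view ground coordinates.
    pt = np.float32([[image_point]])                            # shape (1, 1, 2) for OpenCV
    return tuple(cv2.perspectiveTransform(pt, homography)[0, 0])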
Additionally, in an embodiment, based on the above step S220, the vehicle attributes of the target motor vehicle include the license plate attribute and the vehicle type attribute, and obtaining the vehicle attribute of the target motor vehicle in the target video further includes the following steps:
step S222, obtaining a target motor vehicle image of the target motor vehicle, and processing the target motor vehicle image by using a license plate detection method to obtain a license plate image of the target motor vehicle.
Specifically, the target motor vehicle image may be obtained by image segmentation of the target motor vehicle located in the target lane in the target video, or may be captured in real time by a camera disposed at a specific location as the target motor vehicle travels in the target lane. The license plate detection method may be, without limitation, any of YOLO, SSD, R-CNN, CenterNet, CornerNet, or similar methods. The target motor vehicle image is processed with the license plate detection method, and the license plate region is segmented out to obtain the license plate image of the target motor vehicle.
And step S223, processing the license plate image by using a license plate recognition algorithm to obtain the license plate attribute of the target motor vehicle.
Specifically, the license plate image is recognized using morphology combined with an SVM, or using a deep learning method such as a Recurrent Neural Network (RNN), a Spatial Transformer Network (STN), or a Convolutional Neural Network (CNN), to obtain the license plate type and the license plate number of the license plate image. The license plate type may include yellow plates, double-layer yellow plates, black plates, yellow-green combination plates, gradient green plates, blue plates, white plates, and the like.
And S224, processing the target motor vehicle image by using a vehicle type recognition algorithm to obtain the vehicle type attribute of the target motor vehicle.
In addition, the target vehicle image can be input into the CNN network, and the vehicle type information of the target vehicle can be output.
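As an assumed, minimal example of such a classification network, the sketch below uses a ResNet-18 backbone from torchvision with its final layer sized to the vehicle categories; the backbone choice, label list, and preprocessing are illustrative and would in practice be replaced by a network trained on labelled vehicle images.

import torch
import torchvision

VEHICLE_TYPES = ["MPV", "SUV", "taxi", "minivan", "medium van", "truck",
                 "minibus", "medium coach", "bus", "car", "pickup truck", "other"]

model = torchvision.models.resnet18(num_classes=len(VEHICLE_TYPES))  # untrained here, for illustration
preprocess = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Resize((224, 224)),
])

def classify_vehicle_type(vehicle_image):
    # vehicle_image: cropped vehicle picture as an HxWx3 uint8 array.
    with torch.no_grad():
        logits = model(preprocess(vehicle_image).unsqueeze(0))
    return VEHICLE_TYPES[int(logits.argmax(dim=1))]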
In one embodiment, based on the step S240, in the case that it is determined that the target vehicle meets the vehicle warning condition according to the vehicle attribute and the driving information, the method further includes the following steps:
and step S241, determining the identification information of the target motor vehicle based on a video tracking algorithm.
Specifically, the target motor vehicle in the target video is tracked by using a video tracking algorithm, so that the target motor vehicle corresponds to the same motor vehicle in the actual traffic scene from appearance to disappearance in the target video. Therefore, uniqueness of the target motor vehicle in the target video is realized, and subsequent repeated processing on the target motor vehicle is avoided.
And step S242, generating warning information for each target motor vehicle according to the identification information in the target traffic scene.
Specifically, the warning information can be generated once for each target motor vehicle, so that in the actual traffic scene each target motor vehicle that meets the vehicle warning condition is warned about once, or a preset number of times, and the disturbance that repeated prompts would cause to nearby residents is avoided.
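The deduplication can hinge directly on the tracker's identification information, as in the sketch below; the structure of warned_counts and the output_device interface are assumptions made for illustration.

def warn_once_per_vehicle(warned_counts, track_id, warning_info, output_device, max_repeats=1):
    # warned_counts: dict mapping track_id -> number of warnings already issued.
    # track_id: identification information kept consistent by the video tracker.
    issued = warned_counts.get(track_id, 0)
    if issued < max_repeats:
        output_device.show(warning_info)     # display screen and/or voice output
        warned_counts[track_id] = issued + 1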
In one embodiment, as shown in fig. 3, there is provided a motor vehicle passing warning method, including:
step S310, inputting a video in a target traffic scene;
step S320, carrying out lane line configuration on the video;
step S330, vehicle detection, tracking and key point detection are carried out on the video;
step S340, performing radar vision association on the video data and the radar data;
step S350, identifying vehicle attributes in a target traffic scene;
and S360, performing right-turn wagon warning on the wagon on the right-turn lane according to the radar data and the vehicle attribute.
In the above steps S210 to S240, whether the motor vehicle in the target video is running in the target lane is judged based on the target detection algorithm, and the vehicle attribute of the target motor vehicle running in the target lane is recognized, so that motor vehicle information for the specific lane is obtained. The key point information of the motor vehicle is obtained with the target detection model, the head projection center point of the motor vehicle is calculated from the key point information, and whether the motor vehicle is running in the target lane is determined from the positional relationship between the head projection center point and the lane lines, which improves the accuracy of determining the positional relationship between the motor vehicle and the target lane. Radar data in the target traffic scene is acquired and associated, through the image calibration algorithm, with the corresponding target motor vehicle in the target video, so that the radar data corresponding to the target motor vehicle is obtained and the accuracy of the acquired driving information of the target motor vehicle is improved. The license plate image of the target motor vehicle is obtained with the license plate detection method, the license plate attribute is obtained with the license plate recognition algorithm, and the vehicle type attribute of the target motor vehicle is obtained with the vehicle type recognition algorithm, so that warning information can be generated for target motor vehicles meeting the vehicle warning condition according to the actual traffic scene, and timely warning of motor vehicle information meeting the vehicle warning condition in the right-turn lane is realized.
In this embodiment, a motor vehicle passing warning device is further provided. The device is used to implement the above embodiments and preferred implementations, and what has already been described is not repeated. The terms "module," "unit," "subunit," and the like as used below may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a schematic structural diagram of a vehicle-passing warning device 40 in the embodiment of the present application, and as shown in fig. 4, the vehicle-passing warning device 40 includes: an obtaining module 42, a determining module 44, a radar vision correlation module 46, and an alerting module 48, wherein:
the acquiring module 42 is configured to acquire a target video in a target traffic scene, where the target video includes at least one motor vehicle and a target lane;
the judging module 44 is configured to obtain the vehicle attribute of a target motor vehicle in the target video, where the target motor vehicle comprises a motor vehicle, among the at least one motor vehicle, that is running in the target lane;
the radar vision correlation module 46 is configured to obtain radar data corresponding to the target motor vehicle, where the radar data includes driving information of the target motor vehicle;
and the warning module 48 is used for determining whether the target motor vehicle meets the vehicle warning condition according to the vehicle attribute and the driving information.
The motor vehicle passing warning device 40 acquires a target video in a target traffic scene, wherein the target video includes at least one motor vehicle and a target lane; acquires the vehicle attribute of the target motor vehicle in the target video, the target motor vehicle being a motor vehicle, among the at least one motor vehicle, that is running in the target lane; acquires radar data corresponding to the target motor vehicle, the radar data including the driving information of the target motor vehicle; and determines whether the target motor vehicle meets the vehicle warning condition according to the vehicle attribute and the driving information. By associating the radar data with the video data, the accuracy of the vehicle information acquired for motor vehicles in the target lane can be improved, which in turn improves the accuracy of detecting the vehicle information to be warned about and enables timely warnings for motor vehicles in the target lane that meet the warning condition.
In one embodiment, the determining module 44 is further configured to determine whether the vehicle in the target video is driving in the target lane based on a target detection algorithm, and if so, set the vehicle as the target vehicle and obtain the vehicle attribute of the target vehicle.
In one embodiment, the determining module 44 is further configured to input the target video into a preset target detection model, obtain key point information of the motor vehicle, calculate a vehicle head projection center point of the motor vehicle according to the key point information, and determine whether the motor vehicle runs in the target lane according to a position relationship between the vehicle head projection center point and a lane line.
In one embodiment, the radar vision correlation module 46 is further configured to acquire radar data in a target traffic scene, and correlate the radar data in the target traffic scene to a corresponding target vehicle in the target video through an image calibration algorithm, so as to obtain radar data corresponding to the target vehicle.
In one embodiment, the radar-vision correlation module 46 is further configured to map the image data in the target video to a bird's-eye view through an image calibration algorithm, and correlate the radar data to the corresponding target vehicle according to the bird's-eye view.
In one embodiment, the determining module 44 is further configured to obtain a target vehicle image of the target vehicle, process the target vehicle image by using a license plate detection method to obtain a license plate image of the target vehicle, process the license plate image by using a license plate recognition algorithm to obtain a license plate attribute of the target vehicle, and process the target vehicle image by using a vehicle type recognition algorithm to obtain a vehicle type attribute of the target vehicle.
In one embodiment, the alert module 48 is further configured to determine identification information of the target vehicles based on a video tracking algorithm, and generate alert information for each target vehicle according to the identification information in the target traffic scene.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
In one embodiment, as shown in FIG. 5, an electronic device is provided that includes a memory and a processor. The memory stores a computer program, and the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor, when executing the computer program, implements the following steps:
acquiring a target video under a target traffic scene, wherein the target video comprises at least one motor vehicle and a target lane;
acquiring a vehicle attribute of a target motor vehicle in the target video, wherein the target motor vehicle is a motor vehicle, among the at least one motor vehicle, that is running in the target lane;
acquiring radar data corresponding to a target motor vehicle, wherein the radar data comprises running information of the target motor vehicle;
and determining whether the target motor vehicle meets the vehicle warning condition or not according to the vehicle attribute and the driving information.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and judging whether the motor vehicle in the target video runs in the target lane or not based on a target detection algorithm, if so, setting the motor vehicle as the target motor vehicle, and acquiring the vehicle attribute of the target motor vehicle.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
inputting a target video into a preset target detection model to obtain key point information of the motor vehicle;
calculating the head projection central point of the motor vehicle according to the key point information;
and determining whether the motor vehicle runs in the target lane or not according to the position relation between the vehicle head projection center point and the lane line.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring radar data in a target traffic scene;
and associating the radar data in the target traffic scene to the corresponding target motor vehicle in the target video through an image calibration algorithm to obtain the radar data corresponding to the target motor vehicle.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and mapping image data in the target video into a bird's-eye view through an image calibration algorithm, and associating the radar data to the corresponding target motor vehicle according to the bird's-eye view.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a target motor vehicle image of a target motor vehicle, and processing the target motor vehicle image by using a license plate detection method to obtain a license plate image of the target motor vehicle;
processing the license plate image by utilizing a license plate recognition algorithm to obtain the license plate attribute of the target motor vehicle;
and processing the target motor vehicle image by using a vehicle type recognition algorithm to obtain the vehicle type attribute of the target motor vehicle.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining identification information of the target motor vehicle based on a video tracking algorithm;
and generating warning information for each target motor vehicle according to the identification information in the target traffic scene.
The electronic device acquires a target video in a target traffic scene, wherein the target video includes at least one motor vehicle and a target lane; acquires the vehicle attribute of the target motor vehicle in the target video, the target motor vehicle being a motor vehicle, among the at least one motor vehicle, that is running in the target lane; acquires radar data corresponding to the target motor vehicle, the radar data including the driving information of the target motor vehicle; and determines whether the target motor vehicle meets the vehicle warning condition according to the vehicle attribute and the driving information. By associating the radar data with the video data, the accuracy of the vehicle information acquired for motor vehicles in the target lane can be improved, which in turn improves the accuracy of detecting the vehicle information to be warned about and enables timely warnings for motor vehicles in the target lane that meet the warning condition.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing a preset configuration information set. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize the motor vehicle passing warning method.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a motor vehicle passing warning method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the patent protection. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (11)

1. A method for warning of vehicle passing, the method comprising:
acquiring a target video under a target traffic scene, wherein the target video comprises at least one motor vehicle and a target lane;
acquiring a vehicle attribute of a target motor vehicle in the target video, wherein the target motor vehicle comprises a motor vehicle, among the at least one motor vehicle, that is running in the target lane;
acquiring radar data corresponding to the target motor vehicle, wherein the radar data comprises driving information of the target motor vehicle;
and determining whether the target motor vehicle meets vehicle warning conditions or not according to the vehicle attributes and the driving information.
2. The motor vehicle passing warning method according to claim 1, wherein the obtaining the vehicle attribute of the target motor vehicle in the target video comprises:
and judging whether the motor vehicle in the target video runs in the target lane or not based on a target detection algorithm, if so, setting the motor vehicle as a target motor vehicle, and acquiring the vehicle attribute of the target motor vehicle.
3. The motor vehicle passing warning method according to claim 2, wherein the target lane is determined based on a lane line configured in advance in the target video, and the determining whether the motor vehicle in the target video is running in the target lane based on a target detection algorithm comprises:
inputting the target video into a preset target detection model to obtain key point information of the motor vehicle;
calculating the head projection central point of the motor vehicle according to the key point information;
and determining whether the motor vehicle runs in the target lane or not according to the position relation between the vehicle head projection center point and the lane line.
4. The motor vehicle passing warning method according to claim 1, wherein the obtaining the radar data corresponding to the target motor vehicle comprises:
acquiring radar data in the target traffic scene;
and associating the radar data in the target traffic scene to the corresponding target motor vehicle in the target video through an image calibration algorithm to obtain the radar data corresponding to the target motor vehicle.
5. The motor vehicle passing warning method according to claim 4, wherein the associating the radar data in the target traffic scene with the corresponding target motor vehicle in the target video through an image calibration algorithm comprises:
mapping image data in the target video into a bird's-eye view through the image calibration algorithm, and associating the radar data with the corresponding target motor vehicle according to the bird's-eye view.
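For claim 5, the sketch below shows one common way to realise the bird's-eye-view association step with a calibration homography and nearest-neighbour matching; the calibration points, the OpenCV-based mapping, and the distance threshold are assumptions, not the claimed calibration algorithm itself.

```python
# Illustrative radar-video association for claim 5: image points are mapped into
# a bird's-eye (ground-plane) view with a calibration homography, then each radar
# target is matched to the nearest projected vehicle. All values are placeholders.
import numpy as np
import cv2

# Four image points and their ground-plane correspondences (metres), obtained
# offline during image calibration; the numbers here are placeholders.
image_pts = np.float32([[420, 700], [860, 700], [980, 1050], [300, 1050]])
ground_pts = np.float32([[0.0, 30.0], [3.5, 30.0], [3.5, 5.0], [0.0, 5.0]])
H = cv2.getPerspectiveTransform(image_pts, ground_pts)

def to_birds_eye(image_points: np.ndarray) -> np.ndarray:
    pts = image_points.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

def associate(radar_targets: np.ndarray, vehicle_image_points: np.ndarray,
              max_dist_m: float = 2.0):
    """Greedy nearest-neighbour association between radar targets (x, y in
    metres on the ground plane) and vehicles projected into the bird's-eye view."""
    vehicle_ground = to_birds_eye(vehicle_image_points)
    matches = []
    for r_idx, radar_xy in enumerate(radar_targets):
        dists = np.linalg.norm(vehicle_ground - radar_xy, axis=1)
        v_idx = int(np.argmin(dists))
        if dists[v_idx] <= max_dist_m:
            matches.append((r_idx, v_idx))
    return matches
```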
6. The motor vehicle passing warning method according to claim 1, wherein the vehicle attributes of the target motor vehicle include a license plate attribute and a vehicle type attribute, and the obtaining of the vehicle attributes of the target motor vehicle in the target video comprises:
acquiring a target motor vehicle image of the target motor vehicle, and processing the target motor vehicle image by using a license plate detection method to obtain a license plate image of the target motor vehicle;
processing the license plate image by utilizing a license plate recognition algorithm to obtain the license plate attribute of the target motor vehicle;
and processing the target motor vehicle image by using a vehicle type recognition algorithm to obtain the vehicle type attribute of the target motor vehicle.
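The sketch below illustrates the data flow of claim 6 only; plate_detector, plate_recognizer and type_classifier stand in for trained models and are purely hypothetical placeholders, not a specific library API.

```python
# Illustrative attribute extraction for claim 6; the three model callables are
# hypothetical placeholders for trained detection/recognition networks.
def extract_vehicle_attributes(vehicle_image, plate_detector, plate_recognizer,
                               type_classifier) -> dict:
    # License plate detection: locate the plate region inside the vehicle crop.
    x, y, w, h = plate_detector(vehicle_image)      # hypothetical detector
    plate_image = vehicle_image[y:y + h, x:x + w]

    # License plate recognition: read the plate string from the plate crop.
    plate_text = plate_recognizer(plate_image)      # hypothetical recognizer

    # Vehicle type recognition on the full vehicle crop (e.g. truck, bus, car).
    vehicle_type = type_classifier(vehicle_image)   # hypothetical classifier

    return {"plate": plate_text, "type": vehicle_type}
```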
7. The motor vehicle passing warning method according to claim 1, wherein, in a case where it is determined according to the vehicle attributes and the driving information that the target motor vehicle meets a vehicle warning condition, the method further comprises:
determining identification information of the target motor vehicle based on a video tracking algorithm;
and generating, according to the identification information, warning information for each target motor vehicle in the target traffic scene.
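One practical reason for the identification step of claim 7 is to raise a single warning per tracked vehicle rather than one per frame; the sketch below shows that idea with hypothetical field names, and is not part of the claims.

```python
# Illustrative use of the tracker-assigned identification for claim 7: a warning
# record is generated only the first time a given target vehicle satisfies the
# warning condition. Field names are assumptions.
from typing import Optional

already_warned: set = set()

def maybe_warn(track_id: int, vehicle_info: dict) -> Optional[dict]:
    """Return a warning record for a newly warned tracked vehicle, or None if a
    warning was already generated for this identification."""
    if track_id in already_warned:
        return None
    already_warned.add(track_id)
    return {
        "id": track_id,
        "plate": vehicle_info.get("plate"),
        "type": vehicle_info.get("type"),
        "speed_mps": vehicle_info.get("speed_mps"),
    }
```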
8. A motor vehicle passing warning device, characterized in that the device comprises an acquisition module, a judgment module, a radar vision correlation module and a warning module, wherein:
the acquisition module is used for acquiring a target video under a target traffic scene, wherein the target video comprises at least one motor vehicle and a target lane;
the judgment module is used for acquiring vehicle attributes of a target motor vehicle in the target video, wherein the target motor vehicle comprises a motor vehicle, among the at least one motor vehicle, that is running in the target lane;
the radar vision correlation module is used for acquiring radar data corresponding to the target motor vehicle, wherein the radar data comprises driving information of the target motor vehicle;
and the warning module is used for determining whether the target motor vehicle meets a vehicle warning condition according to the vehicle attributes and the driving information.
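Purely as an illustration of how the four modules of claim 8 could be wired together (the class and method names below are assumptions, not the device's actual interfaces):

```python
# Illustrative composition of the claim-8 modules; all interfaces are assumed.
class MotorVehiclePassingWarningDevice:
    def __init__(self, acquisition, judgment, radar_vision_correlation, warning):
        self.acquisition = acquisition                              # fetches the target video
        self.judgment = judgment                                    # target vehicles + attributes
        self.radar_vision_correlation = radar_vision_correlation    # radar data per vehicle
        self.warning = warning                                      # warning-condition check

    def run(self, scene):
        video = self.acquisition.get_target_video(scene)
        targets = self.judgment.get_target_vehicles(video)
        radar = self.radar_vision_correlation.get_radar_data(targets)
        return self.warning.evaluate(targets, radar)
```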
9. An electronic device comprising a memory and a processor, wherein the memory has a computer program stored therein, and wherein the processor, when executing the computer program, performs the steps of the method according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented when the computer program is executed by the processor.
11. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to carry out the steps of the method according to any one of claims 1 to 7 when executed.
CN202110756947.5A 2021-07-05 2021-07-05 Motor vehicle passing warning method and device, electronic device and computer equipment Pending CN113688662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110756947.5A CN113688662A (en) 2021-07-05 2021-07-05 Motor vehicle passing warning method and device, electronic device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110756947.5A CN113688662A (en) 2021-07-05 2021-07-05 Motor vehicle passing warning method and device, electronic device and computer equipment

Publications (1)

Publication Number Publication Date
CN113688662A true CN113688662A (en) 2021-11-23

Family

ID=78576650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110756947.5A Pending CN113688662A (en) 2021-07-05 2021-07-05 Motor vehicle passing warning method and device, electronic device and computer equipment

Country Status (1)

Country Link
CN (1) CN113688662A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627409A (en) * 2022-02-25 2022-06-14 海信集团控股股份有限公司 Method and device for detecting abnormal lane change of vehicle

Similar Documents

Publication Publication Date Title
CN113284366B (en) Vehicle blind area early warning method, early warning device, MEC platform and storage medium
US10839694B2 (en) Blind spot alert
Mukhtar et al. Vehicle detection techniques for collision avoidance systems: A review
CN108638999B (en) Anti-collision early warning system and method based on 360-degree look-around input
US7366325B2 (en) Moving object detection using low illumination depth capable computer vision
CN107341454B (en) Method and device for detecting obstacles in scene and electronic equipment
Wu et al. Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement
CN112084232B (en) Vehicle driving risk assessment method and device based on visual field information of other target vehicles
Kwon et al. A study on development of the blind spot detection system for the IoT-based smart connected car
US20170309181A1 (en) Apparatus for recognizing following vehicle and method thereof
CN112172663A (en) Danger alarm method based on door opening and related equipment
CN110033621B (en) Dangerous vehicle detection method, device and system
WO2009101660A1 (en) Vehicle periphery monitoring device, vehicle, and vehicle periphery monitoring program
CN112937445B (en) 360-degree vehicle safety auxiliary method and vehicle-mounted system
Cualain et al. Multiple-camera lane departure warning system for the automotive environment
Choi et al. Cut-in vehicle warning system exploiting multiple rotational images of SVM cameras
CN112699862B (en) Image data processing method, device, equipment and storage medium
CN113688662A (en) Motor vehicle passing warning method and device, electronic device and computer equipment
CN112183206B (en) Traffic participant positioning method and system based on road side monocular camera
CN110727269A (en) Vehicle control method and related product
Satzoda et al. Vision-based front and rear surround understanding using embedded processors
CN113661093A (en) Driver assistance method and device
CN112660121A (en) Hidden danger vehicle identification early warning method and device, vehicle-mounted terminal and storage medium
Dhanasekaran et al. A survey on vehicle detection based on vision
CN114898325B (en) Vehicle dangerous lane change detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination