CN112766069A - Vehicle illegal parking detection method and device based on deep learning and electronic equipment - Google Patents

Vehicle illegal parking detection method and device based on deep learning and electronic equipment

Info

Publication number
CN112766069A
Authority
CN
China
Prior art keywords
target
area
target detection
initial image
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011633740.0A
Other languages
Chinese (zh)
Inventor
杨开岳
黄家嘉
唐上体
纪哲
顾博欣
刘嘉乐
林俊杰
黄靖霞
张飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd filed Critical Guangdong Power Grid Co Ltd
Priority to CN202011633740.0A priority Critical patent/CN112766069A/en
Publication of CN112766069A publication Critical patent/CN112766069A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of vehicle detection and provides a deep-learning-based vehicle illegal-parking detection method, a corresponding apparatus, and an electronic device. The method comprises: acquiring an initial image and inputting it into both a target detection network and a semantic segmentation network; performing target detection on the initial image with the target detection network and outputting a target detection feature map containing the target position of the target object; performing semantic segmentation on the initial image with the semantic segmentation network and outputting a semantic feature map; and superposing the target detection feature map and the semantic feature map, acquiring a first overlapping area between the target position of the target object and the conventional area and a second overlapping area between the target position of the target object and the restricted area, and determining whether the target object is illegally parked based on the ratio of the first overlapping area to the second overlapping area. The invention reduces the workload and improves working efficiency.

Description

Vehicle illegal parking detection method and device based on deep learning and electronic equipment
Technical Field
The application relates to the technical field of vehicle detection, and in particular to a deep-learning-based vehicle illegal-parking detection method and apparatus, and an electronic device.
Background
With rapid economic development, the total number of roads and vehicles in Chinese cities keeps growing, and illegal parking behavior is increasing accordingly; detecting illegally parked vehicles in urban road surveillance images or videos has therefore become an important task in city and park management. Although high-definition surveillance cameras have been deployed at most intersections, the volume of video generated every day is enormous; monitoring the video in real time or processing it offline manually is time-consuming and labor-intensive, prone to missed judgments, and therefore inefficient. In the prior art, monitoring illegally parked vehicles thus suffers from heavy workload and low working efficiency.
Summary of the Application
The application aims at overcoming the defects in the prior art, and provides a vehicle illegal parking detection method based on deep learning, which is used for solving the problems of large workload and low working efficiency in monitoring illegal parking vehicles.
The purpose of the application is realized by the following technical scheme:
In a first aspect, a deep-learning-based vehicle illegal-parking detection method is provided, the method comprising:
Acquiring an initial image, and respectively inputting the initial image into a target detection network and a semantic segmentation network, wherein the initial image comprises a target object;
performing target detection on the initial image through the target detection network, and outputting a target detection characteristic map, wherein the target detection characteristic map comprises a target position of the target object;
performing semantic segmentation on the initial image through the semantic segmentation network, and outputting a semantic feature map, wherein the semantic feature map comprises a conventional region and a limited region;
and superposing the target detection feature map and the semantic feature map, acquiring a first overlapping area between the target position of the target object and the conventional area and a second overlapping area between the target position of the target object and the restricted area, and determining whether the target object is illegally parked based on the ratio of the first overlapping area to the second overlapping area.
In a second aspect, an embodiment of the present invention further provides a vehicle parking violation detection apparatus based on deep learning, where the apparatus includes:
an input module, configured to acquire an initial image and input the initial image into a target detection network and a semantic segmentation network respectively, wherein the initial image comprises a target object;
the first target detection module is used for carrying out target detection on the initial image through the target detection network and outputting a target detection characteristic diagram, wherein the target detection characteristic diagram comprises a target position of the target object;
the first semantic segmentation module is used for performing semantic segmentation on the initial image through the semantic segmentation network and outputting a semantic feature map, wherein the semantic feature map comprises a conventional region and a limited region;
and a judging module, configured to superpose the target detection feature map and the semantic feature map, acquire a first overlapping area between the target position of the target object and the conventional area and a second overlapping area between the target position of the target object and the restricted area, and determine whether the target object is illegally parked based on the ratio of the first overlapping area to the second overlapping area.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the computer program, the processor implements the steps of the deep-learning-based vehicle illegal-parking detection method according to any of the embodiments herein.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the steps of the deep-learning-based vehicle illegal-parking detection method according to any of the embodiments herein.
Beneficial effects of the present application: an initial image containing a target object is acquired and input into both a target detection network and a semantic segmentation network; the target detection network performs target detection on the initial image and outputs a target detection feature map containing the target position of the target object; the semantic segmentation network performs semantic segmentation on the initial image and outputs a semantic feature map containing a conventional area and a restricted area; the two feature maps are then superposed, a first overlapping area between the target position and the conventional area and a second overlapping area between the target position and the restricted area are obtained, and whether the target object is illegally parked is determined from the ratio of these two areas. By combining a target detection network with a semantic segmentation network, the method locates the target position and the restricted area directly, which reduces the amount of video data to be examined; moreover, no manual real-time supervision of the video is required, so the workload is effectively reduced and working efficiency is improved.
Drawings
FIG. 1 is a schematic flowchart of a deep learning-based vehicle violation detection method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart illustrating another method for detecting vehicle parking violation based on deep learning according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an embodiment of step S102 in fig. 1 according to the present application;
fig. 4 is a flowchart illustrating an embodiment of step S204 in fig. 2 according to the present application;
FIG. 5 is a schematic structural diagram of a vehicle illegal parking detection device based on deep learning according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another deep learning-based vehicle violation detection apparatus provided in the embodiments of the present application;
FIG. 7 is a schematic structural diagram of another deep learning-based vehicle violation detection apparatus provided in the embodiments of the present application;
FIG. 8 is a schematic structural diagram of another deep learning-based vehicle violation detection apparatus provided in the embodiments of the present application;
FIG. 9 is a schematic structural diagram of another deep learning-based vehicle violation detection apparatus provided in the embodiments of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following describes preferred embodiments of the present application, and those skilled in the art will be able to realize the invention and its advantages by using the related art in the following description.
As shown in fig. 1, to further describe the technical solution of the present application, an embodiment of the present invention provides a flowchart of a vehicle parking violation detection method based on deep learning, where the method specifically includes the following steps:
s101, obtaining an initial image, and respectively inputting the initial image into a target detection network and a semantic segmentation network, wherein the initial image comprises a target object.
In this embodiment, scenarios in which the deep-learning-based vehicle illegal-parking detection method can be applied include, but are not limited to, illegal-parking detection systems and traffic management systems. The electronic device on which the method runs can acquire the initial image from a video capture device over a wired or wireless connection. The wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi (Wireless Fidelity) connection, a Bluetooth connection, a WiMAX (Worldwide Interoperability for Microwave Access) connection, Zigbee (a low-power local area network protocol), a UWB (Ultra-Wideband) connection, and other wireless connection methods now known or developed in the future.
The initial image may be a single frame or a video image and may contain various kinds of information, for example: cars, conventional areas such as roads or parking garages, restricted areas, identification signs, and so on. The target detection network may be a YOLO-V3 target detection network, and the semantic segmentation network may be a DeepLab-v3 semantic segmentation network; the initial image input into the YOLO-V3 target detection network and into the DeepLab-v3 semantic segmentation network is the same frame. In this embodiment, the target object may be a vehicle.
Specifically, the YOLO target detection network divides the input initial image into m × m grid cells; if the coordinates of the center of a target object fall inside a cell, that cell is responsible for detecting the object. Each cell predicts B bounding boxes (bbox) together with their confidence scores, as well as C class probabilities. The bbox information (x, y, w, h) consists of the offsets of the object's center from the cell position along the x and y axes and of the box's width and height, all normalized. The confidence reflects both whether the cell contains an object and how accurate the predicted box location is.
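As an illustrative sketch only (not the implementation of this application), the grid assignment and normalized bbox encoding described above can be expressed as follows; the grid size m, the image size, and the function names are assumptions chosen for illustration:

```python
def encode_bbox(cx, cy, w, h, img_w, img_h, m=13):
    """Encode a box with pixel center (cx, cy) and pixel size (w, h)
    into the YOLO-style form described above: the responsible grid cell
    and the normalized (x, y, w, h) values."""
    gx = int(cx / img_w * m)            # grid column containing the center
    gy = int(cy / img_h * m)            # grid row containing the center
    x = cx / img_w * m - gx             # center offset within the cell (0..1)
    y = cy / img_h * m - gy
    return (gx, gy), (x, y, w / img_w, h / img_h)

# Example: a vehicle centered at (300, 250) in a 416 x 416 frame
cell, offsets = encode_bbox(300, 250, 120, 80, 416, 416)
print(cell, offsets)   # the cell responsible for detecting this vehicle
```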
The YOLO family of network structures includes YOLO-V1, YOLO-V2, YOLO-V3, and so on. The YOLO-V3 target detection network used in this embodiment adopts multi-scale prediction (FPN-like) together with a stronger backbone classification network (ResNet-like) and classifier. The backbone is Darknet-53, which borrows ideas from ResNet and achieves higher speed while preserving accuracy.
The DeepLab-v3 semantic segmentation network segments objects at multiple scales. It designs cascaded and parallel atrous (dilated) convolution modules and adopts several different atrous rates to capture multi-scale context. DeepLab-v3 also introduces an Atrous Spatial Pyramid Pooling (ASPP) module, which mines convolutional features at different scales and encodes image-level features carrying global context, improving the segmentation result.
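A minimal PyTorch sketch of the atrous spatial pyramid pooling idea mentioned above; the channel sizes and atrous rates here are illustrative assumptions rather than the exact DeepLab-v3 configuration:

```python
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    """Parallel atrous (dilated) 3x3 convolutions at several rates,
    concatenated and fused by a 1x1 convolution."""
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]   # multi-scale context
        return self.fuse(torch.cat(feats, dim=1))

x = torch.randn(1, 256, 32, 32)
print(SimpleASPP()(x).shape)   # torch.Size([1, 256, 32, 32])
```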
Specifically, the initial image may be acquired in different ways under different conditions. In the offline case, the initial image can be read directly from stored footage and input into the target detection network and the semantic segmentation network. In the online case, the initial image can be captured in real time by a video monitoring device (with a camera) and then input into the YOLO-V3 target detection network for target object detection, while the image correspondingly input into the DeepLab-v3 semantic segmentation network can be a video image acquired offline.
S102, carrying out target detection on the initial image through a target detection network, and outputting a target detection characteristic diagram, wherein the target detection characteristic diagram comprises a target position of a target object.
The target detection feature map may be a feature map containing the vehicle and the vehicle position, i.e., the result of feature extraction. When the YOLO-V3 target detection network detects a target object, the object can be boxed and labeled, for example: the target object is a vehicle on the road, which can be enclosed by a rectangular bounding box (for example, 550 × 377 pixels) with the label "car" written inside or next to the box. Naturally, the labeled box covers the position where the target object contacts the ground (the target position).
S103, performing semantic segmentation on the initial image through a semantic segmentation network, and outputting a semantic feature map, wherein the semantic feature map comprises a conventional region and a limited region.
Wherein the regular area may represent a road safe driving area in the road, and the above-mentioned restricted area may represent an area where parking is prohibited. When the vehicle occupies a certain area in the parking prohibition area, it can be judged that the vehicle has an illegal parking behavior.
Semantic segmentation of the initial image by the DeepLab-v3 semantic segmentation network means partitioning the image into semantically interpretable categories, i.e., grouping/segmenting pixels according to the semantic meaning they express in the image, for example: all pixels belonging to the conventional area are identified and colored blue, and all pixels belonging to the restricted area are identified and colored purple. After the initial image is semantically segmented by the DeepLab-v3 network, a semantic feature map is output; it contains the conventional area and the restricted area, and may of course also contain interfering objects such as signboards and obstacles.
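As a sketch, assuming the segmentation network outputs a per-pixel class-ID map and that the IDs for the conventional and restricted areas are 1 and 2 (an illustrative label mapping, not one fixed by this application):

```python
import numpy as np

REGULAR_ID, RESTRICTED_ID = 1, 2   # hypothetical class IDs

def split_masks(seg_map: np.ndarray):
    """seg_map: (H, W) array of per-pixel class IDs.
    Returns boolean masks for the conventional area and the restricted area."""
    return seg_map == REGULAR_ID, seg_map == RESTRICTED_ID

def colorize(seg_map: np.ndarray) -> np.ndarray:
    """Color conventional pixels blue and restricted pixels purple, as described above."""
    rgb = np.zeros(seg_map.shape + (3,), dtype=np.uint8)
    rgb[seg_map == REGULAR_ID] = (0, 0, 255)        # blue
    rgb[seg_map == RESTRICTED_ID] = (128, 0, 128)   # purple
    return rgb
```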
S104, overlapping the target detection feature map and the semantic feature map, acquiring a first overlapping area of the target position of the target object and the conventional area and a second overlapping area of the target position of the target object and the restricted area, and judging whether the target object is illegal to stop based on the proportion of the first overlapping area and the second overlapping area.
After the YOLO-V3 target detection network outputs the target detection feature map and the DeepLab-v3 semantic segmentation network outputs the semantic feature map, the two maps can be superposed; that is, after the same frame of the initial image has had features extracted by the YOLO-V3 target detection network and the DeepLab-v3 semantic segmentation network respectively, the extracted features are superposed.
The first overlapping area is the area where the target position of the target object overlaps the conventional area after superposition; the second overlapping area is the area where the target position overlaps the restricted area, and both can be read directly from the superposed maps. Determining whether the target object is illegally parked from the ratio of the first overlapping area to the second overlapping area may mean computing the ratio P of the second overlapping area to the first overlapping area (defined in detail below) and comparing it with a preset value, for example: if P is greater than 1, the vehicle is judged to be illegally parked; otherwise it is judged to be parked normally. As a possible embodiment, if the vehicle is judged to be illegally parked, an alarm message may be sent; the alarm may take the form of text, voice, or an alarm sound, and the offending vehicle may be highlighted with an alarm marker in the background picture.
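A sketch of this decision rule, with P taken as the ratio of the restricted-area overlap B to the conventional-area overlap A (as defined in the later embodiment); the names and the default threshold are illustrative assumptions:

```python
def is_illegally_parked(a_regular: float, b_restricted: float,
                        threshold: float = 1.0) -> bool:
    """Return True if the vehicle footprint overlaps the restricted area
    more than the conventional area, i.e. P = B / A > threshold."""
    if a_regular == 0:
        # Footprint lies entirely outside the conventional area.
        return b_restricted > 0
    return (b_restricted / a_regular) > threshold

print(is_illegally_parked(1.5, 3.0))   # True: P = 2 > 1, illegal parking
```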
In the embodiment of the invention, an initial image is acquired and is respectively input into a target detection network and a semantic segmentation network, and the initial image comprises a target object; performing target detection on the initial image through a target detection network, and outputting a target detection characteristic diagram, wherein the target detection characteristic diagram comprises a target position of a target object; performing semantic segmentation on the initial image through a semantic segmentation network, and outputting a semantic feature map, wherein the semantic feature map comprises a conventional region and a limited region; the target detection feature map and the semantic feature map are overlapped, a first overlapping area of the target position of the target object and the conventional area and a second overlapping area of the target position of the target object and the limiting area are obtained, and whether the target object is illegal or not is judged based on the proportion of the first overlapping area and the second overlapping area. The embodiment of the invention combines the target detection network and the semantic segmentation network for judgment, detects and outputs the target detection characteristic diagram through the target detection network, performs semantic segmentation through the semantic segmentation network and outputs the semantic characteristic diagram, combines the target detection characteristic diagram and the semantic characteristic diagram, judges whether the target position of the target object is illegal or not according to the overlapping area of the target position of the target object and the conventional area and the proportion of the overlapping area of the target position of the target object and the limited area, and directly positions the target position and the limited area, thereby reducing the video data volume; and the whole process does not need to manually monitor and process the video in real time, so that the video data can be processed, the workload is effectively reduced, and the working efficiency is improved.
Optionally, as shown in fig. 2, fig. 2 is a flowchart of another vehicle violation detection method based on deep learning according to an embodiment of the present application. The acquiring of the initial image comprises offline acquisition and real-time acquisition, and the initial image comprises a video image, and the method specifically comprises the following steps:
s201, obtaining a video image in a preset time period in an off-line mode, training the video image in the preset time period through a Gaussian mixture background model, and extracting a target background image.
The preset time period may be, for example, 1 min or 10 min; it can be defined by the user or adjusted to the specific situation. The video images can be obtained offline, captured by a fixed camera, and several parked vehicles may appear in them. The Gaussian mixture model characterizes each pixel of the image with K Gaussian components (typically 3 to 5); the model is updated whenever a new frame arrives, each pixel of the current image is matched against the mixture, and a pixel that matches is classified as a background point, otherwise as a foreground point. The target background map is a map that contains no target object (vehicle).
For the same fixed camera, the semantically segmented pictures of the captured scene are very similar, so the road background seen by each camera only needs to be semantically segmented once and the segmented image can serve all subsequent detections. The real scene in a surveillance video is complex and changeable; the Gaussian mixture model can cope with image jitter, noise interference, moving objects, and so on, and extracts a clean target background map (without target objects) from the video stream. Specifically, modeling with the Gaussian mixture background model means representing the background with statistics, such as the probability density of a large number of samples of each pixel over a long time (within the preset time period), and then classifying pixels by their statistical differences, thereby modeling the complex dynamic background and extracting the target background map.
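A sketch of extracting such a background image with a Gaussian mixture model, using OpenCV's MOG2 background subtractor as a stand-in for the model described above (the file name and parameters are assumptions):

```python
import cv2

def extract_background(video_path: str, history: int = 500):
    """Feed offline frames into a Gaussian-mixture background model and
    return the learned background image (without moving vehicles)."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=history,
                                                    detectShadows=True)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        subtractor.apply(frame)            # update the per-pixel Gaussian mixture
    cap.release()
    return subtractor.getBackgroundImage() # clean target background map

# background = extract_background("camera01_offline.mp4")
```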
S202, inputting the target background image into a semantic segmentation network for semantic segmentation, and outputting a background semantic feature image which comprises a conventional area and a limited area.
After the target background map has been extracted by the Gaussian mixture background model, it is input into the DeepLabv3 semantic segmentation network for semantic segmentation. Because the target background map contains no target object, segmentation is faster, the conventional area and the restricted area are easy to separate, and a background semantic feature map is output.
S203, acquiring an initial image in real time, inputting the initial image into a target detection network for target detection, and outputting a target detection characteristic diagram.
The video image used to train the Gaussian mixture background model is acquired offline, i.e., historical footage is trained on; the background semantic feature map obtained by segmenting the extracted background with the DeepLabv3 semantic segmentation network can then be applied to subsequent real-time parking detection. By acquiring a frame of the initial image from the same camera in real time and inputting it into the YOLO-V3 target detection network, the target object (vehicle) in the initial image can be recognized, and a target detection feature map containing the target position of the target object is finally output.
S204, the target detection feature map and the background semantic feature map are overlapped, a first overlapping area of the target position of the target object and the conventional area and a second overlapping area of the target position of the target object and the restricted area are obtained, and whether the target object is illegal to stop is judged based on the proportion of the first overlapping area and the second overlapping area.
The target detection feature map containing the target position and the background semantic feature map containing the restricted area and the conventional area can be superposed; after superposition, the overlap of the target position with the restricted area and/or the conventional area becomes visible. Whether the vehicle is illegally parked is then judged by comparing the first overlapping area (between the target position and the conventional area) with the second overlapping area (between the target position and the restricted area).
In this embodiment, the Gaussian mixture background model, the target detection network, and the semantic segmentation network are combined for the judgment. The target detection feature map is detected and output in real time by the target detection network, while in the offline stage a segment of video captured by the same camera is fed into the Gaussian mixture background model, which outputs the target background map; the target background map is then passed to the semantic segmentation network. This overcomes image jitter, noise interference, and moving objects in the video and supplies the segmentation network with a clean background image containing no target object, which speeds up recognition and allows fast, accurate semantic segmentation. After the target detection feature map and the background semantic feature map are combined, whether the target object is illegally parked is judged from the ratio of the overlap between its target position and the conventional area to the overlap between its target position and the restricted area; the method locates the target position and the restricted area directly, reducing the amount of video data. Moreover, no manual real-time supervision of the video is needed: the video data are processed directly by the method of this embodiment, which effectively reduces the workload and improves working efficiency.
Optionally, as shown in fig. 3, fig. 3 is a schematic flowchart of step S102 provided in this embodiment, where S102 specifically includes the following steps:
s301, carrying out target detection on the initial image through a target detection network, and carrying out labeling frame labeling on the detected target object.
When the YOLO-V3 target detection network recognizes a vehicle while detecting vehicles in the initial image, the vehicle may be labeled with a labeling box so that information inside the box can be further obtained.
S302, selecting a preset area of a labeling frame as a target detection feature map to be output, wherein the preset area of the labeling frame comprises a target position of a target object.
Considering that the position information of a vehicle lies at its bottom and that the target object detected inside the labeling box is the vehicle, the preset area of the labeling box may be its lower half. The lower half of the labeling box contains the area occupied by the vehicle, i.e., the target position described above. The preset area may also be an area of a preset size, for example one whose length and width are each 1/3 of the length and width of the labeling box, anchored to the bottom edge of the box.
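A sketch of taking a bottom-anchored portion of a detection box as the vehicle's ground footprint, assuming boxes are given as (x1, y1, x2, y2) pixel coordinates with y increasing downward; the 1/3-height fraction is one illustrative choice:

```python
def footprint_region(box, height_fraction=1/3):
    """Return the bottom part of the labeling box, anchored to its bottom
    edge, as the target position where the vehicle touches the ground."""
    x1, y1, x2, y2 = box
    h = y2 - y1
    return (x1, int(y2 - h * height_fraction), x2, y2)

print(footprint_region((100, 50, 220, 170)))   # (100, 130, 220, 170)
```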
In the embodiment of the invention, the vehicle is marked in the form of the marking frame, so that the information in the marking frame can be acquired more quickly; and the lower half part of the intercepting marking frame is more favorable for acquiring the position information of the vehicle, so that the first coincidence area and the second coincidence area can be conveniently acquired subsequently.
Optionally, as shown in fig. 4, fig. 4 is a schematic flowchart of step S204 provided in this embodiment, where S204 specifically includes the following steps:
s401, merging the target detection feature map and the semantic feature map.
After the target detection feature map and the semantic feature map are output, they can be merged into one image, and the first overlapping area and the second overlapping area are obtained from the superposed result. The target detection feature map and the semantic feature map have the same size and scale.
S402, calculating a first overlapping area of the target position of the target object in the preset area of the labeling frame and the conventional area, and a second overlapping area of the target position of the target object in the preset area of the labeling frame and the limiting area.
Specifically, after the preset area is cropped from the labeling box, interference from the rest of the target object is reduced, which makes it easier to compute the first overlapping area between the target position and the conventional area and the second overlapping area between the target position and the restricted area. Because the overlap regions can be irregular, the areas can be computed by methods such as translation, partitioning, region completion, integration, or combining coordinate axes, so that the first and second overlapping areas obtained are more accurate.
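When the regions are irregular, the overlapping areas can also be approximated simply by counting pixels in the intersection of masks; a sketch assuming boolean masks of the same resolution (names are illustrative):

```python
import numpy as np

def overlap_areas(footprint_mask, regular_mask, restricted_mask):
    """All inputs are (H, W) boolean masks on the same image grid.
    Returns (first overlapping area A, second overlapping area B) in pixels."""
    a = np.logical_and(footprint_mask, regular_mask).sum()
    b = np.logical_and(footprint_mask, restricted_mask).sum()
    return int(a), int(b)
```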
And S403, calculating the area ratio of the first overlapping area to the second overlapping area.
Let the first overlapping area be A, the second overlapping area be B, and the ratio be P, so that P = B/A. When the overlap between the restricted area and the target position (second overlapping area B) is larger than the overlap between the conventional area and the target position (first overlapping area A), P is greater than 1, which indicates that more than half of the vehicle footprint lies in the no-parking area; this case is judged as illegal parking, otherwise the vehicle is judged to be parked normally.
S404, if the area ratio exceeds a preset ratio, the target object is judged to be illegal.
The preset ratio may be 1: when P is greater than 1 the vehicle is judged to be illegally parked, and when P is less than or equal to 1 it is judged not to be, for example: A = 1.5 m², B = 3 m², so P = 2 > 1 and the vehicle is illegally parked.
In this embodiment, the first overlapping area between the position occupied by the vehicle (the target position) and the conventional area and the second overlapping area between the target position and the restricted area are computed separately, the ratio of the second overlapping area to the first overlapping area is evaluated, and whether the vehicle is illegally parked is decided directly from this area ratio, which is simple and fast. No manual real-time supervision of the video is needed; the video data are processed directly by this method, which effectively reduces the workload and improves working efficiency.
In a second aspect, please refer to fig. 5, fig. 5 is a schematic structural diagram of a vehicle illegal parking detection device based on deep learning according to an embodiment of the present application, and as shown in fig. 5, the device 500 specifically includes:
an input module 501, configured to acquire an initial image, and input the initial image into a target detection network and a semantic segmentation network, respectively, where the initial image includes a target object;
a first target detection module 502, configured to perform target detection on the initial image through a target detection network, and output a target detection feature map, where the target detection feature map includes a target position of a target object;
the first semantic segmentation module 503 is configured to perform semantic segmentation on the initial image through a semantic segmentation network, and output a semantic feature map, where the semantic feature map includes a conventional region and a restricted region;
the determining module 504 is configured to overlap the target detection feature map and the semantic feature map, obtain a first overlap area between the target position of the target object and the conventional region, and a second overlap area between the target position of the target object and the restricted region, and determine whether the target object violates the parking determination based on a ratio of the first overlap area to the second overlap area.
Optionally, acquiring an initial image includes offline acquisition and real-time acquisition, where the initial image includes a video image, as shown in fig. 6, fig. 6 is a schematic structural diagram of another vehicle-based vehicle-parking-violation detecting apparatus based on deep learning provided in the embodiment of the present application, and the apparatus 500 further includes:
the training module 505 is configured to acquire a video image in a preset time period in an off-line manner, train the video image in the preset time period through a gaussian mixture background model, and extract a target background image;
the second semantic segmentation module 506 is configured to input the target background image into a semantic segmentation network for semantic segmentation, and output a background semantic feature image, where the background semantic feature image includes a conventional region and a restricted region;
the second target detection module 507 is used for acquiring an initial image in real time, inputting the initial image into a target detection network for target detection, and outputting a target detection characteristic diagram;
the determining module 504 is further configured to overlap the target detection feature map with the background semantic feature map.
Optionally, as shown in fig. 7, fig. 7 is a schematic structural diagram of another deep learning-based vehicle parking violation detection apparatus provided in the embodiment of the present application, where the first target detection module 502 includes:
a labeling unit 5021, configured to perform target detection on the initial image through a target detection network, and perform labeling frame labeling on the detected target object;
the selecting unit 5022 is configured to select a preset region of a label frame as a target detection feature map to be output, where the preset region of the label frame includes a target position of a target object.
Optionally, as shown in fig. 8, fig. 8 is a schematic structural diagram of another vehicle illegal parking detection device based on deep learning according to an embodiment of the present application, where the determining module 504 includes:
a merging unit 5041, configured to merge the target detection feature map with the semantic feature map;
a first calculating unit 5042, configured to calculate a first overlapping area between the target position of the target object in the preset region of the labeling frame and the normal region, and a second overlapping area between the target position of the target object in the preset region of the labeling frame and the restricted region;
a second calculation unit 5043, configured to calculate an area ratio of the first overlapping area to the second overlapping area;
a first determination unit 5044, configured to determine that the target object is illegally parked if the area ratio exceeds a preset ratio.
Optionally, as shown in fig. 9, fig. 9 is a schematic structural diagram of another vehicle illegal parking detection device based on deep learning according to an embodiment of the present application, where the determining module 504 further includes:
a second determining unit 5045, configured to determine that the target object is not illegally parked if the area ratio does not exceed the preset ratio.
The vehicle illegal parking detection device based on deep learning provided by the embodiment of the invention can realize each process and the same beneficial effect realized by the vehicle illegal parking detection method based on deep learning in any method embodiment, and in order to avoid repetition, the detailed description is omitted here.
In a third aspect, fig. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention. The electronic device 100 includes a memory 1002, a processor 1001, a network interface 1003, and a computer program stored on the memory 1002 and executable on the processor 1001, all communicatively connected to one another through a system bus. It should be noted that only the electronic device 100 with components 1001 to 1003 is shown, but it is understood that not all of the illustrated components need be implemented and that more or fewer components may be implemented instead. As will be understood by those skilled in the art, the electronic device 100 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device 100 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The electronic device 100 can interact with the client through a keyboard, a mouse, a remote controller, a touch pad, a voice control device, or the like.
Wherein:
the processor 1001 may be a controller, microcontroller, microprocessor, or other data processing chip in some embodiments, the processor 1001 generally being used to control the overall operation of the computer device.
The processor 1001 is used for calling the computer program stored in the memory 1002, and executes the following steps:
acquiring an initial image, and respectively inputting the initial image into a target detection network and a semantic segmentation network, wherein the initial image comprises a target object;
performing target detection on the initial image through a target detection network, and outputting a target detection characteristic diagram, wherein the target detection characteristic diagram comprises a target position of a target object;
performing semantic segmentation on the initial image through a semantic segmentation network, and outputting a semantic feature map, wherein the semantic feature map comprises a conventional region and a limited region;
the target detection feature map and the semantic feature map are overlapped, a first overlapping area of the target position of the target object and the conventional area and a second overlapping area of the target position of the target object and the limiting area are obtained, and whether the target object is illegal or not is judged based on the proportion of the first overlapping area and the second overlapping area.
Optionally, acquiring the initial image includes acquiring the initial image in an off-line manner and acquiring the initial image in a real-time manner, where the initial image includes a video image, and the processor 1001 is further configured to perform the following steps:
obtaining a video image in a preset time period in an off-line manner, training the video image in the preset time period through a Gaussian mixture background model, and extracting a target background image;
inputting the target background image into a semantic segmentation network for semantic segmentation, and outputting a background semantic feature image, wherein the background semantic feature image comprises a conventional area and a limited area;
acquiring an initial image in real time, inputting the initial image into a target detection network for target detection, and outputting a target detection characteristic diagram;
and overlapping the target detection feature map and the background semantic feature map.
Optionally, the step of performing, by the processor 1001, target detection on the initial image through a target detection network and outputting a target detection feature map includes:
carrying out target detection on the initial image through a target detection network, and carrying out marking frame marking on the detected target object;
and selecting a preset area of the labeling frame as a target detection characteristic diagram to be output, wherein the preset area of the labeling frame comprises the target position of the target object.
Optionally, the step, executed by the processor 1001, of determining whether the target object is illegally parked based on the ratio of the first overlapping area to the second overlapping area includes:
merging the target detection feature map and the semantic feature map;
calculating a first overlapping area of a target position of a target object in a preset area of the labeling frame and the conventional area, and a second overlapping area of the target position of the target object in the preset area of the labeling frame and the limiting area;
calculating the area ratio of the first coincidence area to the second coincidence area;
and if the area ratio exceeds a preset ratio, determining that the target object is illegally parked.
Optionally, the step of determining whether the target object is illegal based on the ratio of the first overlapping area to the second overlapping area, executed by the processor 1001, further includes:
and if the area ratio does not exceed the preset ratio, determining that the target object is not illegally parked.
The electronic device 100 provided by the embodiment of the present invention can implement each implementation manner in the vehicle illegal parking detection method embodiment based on deep learning, and has corresponding beneficial effects, and for avoiding repetition, details are not repeated here.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the deep-learning-based vehicle illegal-parking detection method provided in the present application. That is, in this embodiment, when the computer program on the computer-readable storage medium is executed by the processor, the steps of the above method can be implemented, thereby reducing the workload and improving working efficiency.
Illustratively, the computer program of the computer-readable storage medium comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, since the computer program of the computer-readable storage medium is executed by the processor to implement the steps of the deep-learning-based vehicle violation detection method, all the embodiments of the deep-learning-based vehicle violation detection method are applicable to the computer-readable storage medium, and can achieve the same or similar beneficial effects.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, and the program can be stored in a computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a logical division, and another division may be used in an actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical or take other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other media capable of storing program code.
The terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The foregoing is a more detailed description of the present application in connection with specific preferred embodiments, and it is not intended that the present application be limited to the specific embodiments shown. For those skilled in the art to which the present application pertains, several simple deductions or substitutions may be made without departing from the concept of the present application, and all should be considered as belonging to the protection scope of the present application.

Claims (10)

1. A vehicle illegal parking detection method based on deep learning is characterized by comprising the following steps:
acquiring an initial image, and respectively inputting the initial image into a target detection network and a semantic segmentation network, wherein the initial image comprises a target object;
performing target detection on the initial image through the target detection network, and outputting a target detection characteristic map, wherein the target detection characteristic map comprises a target position of the target object;
performing semantic segmentation on the initial image through the semantic segmentation network, and outputting a semantic feature map, wherein the semantic feature map comprises a conventional region and a limited region;
and superposing the target detection feature map and the semantic feature map, acquiring a first overlapping area between the target position of the target object and the conventional area and a second overlapping area between the target position of the target object and the restricted area, and determining whether the target object is illegally parked based on the ratio of the first overlapping area to the second overlapping area.
2. The method of claim 1, wherein the acquiring of the initial image comprises offline acquisition and real-time acquisition, the initial image comprises a video image, and the method further comprises the following steps:
obtaining a video image within a preset time period in an offline manner, training a Gaussian mixture background model on the video image within the preset time period, and extracting a target background image;
inputting the target background image into the semantic segmentation network for semantic segmentation, and outputting a background semantic feature map, wherein the background semantic feature map comprises the conventional region and the restricted region;
acquiring the initial image in real time, inputting the initial image into the target detection network for target detection, and outputting the target detection feature map;
and superimposing the target detection feature map and the background semantic feature map.
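A minimal sketch of the offline background extraction described in claim 2, assuming OpenCV's mixture-of-Gaussians background subtractor as the Gaussian mixture background model; the video path, history length, and function names below are illustrative assumptions rather than the patent's own implementation.

    import cv2

    def extract_target_background(video_path: str, history: int = 500):
        """Train a Gaussian mixture background model over a preset period of video
        and return the estimated target background image."""
        capture = cv2.VideoCapture(video_path)
        subtractor = cv2.createBackgroundSubtractorMOG2(history=history, detectShadows=True)
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            subtractor.apply(frame)  # update the mixture-of-Gaussians model with each frame
        capture.release()
        # Background estimate with moving vehicles suppressed; this image would then be fed
        # to the semantic segmentation network to obtain the background semantic feature map
        # containing the conventional and restricted regions.
        return subtractor.getBackgroundImage()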
3. The method of claim 1, wherein the step of performing target detection on the initial image through the target detection network and outputting the target detection feature map comprises:
performing target detection on the initial image through the target detection network, and labeling the detected target object with a labeling frame;
and selecting a preset region of the labeling frame as the target detection feature map to be output, wherein the preset region of the labeling frame comprises the target position of the target object.
4. The method according to claim 3, wherein the step of superimposing the target detection feature map and the semantic feature map, obtaining the first overlap area between the target position of the target object and the conventional region and the second overlap area between the target position of the target object and the restricted region, and determining whether the target object is illegally parked based on the ratio of the first overlap area to the second overlap area comprises:
superimposing the target detection feature map and the semantic feature map;
calculating, within the preset region of the labeling frame, the first overlap area between the target position of the target object and the conventional region and the second overlap area between the target position of the target object and the restricted region;
calculating the area ratio of the first overlap area to the second overlap area;
and if the area ratio exceeds a preset ratio, determining that the target object is illegally parked.
5. The method of claim 4, wherein the step of determining whether the target object is illegally parked based on the ratio of the first overlap area to the second overlap area further comprises:
if the area ratio does not exceed the preset ratio, determining that the target object is not illegally parked.
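For illustration only, the superposition and area-ratio judgment of claims 1, 4, and 5 might be computed on a per-pixel label map as in the sketch below. The region label values and the default preset ratio are assumptions; the direction of the ratio (first overlap area divided by second overlap area) and the exceeds-threshold decision follow the claim wording literally.

    import numpy as np

    CONVENTIONAL, RESTRICTED = 1, 2  # assumed label values in the semantic feature map

    def is_illegally_parked(semantic_map: np.ndarray,
                            preset_region: tuple,
                            preset_ratio: float = 1.0) -> bool:
        """preset_region is the (x1, y1, x2, y2) preset region of the labeling frame.
        Returns True when the area ratio of the first overlap area (with the conventional
        region) to the second overlap area (with the restricted region) exceeds the
        preset ratio, as recited in claims 4 and 5."""
        x1, y1, x2, y2 = preset_region
        window = semantic_map[y1:y2, x1:x2]
        first_overlap_area = int(np.count_nonzero(window == CONVENTIONAL))
        second_overlap_area = int(np.count_nonzero(window == RESTRICTED))
        if second_overlap_area == 0:
            return False  # no overlap with the restricted region; ratio undefined, treated here as no violation
        area_ratio = first_overlap_area / second_overlap_area
        return area_ratio > preset_ratio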
6. A vehicle parking violation detection device based on deep learning, the device comprising:
an input module, used for acquiring an initial image and respectively inputting the initial image into a target detection network and a semantic segmentation network, wherein the initial image comprises a target object;
a first target detection module, used for performing target detection on the initial image through the target detection network and outputting a target detection feature map, wherein the target detection feature map comprises a target position of the target object;
a first semantic segmentation module, used for performing semantic segmentation on the initial image through the semantic segmentation network and outputting a semantic feature map, wherein the semantic feature map comprises a conventional region and a restricted region;
and a judging module, used for superimposing the target detection feature map and the semantic feature map, obtaining a first overlap area between the target position of the target object and the conventional region and a second overlap area between the target position of the target object and the restricted region, and determining whether the target object is illegally parked based on the ratio of the first overlap area to the second overlap area.
7. The apparatus of claim 6, wherein the acquiring of the initial image comprises offline acquisition and real-time acquisition, and the apparatus further comprises:
a training module, used for obtaining a video image within a preset time period in an offline manner, training a Gaussian mixture background model on the video image within the preset time period, and extracting a target background image;
a second semantic segmentation module, used for inputting the target background image into the semantic segmentation network for semantic segmentation and outputting a background semantic feature map, wherein the background semantic feature map comprises the conventional region and the restricted region;
a second target detection module, used for acquiring the initial image in real time, inputting the initial image into the target detection network for target detection, and outputting the target detection feature map;
wherein the judging module is further used for superimposing the target detection feature map and the background semantic feature map.
8. The apparatus of claim 6, wherein the first target detection module comprises:
a labeling unit, used for performing target detection on the initial image through the target detection network and labeling the detected target object with a labeling frame;
and a selecting unit, used for selecting the preset region of the labeling frame as the target detection feature map to be output, wherein the preset region of the labeling frame comprises the target position of the target object.
9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the deep learning-based vehicle illegal parking detection method according to any one of claims 1 to 5.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the steps of the deep learning-based vehicle illegal parking detection method according to any one of claims 1 to 5.
CN202011633740.0A 2020-12-31 2020-12-31 Vehicle illegal parking detection method and device based on deep learning and electronic equipment Pending CN112766069A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011633740.0A CN112766069A (en) 2020-12-31 2020-12-31 Vehicle illegal parking detection method and device based on deep learning and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011633740.0A CN112766069A (en) 2020-12-31 2020-12-31 Vehicle illegal parking detection method and device based on deep learning and electronic equipment

Publications (1)

Publication Number Publication Date
CN112766069A true CN112766069A (en) 2021-05-07

Family

ID=75697902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011633740.0A Pending CN112766069A (en) 2020-12-31 2020-12-31 Vehicle illegal parking detection method and device based on deep learning and electronic equipment

Country Status (1)

Country Link
CN (1) CN112766069A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993056A (en) * 2019-02-25 2019-07-09 平安科技(深圳)有限公司 A kind of method, server and storage medium identifying vehicle violation behavior
CN110852236A (en) * 2019-11-05 2020-02-28 浙江大华技术股份有限公司 Target event determination method and device, storage medium and electronic device
CN111368687A (en) * 2020-02-28 2020-07-03 成都市微泊科技有限公司 Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990162A (en) * 2021-05-18 2021-06-18 所托(杭州)汽车智能设备有限公司 Target detection method and device, terminal equipment and storage medium
CN112990162B (en) * 2021-05-18 2021-08-06 所托(杭州)汽车智能设备有限公司 Target detection method and device, terminal equipment and storage medium
CN113378787A (en) * 2021-07-07 2021-09-10 山东建筑大学 Intelligent traffic electronic prompting device detection method and system based on multi-feature vision
CN113989305A (en) * 2021-12-27 2022-01-28 城云科技(中国)有限公司 Target semantic segmentation method and street target abnormity detection method applying same
CN113989305B (en) * 2021-12-27 2022-04-22 城云科技(中国)有限公司 Target semantic segmentation method and street target abnormity detection method applying same
CN115294774A (en) * 2022-06-20 2022-11-04 桂林电子科技大学 Non-motor vehicle road illegal parking detection method and device based on deep learning
CN115294774B (en) * 2022-06-20 2023-12-29 桂林电子科技大学 Non-motor vehicle road stopping detection method and device based on deep learning
CN115049948A (en) * 2022-08-15 2022-09-13 深圳市万物云科技有限公司 Unmanned aerial vehicle inspection method and device based on neural network model and related equipment
CN115049948B (en) * 2022-08-15 2022-11-22 深圳市万物云科技有限公司 Unmanned aerial vehicle inspection method and device based on neural network model and related equipment
CN115082903A (en) * 2022-08-24 2022-09-20 深圳市万物云科技有限公司 Non-motor vehicle illegal parking identification method and device, computer equipment and storage medium
CN115082903B (en) * 2022-08-24 2022-11-11 深圳市万物云科技有限公司 Non-motor vehicle illegal parking identification method and device, computer equipment and storage medium

Similar Documents

Publication Title
CN112766069A (en) Vehicle illegal parking detection method and device based on deep learning and electronic equipment
CN110390262B (en) Video analysis method, device, server and storage medium
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
KR102122859B1 (en) Method for tracking multi target in traffic image-monitoring-system
Rathore et al. Smart traffic control: Identifying driving-violations using fog devices with vehicular cameras in smart cities
US11587327B2 (en) Methods and systems for accurately recognizing vehicle license plates
CN111191570B (en) Image recognition method and device
CN110826429A (en) Scenic spot video-based method and system for automatically monitoring travel emergency
CN113822285A (en) Vehicle illegal parking identification method for complex application scene
CN108182218B (en) Video character recognition method and system based on geographic information system and electronic equipment
CN110705370A (en) Deep learning-based road condition identification method, device, equipment and storage medium
CN112381014A (en) Illegal parking vehicle detection and management method and system based on urban road
CN111079621A (en) Method and device for detecting object, electronic equipment and storage medium
CN114782897A (en) Dangerous behavior detection method and system based on machine vision and deep learning
CN113469115A (en) Method and apparatus for outputting information
CN113505638A (en) Traffic flow monitoring method, traffic flow monitoring device and computer-readable storage medium
CN113505643A (en) Violation target detection method and related device
CN116823884A (en) Multi-target tracking method, system, computer equipment and storage medium
Song et al. Vision-based parking space detection: A mask R-CNN approach
Delavarian et al. Multi‐camera multiple vehicle tracking in urban intersections based on multilayer graphs
CN111767904B (en) Traffic incident detection method, device, terminal and storage medium
CN113538968B (en) Method and apparatus for outputting information
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN112861701B (en) Illegal parking identification method, device, electronic equipment and computer readable medium
CN114724107A (en) Image detection method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination