CN112434657A - Drift carrier detection method, device, program, and computer-readable medium


Info

Publication number
CN112434657A
CN112434657A (application CN202011436050.6A)
Authority
CN
China
Prior art keywords
image
vehicle
detected
target vehicle
tail
Legal status
Pending
Application number
CN202011436050.6A
Other languages
Chinese (zh)
Inventor
潘柳华
Current Assignee
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Application filed by Shanghai Eye Control Technology Co Ltd
Publication of CN112434657A

Classifications

    • G06V 20/20: Scenes; scene-specific elements in augmented reality scenes
    • G06F 18/24: Pattern recognition; analysing; classification techniques
    • G06N 3/045: Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V 20/46: Scene-specific elements in video content; extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 20/584: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of vehicle lights or traffic lights
    • G06V 2201/08: Indexing scheme relating to image or video recognition or understanding; detecting or categorising vehicles

Abstract

The application provides a drifting-cargo detection scheme. Before the drifting-cargo detection itself, the scheme first detects the position frame or key points of the target vehicle in the whole image to be detected, then crops the vehicle tail periphery image from the image to be detected according to that position frame or those key points and performs the drifting-cargo detection on the crop. The inspected content is therefore more targeted, which helps improve detection efficiency and accuracy. At the same time, multiple frames of images to be detected are recognized, and the final detection result is determined by combining the recognition result of each image to be detected, making the result more reliable. In addition, a deep learning algorithm is used in the detection process, which is stable, efficient and highly reusable and can save a large amount of manpower.

Description

Drift carrier detection method, device, program, and computer-readable medium
Technical Field
The present application relates to the field of information technology, and in particular, to a method, device, program, and computer-readable medium for detecting a drift carrier.
Background
With the rapid development of the economy, the acceleration of urbanization, the growth of urban populations, and the rising standard of living, the number of motor vehicles keeps increasing and urban traffic problems grow with it. When vehicles transport various kinds of cargo, the cargo may scatter and fall off because of factors such as insecure fastening or poor road conditions, creating hidden road-safety dangers for other vehicles. How to quickly and accurately detect whether a freight vehicle drops or drifts its carried objects while driving is therefore a technical problem in urgent need of a solution.
At present, drifting-cargo detection mainly relies on front-end devices such as monitoring cameras shooting discrete images or continuous videos containing the target vehicle at different points in time; the captured images or videos are then reviewed manually, or reviewed with an intelligent algorithm. However, in existing schemes manual review is costly and intelligent review is not very accurate, so a good scheme for detecting drifting carried objects is still lacking.
Disclosure of Invention
An object of this application is to provide a drifting-cargo detection scheme to solve the problems of high detection cost and low accuracy in the prior art.
To achieve the above object, the present application provides a method for detecting a drift carrier, the method comprising:
acquiring a video to be detected, and acquiring a plurality of frames of images to be detected from the video to be detected;
detecting a position frame or a key point of a target vehicle from each frame of image to be detected, and intercepting a vehicle tail peripheral image of the target vehicle from the image to be detected according to the position frame or the key point of the target vehicle;
identifying whether carried objects drifting off the target vehicle are present in the vehicle tail periphery image;
and determining the drifting-cargo detection result for the target vehicle according to the recognition results of the multiple frames of images to be detected.
Further, detecting the position frame or the key point of the target vehicle from each frame of image to be detected includes the following steps:
identifying a vehicle belonging to the type of the target vehicle from each frame of image to be detected;
and tracking the same vehicle belonging to the type of the target vehicle in each frame of image to be detected as the target vehicle of the current detection, and detecting the position frame or the key point of the target vehicle from the image to be detected.
Further, the key points include a left key point and a right key point;
intercepting the vehicle tail periphery image of the target vehicle from the image to be detected according to the position frame or the key point of the target vehicle includes:
calculating the coordinates of the central point of the vehicle tail peripheral image of the target vehicle according to the left key point and the right key point;
determining second position information of the vehicle tail periphery image in the image to be detected according to the central point coordinate;
and intercepting the vehicle tail periphery image of the target vehicle from the image to be detected according to the second position information of the vehicle tail periphery image.
Further, intercepting the vehicle tail periphery image of the target vehicle from the image to be detected according to the position frame or the key point of the target vehicle includes the following steps:
identifying the tail direction of the target vehicle in the image to be detected;
determining second position information of the image around the tail of the vehicle in the image to be detected according to the first position information of the position frame of the target vehicle and the direction of the tail of the vehicle;
and intercepting the vehicle tail periphery image of the target vehicle from the image to be detected according to the second position information of the vehicle tail periphery image.
Further, according to the first position information of the position frame of the target vehicle and the direction of the vehicle tail, determining second position information of the vehicle tail surrounding image in the image to be detected, including:
determining a vehicle tail left vertex coordinate according to the coordinate of the position frame of the target vehicle and the vehicle tail direction, wherein if the vehicle tail direction is a first direction, the vehicle tail left vertex coordinate is a lower left corner coordinate of the position frame of the target vehicle, and if the vehicle tail direction is a second direction, the vehicle tail left vertex coordinate is an upper left corner coordinate of the position frame of the target vehicle;
and determining second position information of the image around the vehicle tail in the image to be detected according to the coordinate of the left vertex of the vehicle tail and the first height and the first width of the position frame of the target vehicle.
Further, determining the drifting-cargo detection result for the target vehicle according to the recognition results of the multiple frames of images to be detected includes the following step:
when the number of images to be detected whose recognition result indicates cargo drifting off the target vehicle is larger than a judgment threshold, determining that the target vehicle has drifting cargo.
Further, intercepting the vehicle tail periphery image of the target vehicle from the image to be detected according to the position frame or the key point of the target vehicle includes the following steps:
determining a first image boundary range to be intercepted from the image to be detected according to the position frame or the key point of the target vehicle;
and if the first image boundary range exceeds a second image boundary range of the image to be detected, intercepting the image content of a target boundary range in the image to be detected as a vehicle tail periphery image, wherein the target boundary range is the intersection of the first image boundary range and the second image boundary range.
Based on another aspect of the application, there is also provided a drift carrier detection apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, cause the apparatus to perform the steps of the drift carrier detection method.
Further, a computer readable medium is provided, having computer readable instructions stored thereon, which are executable by a processor to implement the steps of the method for detecting a drift carrier.
The embodiment of the application also provides a computer program, wherein the computer program is stored in computer equipment, so that the computer equipment executes the steps of the drift carrier detection method.
Compared with the prior art, the drifting-cargo detection scheme provided by the application first acquires a video to be detected and obtains multiple frames of images to be detected from it, detects the position frame or key points of the target vehicle in each frame, crops the vehicle tail periphery image from the image to be detected according to that position frame or those key points, identifies whether cargo drifting off the target vehicle is present in the vehicle tail periphery image, and determines the drifting-cargo detection result for the target vehicle from the recognition results of the multiple frames. Because the position frame or key points of the target vehicle are detected in the whole image to be detected before the actual detection, and the vehicle tail periphery image is then cropped from the image for the drifting-cargo detection, the inspected content is more targeted, improving detection efficiency and accuracy. Consecutive multiple frames of images to be detected are recognized, and the final detection result combines the recognition result of each image, so the result is more reliable. In addition, a deep learning algorithm is used in the detection process, which is stable, efficient and highly reusable and can save a large amount of manpower.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a process flow diagram of a method for detecting a drift carrier according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a process for detecting a location frame or keypoint of a target vehicle according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an image to be detected in an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a positional relationship between a position frame of a target vehicle in an image to be detected and a vehicle rear peripheral image according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating a process for capturing a vehicle rear periphery image according to a position frame according to an embodiment of the present disclosure;
fig. 6 is a to-be-detected image with a first direction of a vehicle tail in the embodiment of the present application;
FIG. 7 is a diagram illustrating an image to be detected with a second direction of a vehicle tail in an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a positional relationship between a position frame in an image to be detected and a vehicle tail peripheral image, where a vehicle tail direction is a first direction in an embodiment of the present application;
fig. 9 is a schematic diagram illustrating a positional relationship between a position frame in an image to be detected and a vehicle tail peripheral image, where a vehicle tail direction is a second direction in the embodiment of the application;
FIG. 10 is a schematic diagram of left and right keypoints of a truck in an image to be detected according to an embodiment of the present application;
fig. 11 is a flowchart illustrating a process of capturing a vehicle rear periphery image of the target vehicle from the image to be detected according to the embodiment of the present application;
FIG. 12 is a schematic diagram illustrating a positional relationship between a center point in an image to be detected and a vehicle tail peripheral image according to an embodiment of the present application;
FIG. 13 is a flowchart of a process for detecting drifting cargo of a vehicle according to an embodiment of the present disclosure;
the same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In a typical configuration of the present application, the terminal and the devices serving the network each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
The application provides a drifting-cargo detection method. Before detection, the position frame or key points of the target vehicle are detected in the whole image to be detected; the vehicle tail periphery image is then cropped from the image to be detected according to that position frame or those key points, and the drifting-cargo detection is performed on the crop. The inspected content is therefore more targeted, which helps improve detection efficiency and accuracy. Consecutive multiple frames of images to be detected are recognized, and the final detection result is determined by combining the recognition result of each image, making the result more reliable. In addition, a deep learning algorithm is used in the detection process, which is stable, efficient and highly reusable and can save a large amount of manpower.
In an actual scenario, the execution subject of the method may be a user device, a network device, a device formed by integrating a user device and a network device through a network, or an application program running on such a device. The user equipment includes, but is not limited to, terminal devices such as computers, mobile phones and tablet computers; the network device includes, but is not limited to, implementations such as a network host, a single network server, multiple sets of network servers, or a cloud-computing-based collection of computers. Here, the cloud is made up of a large number of hosts or web servers based on cloud computing, a type of distributed computing in which one virtual computer consists of a collection of loosely coupled computers.
Fig. 1 shows a process flow of a method for detecting a drift carrier in an embodiment of the present application, the method at least comprising the following steps:
step S101, acquiring a video to be detected, and acquiring a plurality of frames of images to be detected from the video to be detected. The video to be detected can be videos shot by various front-end devices, such as traffic monitoring videos captured by monitoring cameras installed in roads. The videos can be uploaded to a corresponding server for storage after being shot and obtained by front-end videos, and when the detection of the floating carrier is needed, corresponding videos can be obtained from the server by executing equipment to serve as the videos to be detected of the detection.
A video to be detected consists of continuous multiple frames, and each frame of image in the video can be extracted as an image to be detected. In an actual scene, when acquiring multiple frames of images to be detected from the video to be detected, all frames contained in a segment of the video can be used directly as images to be detected, or a certain number of frames can be extracted from all the images contained in the segment and used as images to be detected, according to the actual detection requirements.
For example, if the acquired video to be detected is a video with a length of 5 seconds, the video to be detected includes 150 continuous images. In some embodiments, all the 150 frames of images can be used as images to be detected, and complete detection is performed, so that the detection result is more reliable. In other embodiments, a part of images, for example, 30 frames of images, may be extracted from the image to be detected, and the extracted image is used as the image to be detected to perform subsequent detection processing, so that when the extraction mode is appropriate, the detection calculation load may be reduced and the detection efficiency may be improved on the premise of ensuring the reliable detection result.
The images can be extracted from the video to be detected in, for example, the following ways. Frames may be extracted at a fixed interval: taking the above scene as an example, starting from the first frame of the video to be detected and extracting one frame every 5 frames yields 30 frames of images to be detected. Alternatively, the video to be detected may be divided into multiple sub-videos and a preset number of frames drawn at random from each: for the same scene, dividing the 5-second video evenly into five 1-second sub-videos and randomly extracting 6 frames from each also yields 30 frames of images to be detected.
It will be understood by those skilled in the art that these specific forms of acquiring multiple frames of images to be detected are merely exemplary; other forms based on similar principles, now known or developed in the future, fall within the scope of the present application if applicable, and are incorporated herein by reference. For example, the number of frames extracted may be set dynamically: if the vehicle in the video moves fast, more frames are extracted as images to be detected, and if it moves slowly, fewer frames are extracted, dynamically balancing the reliability of the detection result against the computational load.
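As a concrete illustration of the fixed-interval and random-per-segment extraction modes described above, here is a minimal Python sketch. OpenCV is an assumed dependency (any frame reader would do), and the interval, segment count, and frame counts are just the example values from this scenario:

```python
import random

import cv2  # assumed dependency; any frame reader works


def sample_every_k(video_path, k=5, limit=30):
    """Fixed-interval mode: take one frame every k frames, from the first frame."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while len(frames) < limit:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % k == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames


def sample_random_per_segment(video_path, segments=5, per_segment=6):
    """Random mode: split the video into equal sub-videos, draw frames from each."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    seg_len = len(frames) // segments
    picked = []
    for s in range(segments):
        segment = frames[s * seg_len:(s + 1) * seg_len]
        picked += random.sample(segment, min(per_segment, len(segment)))
    return picked
```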
Step S102, detecting a position frame or a key point of a target vehicle from each frame of image to be detected, and intercepting a vehicle tail periphery image of the target vehicle from the image to be detected according to the position frame or the key point of the target vehicle.
The vehicle tail periphery image is the image content corresponding to the tail of the target vehicle and the area around the tail in the image to be detected. In an actual scene, when cargo scatters from a carrying vehicle, it mostly appears in the area around the vehicle tail; even when scattering occurs, detecting image content outside that area would yield no useful result. Therefore, for drifting-cargo detection, the position frame or key points of the target vehicle can first be detected in each frame of image to be detected, and the vehicle tail periphery image then cropped from the corresponding frame based on that position frame or those key points for the actual detection, instead of processing the complete image to be detected. This greatly reduces the image data that must be processed, concentrates the computation on the useful data, and improves both detection efficiency and accuracy.
When detecting the position frame or the key point of the target vehicle from each frame of image to be detected, the processing flow shown in fig. 2 can be adopted, which includes at least the following processing steps:
in step S201, a vehicle belonging to the type of the target vehicle is identified from each frame of the image to be detected. The type of the target vehicle may be set according to the requirements of the actual scene, for example, in this embodiment, if detection is required to detect a car drifting cargo, the type of the target vehicle may be set as a truck, and in addition, the type of the truck may be further subdivided. Therefore, by setting different types of target vehicles, the scheme provided by the embodiment of the application can be applied to detection of the drifting carried objects of various vehicles, so that the application scene is wider.
In some embodiments of the present application, when identifying vehicles of the target vehicle type, the vehicles in each frame of image to be detected may be identified first, and the vehicles of the target vehicle type then picked out from among them. In the actual identification process, deep learning algorithms can be used for automatic identification: for each frame of image to be detected, a deep-learning-based vehicle target detection model identifies the vehicles in the image, and a deep-learning-based vehicle classification model identifies which of them belong to the target vehicle type.
The vehicle target detection model based on deep learning can be obtained by adopting the following modes:
a1. acquiring images of vehicles of different vehicle types under the conditions of different illumination and different shooting angles as training samples;
a2. marking the position of the vehicle in each image with vehicle position information such as a position frame, and generating an annotation file containing the vehicle position information;
a3. and training a deep neural network model by using the image and the corresponding annotation file to obtain a vehicle target detection model based on deep learning.
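The application does not name a concrete network architecture for steps a1-a3, only "a deep neural network model". As a hedged sketch, one could fine-tune an off-the-shelf detector such as torchvision's Faster R-CNN on the annotated images; the single "vehicle" class and all names below are illustrative assumptions, not choices made by the application:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor


def build_vehicle_detector(num_classes=2):  # background + "vehicle"
    # Start from a pretrained backbone; the patent only requires a deep
    # neural network model, so this particular detector is an assumption.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model


def train_step(model, images, targets, optimizer):
    # images: list of CHW float tensors; targets: list of dicts holding
    # "boxes" (the position frames from the annotation files) and "labels".
    model.train()
    loss_dict = model(images, targets)
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```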
After the deep-learning-based vehicle target detection model is trained, an image to be detected can be fed to it as input, and the model automatically outputs the vehicle position information in the image, thereby identifying the vehicles in it. Taking the image to be detected shown in fig. 3 as an example, the model identifies the vehicle position information of vehicles T1, C1 and C2. This position information can be represented by position frames, each of which can be recorded as a quaternion array (m, n, height, width), where m and n are the coordinates of the lower-left corner of the position frame and height and width are its height and width; each identified position frame corresponds to one vehicle in the image to be detected.
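To make the quaternion convention concrete, the sketch below records and crops such a position frame. The patent does not state the image axis orientation; the assumption here is that (m, n) lives in an x-right, y-up system with its origin at the bottom-left of the picture, which is why the row indices are flipped when cropping a NumPy array:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class PositionFrame:
    m: float       # x of the lower-left corner (y-up origin assumed)
    n: float       # y of the lower-left corner
    height: float
    width: float


def crop(image: np.ndarray, box: PositionFrame) -> np.ndarray:
    """Cut the frame out of an H x W x C array whose row 0 is the image top."""
    H = image.shape[0]
    top = max(0, int(round(H - (box.n + box.height))))
    bottom = int(round(H - box.n))
    left = max(0, int(round(box.m)))
    right = int(round(box.m + box.width))
    return image[top:bottom, left:right]
```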
The deep-learning-based vehicle classification model can be obtained as follows:
b1. acquiring images of different types of vehicles under different illumination and different shooting angles, and taking the images as training samples;
b2. classifying images of vehicles of different types, and marking the vehicle types of the vehicles in the images as class marks;
b3. and training a deep neural network model by using the vehicle image with the class mark to obtain a vehicle classification model based on deep learning.
After the deep-learning-based vehicle classification model is trained, vehicle images at the corresponding positions are extracted from the image to be detected according to the vehicle position information output by the vehicle target detection model and used as the input of the classification model, which outputs the corresponding class labels, thereby identifying the vehicles of the target vehicle type in the image to be detected. Taking the three vehicles T1, C1 and C2 identified above as an example, the vehicle image marked by each position frame can be extracted from the image to be detected and input to the deep-learning-based vehicle classification model, which outputs the corresponding class label: for example, the label of T1 is truck, and the labels of C1 and C2 are car. If the target vehicle type in this embodiment is truck, the vehicle of the target vehicle type identified from this frame is T1.
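Chaining the two models as just described might look like the sketch below; `detector` and `classifier` are placeholder callables standing in for the two trained models, not an API defined by the application:

```python
def find_target_vehicles(image, detector, classifier, target_type="truck"):
    """Keep the position frames whose cropped vehicle image is classified as
    the target vehicle type (T1 in the example above; C1 and C2 are dropped)."""
    targets = []
    for box in detector(image):            # hypothetical: list of PositionFrame
        vehicle_img = crop(image, box)     # helper from the previous sketch
        if classifier(vehicle_img) == target_type:
            targets.append(box)
    return targets
```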
Step S202, tracking the same vehicle belonging to the type of the target vehicle in each frame of image to be detected as the target vehicle of the current detection, and detecting the position frame or the key point of the target vehicle from the image to be detected.
When drifting-cargo detection is performed for a target vehicle on multiple frames of images to be detected obtained from the same video to be detected, the same vehicle needs to be tracked in each frame, so that the recognition results of the multiple frames within one detection all concern the same vehicle. A deep-learning-based vehicle tracking model can therefore be used to track the same vehicle of the target vehicle type across the frames as the target vehicle of the current detection. In some embodiments of the present application, the deep-learning-based vehicle tracking model may be obtained as follows:
c1. acquiring videos shot at different places and different shooting angles and containing the same vehicle, and converting the videos into multi-frame images to be used as training samples;
c2. marking the same vehicle position information in the multi-frame images by using the position frame, and generating a marking file with the vehicle position information;
c3. and training a deep neural network model by using the multi-frame images and the vehicle position marking file to obtain a vehicle tracking model based on deep learning.
After the deep-learning-based vehicle tracking model is trained, the multiple frames of images to be detected in which vehicles of the target vehicle type have been identified can be input to it, and it outputs a group of position frames of the same vehicle across the frames, tracking that vehicle as the target vehicle of the current detection. In an actual scene, if the images to be detected contain several vehicles of the target vehicle type, each vehicle can be processed independently: several groups of position frames, one per target vehicle, are recorded, and in the subsequent detection each group is processed on its own, enabling drifting-cargo detection for the different vehicles.
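The application trains a dedicated deep-learning tracking model for this step. Purely to illustrate the expected output format (one group of position frames per vehicle across the frames), here is a much simpler greedy IoU association; it is a stand-in for, not a description of, the patent's tracker:

```python
def iou(a, b):
    """Intersection over union of two (m, n, height, width) position frames."""
    iw = max(0.0, min(a.m + a.width, b.m + b.width) - max(a.m, b.m))
    ih = max(0.0, min(a.n + a.height, b.n + b.height) - max(a.n, b.n))
    inter = iw * ih
    union = a.width * a.height + b.width * b.height - inter
    return inter / union if union > 0 else 0.0


def track_greedy(per_frame_boxes, threshold=0.3):
    """per_frame_boxes: one list of PositionFrame per frame.
    Returns {track_id: [(frame_index, PositionFrame), ...]}."""
    tracks, next_id = {}, 0
    for f, boxes in enumerate(per_frame_boxes):
        claimed = set()
        for box in boxes:
            best_id, best = None, threshold
            for tid, hist in tracks.items():
                last_f, last_box = hist[-1]
                if tid not in claimed and last_f == f - 1:
                    overlap = iou(last_box, box)
                    if overlap > best:
                        best_id, best = tid, overlap
            if best_id is None:            # no match: start a new track
                best_id, next_id = next_id, next_id + 1
                tracks[best_id] = []
            claimed.add(best_id)
            tracks[best_id].append((f, box))
    return tracks
```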
When detecting the position frame or the key point of the target vehicle from the image to be detected, if what is to be detected is the position frame, the position frame recognized by the deep-learning-based vehicle target detection model when it identifies the vehicle position information can be recorded directly as the recognition result.
If what is to be detected is the key points of the target vehicle, they can be recognized from the image containing the target vehicle with a deep-learning-based vehicle key point detection model. A key point can be the position of a component of the target vehicle type that serves to locate the vehicle, and is used as a reference point for determining the position information of the vehicle tail periphery image to be cropped. For example, it may be the center of the vehicle's license plate, the upper and/or lower vertices on both sides of the vehicle compartment, or the points where the wheels are located. In some embodiments of the present application, if the target vehicle type is truck, the key points may be set as the two vertices of the edge of the truck bed that carries the cargo, namely a left key point p1 and a right key point p2, as shown in fig. 10. The deep-learning-based vehicle key point detection model can be obtained as follows:
e1. obtaining vehicle images under the conditions of different illumination and different shooting angles as training samples;
e2. marking key points of the vehicle, for example, the key points can be a left key point and a right key point;
e3. and training a deep neural network model by using the vehicle image with the key point mark to obtain a vehicle key point detection model based on deep learning.
After the deep-learning-based vehicle key point detection model is trained, the image to be detected, or the vehicle image within the position frame, can be input to it, and the model outputs the key points of the target vehicle.
After the key points of the target vehicle are detected and obtained, the position frame of the target vehicle or the key points of the target vehicle can be used for capturing the images around the tail of the vehicle.
Since the lens orientation of the front-end device that captures the video is generally fixed in an actual scene (for example, a monitoring camera mounted above a road shoots the vehicles passing below from a fixed position at a fixed angle), and the direction of travel permitted in a lane is also relatively fixed, the vehicle tail periphery image and the position frame of the target vehicle have a relatively fixed positional relationship in the image to be detected. Therefore, when the vehicle tail periphery image of the target vehicle is cropped from the image to be detected using its position frame, the crop can be taken according to the position frame on the basis of this positional relationship. For example, for the target vehicle T1 shown in fig. 3, the position frame in the image to be detected is R1 and the cropped vehicle tail periphery image is r1, as shown in fig. 4.
In some embodiments of the present application, the image of the periphery of the rear of the vehicle of the target vehicle may be cut out from the image to be detected in a manner as shown in fig. 5. The acquisition mode may include at least the following processing steps:
Step S501, identifying the tail direction of the target vehicle in the image to be detected. Fig. 6 and 7 are images captured by a road monitoring camera in an actual scene. In fig. 6, the target vehicle travels away from the camera, so its tail direction can be defined as the first direction: in the determined position frame R2, the tail is in the lower part of the frame and the head in the upper part. In fig. 7, the target vehicle travels toward the camera, so its tail direction can be defined as the second direction: in the determined position frame R3, the head is in the lower part of the frame and the tail in the upper part.
In some embodiments of the present application, a deep-learning-based vehicle head and tail classification model may be used to identify the tail direction of the target vehicle in the image to be detected. This model can be obtained as follows:
d1. obtaining vehicle images under the conditions of different illumination and different shooting angles as training samples;
d2. classifying vehicle images with different head and tail directions, and marking the categories, for example, the vehicle images can be marked as a first category and a second category by using the tail direction as a reference;
d3. and training a deep neural network model by using the vehicle image with the class mark to obtain a head and tail classification model based on deep learning.
After the deep-learning-based vehicle head and tail classification model is trained, the vehicle image within the position frame of the image to be detected can be input to it; the model outputs a class label, determining the tail direction of the target vehicle.
Step S502, determining second position information of the image around the vehicle tail in the image to be detected according to the first position information of the position frame of the target vehicle and the direction of the vehicle tail.
If the first position information of the position frame is represented by a quaternion array, the second position information of the vehicle tail periphery image in the image to be detected, determined by also taking the tail direction into account, can likewise be represented by a corresponding quaternion array.
In an actual scene, when determining the second position information of the vehicle tail periphery image in the image to be detected, the tail-left vertex coordinates can first be determined from the coordinates of the position frame of the target vehicle and the tail direction. For example, in this embodiment the coordinates of the position frame of the target vehicle are the lower-left corner coordinates (m, n). If the tail direction is the first direction, the lower-left corner of the position frame is taken as the tail-left vertex, that is, x = m and y = n in the tail-left vertex coordinates (x, y). If the tail direction is the second direction, the upper-left corner of the position frame is taken instead, that is, x = m and y = n + height.
After the coordinates of the left vertex of the vehicle tail are determined, second position information of the image around the vehicle tail in the image to be detected can be determined according to the coordinates of the left vertex of the vehicle tail, the first height and the first width of the position frame of the target vehicle.
In some embodiments of the present application, based on the tail-left vertex coordinates and the first height and first width of the position frame of the target vehicle, the left vertex coordinates of the vehicle tail periphery image are determined as (x - a·width, y - b·height), its second height as c·height, and its second width as d·width. The second position information of the vehicle tail periphery image, expressed as a quaternion array, is therefore (x - a·width, y - b·height, c·height, d·width).
Here a, b, c and d can be set as numbers larger than 0 according to the needs of the actual scene. Since the purpose of cropping the vehicle tail periphery image from the image to be detected in this embodiment is to select, in a targeted way, the image data that affects the detection result most and so improve detection efficiency, the size and position of the vehicle tail periphery image can be fixed at suitable values as needed. For example, setting the value ranges of a, b and c in (0, 1) and that of d in (1, 2) makes the vehicle tail periphery image cover the tail portion of the target vehicle while being wider than the position frame of the target vehicle and shorter than it. For the scenes shown in fig. 6 and 7, the vehicle tail periphery images are r2 and r3, respectively.
As a more preferable mode, the value range of a may further be set to [0.2, 0.4], that of b to [0.2, 0.5], that of c to [0.6, 0.9], and that of d to [1.3, 1.7]. In addition, to crop a more appropriate vehicle tail periphery image, different values of a, b, c and d can be used for the different tail directions.
For example, when the tail direction is the first direction, a, b, c and d take the values 0.25, 0.25, 0.75 and 1.5, respectively, and the second position information of the vehicle tail periphery image can be expressed as (x - 0.25·width, y - 0.25·height, 0.75·height, 1.5·width). When the tail direction is the second direction, a, b, c and d take the values 0.25, 0.5, 0.75 and 1.5, respectively, and the second position information can be expressed as (x - 0.25·width, y - 0.5·height, 0.75·height, 1.5·width). The positional relationship between the position frame and the vehicle tail periphery image in the two cases is shown in fig. 8 and 9, respectively, where R2 and R3 are the position frames of the target vehicle and r2 and r3 are the cropped vehicle tail periphery images.
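In code, steps S501-S502 reduce to picking the tail-left vertex and applying the a, b, c, d offsets. The sketch below reuses the PositionFrame helper from the earlier sketch and hard-codes the example coefficient values quoted above; both are assumptions for illustration:

```python
def tail_region_from_frame(box, tail_direction):
    """Second position information (x - a*width, y - b*height, c*height, d*width)."""
    if tail_direction == "first":          # tail at the bottom of the frame
        x, y = box.m, box.n                # lower-left corner as tail-left vertex
        a, b, c, d = 0.25, 0.25, 0.75, 1.5
    else:                                  # second direction: tail at the top
        x, y = box.m, box.n + box.height   # upper-left corner as tail-left vertex
        a, b, c, d = 0.25, 0.5, 0.75, 1.5
    return PositionFrame(m=x - a * box.width,
                         n=y - b * box.height,
                         height=c * box.height,
                         width=d * box.width)
```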
Step S503, intercepting the vehicle tail periphery image of the target vehicle from the image to be detected according to the second position information of the vehicle tail periphery image.
When the vehicle tail periphery image of the target vehicle is to be cropped from the image to be detected according to the key points of the target vehicle, the mode shown in fig. 11 can be adopted once the key points have been determined. This mode may include at least the following processing steps:
step S1101, calculating coordinates of a center point of the vehicle rear periphery image of the target vehicle according to the key points of the target vehicle. When the key points of the target vehicle include a left key point and a right key point, the center point coordinates may be calculated from the two key points, for example, when the coordinates of the left key point are (x1, y1) and the coordinates of the right key point are (x2, y2), the coordinates of the center point (x ', y') may take a value of x ═ x1+ x2)/2, and y ═ y1+ y 2)/2.
And step S1102, determining second position information of the vehicle tail periphery image in the image to be detected according to the central point coordinate.
Here, the second position information of the vehicle tail periphery image can be determined in the same way as when cropping from the position frame: based on the center point coordinates and the first height and first width of the position frame of the target vehicle, the left vertex coordinates of the vehicle tail periphery image are determined as (x' - a·width, y' - b·height), its second height as c·height, and its second width as d·width. Expressed as a quaternion array, the second position information of the vehicle tail periphery image is (x' - a·width, y' - b·height, c·height, d·width).
For similar reasons, the value ranges of a, b and c may again be set in (0, 1) and that of d in (1, 2), so that the vehicle tail periphery image covers the tail portion of the target vehicle while being wider than the position frame of the target vehicle and shorter than it. As a more preferable mode, the value range of a may be set to [0.6, 0.8], that of b to [0.2, 0.5], that of c to [0.6, 0.9], and that of d to [1.3, 1.7]. In addition, to crop a more appropriate vehicle tail periphery image, different values of a, b, c and d can be used for the different tail directions.
In the scene shown in fig. 6, a, b, c and d take the values 0.75, 0.3, 0.75 and 1.5, respectively, and the second position information of the vehicle tail periphery image can be expressed as (x' - 0.75·width, y' - 0.3·height, 0.75·height, 1.5·width). In this case, the positional relationship between the center point and the vehicle tail periphery image can be as shown in fig. 12, where the center point is P and the vehicle tail periphery image is r4. When the tail direction is the second direction, a, b, c and d take the values 0.75, 0.5, 0.75 and 1.5, respectively, and the second position information can be expressed as (x' - 0.75·width, y' - 0.5·height, 0.75·height, 1.5·width).
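The keypoint-based variant of steps S1101-S1102 is analogous; only the anchor changes from a frame corner to the midpoint of the two bucket-edge key points. Coefficients are again the example values, and the position frame still supplies the first height and width:

```python
def tail_region_from_keypoints(p1, p2, box, tail_direction):
    """p1, p2: (x, y) tuples for the left and right key points."""
    cx = (p1[0] + p2[0]) / 2               # x' = (x1 + x2) / 2
    cy = (p1[1] + p2[1]) / 2               # y' = (y1 + y2) / 2
    a, b = (0.75, 0.3) if tail_direction == "first" else (0.75, 0.5)
    c, d = 0.75, 1.5
    return PositionFrame(m=cx - a * box.width,
                         n=cy - b * box.height,
                         height=c * box.height,
                         width=d * box.width)
```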
Step S1103, intercepting the vehicle tail periphery image of the target vehicle from the image to be detected according to the second position information of the vehicle tail periphery image.
In an actual scene, when the crop of the vehicle tail periphery image would extend beyond the boundary of the original image to be detected, that boundary can be used as a limit so that no blank image content is cropped. Therefore, when intercepting the vehicle tail periphery image of the target vehicle from the image to be detected, a first image boundary range to be intercepted is first determined from the image to be detected according to the position frame or the key points of the target vehicle.
Here, the first image boundary range is the boundary range pointed to by the concrete values of the second position information, and it may exceed the boundary of the image to be detected, which would crop blank image content. A check can therefore be performed before the vehicle tail periphery image is actually cropped: if the first image boundary range exceeds the second image boundary range of the image to be detected, the intersection of the two can be taken as the target boundary range, and the image content within the target boundary range is cropped from the image to be detected as the vehicle tail periphery image, avoiding blank content.
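The clamping itself is a plain rectangle intersection; a sketch, in the same PositionFrame convention as the earlier sketches:

```python
def clamp_to_image(region, img_height, img_width):
    """Intersect the requested crop (first image boundary range) with the
    image bounds (second image boundary range) to get the target range."""
    left = max(0.0, region.m)
    bottom = max(0.0, region.n)
    right = min(float(img_width), region.m + region.width)
    top = min(float(img_height), region.n + region.height)
    return PositionFrame(m=left, n=bottom,
                         height=max(0.0, top - bottom),
                         width=max(0.0, right - left))
```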
Step S103, after the vehicle tail periphery images have been cropped from each frame of image to be detected, whether cargo drifting off the target vehicle is present in them can be recognized. In some embodiments of the present application, a deep-learning-based drifting-cargo classification model may be used to recognize whether cargo drifting off the target vehicle is present in the vehicle tail periphery image.
The deep-learning-based drifting-cargo classification model can be obtained as follows:
f1. acquiring vehicle tail peripheral images of vehicles of target vehicle types under the conditions of different illumination and different shooting angles, and taking the images as training samples;
f2. classifying the sample vehicle tail periphery images into three categories, namely drifting cargo present, no drifting cargo, and cargo present but not drifting, and marking the categories;
f3. training a deep neural network model by using the vehicle tail periphery image with the class mark to obtain a drifting carrier classification model based on deep learning;
after the flying carrier classification model based on deep learning is trained, the intercepted vehicle tail periphery image can be input into the flying carrier classification model, so that a classification mark is output, and whether the carrier flying of the target vehicle exists in the vehicle tail periphery image or not is recognized as a recognition result.
Step S104, determining the drifting-cargo detection result for the target vehicle according to the recognition results of the multiple frames of images to be detected.
In an actual scene, to make the detection result more reliable, a judgment threshold is preset, and the target vehicle is considered to have drifting cargo only when the number of images to be detected showing drifting cargo exceeds that threshold. Therefore, when determining the drifting-cargo detection result, the target vehicle is determined to have drifting cargo when the number of images to be detected whose recognition result indicates drifting cargo is larger than the judgment threshold.
The judgment threshold can be set according to the detection requirements of the actual scene: a higher threshold reduces the chance of false positives, while a lower threshold helps avoid missing vehicles that do drift their cargo. Its concrete value can be set directly from experience, for example 5, 10 or 20, or derived as a proportion of the total number of frames of images to be detected acquired in step S101, for example 20% or 50% of the total frame count.
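The final decision of step S104 is then a simple count against the threshold. The sketch below supports both ways of setting it, a fixed empirical value or a proportion of the total frame count; the default ratio of 20% is one of the example values above:

```python
def has_drifting_cargo(per_frame_results, threshold=None, ratio=0.2):
    """per_frame_results: one boolean recognition result per image to be detected."""
    results = list(per_frame_results)
    positives = sum(results)               # True counts as 1
    if threshold is None:
        threshold = ratio * len(results)   # e.g. 20% of the total frame count
    return positives > threshold
```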
Fig. 13 shows a processing flow for detecting the vehicle drift carrier by using the solution provided by the embodiment of the present application, which includes the following processing steps:
Step S1, acquiring a video to be detected from a server and converting it into multiple frames of images to be detected;
Step S2, detecting the vehicles in the multiple frames of images to be detected with the deep-learning-based vehicle target detection model, and judging whether each vehicle is a truck with the deep-learning-based vehicle classification model;
Step S3, when a truck is detected, tracking the same truck across the multiple frames of images to be detected with the deep-learning-based vehicle tracking model, and recording the truck's position frame in each frame;
Step S4, for the truck in each frame of image to be detected, judging the tail direction of the truck image within the position frame with the deep-learning-based head and tail classification model, then cropping the vehicle tail periphery image using the truck's position frame, and judging whether drifting cargo is present in it with the deep-learning-based drifting-cargo classification model;
Step S5, for the multiple frames of images to be detected, judging that the detection result for the truck is drifting cargo when the number of frames judged to show drifting cargo is larger than a certain judgment threshold.
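Tying steps S1 through S5 together, the whole flow might be orchestrated as sketched below. Every `models.*` attribute is a placeholder for one of the five deep-learning models described in this application, not an API the application defines, and the helper functions come from the earlier sketches:

```python
def detect_drifting_cargo(video_path, models, judge_threshold=5):
    frames = sample_every_k(video_path)                       # step S1
    per_frame_boxes = [                                       # step S2
        [b for b in models.detect(f)
         if models.classify_type(crop(f, b)) == "truck"]
        for f in frames
    ]
    tracks = models.track(per_frame_boxes)                    # step S3
    verdicts = {}
    for tid, hist in tracks.items():
        flags = []
        for f_idx, box in hist:                               # step S4
            frame = frames[f_idx]
            direction = models.classify_tail(crop(frame, box))
            region = clamp_to_image(tail_region_from_frame(box, direction),
                                    frame.shape[0], frame.shape[1])
            flags.append(models.classify_cargo(crop(frame, region)) == "drifting")
        verdicts[tid] = has_drifting_cargo(flags,             # step S5
                                           threshold=judge_threshold)
    return verdicts
```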
Further, the present application provides a computer device comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the device to perform the steps of the aforementioned drift cargo detection method.
In particular, the methods and/or embodiments in the embodiments of the present application may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. The computer program, when executed by a processing unit, performs the above-described functions defined in the method of the present application.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart or block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the foregoing embodiments; or may be separate and not incorporated into the device. The computer-readable medium carries one or more computer-readable instructions executable by a processor to perform the steps of the method and/or solution of the embodiments of the present application as described above.
In addition, an embodiment of the present application further provides a computer program, where the computer program is stored in a computer device and, when executed, causes the computer device to perform the steps of the drift cargo detection method.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In some embodiments, the software programs of the present application may be executed by a processor to implement the above steps or functions. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A method of detecting a drifting carried object, the method comprising:
acquiring a video to be detected, and acquiring a plurality of frames of images to be detected from the video to be detected;
detecting a position frame or key points of a target vehicle from each frame of the image to be detected, and cropping a vehicle tail periphery image of the target vehicle from the image to be detected according to the position frame or the key points of the target vehicle;
identifying whether a carried object drifting away from the target vehicle exists in the vehicle tail periphery image;
and determining a detection result of the target vehicle for drifting carried objects according to the identification results of the plurality of frames of images to be detected.
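Taken together, claim 1 describes a per-frame detect-crop-identify loop followed by a multi-frame decision. A compact sketch under the same caveats as above (`detect_position_frame`, `crop_tail_periphery`, and `identify_drift` are hypothetical stand-ins for the detector, the cropping rule of claims 3 to 5 and 7, and the drift classifier):

```python
def detect_drifting_carried_object(frames, detect_position_frame,
                                   crop_tail_periphery, identify_drift,
                                   judgment_threshold: int = 5) -> bool:
    """Claim 1 sketch: identify drift in each frame's tail-periphery crop,
    then decide over the whole sequence of images to be detected."""
    positives = 0
    for frame in frames:
        box = detect_position_frame(frame)    # position frame of target vehicle
        if box is None:                       # vehicle not found in this frame
            continue
        tail_img = crop_tail_periphery(frame, box)
        if identify_drift(tail_img):
            positives += 1
    return positives > judgment_threshold
```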
2. The method of claim 1, wherein detecting a position frame or key points of the target vehicle from each frame of the image to be detected comprises:
identifying a vehicle belonging to the type of the target vehicle from each frame of the image to be detected;
and tracking the same vehicle belonging to the type of the target vehicle across the frames of images to be detected as the target vehicle of the current detection, and detecting the position frame or key points of the target vehicle from the image to be detected.
3. The method of claim 1, wherein the key points comprise a left key point and a right key point;
and cropping the vehicle tail periphery image of the target vehicle from the image to be detected according to the position frame or the key points of the target vehicle comprises:
calculating center point coordinates of the vehicle tail periphery image of the target vehicle according to the left key point and the right key point;
determining second position information of the vehicle tail periphery image in the image to be detected according to the center point coordinates;
and cropping the vehicle tail periphery image of the target vehicle from the image to be detected according to the second position information of the vehicle tail periphery image.
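A sketch only of the keypoint variant in claim 3: the center of the tail-periphery image is taken as the midpoint of the left and right key points, and a region around that center is cropped; the crop size derived from the keypoint separation is an assumption for illustration:

```python
def crop_by_keypoints(image, left_kp, right_kp):
    """Claim 3 sketch: center the tail-periphery crop on the midpoint of the
    left/right key points; the width/height factors are illustrative."""
    (lx, ly), (rx, ry) = left_kp, right_kp
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0   # center point coordinates

    half = abs(rx - lx)                         # assumed: span of the key points
    h_img, w_img = image.shape[:2]

    # Second position information, clamped to the image to be detected.
    x1, x2 = max(0, int(cx - half)), min(w_img, int(cx + half))
    y1, y2 = max(0, int(cy - half)), min(h_img, int(cy + half))
    return image[y1:y2, x1:x2]
```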
4. The method of claim 1, wherein cropping the vehicle tail periphery image of the target vehicle from the image to be detected according to the position frame or the key points of the target vehicle comprises:
identifying the tail direction of the target vehicle in the image to be detected;
determining second position information of the vehicle tail periphery image in the image to be detected according to first position information of the position frame of the target vehicle and the tail direction;
and cropping the vehicle tail periphery image of the target vehicle from the image to be detected according to the second position information of the vehicle tail periphery image.
5. The method of claim 4, wherein determining the second position information of the vehicle tail periphery image in the image to be detected according to the first position information of the position frame of the target vehicle and the tail direction comprises:
determining a vehicle tail left vertex coordinate according to the coordinates of the position frame of the target vehicle and the tail direction, wherein if the tail direction is a first direction, the vehicle tail left vertex coordinate is the lower left corner coordinate of the position frame of the target vehicle, and if the tail direction is a second direction, the vehicle tail left vertex coordinate is the upper left corner coordinate of the position frame of the target vehicle;
and determining the second position information of the vehicle tail periphery image in the image to be detected according to the vehicle tail left vertex coordinate and a first height and a first width of the position frame of the target vehicle.
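A sketch of the coordinate rule in claim 5; the meaning of the first/second direction and the extension factors applied to the first height and first width are assumptions made for illustration:

```python
def tail_periphery_from_box(box, tail_direction: str):
    """Claim 5 sketch: pick the vehicle-tail left vertex from the position
    frame by tail direction, then size the periphery region from the frame's
    first width and first height (the 1/2 factors are illustrative)."""
    x1, y1, x2, y2 = box
    first_w, first_h = x2 - x1, y2 - y1

    # Lower-left corner for the first direction, upper-left for the second.
    vx, vy = (x1, y2) if tail_direction == "first" else (x1, y1)

    # Second position information: a region anchored at the left vertex.
    return (vx, vy - first_h // 2, vx + first_w, vy + first_h // 2)
```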
6. The method of claim 1, wherein determining the detection result of the target vehicle for drifting carried objects according to the identification results of the plurality of frames of images to be detected comprises:
determining that a carried object has drifted away from the target vehicle when the number of images to be detected whose identification result indicates a drifting carried object is larger than a judgment threshold value.
7. The method of claim 1, wherein cropping the vehicle tail periphery image of the target vehicle from the image to be detected according to the position frame or the key points of the target vehicle comprises:
determining a first image boundary range to be cropped from the image to be detected according to the position frame or the key points of the target vehicle;
and if the first image boundary range exceeds a second image boundary range of the image to be detected, cropping the image content within a target boundary range of the image to be detected as the vehicle tail periphery image, wherein the target boundary range is the intersection of the first image boundary range and the second image boundary range.
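The boundary handling of claim 7 amounts to an ordinary rectangle intersection; a minimal sketch (the (x1, y1, x2, y2) coordinate convention is assumed):

```python
def clamp_to_image(first_range, image_shape):
    """Claim 7 sketch: intersect the desired crop (first image boundary range)
    with the image's own extent (second image boundary range)."""
    x1, y1, x2, y2 = first_range
    h_img, w_img = image_shape[:2]
    # Target boundary range = intersection of the two ranges.
    return (max(0, x1), max(0, y1), min(w_img, x2), min(h_img, y2))
```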
8. A computer device, characterized in that the device comprises a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, cause the device to perform the steps of the method of any one of claims 1 to 7.
9. A computer-readable medium having computer-readable instructions stored thereon which are executable by a processor to implement the steps of the method of any one of claims 1 to 7.
10. A computer program, characterized in that the computer program is stored in a computer device and, when executed, causes the computer device to perform the steps of the method of any one of claims 1 to 7.
CN202011436050.6A 2020-11-20 2020-12-10 Drift carrier detection method, device, program, and computer-readable medium Pending CN112434657A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011315199 2020-11-20
CN2020113151999 2020-11-20

Publications (1)

Publication Number Publication Date
CN112434657A true CN112434657A (en) 2021-03-02

Family

ID=74691130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011436050.6A Pending CN112434657A (en) 2020-11-20 2020-12-10 Drift carrier detection method, device, program, and computer-readable medium

Country Status (1)

Country Link
CN (1) CN112434657A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014234085A (en) * 2013-06-03 2014-12-15 株式会社東芝 Station platform fall detection device
CN105046225A (en) * 2015-07-14 2015-11-11 安徽清新互联信息科技有限公司 Vehicle distance detection method based on tail detection
KR20190015868A (en) * 2017-08-07 2019-02-15 주식회사보다텍 Response System For a Fall From Railroad Platform
JP2019142304A (en) * 2018-02-19 2019-08-29 株式会社明電舎 Fallen object detection device and fallen object detection method
CN111784747A (en) * 2020-08-13 2020-10-16 上海高重信息科技有限公司 Vehicle multi-target tracking system and method based on key point detection and correction

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221724A (en) * 2021-05-08 2021-08-06 杭州鸿泉物联网技术股份有限公司 Vehicle spray detection method and system
CN113553953A (en) * 2021-07-23 2021-10-26 城云科技(中国)有限公司 Vehicle parabolic detection method and device, electronic device and readable storage medium
CN113553953B (en) * 2021-07-23 2024-02-06 城云科技(中国)有限公司 Vehicle parabolic detection method and device, electronic device and readable storage medium
CN114387788A (en) * 2021-12-02 2022-04-22 浙江大华技术股份有限公司 Method and device for identifying alternate passing of vehicles and computer storage medium
CN114387788B (en) * 2021-12-02 2023-09-29 浙江大华技术股份有限公司 Identification method, identification equipment and computer storage medium for alternate traffic of vehicles

Similar Documents

Publication Publication Date Title
CN112434657A (en) Drift carrier detection method, device, program, and computer-readable medium
CN110163176B (en) Lane line change position identification method, device, equipment and medium
CN106169244A (en) The guidance information utilizing crossing recognition result provides device and method
CN111444798B (en) Identification method and device for driving behavior of electric bicycle and computer equipment
CN109284801B (en) Traffic indicator lamp state identification method and device, electronic equipment and storage medium
CN109766793B (en) Data processing method and device
CN110458050B (en) Vehicle cut-in detection method and device based on vehicle-mounted video
CN110348392B (en) Vehicle matching method and device
CN111178357B (en) License plate recognition method, system, device and storage medium
CN113963330A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN112200142A (en) Method, device, equipment and storage medium for identifying lane line
WO2021088504A1 (en) Road junction detection method and apparatus, neural network training method and apparatus, intelligent driving method and apparatus, and device
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN111860219B (en) High-speed channel occupation judging method and device and electronic equipment
CN111382735A (en) Night vehicle detection method, device, equipment and storage medium
CN111160132B (en) Method and device for determining lane where obstacle is located, electronic equipment and storage medium
CN112507874B (en) Method and device for detecting motor vehicle jamming behavior
CN112784675B (en) Target detection method and device, storage medium and terminal
CN113221894A (en) License plate number identification method and device of vehicle, electronic equipment and storage medium
Małecki et al. Mobile system of decision-making on road threats
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
CN115965636A (en) Vehicle side view generating method and device and terminal equipment
CN113642521B (en) Traffic light identification quality evaluation method and device and electronic equipment
CN115019511A (en) Method and device for identifying illegal lane change of motor vehicle based on automatic driving vehicle
CN114895274A (en) Guardrail identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination