CN112560664A - Method, device, medium and electronic equipment for detecting intrusion of forbidden region - Google Patents

Method, device, medium and electronic equipment for detecting intrusion of forbidden region

Info

Publication number
CN112560664A
CN112560664A (application CN202011458952.XA)
Authority
CN
China
Prior art keywords
target vehicle
area
vehicle
intrusion
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011458952.XA
Other languages
Chinese (zh)
Other versions
CN112560664B (en)
Inventor
路萍
戴一凡
王宝宗
顾会建
史宏涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Original Assignee
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University and Suzhou Automotive Research Institute of Tsinghua University
Priority to CN202011458952.XA
Publication of CN112560664A
Application granted
Publication of CN112560664B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a method, a device, a medium and electronic equipment for detecting intrusion into a forbidden area. The method comprises the following steps: acquiring vehicle driving image data by using an unmanned aerial vehicle; determining target vehicle track information according to the vehicle driving image data; and determining whether the target vehicle has an intrusion behavior according to the target vehicle track information and a preset intrusion condition of the forbidden area. This technical scheme uses an unmanned aerial vehicle to acquire vehicle driving image data and detect whether the target vehicle has an intrusion behavior; it covers a wide ground area, is not restricted by traffic, collects diversified information, and is flexible, low-cost and high-benefit.

Description

Method, device, medium and electronic equipment for detecting intrusion of forbidden region
Technical Field
The embodiment of the application relates to the technical field of intrusion detection, and in particular to a method, a device, a medium and electronic equipment for detecting intrusion into a forbidden area.
Background
Unmanned aerial vehicles are increasingly widely applied in traffic research, and have remarkable advantages in traffic information acquisition and traffic monitoring in particular. With the continuous development of artificial intelligence and sensing technology, automobiles are gradually developing towards intelligence, electrification and related directions, and the autonomous vehicle is one of the important development trends. China is a major user of automobiles and electric vehicles, so extracting illegal-intrusion scenarios for autonomous-vehicle testing is of great significance.
Existing intrusion detection methods mainly rely on ground-embedded induction coils, on microwave or laser detection, or on cameras erected along the road.
Ground induction coils require a large amount of road engineering before detection can begin, and affect traffic and the urban appearance. Microwave and laser detection are susceptible to terrain, costly, and potentially harmful to humans. Roadside camera monitoring is costly, has a restricted field of view, its detection targets are easily occluded or adhered together, and the detection scheme is not general-purpose.
Disclosure of Invention
The embodiment of the application provides a method, a device, a medium and electronic equipment for detecting intrusion into a forbidden area. An unmanned aerial vehicle is used to acquire vehicle driving image data and detect whether a target vehicle has an intrusion behavior; the scheme covers a wide ground area and has the advantages of being free from traffic limitation, collecting diversified information, and being flexible, low-cost and high-benefit.
In a first aspect, an embodiment of the present application provides a method for detecting an intrusion into a prohibited area, where the method includes:
acquiring vehicle running image data by using an unmanned aerial vehicle;
determining target vehicle track information according to the vehicle running image data;
and determining whether the target vehicle has an intrusion behavior or not according to the target vehicle track information and the preset intrusion condition of the forbidden area.
In a second aspect, an embodiment of the present application provides an apparatus for detecting an intrusion in a forbidden area, where the apparatus includes:
the vehicle driving image data acquisition module is used for acquiring vehicle driving image data by using the unmanned aerial vehicle;
the target vehicle track information determining module is used for determining target vehicle track information according to the vehicle running image data;
and the intrusion behavior determining module is used for determining whether the target vehicle has intrusion behaviors or not according to the target vehicle track information and the preset intrusion conditions of the forbidden region.
In a third aspect, an embodiment of the present application provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for detecting intrusion into a prohibited area according to an embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the method for detecting intrusion into a prohibited area according to the embodiment of the present application.
According to the technical scheme provided by the embodiment of the application, the unmanned aerial vehicle is used for acquiring vehicle driving image data; target vehicle track information is determined according to the vehicle driving image data; and whether the target vehicle has an intrusion behavior is determined according to the target vehicle track information and the preset intrusion condition of the forbidden area. This technical scheme uses an unmanned aerial vehicle to acquire vehicle driving image data and detect whether the target vehicle has an intrusion behavior; it covers a wide ground area, is not restricted by traffic, collects diversified information, and is flexible, low-cost and high-benefit.
Drawings
Fig. 1 is a flowchart of a method for detecting an intrusion into a forbidden area according to an embodiment of the present application;
fig. 2 is a schematic diagram of a procedure of intrusion detection of a forbidden area according to a second embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus for detecting an intrusion into a forbidden area according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of a method for detecting intrusion into a forbidden area according to an embodiment of the present application. This embodiment is applicable to detecting whether motor vehicles and non-motor vehicles exhibit intrusion behaviors. The method can be executed by the apparatus for detecting intrusion into a forbidden area provided in the embodiments of the present application; the apparatus can be implemented in software and/or hardware and can be integrated in a device such as an intelligent terminal used for forbidden-area intrusion detection.
As shown in fig. 1, the method for detecting an intrusion into a prohibited area includes:
and S110, acquiring vehicle running image data by using the unmanned aerial vehicle.
Among them, the vehicle travel image data may be image data in which the vehicle travels over a period of time. The vehicle travel image data may be data on the distance traveled by the vehicle, the vehicle type, the vehicle area, the vehicle color, and the like.
In this embodiment, the camera of the unmanned aerial vehicle is used to collect the vehicle driving image data at a fixed point at low altitude. Preferably, the acquisition frequency of the camera is 25 frames/second to 30 frames/second. The fixed-point position of the unmanned aerial vehicle can be set according to the shooting requirement.
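For illustration only (this code does not appear in the patent), reading the captured video at the stated 25-30 frames/second might look like the following Python sketch; the file name and the fallback frame rate are assumptions.

```python
import cv2

def read_drone_frames(video_path="drone_capture.mp4"):
    """Yield (timestamp_seconds, frame) pairs from a drone video.

    The video is assumed to have been recorded at 25-30 fps by the
    drone's camera hovering at a fixed low-altitude point.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back to 25 fps if metadata is missing
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame_index / fps, frame
        frame_index += 1
    cap.release()
```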
And S120, determining target vehicle track information according to the vehicle running image data.
In the scheme, the target vehicle can be various types of motor vehicles or non-motor vehicles. The target vehicle trajectory information may be constituted by features of the target vehicle in the plurality of vehicle travel images. Illustratively, the target vehicle trajectory information is constituted by the positions of a plurality of frames of target vehicles in the vehicle travel image data.
In this embodiment, after the vehicle driving image data is acquired, a track coordinate system is established for each frame of video image in the vehicle driving image data, with the center of the road in the video as the origin, in combination with map information. The track coordinate system takes the lane direction as the Y axis and the transverse direction perpendicular to the lane direction as the X axis; the left side of the lane is negative and the right side is positive. Feature extraction is performed on the vehicle driving image data by using a deep learning algorithm to obtain the features of the target vehicle in each frame, and the target vehicle features are added to the track coordinate system to obtain the target vehicle track information.
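A minimal Python sketch of the coordinate transform described above; the road-center origin pixel and the lane-direction angle are assumed to be supplied (for example, from map information), and the exact sign convention depends on the image orientation.

```python
import math

def to_track_coords(px, py, origin, lane_angle_rad):
    """Map an image point (px, py) into the track coordinate system.

    origin: pixel coordinates of the road center (the assumed origin).
    lane_angle_rad: orientation of the lane direction in the image.
    Returns (x, y): y along the lane direction (Y axis), x perpendicular
    to it (negative on the left of the lane, positive on the right).
    """
    dx, dy = px - origin[0], py - origin[1]
    # Unit vectors of the track axes expressed in image coordinates.
    yx, yy = math.cos(lane_angle_rad), math.sin(lane_angle_rad)  # lane direction (Y axis)
    xx, xy = yy, -yx                                             # perpendicular (X axis)
    return dx * xx + dy * xy, dx * yx + dy * yy
```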
S130, determining whether the target vehicle has an intrusion behavior or not according to the target vehicle track information and the preset intrusion condition of the forbidden area.
In the present embodiment, the prohibited area may be an area where entry is prohibited. For example, an area provided with a no-entry mark, a motor vehicle lane, or the like. The preset intrusion condition of the forbidden region can be that the target vehicle runs into the forbidden region, or that the running region of the target vehicle has an intersection region with the forbidden region within a period of time, and the like.
The intrusion behavior can be that a motor vehicle enters an illegal area, a non-motor vehicle enters a motor lane and the like. It can be understood that, in the driving process of the target vehicle, if the track information of the target vehicle meets the preset intrusion condition of the forbidden area, an intrusion behavior exists. Illustratively, when the target vehicle is detected to be driven into the forbidden area, the target vehicle has intrusion behavior.
In this technical solution, optionally, the target vehicle includes a motor vehicle and a non-motor vehicle; the forbidden region comprises a first forbidden region and a second forbidden region;
if the target vehicle is a motor vehicle, the target vehicle track information comprises a first target vehicle central point and a first target vehicle area; if the target vehicle is a non-motor vehicle, the target vehicle track information comprises a second target vehicle area;
correspondingly, determining whether the target vehicle has an intrusion behavior according to the target vehicle track information and the preset intrusion condition of the forbidden area, including:
judging whether the first target vehicle center point, the first target vehicle area and the first forbidden area meet a first preset intrusion condition;
if so, determining that the motor vehicle has an intrusion behavior;
and/or the presence of a gas in the gas,
judging whether the second target vehicle area and the second forbidden area accord with a second preset intrusion condition or not;
and if so, determining that the non-motor vehicle has intrusion behavior.
In the present embodiment, the first prohibited area may be an area where a no-entry mark is provided in the vehicle lane; the second forbidden region may be a motor vehicle lane region.
The first target vehicle center point can be the center point of a motor vehicle external rectangle in the video image; the first target vehicle area may be an area of a motor vehicle circumscribed rectangle in the video image; the second target vehicle area may be an area of a non-motor vehicle bounding rectangle in the video image.
In this embodiment, the first preset intrusion condition may be that the motor vehicle enters the first prohibited area during driving, or that the driving area of the motor vehicle has an intersection area with the first prohibited area during driving. The second preset intrusion condition may be that the driving area of the non-motor vehicle has an intersection area with the second forbidden region during driving.
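One possible way to represent such preset intrusion conditions in code is sketched below; the field names and the threshold values are illustrative assumptions, to be tuned according to the shooting height of the unmanned aerial vehicle as noted later in the description.

```python
from dataclasses import dataclass

@dataclass
class IntrusionCondition:
    """Thresholds for one preset intrusion condition (an assumed parameterisation).

    area_ratio_min: minimum required overlap between the target vehicle
        area and the forbidden region, as a fraction of the vehicle area.
    min_duration_s: minimum duration, in seconds, for which the overlap
        must persist.
    """
    area_ratio_min: float = 0.3
    min_duration_s: float = 1.0

# Illustrative defaults; in practice both would be set from the drone's
# shooting height, as the description notes.
FIRST_CONDITION = IntrusionCondition(area_ratio_min=0.3, min_duration_s=1.0)
SECOND_CONDITION = IntrusionCondition(area_ratio_min=0.5, min_duration_s=2.0)
```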
In this scheme, the determining whether the first preset intrusion condition is met may be determining whether a first target vehicle center point in the target vehicle track information is located in a first prohibited area, determining whether a first target vehicle area in the target vehicle track information and the first prohibited area have an intersection area, or determining whether a duration of the first target vehicle area in the target vehicle track information and the intersection area of the first prohibited area meets a certain threshold.
In this embodiment, the determining whether the second preset intrusion condition is met may be determining whether an intersection region exists between a second target vehicle area in the target vehicle track information and the second prohibited region, or determining whether a duration of the intersection region between the second target vehicle area in the target vehicle track information and the second prohibited region meets a certain threshold.
As can be understood, the motor vehicle determines whether an intrusion behavior exists by judging whether a first target vehicle center point, a first target vehicle area and a first forbidden area meet a first preset intrusion condition; and the non-motor vehicle determines whether the intrusion behavior exists by judging whether the second target vehicle area and the second forbidden area accord with a second preset intrusion condition.
By judging the motor vehicle track information and the non-motor vehicle track information, whether the motor vehicle and the non-motor vehicle have intrusion behaviors can be determined, which provides technical support for autonomous-vehicle testing.
In this technical solution, optionally, determining whether the first target vehicle center point, the first target vehicle area, and the first prohibited area satisfy a first preset intrusion condition includes:
if the first target vehicle center point is in the first forbidden region, a first preset intrusion condition is met;
if the first target vehicle center point is not in the first forbidden region, judging whether the area intersection of the first target vehicle area and the first forbidden region meets a first preset area condition; judging whether the duration of the intersection of the first target vehicle area and the first forbidden region meets a first preset time condition;
if yes, the first preset intrusion condition is met.
The first preset area condition may be that the minimum intersection area of the first target vehicle area and the first forbidden region is greater than a certain threshold, or less than or equal to a certain threshold; preferably, greater than a certain threshold. The first preset time condition may be that the duration of the maximum-area intersection of the first target vehicle area and the first forbidden region is greater than a certain threshold, or less than or equal to a certain threshold; preferably, greater than a certain threshold. The first preset area condition and the first preset time condition are set according to the shooting height of the unmanned aerial vehicle.
It can be understood that whether the central point of the first target vehicle is in the first forbidden region or not is judged, and if the central point of the first target vehicle is in the first forbidden region, a first preset intrusion condition is met; if not, whether the area intersection of the first target vehicle area and the first forbidden area meets a first preset area condition or not is continuously judged, whether the duration time of the area intersection of the first target vehicle area and the first forbidden area meets a first preset time condition or not is judged, and meanwhile, the first preset intrusion condition can be determined to be met.
Whether the motor vehicle has intrusion behavior or not can be detected by judging whether the first target vehicle center point, the first target vehicle area and the first forbidden area meet the first preset intrusion condition or not, and the detection efficiency is high.
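A hedged Python sketch of the motor-vehicle check just described, using shapely polygons for the first forbidden region and the vehicle rectangle; the library choice, the threshold values and the per-frame track format are assumptions, not specified in the patent.

```python
from shapely.geometry import Point, Polygon, box

def motor_vehicle_intrudes(track, first_area: Polygon,
                           area_thresh=0.3, time_thresh_s=1.0, fps=25.0):
    """First preset intrusion condition for a motor vehicle track.

    track: list of per-frame observations, each a dict with 'center'
           (x, y) and 'bbox' (xmin, ymin, xmax, ymax) in track coordinates.
    first_area: polygon of the first forbidden region.
    """
    hit_frames = 0
    for obs in track:
        # Center point inside the forbidden region -> condition met directly.
        if first_area.contains(Point(obs["center"])):
            return True
        # Otherwise check the area intersection and how long it persists.
        rect = box(*obs["bbox"])
        if rect.intersection(first_area).area / rect.area > area_thresh:
            hit_frames += 1
            if hit_frames / fps > time_thresh_s:
                return True
        else:
            hit_frames = 0
    return False
```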
In this technical solution, optionally, the determining whether the second target vehicle area and the second forbidden area meet a second preset intrusion condition includes:
judging whether the area intersection of the second target vehicle area and the second forbidden region meets a second preset area condition; judging whether the duration of the intersection of the area of the second target vehicle and the area of the second forbidden region meets a second preset time condition or not;
if so, the second preset intrusion condition is met.
The second preset area condition may be that the minimum intersection area of the second target vehicle area and the second forbidden region is greater than a certain threshold, or less than or equal to a certain threshold; preferably, greater than a certain threshold. The second preset time condition may be that the duration of the maximum-area intersection of the second target vehicle area and the second forbidden region is greater than a certain threshold, or less than or equal to a certain threshold; preferably, greater than a certain threshold. The second preset area condition and the second preset time condition are set according to the shooting height of the unmanned aerial vehicle.
Whether the area of the second target vehicle and the second forbidden area meet the second preset intrusion condition or not is judged, whether the non-motor vehicle has intrusion behavior or not can be detected, and the detection efficiency is high.
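An analogous sketch for the non-motor-vehicle check; again the thresholds and the track data layout are assumptions.

```python
from shapely.geometry import box

def non_motor_vehicle_intrudes(track, second_area, area_thresh=0.5,
                               time_thresh_s=2.0, fps=25.0):
    """Second preset intrusion condition for a non-motor vehicle track.

    The vehicle's bounding rectangle must overlap the motor-vehicle lane
    region (second forbidden region) by more than area_thresh for longer
    than time_thresh_s seconds; both threshold values are assumptions.
    """
    hit_frames = 0
    for obs in track:
        rect = box(*obs["bbox"])
        if rect.intersection(second_area).area / rect.area > area_thresh:
            hit_frames += 1
            if hit_frames / fps > time_thresh_s:
                return True
        else:
            hit_frames = 0
    return False
```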
According to the technical scheme provided by the embodiment of the application, the unmanned aerial vehicle is used for acquiring vehicle driving image data; target vehicle track information is determined according to the vehicle driving image data; and whether the target vehicle has an intrusion behavior is determined according to the target vehicle track information and the preset intrusion condition of the forbidden area. By implementing this technical scheme, an unmanned aerial vehicle can be used to acquire vehicle driving image data and detect whether the target vehicle has an intrusion behavior; the scheme covers a wide ground area, is not restricted by traffic, collects diversified information, and is flexible, low-cost and high-benefit.
Example two
Fig. 2 is a schematic diagram of a procedure of intrusion detection in a forbidden area provided in the second embodiment of the present application, and the second embodiment is further optimized based on the first embodiment. The concrete optimization is as follows: determining target vehicle track information according to the vehicle driving image data, wherein the target vehicle track information comprises the following steps: processing the vehicle driving image data to obtain target vehicle characteristics; wherein the target vehicle characteristics include a target vehicle area and a target vehicle center point; if the difference value between the area of the target vehicle at the current moment and the area of the target vehicle at the last moment meets a preset area difference value condition, and the difference value between the center point of the target vehicle at the current moment and the center point of the target vehicle at the last moment meets a preset center point difference value condition, the target vehicle is successfully tracked, and the area of the target vehicle, the center point of the target vehicle, the first forbidden region and the second forbidden region which are predetermined are added to a target vehicle track coordinate system to obtain target vehicle track information; and establishing the target vehicle track coordinate system according to the vehicle running image data. The details which are not described in detail in this embodiment are shown in the first embodiment. As shown in fig. 2, the method comprises the steps of:
s210, acquiring vehicle running image data by using the unmanned aerial vehicle.
S220, processing the vehicle running image data to obtain target vehicle characteristics; wherein the target vehicle characteristics include a target vehicle area and a target vehicle center point.
The target vehicle area can be the area of a target vehicle external rectangle in the vehicle driving image; the target vehicle center point may refer to a center point of a target vehicle external rectangle in the vehicle travel image.
In the present embodiment, the target vehicle characteristic may be some characteristic of the vehicle in the vehicle travel image. Such as the driving position of the vehicle, the type of the vehicle, and the color of the vehicle. Preferably, the target vehicle characteristic includes a target vehicle area and a target vehicle center point.
In the scheme, the vehicle driving image data is detected based on a YOLO algorithm in deep learning, and the target vehicle characteristics are extracted. The YOLO algorithm applies a single convolutional neural network to the whole image, divides the image into grids, and predicts the class probability and the bounding box of each grid, thereby extracting the target vehicle characteristics.
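By way of illustration, per-frame detection could be performed with any YOLO-family detector; the ultralytics package and the pretrained weight file used below are assumptions — the text only says that "a YOLO algorithm in deep learning" is used, without naming an implementation.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any YOLO-family detector trained on vehicle classes

def detect_vehicles(frame):
    """Return a list of (bbox, class_name, confidence) for one frame."""
    detections = []
    result = model(frame)[0]
    for b in result.boxes:
        x1, y1, x2, y2 = b.xyxy[0].tolist()
        detections.append(((x1, y1, x2, y2),
                           result.names[int(b.cls)],
                           float(b.conf)))
    return detections
```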
In this technical solution, optionally, the target vehicle characteristics further include a circumscribed rectangle, a target vehicle type, a color vector, and a confidence rate.
The circumscribed rectangle may be a boundary of the target vehicle in the vehicle driving image.
In the present embodiment, the target vehicle type may be various vehicle types. For example, a miniature car, a small car, a large car, and the like.
In this case, the color vector may represent the color of the target vehicle in the vehicle driving image. It can be represented by an (r, g, b) color vector or by an (h, s, v) color vector.
Wherein the confidence rate reflects the accuracy of the YOLO algorithm to predict the target vehicle.
By extracting these target vehicle characteristics, the target vehicle track information can be obtained, so that forbidden-area intrusion detection can be performed on the target vehicle.
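From each detection, the characteristics listed above could be derived roughly as follows; this is a sketch assuming OpenCV BGR frames, and the mean-HSV color vector is one of the two color representations mentioned.

```python
import cv2

def vehicle_features(frame, bbox, cls_name, conf):
    """Derive the per-frame target vehicle characteristics from one detection."""
    x1, y1, x2, y2 = [int(v) for v in bbox]
    w, h = x2 - x1, y2 - y1
    patch = frame[y1:y2, x1:x2]
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    color_vec = hsv.reshape(-1, 3).mean(axis=0)      # mean (h, s, v) color vector
    return {
        "bbox": (x1, y1, x2, y2),                    # circumscribed rectangle
        "center": ((x1 + x2) / 2, (y1 + y2) / 2),    # target vehicle center point
        "area": w * h,                               # target vehicle area
        "type": cls_name,                            # target vehicle type
        "color": color_vec,
        "conf": conf,                                # confidence rate
    }
```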
S230, if the difference value between the area of the target vehicle at the current moment and the area of the target vehicle at the last moment meets a preset area difference value condition, and the difference value between the center point of the target vehicle at the current moment and the center point of the target vehicle at the last moment meets a preset center point difference value condition, successfully tracking the target vehicle, and adding the area of the target vehicle, the center point of the target vehicle, the first forbidden region and the second forbidden region which are predetermined to a target vehicle track coordinate system to obtain target vehicle track information; and establishing the target vehicle track coordinate system according to the vehicle running image data.
In the present embodiment, the vehicle driving image data is continuous and is composed of a plurality of frame images. The vehicle driving image data is processed to obtain the target vehicle characteristics in each frame of image. In the target vehicle tracking process, the target vehicle track is determined by judging whether the target vehicle characteristics at the current moment and the target vehicle characteristics at the previous moment meet preset conditions. When the target vehicle is tracked, the target vehicle area and the confidence rate need to be larger than certain thresholds. If the target vehicle at the current moment matches a plurality of target vehicles at the previous moment, the previous-moment target vehicle whose center point is closest to the center point of the current target vehicle is taken as the tracking target.
The preset area difference condition may be that a difference between the current time target vehicle area and the previous time target vehicle area is greater than a certain threshold or less than or equal to a certain threshold. Preferably, the value is equal to or less than a certain threshold value. The preset central point difference condition may be that the difference between the current time target vehicle central point and the last time target vehicle central point is greater than a certain threshold or less than or equal to a certain threshold. Preferably, the value is equal to or less than a certain threshold value. The preset area difference value condition and the preset central point difference value condition are set according to the shooting height of the unmanned aerial vehicle.
It can be understood that if the difference between the area of the target vehicle at the current moment and the area of the target vehicle at the previous moment is less than or equal to a certain threshold, and the difference between the center point of the target vehicle at the current moment and the center point of the target vehicle at the previous moment is less than or equal to a certain threshold, the target vehicle features in the two frames are considered to be the features of the same target vehicle, and the tracking is successful.
In this scheme, a tracking failure count is initialized to 0 and incremented by 1 each time tracking fails; when the accumulated tracking failure count exceeds a certain threshold, the target vehicle is considered to have moved out of the shooting range of the camera and is no longer tracked.
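A simplified sketch of the tracking step just described — area and center-point difference checks plus a failure counter; the two difference thresholds and the failure limit are illustrative values, since the description only says they are set from the drone's shooting height.

```python
def match_track(track, det, area_diff_max=800.0, center_diff_max=30.0,
                fail_limit=10):
    """Associate a new detection with an existing track, or record a failure.

    track: dict with 'observations' (list of per-frame feature dicts) and
    'fail_count'; det: a per-frame feature dict for the current moment.
    """
    last = track["observations"][-1]
    area_ok = abs(det["area"] - last["area"]) <= area_diff_max
    cx, cy = det["center"]
    lx, ly = last["center"]
    center_ok = ((cx - lx) ** 2 + (cy - ly) ** 2) ** 0.5 <= center_diff_max
    if area_ok and center_ok:
        track["observations"].append(det)   # tracking succeeded for this frame
        track["fail_count"] = 0
        return True
    track["fail_count"] += 1                # tracking failed for this frame
    if track["fail_count"] > fail_limit:
        track["finished"] = True            # vehicle assumed to have left the view
    return False
```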
In this embodiment, after the tracking is successful, the target vehicle area and the target vehicle center point of the same target vehicle, and the predetermined first forbidden area and the predetermined second forbidden area are added to the target vehicle track coordinate system, so as to obtain the target vehicle track information. The first forbidden region and the second forbidden region can be obtained from background data.
In this technical solution, optionally, after obtaining the target vehicle trajectory information, the method further includes:
if the target vehicle track information comprises a first vehicle type and a second vehicle type, determining the times of the first vehicle type and the times of the second vehicle type;
and determining the target vehicle type corresponding to the target vehicle track according to the first vehicle type frequency, the second vehicle type frequency and the target vehicle center point.
It can be understood that, if the YOLO algorithm identifies the same target vehicle as two or more different vehicle types, the number of times of the vehicle types is determined according to the track information of the target vehicle, and the target vehicle type corresponding to the track of the target vehicle is determined according to the number of times of the vehicle types and the center point of the target vehicle.
The target vehicle type may be determined by comparing the number of times of the first vehicle type with the number of times of the second vehicle type, and comparing a difference between a center point of the target vehicle corresponding to the first vehicle type and a center point of the target vehicle corresponding to the second vehicle type.
By judging the vehicle type counts and the target vehicle center points in the target vehicle track information, the case in which one target vehicle is detected as two or more different vehicle types can be handled.
In this technical solution, optionally, determining a target vehicle type corresponding to the target vehicle track according to the first vehicle type number, the second vehicle type number, and the target vehicle center point includes:
determining a target vehicle type corresponding to the target vehicle track by adopting the following formula:
(The formula itself is reproduced only as an image, Figure BDA0002830535360000121, in the original publication.)
In the formula, t denotes the first vehicle type, s denotes the second vehicle type, type_i denotes the target vehicle type in the i-th frame image, and prob_i denotes the confidence rate in the i-th frame image. The symbols shown only as images (Figures BDA0002830535360000122 to BDA0002830535360000126) denote, respectively, the confidence rate corresponding to the first vehicle type in the i-th frame image, the confidence rate corresponding to the second vehicle type in the i-th frame image, the number of occurrences of the first vehicle type in the i frame images, the number of occurrences of the second vehicle type in the i frame images, and the center point of the target vehicle corresponding to the first vehicle type in the i frame images (one further symbol, Figure BDA0002830535360000127, is not legible in the extracted text). S denotes the center-point distance threshold.
In this way, by judging the vehicle type counts and the target vehicle center points in the target vehicle track information, one target vehicle detected as two or more different vehicle types can be resolved to a single type.
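Because the exact formula is reproduced only as an image, the following Python sketch is merely one plausible reading of the described rule (occurrence-count vote, with accumulated confidence as a tie-break when the two types' center points lie within the distance threshold S); it is not a verbatim implementation of the patent's formula.

```python
from collections import defaultdict

def resolve_vehicle_type(track, center_dist_thresh_s=20.0):
    """Pick a single vehicle type for a track detected as two types.

    A plausible sketch only: the exact formula is published as an image,
    so the rule below (occurrence count first, then summed confidence when
    the two types' mean center points lie within the threshold S) is an
    interpretation of the surrounding description.
    """
    counts, conf_sum, centers = defaultdict(int), defaultdict(float), defaultdict(list)
    for obs in track["observations"]:
        counts[obs["type"]] += 1
        conf_sum[obs["type"]] += obs["conf"]
        centers[obs["type"]].append(obs["center"])
    types = sorted(counts, key=counts.get, reverse=True)
    if len(types) == 1 or counts[types[0]] != counts[types[1]]:
        return types[0]                      # clear majority of occurrences
    # Tie on counts: if the mean center points of the two types are within S,
    # treat them as the same physical vehicle and use the accumulated confidence.
    t, s = types[0], types[1]
    mt = [sum(c) / len(c) for c in zip(*centers[t])]
    ms = [sum(c) / len(c) for c in zip(*centers[s])]
    dist = ((mt[0] - ms[0]) ** 2 + (mt[1] - ms[1]) ** 2) ** 0.5
    if dist <= center_dist_thresh_s:
        return t if conf_sum[t] >= conf_sum[s] else s
    return t
```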
In this technical solution, optionally, after obtaining the target vehicle trajectory information, the method further includes:
determining the duration of the current target vehicle track;
if the duration time meets a duration time threshold, determining that the current target vehicle track is the final target vehicle track;
if the duration time does not meet the duration time threshold, determining a target vehicle track to be identified in the target area according to the target vehicle and the preset radius of the target area;
judging whether the distance between the color vector of the target vehicle track to be identified and the color vector of the current target vehicle track meets the color vector constraint condition or not; judging whether the difference value between the target vehicle area of the target vehicle track to be identified and the target vehicle area of the current target vehicle track meets the area constraint condition or not; judging whether the difference value of the circumscribed rectangle of the target vehicle track to be identified and the circumscribed rectangle of the current target vehicle track meets the circumscribed rectangle constraint condition or not;
and if so, combining the target vehicle track to be identified and the current target vehicle track to obtain the final target vehicle track.
The duration of the current target vehicle track may be the duration over which the target vehicle center point is detected. For example, if frames are acquired at 30 frames/second and the target vehicle center point is detected in 60 frames of images, the duration of the current target vehicle track is 2 seconds.
In the present embodiment, the duration threshold may be 3 seconds, 5 seconds, or the like. The setting may be made according to the detection task. Preferably, the duration threshold may be 5 seconds.
In the scheme, if the duration time does not meet the duration time threshold, the target vehicle is taken as the center, the preset radius of the target area is used for searching other surrounding target vehicles, and the tracks of the other target vehicles are taken as the tracks of the target vehicle to be identified. The target vehicle track to be identified may be one or more tracks.
The color vector constraint condition, the area constraint condition and the circumscribed rectangle constraint condition can each be a threshold, and the size of each threshold can be set according to the shooting height of the unmanned aerial vehicle.
In this embodiment, whether the difference between the circumscribed rectangle of the target vehicle track to be recognized and the circumscribed rectangle of the current target vehicle track meets the circumscribed rectangle constraint condition or not may be determined by respectively determining whether the difference between the length and the width of the circumscribed rectangle meets the circumscribed rectangle constraint condition or not.
It can be understood that, after the target vehicle track to be recognized is obtained, the distance between its color vector and that of the current target vehicle track, the difference between the target vehicle areas, and the difference between the circumscribed rectangles are calculated respectively; when all of them meet the preset conditions, the target vehicle track to be recognized and the current target vehicle track are merged to obtain the final target vehicle track.
By judging the color vector, the target vehicle area and the circumscribed rectangle, the tracks of the same target divided into two sections can be combined to obtain the final target vehicle track.
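A sketch of the track-merging test described above; the three thresholds, and the choice to compare the last observation of one fragment with the first of the other, are assumptions.

```python
import numpy as np

def should_merge(track_a, track_b, color_thresh=30.0, area_thresh=500.0,
                 rect_thresh=15.0):
    """Decide whether two track fragments belong to the same vehicle.

    Compares color vector distance, area difference, and the width/height
    differences of the circumscribed rectangles; all three thresholds are
    illustrative values to be tuned from the drone's shooting height.
    """
    a, b = track_a["observations"][-1], track_b["observations"][0]
    color_ok = np.linalg.norm(np.asarray(a["color"]) - np.asarray(b["color"])) <= color_thresh
    area_ok = abs(a["area"] - b["area"]) <= area_thresh
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = a["bbox"], b["bbox"]
    rect_ok = (abs((ax2 - ax1) - (bx2 - bx1)) <= rect_thresh and
               abs((ay2 - ay1) - (by2 - by1)) <= rect_thresh)
    return color_ok and area_ok and rect_ok

def merge_tracks(track_a, track_b):
    """Concatenate two fragments of the same vehicle into one final track."""
    track_a["observations"].extend(track_b["observations"])
    return track_a
```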
S240, determining whether the target vehicle has an intrusion behavior or not according to the target vehicle track information and the preset intrusion condition of the forbidden area.
According to the technical scheme provided by the embodiment of the application, the unmanned aerial vehicle is used for acquiring vehicle driving image data; the vehicle driving image data is processed to obtain target vehicle characteristics, where the target vehicle characteristics comprise a target vehicle area and a target vehicle center point; if the difference between the target vehicle area at the current moment and the target vehicle area at the previous moment satisfies a preset area difference condition, and the difference between the target vehicle center point at the current moment and the target vehicle center point at the previous moment satisfies a preset center point difference condition, the target vehicle is successfully tracked, and the target vehicle area, the target vehicle center point, and the predetermined first and second forbidden regions are added to a target vehicle track coordinate system to obtain target vehicle track information; and whether the target vehicle has an intrusion behavior is determined according to the target vehicle track information and the preset intrusion condition of the forbidden area. By implementing this technical scheme, an unmanned aerial vehicle can be used to acquire vehicle driving image data and detect whether the target vehicle has an intrusion behavior; the scheme covers a wide ground area, is not restricted by traffic, collects diversified information, and is flexible, low-cost and high-benefit.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an apparatus for detecting intrusion into a prohibited area according to a third embodiment of the present application, and as shown in fig. 3, the apparatus for detecting intrusion into a prohibited area includes:
a vehicle driving image data acquisition module 310, configured to acquire vehicle driving image data by using an unmanned aerial vehicle;
a target vehicle track information determining module 320, configured to determine target vehicle track information according to the vehicle driving image data;
and the intrusion behavior determining module 330 is configured to determine whether an intrusion behavior exists in the target vehicle according to the target vehicle trajectory information and a preset intrusion condition of the forbidden area.
In this technical solution, optionally, the target vehicle includes a motor vehicle and a non-motor vehicle; the forbidden region comprises a first forbidden region and a second forbidden region;
if the target vehicle is a motor vehicle, the target vehicle track information comprises a first target vehicle central point and a first target vehicle area; if the target vehicle is a non-motor vehicle, the target vehicle track information comprises a second target vehicle area;
accordingly, the intrusion behavior determination module 330 includes:
the first preset intrusion condition judging unit is used for judging whether the first target vehicle center point, the first target vehicle area and the first forbidden area meet first preset intrusion conditions or not;
the motor vehicle intrusion behavior determining unit is used for determining that the motor vehicle has an intrusion behavior if the first preset intrusion condition is met;
and/or the presence of a gas in the gas,
the second preset intrusion condition judging unit is used for judging whether the second target vehicle area and the second forbidden area accord with a second preset intrusion condition or not;
and the non-motor vehicle intrusion behavior determining unit is used for determining that the non-motor vehicle has an intrusion behavior if the second preset intrusion condition is met.
In this technical solution, optionally, the first preset intrusion condition determining unit is specifically configured to:
if the first target vehicle center point is in the first forbidden region, a first preset intrusion condition is met;
if the first target vehicle center point is not in the first forbidden region, judging whether the area intersection of the first target vehicle area and the first forbidden region meets a first preset area condition; judging whether the duration of the intersection of the first target vehicle area and the first forbidden region meets a first preset time condition;
if yes, the first preset intrusion condition is met.
In this technical solution, optionally, the second preset intrusion condition determining unit is specifically configured to:
judging whether the area intersection of the second target vehicle area and the second forbidden region meets a second preset area condition; judging whether the duration of the intersection of the area of the second target vehicle and the area of the second forbidden region meets a second preset time condition or not;
if so, the second preset intrusion condition is met.
In this technical solution, optionally, the target vehicle trajectory information determining module 320 includes:
the target vehicle characteristic obtaining unit is used for processing the vehicle running image data to obtain target vehicle characteristics; wherein the target vehicle characteristics include a target vehicle area and a target vehicle center point;
a target vehicle track information obtaining unit, configured to, if a difference value between a current time target vehicle area and a previous time target vehicle area satisfies a preset area difference value condition, and a difference value between a current time target vehicle center point and a previous time target vehicle center point satisfies a preset center point difference value condition, successfully track the target vehicle, add the target vehicle area and the target vehicle center point, and the predetermined first prohibited region and the second prohibited region to a target vehicle track coordinate system, and obtain target vehicle track information; and establishing the target vehicle track coordinate system according to the vehicle running image data.
In this technical solution, optionally, the target vehicle characteristics further include a circumscribed rectangle, a target vehicle type, a color vector, and a confidence rate.
In this technical solution, optionally, the target vehicle trajectory information determining module 320 further includes:
the vehicle type frequency determining unit is used for determining the frequency of a first vehicle type and the frequency of a second vehicle type if the target vehicle track information comprises the first vehicle type and the second vehicle type;
and the target vehicle type determining unit is used for determining the target vehicle type corresponding to the target vehicle track according to the first vehicle type frequency, the second vehicle type frequency and the target vehicle center point.
In this technical solution, optionally, the target vehicle type determining unit is specifically configured to:
determining a target vehicle type corresponding to the target vehicle track by adopting the following formula:
(The formula itself is reproduced only as an image, Figure BDA0002830535360000171, in the original publication.)
In the formula, t denotes the first vehicle type, s denotes the second vehicle type, type_i denotes the target vehicle type in the i-th frame image, and prob_i denotes the confidence rate in the i-th frame image. The symbols shown only as images (Figures BDA0002830535360000172 to BDA0002830535360000176) denote, respectively, the confidence rate corresponding to the first vehicle type in the i-th frame image, the confidence rate corresponding to the second vehicle type in the i-th frame image, the number of occurrences of the first vehicle type in the i frame images, the number of occurrences of the second vehicle type in the i frame images, and the center point of the target vehicle corresponding to the first vehicle type in the i frame images (one further symbol, Figure BDA0002830535360000177, is not legible in the extracted text). S denotes the center-point distance threshold.
In this technical solution, optionally, the target vehicle trajectory information determining module 320 further includes:
a duration determining unit for determining a duration of the current target vehicle track;
a final target vehicle track determining unit, configured to determine that the current target vehicle track is a final target vehicle track if the duration time meets a duration time threshold;
the target vehicle track determining unit is used for determining a target vehicle track to be identified in the target area according to the target vehicle and the preset radius of the target area if the duration time does not meet the duration time threshold;
the target vehicle track judging unit is used for judging whether the distance between the color vector of the target vehicle track to be identified and the color vector of the current target vehicle track meets the color vector constraint condition or not; judging whether the difference value between the target vehicle area of the target vehicle track to be identified and the target vehicle area of the current target vehicle track meets the area constraint condition or not; judging whether the difference value of the circumscribed rectangle of the target vehicle track to be identified and the circumscribed rectangle of the current target vehicle track meets the circumscribed rectangle constraint condition or not;
and the final target vehicle track obtaining unit is used for merging the target vehicle track to be identified and the current target vehicle track to obtain the final target vehicle track if the target vehicle track to be identified and the current target vehicle track are matched.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
A fourth embodiment of the present application further provides a medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a method for intrusion detection in a forbidden area, the method including:
acquiring vehicle running image data by using an unmanned aerial vehicle;
determining target vehicle track information according to the vehicle running image data;
and determining whether the target vehicle has an intrusion behavior or not according to the target vehicle track information and the preset intrusion condition of the forbidden area.
Media — any of various types of memory devices or storage devices. The term "media" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., a hard disk), or optical storage; registers or other similar types of memory elements, etc. The medium may also include other types of memory or combinations thereof. In addition, the medium may be located in the computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide the program instructions to the first computer for execution. The term "media" may include two or more media that may reside in different locations, for example in different computer systems connected by a network. The media may store program instructions (e.g., embodied as computer programs) that are executable by one or more processors.
Of course, the medium provided in the embodiments of the present application includes computer-executable instructions, and the computer-executable instructions are not limited to the operations of detecting intrusion in a forbidden area as described above, and may also perform related operations in the method of detecting intrusion in a forbidden area as provided in any embodiment of the present application.
EXAMPLE five
Fifth, an electronic device is provided, where the device for detecting intrusion into a prohibited area provided by the embodiment of the present application may be integrated into the electronic device. Fig. 4 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application. As shown in fig. 4, the present embodiment provides an electronic device 400, which includes: one or more processors 420; the storage device 410 is configured to store one or more programs, and when the one or more programs are executed by the one or more processors 420, the one or more processors 420 implement the method for detecting intrusion into a forbidden area provided by the embodiment of the present application, the method includes:
acquiring vehicle running image data by using an unmanned aerial vehicle;
determining target vehicle track information according to the vehicle running image data;
and determining whether the target vehicle has an intrusion behavior or not according to the target vehicle track information and the preset intrusion condition of the forbidden area.
Of course, those skilled in the art can understand that the processor 420 also implements the technical solution of the method for detecting intrusion in a forbidden area provided in any embodiment of the present application.
The electronic device 400 shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 4, the electronic device 400 includes a processor 420, a storage device 410, an input device 430, and an output device 440; the number of the processors 420 in the electronic device may be one or more, and one processor 420 is taken as an example in fig. 4; the processor 420, the storage device 410, the input device 430, and the output device 440 in the electronic apparatus may be connected by a bus or other means, and are exemplified by a bus 450 in fig. 4.
The storage device 410 is a computer-readable medium, and can be used for storing software programs, computer-executable programs, and module units, such as program instructions corresponding to the method for detecting intrusion into a forbidden area in the embodiments of the present application.
The storage device 410 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the storage 410 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 410 may further include memory located remotely from processor 420, which may be connected via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 430 may be used to receive input numbers, character information, or voice information, and to generate key signal inputs related to user settings and function control of the electronic device. The output device 440 may include a display screen, speakers, or other electronic equipment.
The electronic equipment provided by the embodiment of the application can achieve the purposes of wide ground area for shooting, no traffic limitation, diversified collected information, flexibility, low cost, high benefit and the like.
The device, medium, and electronic device for detecting intrusion into a prohibited area provided in the above embodiments may perform the method for detecting intrusion into a prohibited area provided in any embodiment of the present application, and have corresponding functional modules and beneficial effects for performing the method. Technical details that are not described in detail in the above embodiments may be referred to a method for detecting intrusion in a forbidden area provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (12)

1. A method for intrusion detection in a forbidden area, comprising:
acquiring vehicle running image data by using an unmanned aerial vehicle;
determining target vehicle track information according to the vehicle running image data;
and determining whether the target vehicle has an intrusion behavior according to the target vehicle track information and a preset intrusion condition of the forbidden area.
2. The method of claim 1, wherein the target vehicle comprises a motor vehicle and a non-motor vehicle; the forbidden area comprises a first forbidden area and a second forbidden area;
if the target vehicle is a motor vehicle, the target vehicle track information comprises a first target vehicle center point and a first target vehicle area; if the target vehicle is a non-motor vehicle, the target vehicle track information comprises a second target vehicle area;
correspondingly, determining whether the target vehicle has an intrusion behavior according to the target vehicle track information and the preset intrusion condition of the forbidden area comprises:
judging whether the first target vehicle center point, the first target vehicle area, and the first forbidden area meet a first preset intrusion condition;
if so, determining that the motor vehicle has an intrusion behavior;
and/or,
judging whether the second target vehicle area and the second forbidden area meet a second preset intrusion condition;
and if so, determining that the non-motor vehicle has an intrusion behavior.
3. The method of claim 2, wherein judging whether the first target vehicle center point, the first target vehicle area, and the first forbidden area meet a first preset intrusion condition comprises:
if the first target vehicle center point is in the first forbidden area, the first preset intrusion condition is met;
if the first target vehicle center point is not in the first forbidden area, judging whether the area intersection of the first target vehicle area and the first forbidden area meets a first preset area condition; judging whether the duration of the intersection of the first target vehicle area and the first forbidden area meets a first preset time condition;
if yes, the first preset intrusion condition is met.
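For illustration, the two-branch check recited in claim 3 can be sketched in Python as below. This is a minimal, non-authoritative sketch assuming the first target vehicle area and the first forbidden area are planar polygons handled with the shapely library; the function name first_intrusion_condition and the thresholds area_ratio_thresh and duration_thresh are illustrative assumptions, not values taken from the application.

from shapely.geometry import Point, Polygon

def first_intrusion_condition(center_xy, vehicle_poly, forbidden_poly,
                              overlap_frames, area_ratio_thresh=0.3,
                              duration_thresh=10):
    # Branch 1: the first target vehicle center point lies inside the first forbidden area.
    if forbidden_poly.contains(Point(center_xy)):
        return True
    # Branch 2: the area intersection must satisfy the area condition, and the
    # overlap must have lasted long enough (duration counted here in frames).
    overlap = vehicle_poly.intersection(forbidden_poly).area
    area_ok = overlap / max(vehicle_poly.area, 1e-9) >= area_ratio_thresh
    time_ok = overlap_frames >= duration_thresh
    return area_ok and time_ok

The check of claim 4 follows the same second branch, applied to the second target vehicle area and the second forbidden area.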
4. The method of claim 2, wherein judging whether the second target vehicle area and the second forbidden area meet a second preset intrusion condition comprises:
judging whether the area intersection of the second target vehicle area and the second forbidden area meets a second preset area condition; judging whether the duration of the intersection of the second target vehicle area and the second forbidden area meets a second preset time condition;
if so, the second preset intrusion condition is met.
5. The method of claim 1, wherein determining target vehicle track information according to the vehicle running image data comprises:
processing the vehicle running image data to obtain target vehicle characteristics, wherein the target vehicle characteristics comprise a target vehicle area and a target vehicle center point;
if the difference between the target vehicle area at the current moment and the target vehicle area at the previous moment meets a preset area difference condition, and the difference between the target vehicle center point at the current moment and the target vehicle center point at the previous moment meets a preset center point difference condition, the target vehicle is successfully tracked, and the target vehicle area, the target vehicle center point, and the predetermined first forbidden area and second forbidden area are added to a target vehicle track coordinate system to obtain the target vehicle track information; wherein the target vehicle track coordinate system is established according to the vehicle running image data.
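As a rough illustration of the frame-to-frame association test described in claim 5, the sketch below treats successful tracking as both the area difference and the center point difference staying within preset conditions; the threshold values are assumptions for illustration only.

import math

def tracked_successfully(prev_area, curr_area, prev_center, curr_center,
                         max_relative_area_diff=0.2, max_center_shift=30.0):
    # Preset area difference condition: relative change of the target vehicle area.
    area_ok = abs(curr_area - prev_area) / max(prev_area, 1e-9) <= max_relative_area_diff
    # Preset center point difference condition: displacement of the center point in pixels.
    center_ok = math.dist(prev_center, curr_center) <= max_center_shift
    return area_ok and center_ok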
6. The method of claim 5, wherein the target vehicle characteristics further include a circumscribed rectangle, a target vehicle type, a color vector, and a confidence rate.
7. The method of claim 6, wherein after obtaining the target vehicle track information, the method further comprises:
if the target vehicle track information comprises a first vehicle type and a second vehicle type, determining the number of occurrences of the first vehicle type and the number of occurrences of the second vehicle type;
and determining the target vehicle type corresponding to the target vehicle track according to the number of occurrences of the first vehicle type, the number of occurrences of the second vehicle type, and the target vehicle center point.
8. The method of claim 7, wherein determining the target vehicle type corresponding to the target vehicle track according to the number of occurrences of the first vehicle type, the number of occurrences of the second vehicle type, and the target vehicle center point comprises:
determining a target vehicle type corresponding to the target vehicle track by adopting the following formula:
[formula reproduced only as image FDA0002830535350000031 in the published text]
where t denotes the first vehicle type, s denotes the second vehicle type, type_i denotes the target vehicle type in the i-th frame image, prob_i denotes the confidence rate in the i-th frame image, [image FDA0002830535350000032] denotes the confidence rate corresponding to the first vehicle type in the i-th frame image, [image FDA0002830535350000033] denotes the confidence rate corresponding to the second vehicle type in the i-th frame image, [image FDA0002830535350000034] denotes the number of occurrences of the first vehicle type in the i frame images, [image FDA0002830535350000035] denotes the number of occurrences of the second vehicle type in the i frame images, [image FDA0002830535350000036] denotes the target vehicle center point corresponding to the first vehicle type in the i frame images, [image FDA0002830535350000037], and S denotes the center point distance threshold.
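The formula of claim 8 is published only as an image (FDA0002830535350000031), so its exact form is not reproduced here. Purely as an illustrative reading, and not as the patented formula, the symbols listed above suggest a confidence-weighted comparison between the two candidate types constrained by the center point distance threshold S; one such sketch, with all names assumed:

def vote_vehicle_type(frames, s_threshold):
    # frames: list of dicts with keys 'type' ('t' or 's'), 'prob' (confidence rate)
    # and 'center' (target vehicle center point (x, y)); all field names are illustrative.
    score = {"t": 0.0, "s": 0.0}
    prev_center = None
    for f in frames:
        # Skip observations whose center point jumps beyond the distance threshold S
        # (an assumed interpretation of how S constrains the vote).
        if prev_center is not None:
            dx = f["center"][0] - prev_center[0]
            dy = f["center"][1] - prev_center[1]
            if (dx * dx + dy * dy) ** 0.5 > s_threshold:
                continue
        score[f["type"]] += f["prob"]
        prev_center = f["center"]
    return max(score, key=score.get)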
9. The method of claim 6, wherein after obtaining the target vehicle track information, the method further comprises:
determining the duration of the current target vehicle track;
if the duration meets a duration threshold, determining that the current target vehicle track is the final target vehicle track;
if the duration does not meet the duration threshold, determining a target vehicle track to be identified within a target area according to the target vehicle and a preset radius of the target area;
judging whether the distance between the color vector of the target vehicle track to be identified and the color vector of the current target vehicle track meets the color vector constraint condition or not; judging whether the difference value between the target vehicle area of the target vehicle track to be identified and the target vehicle area of the current target vehicle track meets the area constraint condition or not; judging whether the difference value of the circumscribed rectangle of the target vehicle track to be identified and the circumscribed rectangle of the current target vehicle track meets the circumscribed rectangle constraint condition or not;
and if so, combining the target vehicle track to be identified and the current target vehicle track to obtain the final target vehicle track.
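A minimal sketch of the merging test in claim 9, assuming each track carries a color vector, a target vehicle area, and a circumscribed rectangle given as (width, height); the constraint thresholds and field names are illustrative assumptions, not values from the application.

import numpy as np

def should_merge(candidate, current, color_dist_thresh=0.25,
                 area_diff_thresh=0.3, rect_diff_thresh=0.3):
    # Color vector constraint: Euclidean distance between the two color vectors.
    color_dist = float(np.linalg.norm(np.asarray(candidate["color"]) -
                                      np.asarray(current["color"])))
    # Area constraint: relative difference between the target vehicle areas.
    area_diff = abs(candidate["area"] - current["area"]) / max(current["area"], 1e-9)
    # Circumscribed rectangle constraint: relative width/height difference.
    (cw, ch), (tw, th) = candidate["rect_wh"], current["rect_wh"]
    rect_diff = max(abs(cw - tw) / max(tw, 1e-9), abs(ch - th) / max(th, 1e-9))
    return (color_dist <= color_dist_thresh and
            area_diff <= area_diff_thresh and
            rect_diff <= rect_diff_thresh)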
10. An apparatus for intrusion detection in a forbidden area, comprising:
the vehicle driving image data acquisition module is used for acquiring vehicle driving image data by using the unmanned aerial vehicle;
the target vehicle track information determining module is used for determining target vehicle track information according to the vehicle running image data;
and the intrusion behavior determining module is used for determining whether the target vehicle has an intrusion behavior according to the target vehicle track information and a preset intrusion condition of the forbidden area.
11. A computer-readable medium, on which a computer program is stored which, when executed by a processor, implements the method for intrusion detection in a forbidden area according to any one of claims 1-9.
12. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for intrusion detection in a forbidden area according to any one of claims 1-9 when executing the computer program.
CN202011458952.XA 2020-12-11 2020-12-11 Method, device, medium and electronic equipment for intrusion detection in forbidden area Active CN112560664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011458952.XA CN112560664B (en) 2020-12-11 2020-12-11 Method, device, medium and electronic equipment for intrusion detection in forbidden area

Publications (2)

Publication Number Publication Date
CN112560664A (en) 2021-03-26
CN112560664B CN112560664B (en) 2023-08-01

Family

ID=75062316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011458952.XA Active CN112560664B (en) 2020-12-11 2020-12-11 Method, device, medium and electronic equipment for intrusion detection in forbidden area

Country Status (1)

Country Link
CN (1) CN112560664B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3361459A1 (en) * 2017-02-10 2018-08-15 Google LLC Method, apparatus and system for passive infrared sensor framework
CN108257152A (en) * 2017-12-28 2018-07-06 清华大学苏州汽车研究院(吴江) A kind of road intrusion detection method and system based on video
CN111325178A (en) * 2020-03-05 2020-06-23 上海眼控科技股份有限公司 Warning object detection result acquisition method and device, computer equipment and storage medium
CN111738240A (en) * 2020-08-20 2020-10-02 江苏神彩科技股份有限公司 Region monitoring method, device, equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112947446A (en) * 2021-02-07 2021-06-11 启迪云控(上海)汽车科技有限公司 Intelligent networking application scene automatic identification method, device, medium and equipment based on fully-known visual angle and feature extraction

Also Published As

Publication number Publication date
CN112560664B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN111506980B (en) Method and device for generating traffic scene for virtual driving environment
CN110920611B (en) Vehicle control method and device based on adjacent vehicles
CN112286206B (en) Automatic driving simulation method, system, equipment, readable storage medium and platform
CN111507160B (en) Method and apparatus for integrating travel images acquired from vehicles performing cooperative driving
KR102266996B1 (en) Method and apparatus for limiting object detection area in a mobile system equipped with a rotation sensor or a position sensor with an image sensor
CN110348332B (en) Method for extracting multi-target real-time trajectories of non-human machines in traffic video scene
CN112595337A (en) Obstacle avoidance path planning method and device, electronic device, vehicle and storage medium
CN103927508A (en) Target vehicle tracking method and device
CN113435237B (en) Object state recognition device, recognition method, and computer-readable recording medium, and control device
CN116778292B (en) Method, device, equipment and storage medium for fusing space-time trajectories of multi-mode vehicles
CN114519849A (en) Vehicle tracking data processing method and device and storage medium
CN112560664B (en) Method, device, medium and electronic equipment for intrusion detection in forbidden area
CN114360261B (en) Vehicle reverse running identification method and device, big data analysis platform and medium
CN110189537B (en) Parking guidance method, device and equipment based on space-time characteristics and storage medium
CN111126327A (en) Lane line detection method and system, vehicle-mounted system and vehicle
CN113177509B (en) Method and device for recognizing backing behavior
CN114220040A (en) Parking method, terminal and computer readable storage medium
US20200276984A1 (en) Server and Vehicle Control System
CN107463886B (en) Double-flash identification and vehicle obstacle avoidance method and system
CN114426030B (en) Pedestrian passing intention estimation method, device, equipment and automobile
CN115169588A (en) Electrographic computation space-time trajectory vehicle code correlation method, device, equipment and storage medium
CN115249407B (en) Indicator light state identification method and device, electronic equipment, storage medium and product
US20230419683A1 (en) Method and system for automatic driving data collection and closed-loop management
CN113762030A (en) Data processing method and device, computer equipment and storage medium
JP7147464B2 (en) Image selection device and image selection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant