CN113920706A - Abnormal event prevention and control method based on image recognition - Google Patents


Info

Publication number
CN113920706A
Authority
CN
China
Prior art keywords: event, characteristic, states, value, preset
Legal status: Pending
Application number
CN202110998802.6A
Other languages
Chinese (zh)
Inventor
郑新楠
贾熹滨
玄奇正
姚宇翔
王居川
刘子扬
武文琦
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202110998802.6A
Publication of CN113920706A

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 31/00 Predictive alarm systems characterised by extrapolation or other computation using updated historic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B 25/008 Alarm setting and unsetting, i.e. arming or disarming of the security system

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Alarm Systems (AREA)

Abstract

The invention provides an abnormal event prevention and control method based on image recognition. The method comprises the following steps: a terminal device acquires real-time image data and GPS data of a monitored scene collected by a monitoring camera, and preprocesses the image data and the GPS data to obtain, for each preset object of interest in the image data, a characteristic value corresponding to each time point; the terminal device analyzes the characteristic values of each preset object of interest at every two adjacent time points to obtain a plurality of intermediate characteristic states of the preset object of interest; the terminal device integrates a plurality of temporally consecutive intermediate characteristic states to obtain a characteristic event of the preset object of interest; and event prevention and control is performed according to a preset braking strategy matched with the characteristic event. The method is highly reliable and carries out fine-grained prevention and control of the characteristic events of preset objects of interest, so it accommodates richer event types and environmental conditions and improves the applicability, real-time performance and intelligence of abnormal event prevention and control.

Description

Abnormal event prevention and control method based on image recognition
Technical Field
The invention relates to the field of intelligent security, in particular to an abnormal event prevention and control method based on image recognition.
Background
A security protection system is defined in the Chinese national standards as the intrusion alarm systems, video security monitoring systems, entrance and exit control systems, BSV LCD video-wall systems, access-control and fire-protection systems, explosion-proof security inspection systems and the like that are built from security protection products and other related products for the purpose of maintaining public safety, or as an electronic system or network formed by combining or integrating these subsystems. Security protection means comprehensively protecting people, equipment, buildings or areas within a building or building complex (including its surrounding area) or within a specific place or area by means of human protection, technical protection, physical protection and other measures. In general, security protection mainly refers to technical protection, that is, achieving protection by adopting security-technology products and protective facilities.
Machine learning is a multidisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and many other disciplines. It studies how computers can simulate or implement human learning behaviour in order to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance. It is the core of artificial intelligence and the fundamental way to make computers intelligent.
Image recognition refers to the technology of using computers to process, analyze and understand images in order to recognize targets and objects in various patterns; it is a practical application of deep learning algorithms.
An intelligent security system is an important technical means of implementing security control and, with today's expanding security demands, is increasingly widely applied in the field of security technology. However, the security systems used in the prior art mainly rely on human visual judgement and lack intelligent analysis of the captured video content.
Disclosure of Invention
The invention provides an abnormal event prevention and control method based on image recognition, which intelligently analyzes the captured video content and carries out fine-grained prevention and control of the characteristic events of preset objects of interest; it thereby accommodates richer event types and environmental conditions and improves the applicability, real-time performance and intelligence of abnormal event prevention and control. The abnormal event prevention and control method based on image recognition provided by the invention comprises the following steps:
a terminal device acquires real-time image data and GPS data of a monitored scene collected by a monitoring camera, and preprocesses the image data and the GPS data to obtain, for each preset object of interest in the image data, a characteristic value corresponding to each time point;
the terminal device analyzes the characteristic values of the preset object of interest at every two adjacent time points to obtain a plurality of intermediate characteristic states of the preset object of interest;
the terminal device integrates a plurality of temporally consecutive intermediate characteristic value states to obtain a characteristic event of the preset object of interest;
and the terminal device performs event prevention and control according to a preset braking strategy matched with the characteristic event.
In the abnormal event prevention and control method based on image recognition, the preset attention object is a stored preset object model.
In the abnormal event prevention and control method based on image recognition described above, human bodies, vehicles and characteristic objects are identified according to the YOLOv4 neural network model and the frozen network model.
In the abnormal event prevention and control method based on image recognition described above, the monitored scene comprises a plurality of monitoring cameras, and the method further comprises detecting the preset object of interest at each monitoring camera, triangulating the position of the characteristic event from the real-time image data collected by each monitoring camera, and calculating the precise coordinates of the preset object of interest.
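Where two calibrated cameras both sight the same preset object of interest, its planar coordinates can be estimated by intersecting the two bearing rays. The sketch below is a minimal illustration of such a triangulation step, under assumptions not taken from the original disclosure (known camera positions, bearings in a shared ground frame, NumPy available):

```python
import numpy as np

def triangulate_position(cam1_pos, bearing1, cam2_pos, bearing2):
    """Estimate the planar coordinates of an object seen from two cameras.

    cam*_pos: (x, y) camera positions in a shared ground frame.
    bearing*: bearing angles (radians) from each camera to the object.
    Returns the intersection point, or None if the rays are nearly parallel.
    """
    p1 = np.asarray(cam1_pos, dtype=float)
    p2 = np.asarray(cam2_pos, dtype=float)
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    # Solve p1 + t1 * d1 == p2 + t2 * d2 for (t1, t2).
    A = np.column_stack((d1, -d2))
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # bearings (nearly) parallel: no reliable intersection
    t1, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t1 * d1

# Example: cameras at (0, 0) and (10, 0) both sighting the same object.
print(triangulate_position((0, 0), np.deg2rad(45), (10, 0), np.deg2rad(135)))
# -> approximately [5. 5.]
```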
In the above abnormal event prevention and control method based on image recognition, the preset feature value of the attention object includes: appearance and disappearance of a preset concern obtained from the real-time image data, and a position coordinate and a speed value obtained from the GPS data and triangulation data;
the terminal equipment analyzes and processes the preset attention object characteristic value at a single time point, and the state of the preset attention object is one of the following states: an appearance state or a disappearance state;
the terminal equipment analyzes and processes the preset object model corresponding to each two adjacent time points to obtain a plurality of intermediate characteristic value states of the preset attention object, and the method comprises the following steps:
the terminal device performs comparative analysis on the preset object models corresponding to each two adjacent time points to obtain a plurality of intermediate characteristic states of a preset attention object, wherein each intermediate characteristic state is specifically one of the following states: a speed increasing state or a speed decreasing state;
the terminal equipment integrates and processes a plurality of continuous intermediate characteristic value states in time to obtain a characteristic event of a preset attention object, and the method comprises the following steps:
the terminal device obtains a feature event of a preset attention object according to a difference value between an acceleration maximum value and an acceleration minimum value in a driving direction at a time point corresponding to a plurality of continuous intermediate feature states and one of the intermediate feature states, wherein each feature event is specifically one of the following: start, accelerate, decelerate, stop, hard start, hard accelerate, hard decelerate, or hard stop.
In the method for preventing and controlling an abnormal event based on image recognition, the comparing and analyzing the speed values corresponding to each two adjacent time points by the terminal device to obtain a plurality of intermediate characteristic states of the preset attention object includes:
for any two adjacent time points, a first time point T1 and a second time point T2 are recorded; if the speed value corresponding to T1 is V1, the speed value corresponding to T2 is V2, V2 is greater than V1, and the difference C1 between V2 and V1 is greater than a first threshold, the terminal device determines that the intermediate characteristic state is an acceleration state; or
If the V2 is less than the V1 and the difference C2 between the V1 and the V2 is greater than the first threshold, the terminal device determines that the intermediate characteristic state is a decelerating state;
the method for obtaining the feature event of the preset attention object by the terminal device according to the difference value between the maximum acceleration value and the minimum acceleration value in the motion direction of the time point corresponding to the plurality of continuous intermediate feature states and one of the intermediate feature states includes:
if a plurality of temporally continuous intermediate characteristic states are all in an acceleration state, the terminal device obtains speed increase accumulated quantities corresponding to the intermediate characteristic states, and when the speed increase accumulated quantities are determined to be larger than a second threshold value, whether the difference value between the maximum acceleration value and the minimum acceleration value of the T2 corresponding to one of the intermediate characteristic states is larger than a third threshold value is judged; if yes, judging whether the V1 corresponding to the first intermediate characteristic state is zero, if yes, determining that the characteristic event is an emergency starting event, and if not, determining that the characteristic event is an emergency acceleration event;
if not, judging whether the V1 corresponding to the first intermediate characteristic state is zero, if so, judging that the characteristic event is a starting event, and if not, determining that the characteristic event is an acceleration event;
or
If the plurality of intermediate characteristic states which are continuous in time are all in a deceleration state, the terminal device obtains the speed reduction accumulated amount corresponding to the plurality of intermediate characteristic states, and when the speed reduction accumulated amount is determined to be larger than a second threshold value, whether the difference value between the acceleration maximum value and the acceleration minimum value corresponding to one of the intermediate characteristic states is larger than a third threshold value is judged;
if so, judging whether the V2 corresponding to the last intermediate characteristic state is zero, if so, determining that the characteristic event is an emergency stop event, and if not, determining that the characteristic event is a rapid deceleration event;
if not, judging whether the V2 corresponding to the last intermediate characteristic state is zero, if so, judging that the characteristic event is a stop event, and if not, determining that the characteristic event is a deceleration event.
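As a concrete reading of the speed-based logic above, the following sketch first classifies the intermediate state between adjacent time points and then fuses a window of consecutive states into one of the eight speed-related characteristic events. The sampling interval, the threshold defaults (taken from the example values given later in the description) and the helper names are illustrative assumptions, not part of the original disclosure:

```python
def classify_speed_event(speeds, dt=1.0, first_thr=6.0,
                         second_thr=None, third_thr=7.5):
    """Minimal sketch of the speed-based event logic (illustrative only).

    speeds: speed values (m/s) at consecutive time points, dt seconds apart.
    With dt = 1 s, a speed difference of 6 m/s corresponds to 6 m/s^2.
    """
    if len(speeds) < 2:
        return None
    # Step 1: intermediate state between every two adjacent time points.
    states, accels = [], []
    for v1, v2 in zip(speeds, speeds[1:]):
        accels.append((v2 - v1) / dt)
        if v2 > v1 and v2 - v1 > first_thr:
            states.append("accelerating")
        elif v2 < v1 and v1 - v2 > first_thr:
            states.append("decelerating")
        else:
            states.append(None)
    if second_thr is None:
        # "number of seconds of the period multiplied by 9 m/s^2"
        second_thr = 9.0 * len(states) * dt
    accel_span = max(accels) - min(accels)  # max minus min acceleration

    # Step 2: fuse temporally consecutive states into a characteristic event.
    if all(s == "accelerating" for s in states):
        if speeds[-1] - speeds[0] > second_thr:        # cumulative speed gain
            sudden = accel_span > third_thr
            if speeds[0] == 0:                         # V1 of the first state
                return "hard start" if sudden else "start"
            return "hard accelerate" if sudden else "accelerate"
    elif all(s == "decelerating" for s in states):
        if speeds[0] - speeds[-1] > second_thr:        # cumulative speed drop
            sudden = accel_span > third_thr
            if speeds[-1] == 0:                        # V2 of the last state
                return "hard stop" if sudden else "stop"
            return "hard decelerate" if sudden else "decelerate"
    return None  # no speed-related characteristic event in this window

# Example: from rest to 40 m/s in 3 seconds with uneven acceleration.
print(classify_speed_event([0, 7, 27, 40]))  # -> "hard start"
```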
In the above abnormal event prevention and control method based on image recognition, the preset feature value of the attention object includes: obtaining a speed value and a direction of the object of interest according to the triangulation data and the GPS data;
the terminal device analyzes and processes the preset attention feature values corresponding to every two adjacent time points respectively to obtain a plurality of intermediate feature states of the preset attention, and the method comprises the following steps:
the terminal equipment integrates and processes a plurality of intermediate characteristic states which are continuous in time at every two adjacent time points, and obtains a characteristic event of a preset attention object, and the method comprises the following steps:
and a centripetal acceleration value is obtained from the speed value and the average angular velocity of rotation about the Z axis of the terrestrial coordinate system within the monitoring window at the time point corresponding to the intermediate state of the preset object of interest.
The terminal device obtains a feature event of a preset attention object according to a plurality of continuous intermediate feature states in time and the centripetal acceleration value of a time point corresponding to one of the intermediate feature states, wherein each feature event is specifically one of the following: left turning motion, right turning motion, sharp left turning motion, sharp right turning motion, left counter motion, right counter motion, sharp left counter motion, or sharp right counter motion.
In the method for preventing and controlling an abnormal event based on image recognition, the analyzing and processing the orientations corresponding to each two adjacent time points by the terminal device to obtain a plurality of intermediate feature states of a preset attention object includes:
aiming at any two adjacent time points, a first time point T1 and a second time point T2 are recorded; if the orientation corresponding to T1 is a first angle, the orientation corresponding to T2 is a second angle, the second angle is greater than the first angle, and the difference between the second angle and the first angle is greater than a fourth threshold, the terminal device determines that the intermediate characteristic state of the preset object of interest is an angle-increasing state relative to the terrestrial coordinate system; or
if the orientation corresponding to T1 is a first angle, the orientation corresponding to T2 is a second angle, the second angle is smaller than the first angle, and the absolute value of the difference between the second angle and the first angle is greater than the fourth threshold, the terminal device determines that the intermediate characteristic state of the preset object of interest is an angle-decreasing state relative to the terrestrial coordinate system;
the method includes that the terminal equipment obtains a characteristic event of a preset attention object according to a plurality of continuous intermediate characteristic states in time and a centripetal acceleration value of a time point corresponding to one of the intermediate characteristic states, and includes the following steps:
obtaining the centripetal acceleration value according to the average angular velocity value and the velocity value of the T2 rotating around the Z axis of the terrestrial coordinate system corresponding to the intermediate characteristic state;
if a plurality of temporally continuous intermediate characteristic states are angle increasing states, the terminal device obtains angle increasing cumulant corresponding to the intermediate characteristic states, and when the angle increasing cumulant is determined to be larger than a fifth threshold value, whether the angle increasing cumulant is larger than a sixth threshold value is judged, and the sixth threshold value is larger than the fifth threshold value;
if yes, judging whether the centripetal acceleration value of the T2 corresponding to the one of the intermediate characteristic states is larger than a seventh threshold value; if so, determining that the characteristic event is a characteristic event of sudden reverse right motion, and if not, determining that the characteristic event is a characteristic event of reverse right motion;
if not, judging whether the centripetal acceleration value of the T2 corresponding to one of the intermediate characteristic states is larger than the seventh threshold value; if so, determining that the characteristic event is the characteristic event of the quick right turning motion, and if not, determining that the characteristic event is the characteristic event of the right turning motion;
or
If a plurality of temporally continuous intermediate characteristic states are all angle reduction states, the terminal device obtains angle reduction accumulated amounts corresponding to the intermediate characteristic states, and when the angle reduction accumulated amounts are determined to be larger than a fifth threshold value, whether the angle reduction accumulated amounts are larger than a sixth threshold value is judged, and the sixth threshold value is larger than the fifth threshold value;
if yes, judging whether the centripetal acceleration value of the T2 corresponding to the one of the intermediate characteristic states is larger than a seventh threshold value; if yes, determining that the characteristic event is a sudden left reverse motion event, and if not, determining that the driving event is a left reverse motion event;
if not, judging whether the centripetal acceleration value of the T2 corresponding to one of the intermediate characteristic states is larger than the seventh threshold value; if yes, determining that the characteristic event is a quick left turning motion event, and if not, determining that the driving event is a left turning motion event.
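Analogously, the orientation-based logic can be sketched as deriving intermediate angle states from heading differences and then using the cumulative angle change together with the centripetal acceleration (a_c = omega * v) to decide between ordinary turns, sharp turns and reverse motions. The heading convention (degrees, earth frame), the yaw-rate input and the default thresholds are assumptions drawn from the example values later in the description:

```python
def classify_turn_event(headings, speed, mean_yaw_rate,
                        fourth_thr=30.0, fifth_thr=50.0,
                        sixth_thr=65.0, seventh_thr=4.0):
    """Minimal sketch of the orientation-based event logic (illustrative only).

    headings: orientation (degrees, earth frame) at consecutive time points.
    speed: speed value (m/s) at the relevant time point T2.
    mean_yaw_rate: average angular velocity about the earth Z axis (rad/s).
    """
    if len(headings) < 2:
        return None
    # Step 1: intermediate state between every two adjacent time points.
    states = []
    for h1, h2 in zip(headings, headings[1:]):
        if h2 > h1 and h2 - h1 > fourth_thr:
            states.append("angle_increasing")
        elif h2 < h1 and abs(h2 - h1) > fourth_thr:
            states.append("angle_decreasing")
        else:
            states.append(None)

    centripetal = abs(mean_yaw_rate) * speed   # a_c = omega * v
    sharp = centripetal > seventh_thr

    # Step 2: fuse consecutive states into a characteristic event.
    if all(s == "angle_increasing" for s in states):
        gain = headings[-1] - headings[0]      # cumulative angle increase
        if gain > fifth_thr:
            if gain > sixth_thr:               # beyond a completed turn: reverse motion
                return "sharp right reverse" if sharp else "right reverse"
            return "sharp right turn" if sharp else "right turn"
    elif all(s == "angle_decreasing" for s in states):
        drop = headings[0] - headings[-1]      # cumulative angle decrease
        if drop > fifth_thr:
            if drop > sixth_thr:
                return "sharp left reverse" if sharp else "left reverse"
            return "sharp left turn" if sharp else "left turn"
    return None

# Example: heading grows from 0 to 62 degrees at 10 m/s with a 0.5 rad/s yaw
# rate, giving a centripetal acceleration of 5 m/s^2 (> 4 m/s^2).
print(classify_turn_event([0, 31, 62], speed=10.0, mean_yaw_rate=0.5))
# -> "sharp right turn"
```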
In the method for preventing and controlling an abnormal event based on image recognition, after the terminal device integrates a plurality of temporally continuous intermediate feature states to obtain a feature event of a preset attention object, the method further includes:
storing the characteristic event to a prevention and control event queue;
judging whether a plurality of adjacent characteristic events in the prevention and control event queue meet preset conditions meeting braking strategies;
if yes, carrying out complex characteristic event monitoring on the plurality of adjacent characteristic events, and matching with a corresponding preset braking strategy to carry out event prevention and control.
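One minimal way to picture the prevention and control event queue is as a bounded buffer of simple characteristic events that is checked against composite patterns, each mapped to a preset braking strategy. The pattern table, strategy names and queue length below are hypothetical illustrations, not values from the original disclosure:

```python
from collections import deque

# Hypothetical composite patterns: adjacent simple events -> strategy name.
COMPOSITE_PATTERNS = {
    ("hard decelerate", "stop"): "send_remote_alarm",
    ("sharp right turn", "hard accelerate"): "log_event_and_notify_operator",
}

class PreventionControlQueue:
    """Stores characteristic events and fires a braking strategy when a
    window of adjacent events matches a preset composite pattern."""

    def __init__(self, execute_strategy, maxlen=50):
        self.events = deque(maxlen=maxlen)
        self.execute_strategy = execute_strategy  # callback into the terminal device

    def push(self, event):
        self.events.append(event)
        for size in (2, 3):                       # check short adjacent windows
            window = tuple(list(self.events)[-size:])
            strategy = COMPOSITE_PATTERNS.get(window)
            if strategy is not None:
                self.execute_strategy(strategy, window)

# Example usage with a stub strategy executor.
queue = PreventionControlQueue(lambda s, w: print(f"trigger {s} for {w}"))
for e in ["accelerate", "hard decelerate", "stop"]:
    queue.push(e)
# -> trigger send_remote_alarm for ('hard decelerate', 'stop')
```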
The abnormal event prevention and control method based on image recognition as described above further comprises threshold selection: when determining the intermediate characteristic state, the threshold is collected adaptively; when judging the accumulated variation, it is set manually; when customizing the braking strategy, it is set manually; and when judging an emergency event, it is set by combining a fixed threshold with a historical statistical threshold.
The abnormal event prevention and control method based on image recognition described above acquires, through a terminal device, real-time image data and GPS data of a monitored scene collected by a monitoring camera; preprocesses the image data and the GPS data to obtain, for each preset object of interest in the image data, a characteristic value corresponding to each time point; analyzes the characteristic values of the preset object of interest at every two adjacent time points to obtain a plurality of intermediate characteristic states of the preset object of interest; integrates a plurality of temporally consecutive intermediate characteristic states to obtain a characteristic event of the preset object of interest; and performs event prevention and control according to a preset braking strategy matched with the characteristic event. The method is highly reliable and carries out fine-grained prevention and control of the characteristic events of preset objects of interest, so it accommodates richer event types and environmental conditions and improves the applicability, real-time performance and intelligence of abnormal event prevention and control.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an image recognition implementation provided by an embodiment of the present invention;
FIG. 2 is a flowchart of an abnormal event prevention and control method according to an embodiment of the present invention;
FIG. 3 is a flowchart of an abnormal event monitoring process according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of an implementation of image recognition according to an embodiment of the present invention. The method provided by the embodiment can be executed by an abnormal event prevention and control detection device, the device can be realized by software and/or hardware, and the method can be realized by programming a computer. When the terminal equipment executes the method, the terminal equipment is fixedly connected with the monitoring equipment. As shown in fig. 1, the method may include:
the method comprises the steps that firstly, terminal equipment acquires a series of picture frame data in a continuous time.
And step two, the terminal equipment converts the digital image into a binary format to accelerate the data processing speed.
In step three, the terminal device performs a series of preprocessing operations on the binary data obtained in step two at the input side for data augmentation; this effectively mitigates the problem of small objects going undetected by the trained model, i.e., preset objects of interest at the edge of the camera view can still be recognized. The specific operations of this step include:
3.1 Mosaic data augmentation: the data are randomly scaled, randomly cropped and randomly arranged.
3.2 Adaptive picture scaling: the image is scaled and appropriate black borders are added so that the data match the input format expected by the algorithm, thereby improving processing speed (a minimal code sketch of this scaling step is given after item 3.3 below).
3.3 Adaptive anchor-box calculation: the data set provides initial anchor boxes of fixed length and width; predictions are made on this basis and compared with the ground truth, and the differences are then propagated back to update and iterate the network parameters.
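The adaptive picture scaling in item 3.2 can be sketched as a letterbox-style resize: keep the aspect ratio and pad with black borders to a square network input. The 640-pixel target size, the OpenCV usage and the assumption of a colour frame are illustrative choices, not values from the original disclosure:

```python
import cv2
import numpy as np

def adaptive_scale(img, target=640):
    """Resize keeping aspect ratio, then pad with black borders to a square
    target x target input. Returns the padded image plus the scale and offset
    needed to map detections back to the original frame (assumes a BGR frame).
    """
    h, w = img.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.zeros((target, target, 3), dtype=np.uint8)   # black borders
    top = (target - resized.shape[0]) // 2
    left = (target - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas, scale, (left, top)
```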
In step four, a backbone layer (comprising the Focus and CSP structures) in the algorithm performs a series of matrix operations on the data augmented in step three to extract features; the various states of the preset object of interest are then obtained and analyzed in the subsequent steps, and the abnormal events matched to the preset object of interest are judged/predicted.
4.1 Focus structure: the data are sliced; an input image is split into several lower-resolution slices, features are extracted, and the features are matched against the features of the stored preset object of interest to estimate whether the slice contains the preset object of interest (a sketch of this slicing is given after section 4.2 below).
4.2 CSP structure: this structure is used for load balancing on the terminal processing device. By designing dense blocks it increases the number of gradient paths, balances the computation of each layer and reduces memory traffic. In some scenarios, for example when pedestrians in a monitored area move quickly and the data acquisition interval is too large to capture the preset objects of interest at the edge of the view, the number of image frames acquired per second needs to be increased, and this structure balances the load on the terminal device well.
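The Focus-style slicing in section 4.1 can be pictured as taking every second pixel in four phase offsets and stacking the results along the channel axis, halving the spatial resolution without discarding pixel information before feature extraction. A minimal NumPy sketch, assuming a channel-first layout with even height and width:

```python
import numpy as np

def focus_slice(x):
    """Slice a (C, H, W) image into four phase-shifted sub-images and stack
    them on the channel axis, giving a (4C, H/2, W/2) tensor."""
    return np.concatenate(
        (x[:, ::2, ::2], x[:, 1::2, ::2], x[:, ::2, 1::2], x[:, 1::2, 1::2]),
        axis=0)

x = np.zeros((3, 608, 608), dtype=np.float32)
print(focus_slice(x).shape)  # (12, 304, 304)
```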
In step five, a neck layer (comprising FPN and PAN structures) in the algorithm performs up-sampling and down-sampling to refine the extracted features.
5.1 FPN structure: this structure works top-down; high-level features are fused with low-level features through up-sampling, so that strong high-level semantic features are propagated and the whole pyramid is enhanced. However, only semantic information is enhanced; localization information is not propagated (it is fused by the PAN structure). A predicted feature map is thereby obtained.
5.2 PAN structure: this structure works bottom-up; localization information from different levels is propagated upward through down-sampling and fused with the strong semantic features conveyed by the FPN structure, improving the propagation of low-level features. For example, three prediction feature maps are obtained through up-sampling, corresponding to anchor boxes of different sizes. The large anchor boxes identify the preset object of interest in the data with the help of the features of the medium and small anchor boxes, while the medium and small anchor boxes in turn identify partial features of the preset object of interest, such as a person's facial expression or limb movement, or the driving state of a vehicle (combined with the judgement of the trend of the preset object of interest in the subsequent steps). The identified features of the preset object of interest are compared with the features that satisfy the preset braking strategy, so that the event is judged/predicted, and action recognition of the preset object of interest is achieved when the threshold is triggered.
And step six, outputting the model prediction result through the following two operations in the step.
6.1 A Bounding Box (IoU-based) loss function is used for box regression, reducing the negative impact that arises when predicted boxes are evaluated only by the difference of their intersection and union.
6.2 When screening the target boxes, weighted NMS (non-maximum suppression) is used, so that details of overlapping preset objects of interest, say the faces of overlapping pedestrians, can still be predicted. Because the same preset object of interest has similar features, it is not recorded repeatedly.
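A sketch of weighted non-maximum suppression: overlapping candidate boxes are merged into a score-weighted average rather than simply discarded, so overlapping objects of interest keep more detail. The box format ([x1, y1, x2, y2]) and the IoU threshold are assumptions:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def weighted_nms(boxes, scores, iou_thr=0.5):
    """Merge overlapping boxes as a score-weighted average instead of
    discarding them, preserving detail for overlapping detections."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()[::-1]
    merged = []
    while order.size:
        top = order[0]
        overlaps = iou(boxes[top], boxes[order])
        group = order[overlaps >= iou_thr]          # includes `top` itself
        weights = scores[group][:, None]
        merged.append((boxes[group] * weights).sum(axis=0) / weights.sum())
        order = order[overlaps < iou_thr]
    return np.array(merged)

# Example: two heavily overlapping boxes are merged, the third is kept apart.
print(weighted_nms([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]],
                   [0.9, 0.6, 0.8]))
```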
In this image-recognition-based preset object identification method, human bodies, vehicles and characteristic objects are identified according to the YOLOv4 neural network model and the frozen network model.
Fig. 2 is a flowchart of an abnormal event prevention and control method according to an embodiment of the present invention. As shown in fig. 2, the method may include:
s201, the terminal equipment acquires real-time image data and GPS data of a monitoring scene acquired by a monitoring camera, and preprocesses the image data and the GPS data to obtain a characteristic value corresponding to each time point of each preset concern of the image data.
S202, the terminal equipment analyzes and processes the preset attention feature values corresponding to every two adjacent time points to obtain a plurality of intermediate feature states of the preset attention.
And S203, the terminal equipment integrates a plurality of continuous intermediate characteristic value states in time to obtain a characteristic event of a preset attention object.
And S204, the terminal equipment performs event prevention and control according to the preset braking strategy matched with the characteristic event.
Specifically, the terminal device is provided with a GPS module and a means of transmitting real-time image data to and from the monitoring device; the transmission may use a wired and/or wireless connection. During normal operation the terminal device acquires the corresponding real-time images and GPS data, and the acquired data characterize the state and position of the preset object of interest.
Further, in step 201, since the image data and the GPS data are acquired in real time in this embodiment, the data size is large, and therefore, the data needs to be preprocessed.
The process of preprocessing the image data acquired in real time may be:
step one, the terminal equipment in the monitoring area respectively processes the real-time/continuous images acquired by the terminal equipment through a model, so that an observation result of the terminal equipment is acquired, and the result comprises a real-time, continuous characteristic value and a continuous intermediate characteristic state of a preset attention object.
And step two, integrating the observation results obtained by the terminal devices in the step one to obtain the identification result of the preset attention object. Meanwhile, in the integration processing of the step, the accurate coordinates of the preset attention object are calculated according to the observation result of each terminal device by a triangulation method.
The preprocessing process of the GPS data may be:
and the terminal equipment is turned on, and the system automatically turns on the GPS. The terminal device may store and process the pre-set GPS data until no specific pre-set object of interest is detected, and when a specific pre-set object of interest is detected, the terminal device collects and processes the real GPS data at a higher frequency.
Alternatively, the GPS is not automatically turned on until no particular preset concern is detected.
And processing the raw data collected in each specified frequency into a group of characteristic values. The characteristic value may be, for example, the type, the number of detections, and the position coordinates of the preset object of interest.
In step 202, the terminal device analyzes and processes the preset feature values of the attention object corresponding to each two adjacent time points, so as to obtain a plurality of intermediate feature states of the preset attention object. The intermediate characteristic state may be, for example, a speed increasing state or a speed decreasing state.
In step 203, the terminal device integrates a plurality of temporally consecutive intermediate characteristic value states to obtain a characteristic event of the preset object of interest. For example, when a plurality of temporally consecutive intermediate characteristic states are all angle-increasing states and the cumulative angle increase is greater than a reference threshold, the characteristic event is determined to be a right-turn motion.
Optionally, when no preset concern is detected, then the relevant abnormal event monitoring is skipped.
Further, the embodiment also provides a threshold value selecting logic. When the intermediate characteristic state is determined, self-adaptive collection of a threshold value is adopted; when the accumulated variation is judged, artificial setting is adopted; when the braking strategy is customized, artificial setting is adopted; when the emergency event is judged, a fixed threshold value and a historical statistical threshold value are combined for setting.
Specifically, due to the environment diversity monitored by the terminal device, an initial threshold value is set first when each type of threshold value is set, and the threshold value is updated manually when the device runs for a period of time and adaptively collects data related to the threshold value.
For threshold data, take as an example the threshold on a quantity change used to judge a state. Step one: the user sets an initial accuracy parameter, e.g. 90%, and an initial threshold, e.g. 5 m/s². Step two: for the quantity values of several successive states during recording, every change whose magnitude exceeds 5 m/s² is recorded and stored as the variation set Y1. Step three: within the most recent states, the changes exceeding 5 m/s² that occur each time a confirmed (coherent) process takes place are computed and stored as Y2. Step four: the distributions of Y1 and Y2 are computed, and the new threshold X is found through a specific algorithm and analysis of the objective conditions. The requirement on X is that, with Z1 the number of recorded changes above X and Z2 the number of Y2 values above X, Z2/Z1 > 90%; such an X is taken as the new threshold.
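One possible reading of this adaptive threshold search is sketched below: Y1 holds all recorded change magnitudes that exceeded the initial 5 m/s² threshold, Y2 those observed during confirmed processes, and the new threshold X is the smallest candidate for which confirmed changes above X account for at least the accuracy parameter of all recorded changes above X. This interpretation of the original wording is an assumption:

```python
import numpy as np

def update_threshold(y1, y2, initial_thr=5.0, accuracy=0.90):
    """Search for a new threshold X such that Z2/Z1 > accuracy, where
    Z1 = count of Y1 values above X and Z2 = count of Y2 values above X.
    Falls back to the initial threshold if no candidate qualifies."""
    y1 = np.asarray(y1, dtype=float)
    y2 = np.asarray(y2, dtype=float)
    for x in np.sort(np.unique(np.concatenate((y1, y2)))):
        z1 = np.count_nonzero(y1 > x)
        z2 = np.count_nonzero(y2 > x)
        if z1 > 0 and z2 / z1 > accuracy:
            return float(x)          # smallest qualifying candidate
    return initial_thr
```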
For any two adjacent time points, a first time point T1 and a second time point T2 are recorded. If the speed value corresponding to T1 is V1, the speed value corresponding to T2 is V2, V2 is greater than V1, and the difference C1 between V2 and V1 is greater than a first threshold, the terminal device determines that the intermediate characteristic state is an accelerating state; or, if V2 is less than V1 and the difference C2 between V1 and V2 is greater than the first threshold, the terminal device determines that the intermediate characteristic state is a decelerating state.
In other words, with T1 the first second and T2 the second second, the accelerating state is determined when the speed in the second second is greater than the speed in the first second and the absolute value of the acceleration over the two seconds is greater than the first threshold; the decelerating state is determined when the speed in the second second is smaller than the speed in the first second and the absolute value of the acceleration over the two seconds is also greater than the first threshold.
The first threshold: an acceleration of 6 m/s². If the threshold were too small it would capture mere jitter, such as a brief touch of the throttle or the brake.
If a plurality of temporally consecutive intermediate characteristic states are all accelerating states, the terminal device obtains the cumulative speed increase corresponding to these intermediate characteristic states; when the cumulative speed increase is determined to be greater than the second threshold, it judges whether the difference between the maximum and minimum acceleration at T2 of one of the intermediate characteristic states is greater than the third threshold. If so, it judges whether V1 of the first intermediate characteristic state is zero; if so, the characteristic event is determined to be an emergency start event, and if not, an emergency acceleration event.
The second threshold: the cumulative acceleration, representing the average acceleration; it may be taken as the number of seconds in the period multiplied by 9 m/s².
The third threshold: 7.5 m/s².
V1: the speed in the first second.
V2: the speed at the end of the second (or of a given period).
For any two adjacent time points, a first time point T1 and a second time point T2 are recorded. If the orientation corresponding to T1 is a first angle, the orientation corresponding to T2 is a second angle, the second angle is greater than the first angle, and the difference between the second angle and the first angle is greater than a fourth threshold, the terminal device determines that the intermediate characteristic state of the preset object of interest is an angle-increasing state relative to the terrestrial coordinate system. If a plurality of temporally consecutive intermediate characteristic states are all angle-increasing states, the terminal device obtains the cumulative angle increase corresponding to these intermediate characteristic states and, when the cumulative angle increase is determined to be greater than the fifth threshold, judges whether it is greater than a sixth threshold, the sixth threshold being greater than the fifth threshold;
if not, judging whether the centripetal acceleration value of the T2 corresponding to one of the intermediate characteristic states is larger than the seventh threshold value; if so, determining that the characteristic event is the characteristic event of the quick right turning motion, and if not, determining that the characteristic event is the characteristic event of the right turning motion;
the fourth threshold value: 30 degrees, ensure a turn rather than a lane change.
A fifth threshold value: and 50 degrees, namely the lowest angle when the turning is finished, and large-amplitude lane changing is eliminated.
A sixth threshold value: and 65 degrees, ensuring that the turning is finished.
The seventh threshold value: 4m/s2
Furthermore, it is also possible to specify by configuration that only fixed thresholds or only historical statistical thresholds are used.
Fig. 3 is a flowchart of an abnormal event monitoring process according to an embodiment of the present invention. As shown in fig. 3, the process begins in block 301, where the terminal device acquires real-time image data and GPS data of the monitored scene collected by a monitoring camera. The terminal device can be connected to one or more monitoring cameras by wire or wirelessly; the monitoring cameras include, but are not limited to, ordinary monitoring cameras and infrared cameras.
Next, in block 302, the terminal device uses the data provided by the monitoring camera together with the data it collects itself to determine whether the detected data match a stored preset object of interest; as described above, the terminal device may perform the comparison by means of image recognition. If the object-of-interest features of the detected real-time picture are within a set predetermined range (e.g., 90% similarity), the terminal device determines that the detected preset object matches the stored model. If the preset target detected by the terminal device matches the stored model, the FIG. 3 process continues to block 303; otherwise it returns to block 301.
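The match in block 302 can be thought of as a similarity test between the detected object's feature vector and the stored preset object model. The cosine-similarity form and the feature vectors themselves are hypothetical; only the 90% threshold comes from the description:

```python
import numpy as np

def matches_stored_model(detected_feat, stored_feat, threshold=0.90):
    """Return True when the detected feature vector is at least `threshold`
    similar (cosine similarity) to the stored preset object model."""
    a = np.asarray(detected_feat, dtype=float)
    b = np.asarray(stored_feat, dtype=float)
    sim = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return sim >= threshold
```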
Next, in block 303, the terminal device obtains a feature value corresponding to each time point of each preset attention of the image data. As described above, the intermediate characteristic state, such as the speed increasing state or the speed decreasing state, can be obtained.
Next, in block 304, the terminal device integrates a plurality of temporally consecutive intermediate characteristic value states. In block 304 the terminal device additionally considers the situation in which the preset target object reappears after being lost: if the terminal device reacquires the information of the preset target object within the set intermittent interval (e.g., 5 seconds), the continuity of monitoring and recording is maintained.
Next, in block 305, the terminal device integrates a plurality of temporally consecutive intermediate characteristic value states to obtain a characteristic event of the preset object of interest. As described above, characteristic events such as a pedestrian flow exceeding a threshold or a vehicle suddenly changing course and then stopping can be obtained.
Next, in block 306, the terminal device compares the currently recorded characteristic event with the stored event models to determine whether the detected characteristic event matches a stored event model. If the detected characteristic event matches a stored event model (e.g., 80% similarity), the FIG. 3 process continues to block 307; otherwise it returns to block 301.
Next, in block 307, the terminal device matches a preset braking strategy according to the characteristic event. As mentioned above, countermeasures may include specially logging the characteristic event, sending a remote alarm, actuating other connected braking devices, and so on. The terminal device may trigger one or more countermeasures based on the monitored characteristic event to perform event prevention and control.
Next, in block 308, the terminal device determines whether the monitored characteristic event still matches the stored event model, i.e., whether the characteristic event is still occurring. If it still matches, the FIG. 3 process remains in block 308; otherwise the process continues to block 309.
In block 309, the terminal device may stop the countermeasure; for example, it stops sending warning information when the pedestrian flow falls below the threshold. After the countermeasure is deactivated, the FIG. 3 process returns to block 301; that is, the flowchart of FIG. 3 essentially illustrates a continuously running abnormal event monitoring process performed by the terminal device.
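Pulling the FIG. 3 blocks together, a skeleton of the continuous monitoring loop is sketched below; every callable is a placeholder for the corresponding stage described above, and the polling period is an arbitrary assumption:

```python
import time

def monitoring_loop(acquire_data, detect_objects, extract_feature_values,
                    derive_states, fuse_event, match_event_model,
                    start_countermeasure, stop_countermeasure, period=0.2):
    """Illustrative skeleton of the FIG. 3 abnormal event monitoring process."""
    active = None
    while True:
        frame, gps = acquire_data()                        # block 301
        objects = detect_objects(frame)                    # block 302
        if objects:
            values = extract_feature_values(objects, gps)  # block 303
            states = derive_states(values)                 # block 304
            event = fuse_event(states)                     # block 305
            if event and match_event_model(event):         # block 306
                if active is None:
                    active = start_countermeasure(event)   # block 307
            elif active is not None:                       # block 308: event over
                stop_countermeasure(active)                # block 309
                active = None
        elif active is not None:
            stop_countermeasure(active)                    # block 309
            active = None
        time.sleep(period)                                 # back to block 301
```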
The computers generally each include instructions executable by one or more computing devices, such as those identified above, for performing the blocks or steps of the processes described above. The computer-executable instructions may be compiled or interpreted from computer programs created using a variety of programming languages and/or techniques, including, without limitation, C, C++, Visual Basic, JavaScript, Perl, HTML, Python and the like, alone or in combination. Generally, a processor (e.g., a microprocessor) receives instructions, e.g., from a memory, a computer-readable medium, etc., and executes the instructions, thereby carrying out one or more processes, including one or more of the processes described herein. Such instructions and other data may be stored and transmitted using a variety of computer-readable media. A file in a computer is generally a collection of data stored on a computer-readable medium, such as a storage medium, a random access memory, or the like.
Computer-readable media include any media that participate in providing data (e.g., instructions) that may be read by a computer. Such media may take many forms, including but not limited to non-volatile media and volatile media. Non-volatile media include, for example, optical or magnetic disks and other persistent memory. Volatile media include dynamic random access memory (DRAM), which typically constitutes a main memory. Conventional forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, a magnetic disk, any other magnetic medium, a CD-ROM (compact disc read-only memory), a DVD (digital versatile disc), any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM (random access memory), a PROM (programmable read-only memory), an EPROM (erasable programmable read-only memory), a FLASH EEPROM (flash electrically erasable programmable read-only memory), any other memory chip or cartridge, or any other medium from which a computer can read.
With respect to the media, programs, systems, methods, etc., described herein, it will be understood that while the steps of such programs, etc., are described as occurring in a certain order, such programs may perform the operations with the described steps performed in an order other than the order described herein. It is further understood that certain steps may be performed simultaneously, that other steps may be added, or that certain steps described herein may be omitted. For example, in fig. 3, one or more steps may be omitted, or steps may be performed in a different order than shown in fig. 3. In other words, the description of the systems and/or programs herein is provided for purposes of illustrating certain embodiments and should not be construed in any way as limiting the subject matter of the present disclosure.
The above description of the present invention is intended to be illustrative. Various modifications, additions and substitutions for the specific embodiments described may be made by those skilled in the art without departing from the scope of the invention as defined in the accompanying claims.

Claims (8)

1. An abnormal event prevention and control method based on image recognition is characterized by comprising the following steps:
step 1, terminal equipment acquires real-time image data and GPS data of a monitoring scene acquired by a monitoring camera, and pre-processes the image data and the GPS data to obtain a characteristic value corresponding to each time point of each preset concern of the image data;
step 2, the terminal equipment analyzes and processes the preset attention feature values corresponding to every two adjacent time points to obtain a plurality of intermediate feature states of the preset attention;
step 3, the terminal equipment integrates a plurality of intermediate characteristic value states which are continuous in time to obtain a characteristic event of a preset attention object;
and 4, the terminal equipment performs event prevention and control according to the preset braking strategy matched with the characteristic event.
2. The method of claim 1, wherein the preset interest is a stored preset object model.
3. The method of claim 2, wherein the preset object model is characterized in that human bodies, vehicles and characteristic objects are identified according to the SSD neural network model and the frozen network model.
4. The method of claim 1, wherein the surveillance scene includes a plurality of surveillance cameras, and the method further comprises, at each surveillance camera, detecting the preset interest and triangulating the location of the characteristic event via real-time image data acquired by each surveillance camera.
5. The method of claim 1, wherein: the preset attention feature value comprises: appearance and disappearance of a preset concern obtained from the real-time image data, and a position coordinate and a speed value obtained from the GPS data and triangulation data;
the terminal equipment analyzes and processes the preset attention object characteristic value at a single time point, and the state of the preset attention object is one of the following states: an appearance state or a disappearance state;
the terminal equipment analyzes and processes the preset object model corresponding to each two adjacent time points to obtain a plurality of intermediate characteristic value states of the preset attention object, and the method comprises the following steps:
the terminal device performs comparative analysis on the preset object models corresponding to each two adjacent time points to obtain a plurality of intermediate characteristic states of a preset attention object, wherein each intermediate characteristic state is specifically one of the following states: a speed increasing state or a speed decreasing state;
the terminal equipment integrates and processes a plurality of continuous intermediate characteristic value states in time to obtain a characteristic event of a preset attention object, and the method comprises the following steps:
the terminal device obtains a feature event of a preset attention object according to a difference value between an acceleration maximum value and an acceleration minimum value in a driving direction at a time point corresponding to a plurality of continuous intermediate feature states and one of the intermediate feature states, wherein each feature event is specifically one of the following: start, accelerate, decelerate, stop, hard start, hard accelerate, hard decelerate, or hard stop.
6. The method according to claim 5, wherein the comparing and analyzing, by the terminal device, the respective corresponding speed values at every two adjacent time points to obtain a plurality of intermediate characteristic states of the preset attention includes:
for any two adjacent time points, recording a first time point T1 and a second time point T2, if the speed value corresponding to T1 is V1, the speed value corresponding to T2 is V2, V2 is greater than V1, and the difference C1 between V2 and V1 is greater than a first threshold, the terminal device determines that the intermediate characteristic state is an acceleration state; or
If the V2 is less than the V1 and the difference C2 between the V1 and the V2 is greater than the first threshold, the terminal device determines that the intermediate characteristic state is a decelerating state;
the method for obtaining the feature event of the preset attention object by the terminal device according to the difference value between the maximum acceleration value and the minimum acceleration value in the motion direction of the time point corresponding to the plurality of continuous intermediate feature states and one of the intermediate feature states includes:
if a plurality of temporally continuous intermediate characteristic states are all in an acceleration state, the terminal device obtains speed increase accumulated quantities corresponding to the intermediate characteristic states, and when the speed increase accumulated quantities are determined to be larger than a second threshold value, whether the difference value between the maximum acceleration value and the minimum acceleration value of the T2 corresponding to one of the intermediate characteristic states is larger than a third threshold value is judged; if yes, judging whether the V1 corresponding to the first intermediate characteristic state is zero, if yes, determining that the characteristic event is an emergency starting event, and if not, determining that the characteristic event is an emergency acceleration event;
if not, judging whether the V1 corresponding to the first intermediate characteristic state is zero, if so, judging that the characteristic event is a starting event, and if not, determining that the characteristic event is an acceleration event; or if the plurality of intermediate characteristic states which are continuous in time are all in a deceleration state, the terminal device obtains the speed reduction accumulated amount corresponding to the plurality of intermediate characteristic states, and when the speed reduction accumulated amount is determined to be larger than a second threshold value, whether the difference value between the acceleration maximum value and the acceleration minimum value corresponding to one of the intermediate characteristic states is larger than a third threshold value is judged;
if so, judging whether the V2 corresponding to the last intermediate characteristic state is zero, if so, determining that the characteristic event is an emergency stop event, and if not, determining that the characteristic event is a rapid deceleration event;
if not, judging whether the V2 corresponding to the last intermediate characteristic state is zero, if so, judging that the characteristic event is a stop event, and if not, determining that the characteristic event is a deceleration event;
the first threshold value: acceleration of 6m/s2
The second threshold value: an accumulated amount of acceleration representing an average of the accelerations;
the third threshold value: 7.5m/s2
V1: speed of the first second
V2: the second or the speed at the end of a certain period.
7. The method of claim 1, wherein the preset attention feature value comprises: obtaining a speed value and a direction of the object of interest according to the triangulation data and the GPS data;
the terminal device analyzes and processes the preset attention feature values corresponding to every two adjacent time points respectively to obtain a plurality of intermediate feature states of the preset attention, and the method comprises the following steps:
the terminal equipment integrates and processes a plurality of intermediate characteristic states which are continuous in time at every two adjacent time points, and obtains a characteristic event of a preset attention object, and the method comprises the following steps:
obtaining a centripetal acceleration value according to the average value and the speed value of the angular speed rotating around the Z axis of the terrestrial coordinate system in the monitoring range of the time point corresponding to the intermediate state of the preset attention object;
the terminal device obtains a feature event of a preset attention object according to a plurality of continuous intermediate feature states in time and the centripetal acceleration value of a time point corresponding to one of the intermediate feature states, wherein each feature event is specifically one of the following: left turning motion, right turning motion, sharp left turning motion, sharp right turning motion, left counter motion, right counter motion, sharp left counter motion, or sharp right counter motion.
8. The method according to claim 7, wherein the analyzing, by the terminal device, of the respective orientations at every two adjacent time points to obtain a plurality of intermediate feature states of the preset object of interest comprises:
for any two adjacent time points, denoted a first time point T1 and a second time point T2: if the orientation corresponding to T1 is a first angle, the orientation corresponding to T2 is a second angle, the second angle is larger than the first angle, and the difference between the second angle and the first angle is larger than a fourth threshold value, the terminal device determines that the intermediate characteristic state of the preset object of interest is an angle-increasing state relative to the terrestrial coordinate system; or
if the orientation corresponding to T1 is a first angle, the orientation corresponding to T2 is a second angle, the second angle is smaller than the first angle, and the absolute value of the difference between the second angle and the first angle is larger than the fourth threshold value, the terminal device determines that the intermediate characteristic state of the preset object of interest is an angle-decreasing state relative to the terrestrial coordinate system;
wherein the obtaining, by the terminal device, of a feature event of the preset object of interest according to the plurality of temporally continuous intermediate feature states and the centripetal acceleration value at the time point corresponding to one of the intermediate feature states comprises the following steps:
obtaining the centripetal acceleration value from the average angular velocity about the Z axis of the terrestrial coordinate system and the speed value at the T2 corresponding to the intermediate characteristic state;
if a plurality of temporally continuous intermediate characteristic states are all angle-increasing states, the terminal device obtains the angle-increase accumulated amount corresponding to the intermediate characteristic states, and when the angle-increase accumulated amount is determined to be larger than a fifth threshold value, judging whether the angle-increase accumulated amount is larger than a sixth threshold value, the sixth threshold value being larger than the fifth threshold value;
if so, judging whether the centripetal acceleration value at the T2 corresponding to the one of the intermediate characteristic states is larger than a seventh threshold value; if so, determining that the characteristic event is a sharp right reverse motion event, and if not, determining that the characteristic event is a right reverse motion event;
if not, judging whether the centripetal acceleration value at the T2 corresponding to the one of the intermediate characteristic states is larger than the seventh threshold value; if so, determining that the characteristic event is a sharp right turning motion event, and if not, determining that the characteristic event is a right turning motion event; or, if the plurality of temporally continuous intermediate characteristic states are all angle-decreasing states, the terminal device obtains the angle-decrease accumulated amount corresponding to the intermediate characteristic states, and when the angle-decrease accumulated amount is determined to be larger than the fifth threshold value, judging whether the angle-decrease accumulated amount is larger than the sixth threshold value, the sixth threshold value being larger than the fifth threshold value;
if so, judging whether the centripetal acceleration value at the T2 corresponding to the one of the intermediate characteristic states is larger than the seventh threshold value; if so, determining that the characteristic event is a sharp left reverse motion event, and if not, determining that the characteristic event is a left reverse motion event;
if not, judging whether the centripetal acceleration value at the T2 corresponding to the one of the intermediate characteristic states is larger than the seventh threshold value; if so, determining that the characteristic event is a sharp left turning motion event, and if not, determining that the characteristic event is a left turning motion event;
the fourth threshold value: 30 degrees, ensuring that a turn rather than a lane change is detected;
the fifth threshold value: 50 degrees, the minimum accumulated angle for a completed turn, excluding large-amplitude lane changes;
the sixth threshold value: 65 degrees, ensuring that the turn has been completed;
the seventh threshold value: 4 m/s².
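Taken together, claims 7 and 8 describe a second decision routine driven by heading changes rather than speed changes. The sketch below is an illustrative Python rendering under stated assumptions: each intermediate state stores the signed heading change of its interval and the centripetal acceleration at its T2, the last state in the run is taken as "the one of the intermediate characteristic states", and the type names and event strings are invented for the example.

```python
# Illustrative sketch only; AngleState, the sign convention (+ = heading increase = right)
# and the returned event strings are assumptions, not identifiers from the patent.
from typing import List, NamedTuple, Optional

FOURTH_THRESHOLD = 30.0    # deg, per-interval heading change that marks a turning state
FIFTH_THRESHOLD = 50.0     # deg, minimum accumulated angle for a completed turn
SIXTH_THRESHOLD = 65.0     # deg, accumulated angle above which the turn is a reverse motion
SEVENTH_THRESHOLD = 4.0    # m/s^2, centripetal acceleration marking a "sharp" event

class AngleState(NamedTuple):
    delta_deg: float           # signed heading change over the interval (deg)
    centripetal_at_t2: float   # centripetal acceleration at the interval's T2 (m/s^2)

def classify_turn_event(states: List[AngleState]) -> Optional[str]:
    """Map a run of angle-increasing or angle-decreasing states to a feature event."""
    deltas = [s.delta_deg for s in states]
    if all(d > FOURTH_THRESHOLD for d in deltas):        # angle-increasing run -> right
        side, accumulated = "right", sum(deltas)
    elif all(d < -FOURTH_THRESHOLD for d in deltas):     # angle-decreasing run -> left
        side, accumulated = "left", -sum(deltas)
    else:
        return None
    if accumulated <= FIFTH_THRESHOLD:
        return None                                      # below a completed turn (lane change)
    kind = "reverse" if accumulated > SIXTH_THRESHOLD else "turning"
    sharp = "sharp " if states[-1].centripetal_at_t2 > SEVENTH_THRESHOLD else ""
    return f"{sharp}{side} {kind} motion"

# e.g. two adjacent states of +35 deg each, with 5 m/s^2 of centripetal acceleration at the last T2:
# classify_turn_event([AngleState(35, 3), AngleState(35, 5)]) -> "sharp right reverse motion"
```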
9. The method according to claim 1, wherein, after the terminal device performs the integration processing on the plurality of temporally continuous intermediate feature states to obtain the feature event of the preset object of interest, the method further comprises:
storing the characteristic event in a prevention and control event queue;
judging whether a plurality of adjacent characteristic events in the prevention and control event queue meet a preset condition of the braking strategy; and
if so, performing complex characteristic event monitoring on the plurality of adjacent characteristic events, and performing event prevention and control according to the matched preset braking strategy.
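The claim above layers a queue on top of the per-event classification so that composite (complex) events spanning several adjacent feature events can be matched to a braking strategy. The sketch below is only a plausible rendering: the queue length, the composite patterns, and the strategy names are invented for illustration and are not specified by the patent.

```python
# Illustrative sketch only; queue size, COMPOSITE_PATTERNS and strategy names are assumptions.
from collections import deque
from typing import Deque, Dict, Optional, Tuple

# Prevention-and-control event queue holding the most recent feature events.
event_queue: Deque[str] = deque(maxlen=16)

# Hypothetical composite feature events: a sequence of adjacent feature events
# mapped to a preset braking strategy.
COMPOSITE_PATTERNS: Dict[Tuple[str, ...], str] = {
    ("emergency stop", "emergency start"): "raise an on-site alarm",
    ("sharp right turning motion", "emergency acceleration"): "notify the control centre",
}

def push_and_match(event: str) -> Optional[str]:
    """Store a feature event, then check adjacent events against the preset conditions."""
    event_queue.append(event)
    recent = tuple(event_queue)
    for pattern, strategy in COMPOSITE_PATTERNS.items():
        n = len(pattern)
        if len(recent) >= n and recent[-n:] == pattern:
            return strategy   # matched: apply the corresponding preset braking strategy
    return None
```

In this reading, event prevention and control reduces to a pattern match over the tail of the queue; richer matching (for example, with time windows between adjacent events) would fit the same structure.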
CN202110998802.6A 2021-08-28 2021-08-28 Abnormal event prevention and control method based on image recognition Pending CN113920706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110998802.6A CN113920706A (en) 2021-08-28 2021-08-28 Abnormal event prevention and control method based on image recognition

Publications (1)

Publication Number Publication Date
CN113920706A true CN113920706A (en) 2022-01-11

Family

ID=79233483

Country Status (1)

Country Link
CN (1) CN113920706A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101025862A (en) * 2007-02-12 2007-08-29 吉林大学 Video based mixed traffic flow parameter detecting method
CN101388145A (en) * 2008-11-06 2009-03-18 北京汇大通业科技有限公司 Auto alarming method and device for traffic safety
CN104282150A (en) * 2014-09-29 2015-01-14 北京汉王智通科技有限公司 Recognition device and system of moving target
CN106127883A (en) * 2016-06-23 2016-11-16 北京航空航天大学 driving event detection method
CN107622669A (en) * 2017-10-25 2018-01-23 哈尔滨工业大学 A kind of method for identifying right turning vehicle and whether giving precedence to pedestrian
CN109493642A (en) * 2018-10-24 2019-03-19 初速度(苏州)科技有限公司 Vehicle-mounted shooting device with anti-collision early warning function
JP2020013206A (en) * 2018-07-13 2020-01-23 日本ユニシス株式会社 Device for detecting two-wheeled vehicle from moving image/camera, program, and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20220111