CN110674753A - Theft early warning method, terminal device and storage medium - Google Patents

Theft early warning method, terminal device and storage medium

Info

Publication number
CN110674753A
CN110674753A (application CN201910910799.0A)
Authority
CN
China
Prior art keywords
image
theft
behavior
classification model
early warning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910910799.0A
Other languages
Chinese (zh)
Inventor
张一�
邵泉铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Rui Yun Jie Technology Co Ltd
Original Assignee
Chengdu Rui Yun Jie Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Rui Yun Jie Technology Co Ltd filed Critical Chengdu Rui Yun Jie Technology Co Ltd
Priority to CN201910910799.0A
Publication of CN110674753A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Alarm Systems (AREA)

Abstract

The invention relates to a theft behavior early warning method, a terminal device and a storage medium, applied to the technical field of behavior detection. The method comprises the following steps: acquiring an image from a shot video; judging whether a person exists in the image according to a preset algorithm; if so, discriminating the behavior of the person in the image through a pre-trained theft behavior classification model; and if the behavior belongs to theft behavior, sending alarm information and image information to the user. Through the above technical scheme, the management efficiency of the monitored area is improved and the labor cost is reduced; in practical application, the identification process is simpler and more convenient, and the method can be used on most devices.

Description

Theft early warning method, terminal device and storage medium
Technical Field
The invention relates to the technical field of behavior detection, in particular to a theft behavior early warning method, terminal equipment and a storage medium.
Background
Articles such as building materials are often stored on construction sites or in warehouses. Because the stacking area is wide and the stacking positions are scattered, managing construction sites, warehouses and other areas used for storing articles is difficult, and article theft occurs frequently.
In the prior art, management is strengthened in two ways to prevent theft. On the one hand, the number of management personnel is increased and patrols are made more frequent; however, this is inefficient and raises labor cost. On the other hand, more cameras are installed to achieve all-round monitoring, and a posture estimation algorithm is applied to the images captured by the cameras to analyze the posture of each person; however, posture analysis requires a huge posture database, since the posture of the person in the image must be compared with the data in the database to judge whether a theft behavior exists, which makes the identification process complex and the implementation hard to popularize.
Disclosure of Invention
In view of this, a theft behavior early warning method, a terminal device and a storage medium are provided to solve the problems in the prior art that management efficiency against theft is low, the identification process is complex, and the implementation is not easy to popularize.
In order to achieve the purpose, the invention adopts the following technical scheme:
In a first aspect, a theft early warning method is provided, the method comprising: acquiring an image from a shot video; judging whether a person exists in the image according to a preset algorithm; if so, discriminating the behavior of the person in the image through a pre-trained theft behavior classification model; and if the behavior belongs to theft behavior, sending alarm information and image information to the user.
Further, the determining whether a person is present in the image according to a preset algorithm includes: determining a foreground variation value of the image through a background modeling algorithm; and comparing the foreground change value with a preset change threshold, and if the foreground change value is larger than the preset change threshold, judging whether a person exists in the image through an object detection algorithm.
Further, before determining whether there is a person in the image according to a preset algorithm, the method further includes: inputting the image into a pre-constructed daytime classification model and classifying the image; and if the image is a nighttime image, performing image enhancement on the image.
Further, the inputting the image into a pre-constructed daytime classification model to classify the image includes: identifying a brightness value of the image; if the brightness value is smaller than a preset brightness threshold, the image is a nighttime image; and if the brightness value is greater than or equal to the preset brightness threshold, the image is a daytime image.
Further, the distinguishing the behavior of the person in the image through a pre-trained theft behavior classification model includes: dividing an area containing a person in the image; inputting the divided images into a pre-trained theft behavior classification model; and sequentially judging the behaviors of the people in the divided areas.
Further, the method further comprises: marking the regions that have been discriminated.
Further, the training process of the theft classification model comprises the following steps: obtaining a sample image and marking the sample image to obtain a normal behavior sample image and a theft behavior sample image; and inputting the normal behavior sample image and the theft behavior sample image into a pre-constructed deep learning model for training to obtain the theft behavior classification model.
Further, the construction process of the daytime classification model comprises the following steps: collecting a day image and a night image, and taking the day image and the night image as training sample images; and inputting the training sample image into a pre-constructed deep learning model for training to obtain the daytime classification model.
In a second aspect, a theft early warning device is provided, the device comprising: an acquisition module, configured to acquire an image from a shot video; a first judging module, configured to judge whether a person exists in the image according to a preset algorithm; a second judging module, configured to discriminate, if a person exists, the behavior of the person in the image through a pre-trained theft behavior classification model; and a sending module, configured to send alarm information and image information to the user if the behavior belongs to theft behavior.
In a third aspect, a terminal device is provided, the device comprising: a processor, and a memory connected to the processor; the memory is configured to store a computer program at least for performing the theft early warning method of the first aspect; and the processor is configured to call and execute the computer program in the memory.
In a fourth aspect, a storage medium is provided, where a computer program is stored; when the computer program is executed by a processor, the steps of the theft early warning method of the first aspect are implemented.
According to the above technical scheme, the video shot by the camera is first processed frame by frame to obtain the images in the video; then, whether an image contains a person is judged according to a preset algorithm, and if so, the behavior of each person is discriminated in turn through the pre-trained theft behavior classification model to judge whether it belongs to theft behavior; if it does, alarm information and image information are sent to the user. Because a trained theft behavior classification model is used, the demand on data resources is reduced; in the management of the area, management efficiency is improved and labor cost is reduced, and in practical application the identification process is simpler and more convenient and can be used on most devices.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a theft early warning method according to an embodiment of the present invention;
fig. 2 is a flowchart of a theft early warning method according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a theft early-warning device according to another embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a terminal device according to still another embodiment of the present invention.
Reference numerals: the device comprises an acquisition module-301, a first judgment module-302, a second judgment module-303, a sending module-304, a processor-401 and a memory-402.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
Examples
Fig. 1 is a flowchart of a theft early-warning method according to an embodiment of the present invention, where the method may be executed by the theft early-warning apparatus according to the embodiment of the present invention, and the apparatus may be implemented in a software and/or hardware manner. As shown in fig. 1, the method may specifically include the following steps:
s101, acquiring images in the shot video.
Specifically, the theft early warning method provided by this embodiment can be applied, in practice, to the management of construction sites, warehouses and other areas used for storing articles. First, a plurality of cameras can be arranged in the area so that the monitoring range has no blind spot, and the cameras shoot the monitored area in real time. The method then obtains the video shot by the cameras, processes it frame by frame to obtain the image information of each frame, and analyzes the obtained images in time to judge whether theft exists in the monitored area. By acquiring images from the shot video, the area can be monitored in real time, its current state can be analyzed, judgments can be made promptly, and the management efficiency of the area is improved.
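As an illustrative, non-limiting sketch of the frame-by-frame acquisition described above (the stream URL, sampling interval, and function name below are assumptions of this example rather than details of the disclosure), the step might be implemented with OpenCV roughly as follows:

```python
import cv2

def frames_from_stream(source="rtsp://192.168.1.10/stream", every_n=5):
    """Yield every n-th frame of the video shot by a monitoring camera."""
    cap = cv2.VideoCapture(source)   # camera stream or video file readable by OpenCV
    index = 0
    while cap.isOpened():
        ok, frame = cap.read()       # one frame of the shot video (BGR image)
        if not ok:
            break
        if index % every_n == 0:
            yield frame              # hand the frame to the later analysis steps
        index += 1
    cap.release()
```

Sampling every n-th frame keeps the analysis close to real time without having to process every single frame.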
And S102, judging whether a person exists in the image according to a preset algorithm.
Specifically, after the image in the captured video is acquired, whether a person exists in the image is judged according to a preset algorithm. In practical application, theft can only occur when a person is detected; if no person exists in the image, no theft can occur, so when the acquired image is identified, it is first necessary to judge whether a person exists in it. As a specific example, when an animal or another moving object (such as a bag blown by the wind) appears in the image, its shape differs from that of a person; an object detection algorithm therefore analyzes the shape of the object in the current image and determines whether the target object is a person, which improves recognition accuracy.
And S103, if yes, distinguishing the behaviors of the people in the image through a pre-trained theft behavior classification model.
Specifically, when a person is detected in the image, the behavior of that person needs to be further judged. In practical application, the image is input into a pre-trained theft behavior classification model, which classifies the person's behavior into two types: normal behavior, such as ordinary movement or standing still, and theft behavior, such as staying at one position for a long time with many body movements. Training the theft behavior classification model mainly involves collecting a large number of normal behavior images and theft behavior images; the trained model can then divide new images into normal behavior images or theft behavior images according to the behavior of the people in them, so that in practice the behavior of people in the image is judged more quickly and accurately.
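The disclosure does not fix the network architecture of the theft behavior classification model. As one hedged sketch of the two-class discrimination step, a small convolutional classifier could be applied to a person region as follows; the ResNet-18 backbone, the input size, the label order, and the weights file theft_cls.pth are assumptions of this example:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

LABELS = ["normal", "theft"]          # assumed ordering of the two behavior classes

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def load_classifier(weights_path="theft_cls.pth"):
    """Load a two-class behavior classifier (hypothetical weights file)."""
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, len(LABELS))   # two-class head
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    return model.eval()

@torch.no_grad()
def classify_behavior(model, person_crop_bgr):
    """Return (label, confidence) for one person region of the image."""
    rgb = person_crop_bgr[:, :, ::-1].copy()     # OpenCV BGR -> RGB
    x = preprocess(rgb).unsqueeze(0)
    probs = torch.softmax(model(x), dim=1)[0]
    return LABELS[int(probs.argmax())], float(probs.max())
```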
And S104, if the behavior belongs to the stealing behavior, sending alarm information and image information to the user.
Specifically, after the behavior of the person in the image has been classified by the theft behavior classification model in S103, if the behavior is judged to belong to theft, the controller of the terminal device uploads the current image information to the background server to preserve important evidence. In addition, when theft is recognized, the terminal device can send alarm information to the user in real time and can also send the current image information to the user's mobile phone by wireless transmission, so that the user learns of the situation in time and reduces losses by alarming or promptly checking the site. Data transmission between the terminal device in the monitored area and the user's mobile phone can use a Bluetooth connection or short messages: the terminal device sends information to the user's mobile phone according to the set phone number, and wireless transmission makes the information transfer faster.
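One possible, non-authoritative shape of this alarm step is sketched below; the HTTP endpoints, field names, and notification call are placeholders standing in for the evidence upload to the background server and the short-message or push notification to the user's mobile phone described above:

```python
import cv2
import requests

def send_alarm(frame_bgr, phone_number, server="https://backend.example.com"):
    """Upload the evidence image and notify the user (assumed backend API)."""
    ok, jpg = cv2.imencode(".jpg", frame_bgr)
    if not ok:
        return False
    # Upload the current image to the background server to preserve evidence.
    requests.post(f"{server}/api/theft-evidence",
                  files={"image": ("evidence.jpg", jpg.tobytes(), "image/jpeg")},
                  timeout=5)
    # Notify the user; a real deployment would call an SMS gateway or push service here.
    requests.post(f"{server}/api/notify",
                  json={"to": phone_number, "text": "Suspected theft detected"},
                  timeout=5)
    return True
```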
According to the above technical scheme, the video shot by the camera is first processed frame by frame to obtain the images in the video; then, whether an image contains a person is judged according to a preset algorithm, and if so, the behavior of each person is discriminated in turn through the pre-trained theft behavior classification model to judge whether it belongs to theft behavior; if it does, alarm information and image information are sent to the user. Because a trained theft behavior classification model is used, the demand on data resources is reduced; in the management of the area, management efficiency is improved and labor cost is reduced, and in practical application the identification process is simpler and more convenient and can be used on most devices.
Fig. 2 is a flowchart of a theft early warning method according to another embodiment of the present invention; this embodiment optimizes the individual steps of the above embodiment. Referring to fig. 2, the method may specifically include the following steps:
s201, acquiring an image in the shot video.
S202, inputting the image into a pre-constructed daytime classification model, and classifying the image.
Specifically, in actual shooting, both daytime and nighttime conditions must be considered, so the captured images are divided into daytime images and nighttime images. A daytime image has strong light, and the behavior of people in it is easy to recognize; a nighttime image has weak light and needs to be processed to improve the accuracy of image recognition. Therefore, after the images in the captured video are obtained, they are classified by type: the acquired images are input into the daytime classification model, which separates daytime images from nighttime images so that the nighttime images can be processed separately.
Further, the inputting the image into a pre-constructed daytime classification model to classify the image includes: identifying a brightness value of the image; if the brightness value is smaller than a preset brightness threshold, the image is a nighttime image; and if the brightness value is greater than or equal to the preset brightness threshold, the image is a daytime image.
Specifically, the camera shoots at night as well as in the daytime, so the captured images are divided into daytime images and nighttime images. For the specific classification process, in this embodiment the brightness value of the current image is first recognized with an existing image recognition technique, and a brightness threshold is preset according to the daytime classification model. The recognized brightness value is compared with the preset brightness threshold: if it is smaller than the threshold, the light in the image is weak and the image is a nighttime image; if it is greater than or equal to the threshold, the light is strong and the image is a daytime image. Alternatively, images can be classified by shooting time: for example, with a set period of 7:00-18:00, the light in this period is strong, so images shot within it are treated as daytime images by default, while images shot in other periods are nighttime images. In severe weather, however, images shot between 7:00 and 18:00 may still have insufficient light; in that case the brightness value can be recognized by the method of this embodiment and the image type judged from the brightness value.
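A minimal sketch of the brightness comparison described above follows; averaging the grey-scale image and the numeric threshold value are assumptions of this example, the actual preset brightness threshold being determined by the trained daytime classification model:

```python
import cv2
import numpy as np

BRIGHTNESS_THRESHOLD = 60.0   # assumed placeholder, on a 0-255 grey scale

def is_night_image(frame_bgr, threshold=BRIGHTNESS_THRESHOLD):
    """Classify a frame as nighttime (True) or daytime (False) by mean brightness."""
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return float(np.mean(grey)) < threshold      # below the threshold -> nighttime image
```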
S203, if the image is a night image, image enhancement is carried out on the image.
Specifically, when the captured image is a nighttime image, its light is weak, and it may not be recognized quickly and accurately; therefore, the classified nighttime image needs to be enhanced. In the prior art, nighttime images are usually captured with a starlight-level camera or with infrared monitoring so that the captured image can be accurately recognized by the algorithm, but starlight-level cameras are expensive and difficult to deploy over a large area. In this embodiment, the classified nighttime images are enhanced with an existing image enhancement algorithm to meet the recognition requirement; the algorithm may be a frequency-domain or spatial-domain method, or the edges of the image may be enhanced to improve its sharpness.
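As a hedged example of a spatial-domain method, the sketch below applies contrast-limited adaptive histogram equalization (CLAHE) to the luminance channel of a nighttime image; the embodiment only requires some existing image enhancement algorithm, so this particular choice and its parameters are assumptions:

```python
import cv2

def enhance_night_image(frame_bgr, clip_limit=3.0, tile=(8, 8)):
    """Brighten and add contrast to a nighttime frame before recognition."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    lab = cv2.merge((clahe.apply(l), a, b))      # equalise only the L (luminance) channel
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```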
And S204, determining a foreground change value of the image through a background modeling algorithm.
Specifically, in practical application, the monitored area is large and parts of it are in the open air, so people, animals, or objects blown by the wind may appear in the captured image. Therefore, when the image is identified, the foreground change value a of the image is determined by a background modeling algorithm; when a is zero, there is no moving object in the image at that moment, and the image does not need to be identified.
S205, comparing the foreground variation value with a preset variation threshold, and if the foreground variation value is larger than the preset variation threshold, judging whether a person exists in the image through an object detection algorithm.
Specifically, different target objects produce different foreground change values: when a small object such as a cat or a dog appears in the image, the foreground change value a is small, whereas when a person passes by, a is large. A change threshold can therefore be preset to improve the accuracy and speed of image recognition. When a is greater than or equal to the preset change threshold, a person may have appeared in the image, and whether a person actually exists is then further determined by an object detection algorithm, which makes image recognition more accurate; when a is smaller than the preset change threshold, no person has passed through the image, and the image does not need to be identified.
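The sketch below shows one way to obtain the foreground change value a with OpenCV's MOG2 background model and to gate the heavier person detection on it; the choice of MOG2 and the numeric change threshold are assumptions of this example:

```python
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
CHANGE_THRESHOLD = 0.02       # assumed placeholder: fraction of changed pixels

def foreground_change(frame_bgr):
    """Return the foreground change value a as the fraction of foreground pixels."""
    mask = bg_model.apply(frame_bgr)             # 0 = background, 255 = foreground
    return float(np.count_nonzero(mask)) / mask.size

def worth_detecting(frame_bgr):
    """Only frames whose change value reaches the threshold go to person detection."""
    a = foreground_change(frame_bgr)
    return a >= CHANGE_THRESHOLD
```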
And S206, if so, dividing the region containing the person in the image, inputting the divided image into a pre-trained theft behavior classification model, and sequentially judging the behavior of the person in the divided region.
Specifically, whether a person exists in the image is detected by the object detection algorithm in S205. When people are present, the object detection algorithm determines how many people are in the image, the image is divided into regions according to that number, the divided region images are input into the theft behavior classification model, and the behavior of the person in each region is discriminated in turn. When many people appear in the image, dividing the regions person by person improves recognition accuracy, allows the behavior of every person to be recognized, and avoids omissions.
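A hedged sketch of the region division and sequential discrimination follows; it uses OpenCV's default HOG person detector as the object detection algorithm (the disclosure does not name a specific detector) and reuses the classify_behavior() helper sketched earlier:

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def discriminate_people(frame_bgr, model):
    """Divide the image into one region per detected person and classify each in turn."""
    boxes, _ = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    results = []
    for i, (x, y, w, h) in enumerate(boxes):
        crop = frame_bgr[y:y + h, x:x + w]              # region containing one person
        label, score = classify_behavior(model, crop)   # sketched after step S103 above
        results.append({"region": i, "box": (int(x), int(y), int(w), int(h)),
                        "label": label, "score": score})
    return results
```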
And S207, marking the distinguished area.
Specifically, in practical application, when many people appear in an image, the regions are divided, identified in turn, and each identified region image is marked to avoid data confusion. Once a region image has been identified, it is marked: if it is a normal image, it is automatically ignored; if it is a theft image, it is uploaded to the server through the terminal device for storage and sent to the user in time, which improves processing efficiency.
And S208, if the behavior belongs to a theft behavior, sending alarm information and image information to the user.
Further, the training process of the theft classification model comprises the following steps: obtaining a sample image and marking the sample image to obtain a normal behavior sample image and a theft behavior sample image; and inputting the normal behavior sample image and the theft behavior sample image into a pre-constructed deep learning model for training to obtain the theft behavior classification model.
Specifically, in this embodiment, when the theft behavior classification model is trained, videos of theft can be collected from network platforms. A large number of sample images are gathered, and the theft images and normal behavior images in the videos are marked to obtain normal behavior sample images and theft behavior sample images; these sample images are input into a pre-constructed deep learning model for training, so that the model learns to recognize and classify images according to the samples, and the theft behavior classification model is obtained through continued training.
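The training step might be sketched as below under the two-class labelling (normal behavior / theft behavior) described above; the directory layout, backbone, and hyper-parameters are assumptions of this example rather than details of the disclosure:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def train_theft_classifier(sample_dir="samples/", epochs=10):
    """Train a two-class behavior classifier from labelled sample images.

    Expects samples/normal/*.jpg and samples/theft/*.jpg (assumed layout).
    """
    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    data = datasets.ImageFolder(sample_dir, transform=tfm)
    loader = DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)        # normal vs. theft
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

    torch.save(model.state_dict(), "theft_cls.pth")      # reused by load_classifier()
    return model
```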
Further, the construction process of the daytime classification model comprises the following steps: collecting a day image and a night image, and taking the day image and the night image as training sample images; and inputting the training sample image into a pre-constructed deep learning model for training to obtain the daytime classification model.
Specifically, in this embodiment, because nighttime images need to be enhanced, the acquired images must first be classified by the daytime classification model. In the construction of the daytime classification model, a large number of daytime images and nighttime images are first collected and used as training sample images; the training sample images are input into a pre-constructed deep learning model for training, and during training a more accurate preset brightness threshold is determined from the collected daytime and nighttime images, finally yielding the daytime classification model of this embodiment.
According to the above technical scheme, the video shot by the camera is first processed frame by frame to obtain the images in the video; the images are then classified by the daytime classification model so that nighttime images can be enhanced, which improves the accuracy of image recognition. Further, the foreground change value of the image is determined by the background modeling algorithm to judge whether a moving object appears, and an object detection algorithm then judges whether the moving object is a person. If so, the number of people in the image is determined, the image is divided into regions, the behavior of each person is discriminated in turn through the pre-trained theft behavior classification model, and whether it belongs to theft behavior is judged; if it does, alarm information and image information are sent to the user. Through the above technical scheme, management efficiency is improved and labor cost is reduced, and in practical application, by continuously improving the accuracy of image recognition, the recognition result becomes more accurate and faster. The theft early warning method provided by this embodiment can be applied to most devices, which improves its practicability.
Fig. 3 is a schematic structural diagram of a theft early-warning apparatus according to another embodiment of the present invention, which is suitable for executing a theft early-warning method according to the embodiment of the present invention. As shown in fig. 3, the apparatus may specifically include: an obtaining module 301, a first judging module 302, a second judging module 303 and a sending module 304, wherein:
an obtaining module 301, configured to obtain an image in a captured video.
The first determining module 302 is configured to determine whether there is a person in the image according to a preset algorithm.
The second judging module 303 is configured to, if a person exists in the image, discriminate the behavior of the person through a pre-trained theft behavior classification model.
And the sending module 304 is used for sending alarm information and image information to the user if the behavior belongs to a theft behavior.
Further, the first determining module 302 includes: a determining submodule, specifically configured to determine a foreground variation value of the image through a background modeling algorithm; and the comparison submodule is specifically used for comparing the foreground change value with a preset change threshold value, and if the foreground change value is larger than the preset change threshold value, judging whether a person exists in the image through an object detection algorithm.
Further, the apparatus also comprises a classification module and an image enhancement module, wherein the classification module is used for inputting the image into a pre-constructed daytime classification model and classifying the image, and the image enhancement module is used for enhancing the image if the image is a nighttime image.
Further, the classification module comprises: an identification submodule, specifically used for identifying the brightness value of the image; and an image type judgment submodule, specifically used for comparing the brightness value with a preset brightness threshold, where if the brightness value is smaller than the preset brightness threshold, the image is a nighttime image, and if the brightness value is greater than or equal to the preset brightness threshold, the image is a daytime image.
Further, the second determining module 303 includes: the region division submodule is specifically used for dividing a region containing people in the image; the image input submodule is specifically used for inputting the divided images into a pre-trained theft behavior classification model; and the behavior type judgment submodule is specifically used for sequentially judging the behaviors of the people in the divided areas.
Further, the second determining module 303 further includes: and the marking submodule is specifically used for marking the distinguished area.
Further, the second judging module 303 further includes: a model training submodule, specifically used for obtaining a sample image and marking the sample image to obtain a normal behavior sample image and a theft behavior sample image, and for inputting the normal behavior sample image and the theft behavior sample image into a pre-constructed deep learning model for training to obtain the theft behavior classification model.
Further, the classification module comprises: the model construction submodule is specifically used for acquiring a day image and a night image and taking the day image and the night image as training sample images; and inputting the training sample image into a pre-constructed deep learning model for training to obtain the daytime classification model.
Fig. 4 is a schematic structural diagram of a terminal device according to still another embodiment of the present invention. As shown in fig. 4, the terminal device includes:
a processor 401, and a memory 402 connected to the processor 401, where the memory 402 is used for storing a computer program for executing the theft early warning method in the above embodiments of the present invention, and the processor 401 is configured to call and execute the computer program in the memory 402; the theft early warning method in the above embodiments at least comprises the following steps:
acquiring an image in a shot video; judging whether a person exists in the image according to a preset algorithm; if so, distinguishing the behaviors of the people in the image through a pre-trained theft behavior classification model; and if the behavior belongs to the stealing behavior, sending alarm information and image information to the user.
An embodiment of the present invention further provides a storage medium storing a computer program; when the computer program is executed by a processor, the steps in the theft early warning method provided in the embodiments of the present invention are implemented, the method at least comprising the following steps:
acquiring an image in a shot video; judging whether a person exists in the image according to a preset algorithm; if so, distinguishing the behaviors of the people in the image through a pre-trained theft behavior classification model; and if the behavior belongs to the stealing behavior, sending alarm information and image information to the user.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Alternate implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A theft early warning method is characterized by comprising the following steps:
acquiring an image in a shot video;
judging whether a person exists in the image according to a preset algorithm;
if so, distinguishing the behaviors of the people in the image through a pre-trained theft behavior classification model;
and if the behavior belongs to the stealing behavior, sending alarm information and image information to the user.
2. The theft behavior early warning method according to claim 1, wherein the judging whether the image contains a person according to a preset algorithm comprises:
determining a foreground variation value of the image through a background modeling algorithm;
and comparing the foreground change value with a preset change threshold, and if the foreground change value is larger than the preset change threshold, judging whether a person exists in the image through an object detection algorithm.
3. The theft behavior early warning method according to claim 1, wherein before determining whether there is a person in the image according to a preset algorithm, the method further comprises:
inputting the image into a pre-constructed daytime classification model, and classifying the image;
and if the image is a night image, performing image enhancement on the image.
4. The theft early warning method according to claim 3, wherein the inputting the image into a pre-constructed daytime classification model to classify the image comprises:
identifying a luminance value of the image;
if the brightness value is smaller than a preset brightness threshold value, the image is a night image;
and if the brightness value is greater than or equal to a preset brightness threshold value, the image is a daytime image.
5. The theft early warning method according to claim 1, wherein the discriminating of the behavior of the person in the image by the pre-trained theft classification model comprises:
dividing an area containing a person in the image;
inputting the divided images into a pre-trained theft behavior classification model;
and sequentially judging the behaviors of the people in the divided areas.
6. The theft early warning method according to claim 5, further comprising:
and marking the distinguished area.
7. The theft early warning method according to claim 1, wherein the training process of the theft classification model comprises:
obtaining a sample image and marking the sample image to obtain a normal behavior sample image and a theft behavior sample image;
and inputting the normal behavior sample image and the theft behavior sample image into a pre-constructed deep learning model for training to obtain the theft behavior classification model.
8. The theft early warning method according to claim 3, wherein the construction process of the daytime classification model comprises:
collecting a day image and a night image, and taking the day image and the night image as training sample images;
and inputting the training sample image into a pre-constructed deep learning model for training to obtain the daytime classification model.
9. A terminal device, comprising:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program for performing at least the theft early warning method of any one of claims 1 to 8;
the processor is used for calling and executing the computer program in the memory.
10. A storage medium, characterized in that the storage medium stores a computer program, which when executed by a processor, performs the steps of the theft early warning method according to any one of claims 1 to 8.
CN201910910799.0A 2019-09-25 2019-09-25 Theft early warning method, terminal device and storage medium Pending CN110674753A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910910799.0A CN110674753A (en) 2019-09-25 2019-09-25 Theft early warning method, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910910799.0A CN110674753A (en) 2019-09-25 2019-09-25 Theft early warning method, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN110674753A (en) 2020-01-10

Family

ID=69079382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910910799.0A Pending CN110674753A (en) 2019-09-25 2019-09-25 Theft early warning method, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN110674753A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388145A (en) * 2008-11-06 2009-03-18 北京汇大通业科技有限公司 Auto alarming method and device for traffic safety
CN107911581A (en) * 2017-11-15 2018-04-13 深圳市共进电子股份有限公司 The infrared switching method of web camera, device, storage medium and web camera
CN109544837A (en) * 2018-11-09 2019-03-29 中国计量大学 A kind of supermarket's security system based on 3DCNN
CN110060441A (en) * 2019-06-14 2019-07-26 三星电子(中国)研发中心 Method and apparatus for terminal anti-theft

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435418A (en) * 2021-08-26 2021-09-24 知见科技(江苏)有限公司 Electric bicycle theft identification method based on computer vision
CN113903112A (en) * 2021-09-17 2022-01-07 苏州城之瞳安防智能科技有限公司 Intelligent monitoring system, method and medium for community management
CN115019488A (en) * 2022-05-30 2022-09-06 歌尔股份有限公司 Monitoring method, device, system and medium based on intelligent wearable device
CN116153004A (en) * 2023-01-17 2023-05-23 山东浪潮科学研究院有限公司 Intelligent monitoring alarm system based on FPGA

Similar Documents

Publication Publication Date Title
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN108009473B (en) Video structuralization processing method, system and storage device based on target behavior attribute
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN110674753A (en) Theft early warning method, terminal device and storage medium
CN109166261B (en) Image processing method, device and equipment based on image recognition and storage medium
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
US8744125B2 (en) Clustering-based object classification
CN112418069B (en) High-altitude parabolic detection method and device, computer equipment and storage medium
CN108446630B (en) Intelligent monitoring method for airport runway, application server and computer storage medium
KR101825045B1 (en) Alarm method and device
KR102122859B1 (en) Method for tracking multi target in traffic image-monitoring-system
US20080191886A1 (en) Flame detecting method and device
CN109484935A (en) A kind of lift car monitoring method, apparatus and system
US10445885B1 (en) Methods and systems for tracking objects in videos and images using a cost matrix
WO2021139049A1 (en) Detection method, detection apparatus, monitoring device, and computer readable storage medium
CN110619277A (en) Multi-community intelligent deployment and control method and system
US11935378B2 (en) Intrusion detection methods and devices
US8411947B2 (en) Video processing to detect movement of an object in the scene
KR102122850B1 (en) Solution for analysis road and recognition vehicle license plate employing deep-learning
KR101204259B1 (en) A method for detecting fire or smoke
CN109543607A (en) Object abnormal state detection method, system, monitor system and storage medium
CN112733690A (en) High-altitude parabolic detection method and device and electronic equipment
EP2000998B1 (en) Flame detecting method and device
CN110852179A (en) Method for detecting suspicious personnel intrusion based on video monitoring platform
CN115116004A (en) Office area abnormal behavior detection system and method based on deep learning

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2020-01-10)