CN111476179A - Behavior prediction method for key target, AI tracking camera and storage medium - Google Patents

Behavior prediction method for key target, AI tracking camera and storage medium

Info

Publication number
CN111476179A
Authority
CN
China
Prior art keywords
target
information
key
image
tracking camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010280666.2A
Other languages
Chinese (zh)
Other versions
CN111476179B (en)
Inventor
叶广明
齐晓辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Siyuan Electronic Technology Co ltd
Original Assignee
Shenzhen City Five Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen City Five Technology Co ltd
Priority to CN202010280666.2A
Publication of CN111476179A
Application granted
Publication of CN111476179B
Active legal-status (Current)
Anticipated expiration legal-status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B7/00 Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
    • G08B7/06 Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a behavior prediction method for a key target, applied to an AI tracking camera, and comprising the following steps: obtaining model parameters and initializing an original convolutional neural network according to the model parameters to obtain a classification recognition model; classifying and recognizing outdoor images shot by the AI tracking camera according to the classification recognition model to obtain outdoor images with annotation information; screening out target images containing a key target from the annotated outdoor images according to the target category; and, after the number of target images exceeds a preset value, obtaining future behavior information of the key target according to the target images. The AI tracking camera can classify, recognize and predict the behaviors of people or animals, can track people or animals accurately, and can help a user effectively manage and monitor key targets in a monitoring area, thereby improving the user experience.

Description

Behavior prediction method for key target, AI tracking camera and storage medium
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to a behavior prediction method of a key target, an AI tracking camera and a readable storage medium.
Background
A tracking camera is an unattended camera often deployed in the field or in suburban areas. It is mainly used to help users hunt, record animal behaviors, study wild animal diversity in protected areas, monitor farms, monitor illegal hunting, watch over private courtyards, check construction sites, and the like, and it can automatically capture the motion of an animal by means of an infrared motion detector or other sensing technology. Generally, after the tracking camera detects an animal through its motion detector, a trigger signal is generated, high-definition pictures and videos are shot automatically, and the pictures and videos are then uploaded to a background server through wifi or a mobile network. Many users therefore install a tracking camera at a location of interest and then view the images or videos uploaded by the tracking camera through a user terminal in order to observe key targets (people or animals) appearing in the monitored area.
However, the current tracking camera simply sends the images it is triggered to shoot to the user terminal, and the user has to check the uploaded images or videos one by one through the user terminal. This consumes a large amount of time in judging the behavioral characteristics of a key target, so the tracking camera offers a poor user experience, little entertainment value, and a low probability of hunting success.
Disclosure of Invention
The invention provides a behavior prediction method of a key target, which is applied to an AI tracking camera, can classify, identify and predict the behaviors of people or animals, can track animals or people more accurately, and can help a user to effectively manage and monitor the key target in a monitoring area, thereby improving the user experience.
In a first aspect, a method for predicting behavior of a key target is provided, which is applied to an AI tracking camera, and includes:
obtaining model parameters and initializing an original convolutional neural network according to the model parameters to obtain a classification identification model; the original convolutional neural network is pre-embedded in the AI tracking camera;
classifying and identifying the outdoor image shot by the AI tracking camera according to the classification identification model to obtain the outdoor image with the marked information; the labeling information comprises scene category, target category and behavior information;
according to the target category, screening out a target image containing a key target from the outdoor image with the labeled information;
and after the number of the target images exceeds a preset value, obtaining future behavior information of the key target according to the target images.
Preferably, obtaining future behavior of the key target from the target image comprises: after the future behavior information of the key target is obtained through the prediction of a behavior prediction model of the AI server, receiving the future behavior information sent by the AI server; the AI server establishes the behavior prediction model according to the time information, the geographic information, the scene type, the behavior information of the target image and the current environment information of the AI tracking camera.
Preferably, obtaining the future behavior information of the key target according to the target image comprises: and establishing a behavior prediction model according to the time information, the geographic information, the scene category and the behavior information of the target image and the current environment information of the AI tracking camera, and predicting the future behavior information of the key target according to the behavior prediction model.
Preferably, after obtaining the future behavior information, the method further comprises: continuously shooting an outdoor image by self-triggering, and inputting the outdoor image into the classification recognition model for classification and recognition; and if the target category in the obtained outdoor image with the labeled information is judged to be the key target and the behavior information of the target category is consistent with the future behavior information, triggering to send reminding information to the user terminal.
Preferably, after determining that the object category in the obtained outdoor image with the labeled information is the key object and that the behavior information thereof matches the future behavior information, the method further comprises: executing a preset alarm program, triggering an audible and visual alarm to emit warning light according to the object type of the key object, and/or sending out alarm ring.
Preferably, after obtaining model parameters and initializing the original convolutional neural network according to the model parameters, the method further comprises: periodically acquiring updated model parameters from the AI server, and reinitializing the original convolutional neural network according to the updated model parameters; and the updated model parameters are obtained after the AI server trains and optimizes the classification recognition model positioned in the AI server according to the outdoor images which cannot be recognized and/or are recognized wrongly and the artificial marking information corresponding to the outdoor images.
Preferably, after the target image containing the key target is screened out from the outdoor image with the marked information, the method further comprises the following steps: and sending a target image containing a key target and the geographic position of an AI tracking camera to an AI server so that the AI server generates time axis information of the key target according to the target image, wherein the time axis information comprises time information, geographic information, scene category and behavior information.
In a second aspect, an apparatus for behavior prediction of a key objective is provided, comprising:
the communication module is used for acquiring model parameters and initializing an original convolutional neural network according to the model parameters to obtain a classification recognition model; the original convolutional neural network is pre-embedded in the AI tracking camera;
the neural network module is also used for classifying and identifying the outdoor image shot by the AI tracking camera according to the classification and identification model to obtain the outdoor image with the labeled information; the labeling information comprises scene category, target category and behavior information;
the key target screening module is used for screening out a target image containing a key target from the outdoor image with the labeled information according to the target category;
and the behavior prediction module is used for obtaining the future behavior information of the key target according to the target images after the number of the target images exceeds a preset numerical value.
Preferably, the behavior prediction module is further configured to receive future behavior information of the key target after the future behavior information is predicted by the behavior prediction model of the AI server; the AI server establishes the behavior prediction model according to the time information, the geographic information, the scene type, the behavior information of the target image and the current environment information of the AI tracking camera.
Preferably, the behavior prediction module is further configured to establish a behavior prediction model according to the time information, the geographic information, the scene type, the behavior information of the target image and the current environment information of the AI tracking camera, and predict future behavior information of the key target according to the behavior prediction model.
Preferably, the neural network module is further configured to continue to self-trigger shooting of outdoor images, and input the outdoor images into the classification recognition model for classification and recognition;
and the communication module is also used for triggering to send reminding information to the user terminal if the target category in the obtained outdoor image with the marked information is judged to be the key target and the behavior information of the target category is consistent with the future behavior information.
Preferably, the device further comprises an alarm module, wherein the alarm module is used for executing a preset alarm program after judging that the target category in the obtained outdoor image with the labeled information is the key target and the behavior information of the outdoor image is consistent with the future behavior information, and triggering an audible and visual alarm to emit warning light and/or sending out an alarm ring according to the target category of the key target.
Preferably, the neural network module is further configured to periodically obtain updated model parameters from the AI server, and reinitialize the original convolutional neural network according to the updated model parameters; and the updated model parameters are obtained after the AI server trains and optimizes the classification recognition model positioned in the AI server according to the outdoor images which cannot be recognized and/or are recognized wrongly and the artificial marking information corresponding to the outdoor images.
Preferably, the communication module is further configured to, after screening out a target image containing a key target from the outdoor image with tagged information, send the target image containing the key target and a geographic location of an AI tracking camera to an AI server, so that the AI server generates time axis information of the key target according to the target image, where the time axis information includes time information, geographic information, a scene category, and behavior information.
Preferably, the behavior prediction apparatus 300 further includes: a neural network engine module 306, the neural network engine module 306 is used for driving the neural network module 302, so that the classification recognition model in the neural network module 302 has classification and recognition calculation capabilities.
In a third aspect, an AI tracking camera is provided, comprising one or more processors, one or more image sensors, a radio frequency module, an audible and visual alarm, and a memory, wherein the memory is configured to store a computer program comprising program instructions, and wherein the one or more processors are configured to invoke the program instructions to perform the method of any one of claims 1-7.
In a fourth aspect, a computer-readable storage medium is provided, characterized in that the computer storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method and steps as in any of the embodiments of the first aspect.
In the embodiment of the invention, an AI tracking camera acquires model parameters and initializes an original convolutional neural network according to the model parameters to acquire a classification recognition model; classifying and identifying the outdoor image shot by the AI tracking camera according to the classification identification model to obtain the outdoor image with the marked information; then according to the target category, screening out a target image containing a key target from the outdoor images with the labeled information; and after the number of the target images exceeds a preset value, obtaining future behavior information of the key target according to the target images. The AI tracking camera can classify, identify and predict the behaviors of people or animals, can accurately track the people or the animals, and can help a user to effectively manage and monitor key targets in a monitoring area, so that the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of a network architecture for behavior prediction of a key target according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method for predicting behavior of a key target according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a behavior prediction apparatus for a key objective according to an embodiment of the present invention;
fig. 4 is a block diagram of a hardware structure of an AI tracking camera according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The embodiment of the invention provides a new behavior prediction technical scheme of key targets, which can be used for carrying out classification recognition and behavior prediction on human beings or animals, can more accurately track animals or people, and can help users to effectively manage and monitor the key targets in a monitoring area, thereby improving the user experience. The following describes a network architecture for behavior prediction of a key target provided by the present invention, and with specific reference to fig. 1, the architecture mainly includes:
(1) an AI tracking camera: the core chip in the AI tracking camera is provided with an original convolutional neural network, model parameters can be downloaded from an AI server through a communication network, so that the original convolutional neural network is initialized into a classification recognition model aiming at animal characteristics, and the AI tracking camera can classify and recognize key targets (human or animals) in outdoor images obtained by self-triggering shooting based on the classification recognition model. In some embodiments, after identifying that more than a preset number of images containing the key target are obtained, the AI tracking camera may further establish a behavior prediction model based on the images and predict future behavior of the key target; in other embodiments, after identifying that more than a preset number of images containing the key target are obtained, the AI tracking camera sends the images to the AI server, the AI server builds a behavior prediction model based on the images and predicts the future behavior of the key target, and the AI tracking camera receives the future behavior information of the key target sent by the AI server.
(2) An AI server: the AI server is a server deployed at the cloud end and is responsible for training the convolutional neural network into a classification recognition model aiming at animal characteristics based on a large number of outdoor images, in the training process, for the outdoor images which cannot be classified and recognized or are wrongly classified and recognized, the artificial labeling information of the outdoor images is also acquired, and the outdoor images and the corresponding artificial labeling information are used for carrying out optimization training on the classification recognition model, so that the accuracy of the classification recognition model is higher. In some embodiments, when the number of outdoor images accumulated by the AI server and transmitted by the tracking cameras is sufficient, the AI server may further perform mathematical modeling based on the key target information in the large number of outdoor images, establish a behavior prediction model, predict future behaviors of the key targets through the behavior prediction model, and transmit the future behavior information to the corresponding AI tracking cameras.
(3) A user terminal: the user terminal is generally an intelligent device such as a smart phone, a tablet computer, a smart wearable device, or a computer. The user terminal can be used to receive the classification and recognition results that the AI tracking camera produces for the outdoor images and to present them in the form of a time axis or a map, so that the user can conveniently check the situation of people or animals in the monitoring area of the tracking camera. The user terminal can also be used to receive the AI server's predictions of the future behaviors of people or animals within the monitoring area of the tracking camera, and the user can track the behavior trajectory of a person or animal more accurately according to the predicted future behaviors.
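For illustration only (the patent itself discloses no source code), the data exchanged between the three components above can be sketched with a few hypothetical Python structures; every class and field name below is an assumption introduced here, not terminology from the patent.

# Hypothetical message types between AI tracking camera, AI server and user terminal.
from dataclasses import dataclass

@dataclass
class ModelParameters:            # AI server -> AI tracking camera
    animal_type: str              # e.g. "omnivore"
    weights_blob: bytes           # serialized convolutional-neural-network weights

@dataclass
class AnnotatedTargetImage:       # AI tracking camera -> AI server
    image_id: str
    scene_category: str           # e.g. "drinking place"
    target_category: str          # e.g. "antelope"
    behavior: str                 # e.g. "eating"
    timestamp: float              # shooting time
    geo_position: tuple           # (latitude, longitude) of the camera

@dataclass
class FutureBehavior:             # AI server -> AI tracking camera / user terminal
    target_category: str
    predicted_behavior: str
    predicted_time_window: tuple  # (start, end) of the expected period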
The following describes the behavior prediction method of the key target provided by the embodiment of the present invention in detail with reference to the accompanying drawings, and referring to fig. 2, fig. 2 is a schematic flow chart of the behavior prediction of the key target provided by the embodiment of the present invention. As shown in fig. 2, the method applied to the AI tracking camera specifically includes:
s101, obtaining model parameters and initializing an original convolutional neural network according to the model parameters to obtain a classification recognition model.
A conventional tracking camera is an outdoor camera with an infrared detection function and/or a temperature-difference sensing function. The user can strap the tracking camera to an outdoor tree trunk, and when a person or animal passes through the monitoring range of the camera, the camera detects the infrared radiation given off by the person or animal, or the temperature change it causes while passing, which triggers the camera to shoot an outdoor image or outdoor video. Compared with a conventional tracking camera, the AI tracking camera in the embodiment of the invention carries an original convolutional neural network on its core chip, which gives the AI tracking camera an image recognition capability. When the AI tracking camera leaves the factory, the convolutional neural network on the chip is a general model that does not yet have the ability to classify and recognize key targets; only after model parameters are downloaded from the AI server through a communication network (wifi, 3G/4G/5G, etc.) and the original convolutional neural network is initialized according to these model parameters is a classification recognition model for the characteristics of key targets obtained. In some possible embodiments, the model parameters may also be downloaded from the AI server by a mobile terminal that can access the internet and then transferred to the AI tracking camera through Bluetooth wireless communication or through a medium such as a data cable or a USB flash drive.
Because animal species in deep forests or grasslands are abundant, it is difficult for a single classification recognition model to classify and recognize all animals, and using one classification recognition model for every animal type would inevitably reduce its accuracy. In some embodiments of the present application, the AI server may therefore train different classification recognition models for different animal types, for example according to feeding type: carnivores, herbivores and omnivores are trained separately to obtain a carnivore classification recognition model, an herbivore classification recognition model and an omnivore classification recognition model. In other embodiments, animals may be grouped into two-legged and four-legged animals, or by body weight, for example animals under 20 jin (about 10 kg), 20 to 50 jin, 50 to 100 jin, above 100 jin, and so on. It should be understood that other classification rules also fall within the scope of the present invention. Correspondingly, the AI tracking camera may obtain target model parameters from the AI server according to the user's selection and initialize the original convolutional neural network according to those target model parameters to obtain a target classification recognition model. For example, if the animal type the AI tracking camera receives from the user terminal is omnivore, the AI tracking camera obtains from the AI server the model parameters corresponding to the omnivore classification recognition model and initializes its original convolutional neural network according to these parameters, thereby obtaining a classification recognition network for omnivores, as illustrated in the sketch below.
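A minimal sketch of this download-and-initialize step, assuming a PyTorch model and a hypothetical parameter endpoint on the AI server (neither is specified by the patent), might look as follows.

import urllib.request
import torch

def initialize_classifier(animal_type: str, server_url: str, model: torch.nn.Module):
    """Fetch the parameters of one animal-type model and initialize the on-chip CNN."""
    params_url = f"{server_url}/models/{animal_type}.pt"     # hypothetical endpoint
    local_path, _ = urllib.request.urlretrieve(params_url)   # download model parameters
    state_dict = torch.load(local_path, map_location="cpu")
    model.load_state_dict(state_dict)   # the original CNN becomes a classification recognition model
    model.eval()
    return model

# Example: the user selected "omnivore" on the user terminal.
# classifier = initialize_classifier("omnivore", "https://ai-server.example.com", raw_cnn)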
Optionally, after obtaining the model parameters and initializing the original convolutional neural network according to them, the AI tracking camera periodically obtains updated model parameters from the AI server and re-initializes the original convolutional neural network according to the updated model parameters. The updated model parameters are obtained after the AI server trains and optimizes its own classification recognition model using the outdoor images that could not be recognized and/or were recognized incorrectly, together with the corresponding manual labeling information. By updating the model parameters, the AI tracking camera improves its classification and recognition accuracy.
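The periodic update can be sketched as a simple polling loop; the version-check callable and the reuse of initialize_classifier() from the previous sketch are assumptions, not features disclosed by the patent.

import time

def update_loop(animal_type, server_url, model, get_remote_version,
                local_version=None, period_s=24 * 3600):
    """Poll the AI server and re-initialize the CNN whenever new parameters exist."""
    while True:
        remote_version = get_remote_version(animal_type)   # assumed HTTP query to the AI server
        if remote_version != local_version:
            model = initialize_classifier(animal_type, server_url, model)
            local_version = remote_version
        time.sleep(period_s)                               # e.g. check once per day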
And S102, classifying and identifying the outdoor image shot by the AI tracking camera according to the classification identification model to obtain the outdoor image with the labeled information.
After the AI tracking camera initializes the original convolutional neural network into a classification recognition model, the classification recognition model can classify and recognize the scene and the targets (animals or people) in an outdoor image based on the image content. The AI tracking camera inputs the captured outdoor image into the classification recognition model for recognition and classification, and then outputs the outdoor image with annotation information, where the annotation information at least includes: scene category, target category, and behavior information. For example, the scene category may be a habitat, a drinking place, and so on; the target category may be an animal such as a wild boar, antelope or spotted deer, or a suspected thief; and the behavior information may be staying, walking, eating, resting, and so on.
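As an illustration of the annotation step, the sketch below assumes a model with three classification heads (scene, target, behavior) and small example label lists; the patent does not prescribe this structure.

import torch

SCENES    = ["habitat", "drinking place", "trail"]
TARGETS   = ["wild boar", "antelope", "spotted deer", "person", "none"]
BEHAVIORS = ["staying", "walking", "eating", "resting"]

@torch.no_grad()
def annotate(model, image_tensor):
    """Run the classification recognition model and return the labeling information."""
    scene_logits, target_logits, behavior_logits = model(image_tensor.unsqueeze(0))
    return {
        "scene_category":  SCENES[scene_logits.argmax(dim=1).item()],
        "target_category": TARGETS[target_logits.argmax(dim=1).item()],
        "behavior":        BEHAVIORS[behavior_logits.argmax(dim=1).item()],
    }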
The convolutional neural network in the present invention may be any type of neural network model such as LeNet, AlexNet, GoogLeNet, VGGNet, ResNet, DenseNet, YOLOv3, etc., which is not limited in particular.
Optionally, if an outdoor image results from a false trigger, for example if the convolutional neural network identifies no relevant target object in it (no animal or person), the AI tracking camera filters out and deletes the outdoor image instead of transmitting it to the user terminal over the communication network, which reduces the data traffic used and the number of times the user is disturbed. If the image quality of the outdoor image is below a preset threshold, for example if the convolutional neural network identifies one or more of blur, color cast, underexposure and smearing in the image, the AI tracking camera performs correction processing on the outdoor image, for example defogging, resolution enhancement, sharpening, contrast adjustment, color-cast adjustment and the like, so that the processed image quality meets the preset threshold. If the outdoor image was captured in a low-light environment, for example shot at night, it is usually a black-and-white image and needs to be colorized. The processed image is then sent to the user terminal through the communication network. A sketch of this decision flow follows.
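The sketch keeps only the decision flow stated in the text; the quality score and the individual correction routines are placeholders introduced here for illustration.

def correct_image(image):
    """Placeholder for defogging, sharpening, contrast and color-cast adjustment."""
    return image

def colorize(image):
    """Placeholder for colorizing a black-and-white night image."""
    return image

def postprocess(annotation, image, quality_score, is_night, quality_threshold=0.5):
    if annotation["target_category"] == "none":   # false trigger: no animal or person
        return None                               # filtered out, not sent to the user terminal
    if quality_score < quality_threshold:         # image quality below the preset threshold
        image = correct_image(image)
    if is_night:                                  # low-light capture
        image = colorize(image)
    return image                                  # forwarded over the communication network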
S103, screening out target images containing key targets from the outdoor images with the labeled information according to the target categories.
More than one type of animal usually appears in the monitoring area of a tracking camera. In order to help the user effectively manage and monitor different types of animals and to improve the user experience, the AI tracking camera can also screen out the target images containing the key target from the outdoor images with annotation information according to the target category of the animal, for example screening the antelope images out of a set of wild boar, antelope and fawn images.
After the target images containing the key target are screened out from the annotated outdoor images, the target images containing the key target and the geographic position of the AI tracking camera are sent to the AI server, so that the AI server generates time axis information of the key target according to the target images, where the time axis information includes time information, geographic information, scene category and behavior information. The AI server can also send the time axis information and the target images to the user terminal, and the user terminal can display each image according to its shooting time and display any one or any combination of the shooting time, shooting place, shooting scene and animal behavior next to each target image. This time axis information helps the user observe how the animals in the monitoring area of the tracking camera change over time, as outlined in the sketch below.
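Screening by target category and building one time-axis entry per target image can be sketched as follows; the dictionary keys are assumptions that mirror the fields named above.

def screen_key_target(annotated_images, key_target_category):
    """Keep only the images whose target category matches the key target."""
    return [img for img in annotated_images
            if img["target_category"] == key_target_category]

def timeline_entry(target_image, camera_geo_position):
    """One entry of the time axis information generated by the AI server."""
    return {
        "time":     target_image["timestamp"],
        "geo":      camera_geo_position,
        "scene":    target_image["scene_category"],
        "behavior": target_image["behavior"],
    }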
And S104, after the number of the target images exceeds a preset value, obtaining future behavior information of the key target according to the target images.
In one embodiment, the future behavior prediction step is implemented in the AI server. After the number of annotated target images sent by the AI tracking camera through the communication network exceeds a preset value, the AI server establishes a behavior prediction model according to the time information, geographic information, scene category and behavior information of the target images and the current environment information of the AI tracking camera. After the behavior prediction model of the AI server predicts the future behavior information of the key target, the AI server sends the future behavior information of the key target to the AI tracking camera, and the corresponding AI tracking camera receives the future behavior information sent by the AI server.
In another embodiment, the future behavior prediction step is implemented in an AI tracking camera having a processor with high computational processing power in its chip. After the number of the target images exceeds a preset value, the processor of the AI tracking camera can directly establish a behavior prediction model according to the time information, the geographic information, the scene type and the behavior information of the target images and the current environment information of the AI tracking camera, and predict the future behavior information of the key target according to the behavior prediction model.
Wherein the current environment information includes: any one or any combination of weather, date, temperature and humidity, air temperature, wind direction, rainfall condition, sunrise, sunset and lunar phase.
The behavior prediction model predicts the behavior that the key target may exhibit in a future period based on the behavior characteristics of the key target over a past period and the current environmental characteristics. After predicting the future behavior of the key target with the behavior prediction model, the AI tracking camera continues to self-trigger shooting of outdoor images and inputs them into the classification recognition model for classification and recognition; if the AI tracking camera judges that the target category in a newly obtained annotated outdoor image is the key target and that its behavior information matches the future behavior information, the AI tracking camera is triggered to send a reminder to the user terminal through the communication network. Through this embodiment, the behavior trajectory of an animal or person can be tracked more accurately, helping the user capture the future behavioral characteristics of the animal or person more effectively; a simplified sketch follows.
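The patent does not disclose the mathematical form of the behavior prediction model, so the sketch below substitutes a deliberately simple frequency table over (hour of day, weather) and then shows the reminder trigger described above; it is an assumption, not the claimed modeling method.

from collections import Counter, defaultdict

def build_prediction_model(target_images, environment):
    """Count which behavior the key target showed under each (hour, weather) condition."""
    table = defaultdict(Counter)
    for img in target_images:
        hour = int(img["timestamp"] // 3600) % 24          # hour of day from a Unix timestamp
        table[(hour, environment["weather"])][img["behavior"]] += 1
    return table

def predict_future_behavior(table, hour, weather):
    counts = table.get((hour, weather))
    return counts.most_common(1)[0][0] if counts else None

def maybe_remind(new_annotation, key_target_category, predicted_behavior, send_reminder):
    """Trigger a reminder when a new image matches the key target and the predicted behavior."""
    if (new_annotation["target_category"] == key_target_category
            and new_annotation["behavior"] == predicted_behavior):
        send_reminder(new_annotation)    # push to the user terminal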
In addition, if the AI tracking camera is used to monitor a livestock farm, the behavior prediction model can also be used to predict whether theft may occur. For example, if a stranger is detected within the monitoring range of the tracking camera during a certain period of time on more than three days and stays near the key target for a while each time, the behavior prediction model can predict that theft may occur, and the AI tracking camera sends a theft warning to the user, helping the owner of the livestock farm effectively protect the farmed animals. The above scenarios are only examples; other application scenarios such as monitoring illegal hunting or illegal logging can also apply the embodiments of the present invention.
Optionally, in some embodiments, after the AI tracking camera determines that the target category in an obtained annotated outdoor image is a key target and that its behavior information matches the future behavior information, a preset alarm program is executed, and an audible and visual alarm is triggered to emit a warning light and/or sound an alarm ringtone according to the target category of the key target. For example, on an animal farm, if the AI tracking camera judges that the key target is potential prey and recognizes a hunting behavior directed at it, the AI tracking camera is triggered to emit a warning light and sound an alarm bell to drive the intruder away, thereby effectively preventing the farmed animals from being stolen.
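The alarm step can be sketched as below; the mapping from target category to alarm pattern and the alarm-device methods are illustrative assumptions rather than disclosed behavior.

ALARM_PATTERNS = {
    "person":   {"light": True, "ring": True},   # suspected thief: light plus ringtone
    "predator": {"light": True, "ring": True},
    "default":  {"light": True, "ring": False},
}

def run_alarm(target_category, alarm_device):
    """Execute the preset alarm program for the recognized key-target category."""
    pattern = ALARM_PATTERNS.get(target_category, ALARM_PATTERNS["default"])
    if pattern["light"]:
        alarm_device.flash_warning_light()       # audible and visual alarm: warning light
    if pattern["ring"]:
        alarm_device.sound_alarm_ring()          # audible and visual alarm: alarm ringtone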
In the embodiment of the invention, an AI tracking camera acquires model parameters and initializes an original convolutional neural network according to the model parameters to acquire a classification recognition model; classifying and identifying the outdoor image shot by the AI tracking camera according to the classification identification model to obtain the outdoor image with the marked information; then according to the target category, screening out a target image containing a key target from the outdoor images with the labeled information; and after the number of the target images exceeds a preset value, obtaining future behavior information of the key target according to the target images. The AI tracking camera can classify, identify and predict the behaviors of people or animals, can accurately track the people or the animals, and can help a user to effectively manage and monitor key targets in a monitoring area, so that the user experience is improved.
Fig. 3 is a behavior prediction apparatus 300 for a key objective according to an embodiment of the present invention, where the apparatus 300 includes:
the communication module 301 is configured to obtain a model parameter and initialize an original convolutional neural network according to the model parameter to obtain a classification recognition model; the original convolutional neural network is pre-embedded in the AI tracking camera;
the neural network module 302 is further configured to classify and identify the outdoor image captured by the AI tracking camera according to the classification and identification model, so as to obtain an outdoor image with labeled information; the labeling information comprises scene category, target category and behavior information;
a key target screening module 303, configured to screen a target image including a key target from the outdoor image with the tagged information according to the target category;
a behavior prediction module 304, configured to obtain future behavior information of the key target according to the target image after the number of the target images exceeds a preset value.
Preferably, the behavior prediction module 304 is further configured to receive the future behavior information sent by the AI server after the future behavior information of the key target is predicted by the behavior prediction model of the AI server; the AI server establishes the behavior prediction model according to the time information, the geographic information, the scene type, the behavior information of the target image and the current environment information of the AI tracking camera.
Preferably, the behavior prediction module 304 is further configured to establish a behavior prediction model according to the time information, the geographic information, the scene category, the behavior information of the target image and the current environment information of the AI tracking camera, and predict future behavior information of the key target according to the behavior prediction model.
Preferably, the neural network module 302 is further configured to continue to take an outdoor image by self-triggering, and input the outdoor image into the classification recognition model for classification and recognition;
the communication module 301 is further configured to trigger sending of a reminding message to the user terminal if it is determined that the target category in the obtained outdoor image with the tagged information is the key target and the behavior information of the target category matches the future behavior information.
Preferably, the device further comprises an alarm module 305, and the alarm module 305 is configured to execute a preset alarm program after determining that the object type in the obtained outdoor image with the labeled information is the key object and the behavior information of the outdoor image with the labeled information conforms to the future behavior information, trigger an audible and visual alarm to emit a warning light according to the object type of the key object, and/or send an alarm ringtone.
Preferably, the neural network module 302 is further configured to periodically obtain updated model parameters from the AI server, and reinitialize the original convolutional neural network according to the updated model parameters; and the updated model parameters are obtained after the AI server trains and optimizes the classification recognition model positioned in the AI server according to the outdoor images which cannot be recognized and/or are recognized wrongly and the artificial marking information corresponding to the outdoor images.
Preferably, the communication module 301 is further configured to, after screening out a target image containing a key target from the outdoor image with tagged information, send the target image containing the key target and a geographic location of an AI tracking camera to an AI server, so that the AI server generates time axis information of the key target according to the target image, where the time axis information includes time information, geographic information, a scene category, and behavior information.
Preferably, the behavior prediction apparatus 300 further includes: a neural network engine module 306, the neural network engine module 306 is used for driving the neural network module 302, so that the classification recognition model in the neural network module 302 has classification and recognition calculation capabilities. In some embodiments, the neural network engine module 306 is built into the processor of the AI tracking camera.
In some embodiments, the functions or included modules of the behavior prediction apparatus for a key target provided in the embodiment of the present invention may be used to execute the method described in the embodiment of the method in fig. 2, and specific implementation thereof may refer to the implementation description of the method in fig. 2, and for brevity, are not described again here.
Referring to fig. 4, fig. 4 is a schematic diagram of a hardware architecture of the AI tracking camera 400, which may include: chip 410, memory 411, Radio Frequency (RF) module 412, audible and visual alarm 413, image sensor 414. These components may communicate over one or more communication buses 4104.
The chip 410 may integrate: one or more processors 4101, a clock module 4102, and a power management module 4103. The clock module 4102 integrated in the chip 410 is mainly used to generate the clocks required by the processor 4101 for data transmission and timing control. The power management module 4103 integrated in the chip 410 is mainly used to provide a stable, high-precision voltage to the processor 4101, the radio frequency module 412 and the peripheral systems. The processor (CPU) 4101 internally integrates a neural network and a neural network engine, and the neural network engine is used to drive the neural network after model parameter initialization so that the neural network has classification and recognition capabilities; in addition, the processor (CPU) 4101 also integrates an image processing unit for encoding the image data collected by the image sensor to generate photos or videos.
The memory 411 is coupled to the processor 4101 for storing various software programs and/or sets of instructions. In particular implementations, memory 411 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 411 may be used to store data, such as outdoor images captured by the image sensor 414.
A Radio Frequency (RF) module 412 for receiving and transmitting radio frequency signals mainly integrates a receiver and a transmitter of the AI tracking camera 400. Radio Frequency (RF) module 412 communicates with a communication network and other communication devices via radio frequency signals. In particular implementations, the Radio Frequency (RF) module 412 may include, but is not limited to: an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chip, a SIM card, a storage medium, and the like. In some embodiments, the Radio Frequency (RF) module 412 may be implemented on a separate chip. In the embodiment of the present invention, the Radio Frequency (RF) module 412 is mainly used for obtaining model parameters from the AI server, sending a target image selected from an outdoor image in the memory to the AI server, and sending a reminding message to the user terminal.
The audible and visual alarm 413 includes a buzzer and a warning LED lamp. The buzzer can emit a high-decibel alarm sound, and the warning LED lamp can emit a warning light of high illumination intensity. The audible and visual alarm 413 is mainly used to emit a high-decibel alarm sound and/or a high-intensity warning light after the processor 4101 determines that a thief is currently present.
The image sensor 414 is a photosensitive element used to capture images or videos of the photographed scene, and it may, for example, be part of a monocular camera, a binocular camera, or a multi-view camera. The image sensor 414 is mainly used for shooting outdoor images under infrared trigger conditions.
In another embodiment of the present invention, a computer-readable storage medium is provided, where a computer program is stored, where the computer program includes program instructions, and if the computer-readable storage medium is applied to an AI tracking camera, the program instructions, when executed by a processor, implement the steps described in the embodiment of the method in fig. 2, specifically refer to the description in the embodiment of the method in fig. 2, and for brevity, are not described again here.
The computer readable storage medium may be an internal storage unit of the computing device according to any of the foregoing embodiments, for example, a hard disk or a memory of a terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk provided on the terminal, a Smart Memory Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the computer-readable storage medium may also include both internal and external storage units of the computing device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the computing device. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A behavior prediction method of a key target is applied to an AI tracking camera, and comprises the following steps:
obtaining model parameters and initializing an original convolutional neural network according to the model parameters to obtain a classification identification model; wherein the raw convolutional neural network is pre-built in the AI tracking camera;
classifying and identifying the outdoor image shot by the AI tracking camera according to the classification identification model to obtain the outdoor image with the marked information; the labeling information comprises scene category, target category and behavior information;
according to the target category, screening out a target image containing a key target from the outdoor image with the labeled information;
and after the number of the target images exceeds a preset value, obtaining future behavior information of the key target according to the target images.
2. The method of claim 1, wherein obtaining future behavior of the key target from the target image comprises:
after the future behavior information of the key target is obtained through the prediction of a behavior prediction model of the AI server, receiving the future behavior information sent by the AI server; the AI server establishes the behavior prediction model according to the time information, the geographic information, the scene type, the behavior information of the target image and the current environment information of the AI tracking camera.
3. The method of claim 1, wherein obtaining future behavior information of the key target from the target image comprises:
and establishing a behavior prediction model according to the time information, the geographic information, the scene category and the behavior information of the target image and the current environment information of the AI tracking camera, and predicting the future behavior information of the key target according to the behavior prediction model.
4. The method of claim 2 or 3, wherein after obtaining the future behavior information, the method further comprises:
continuously shooting an outdoor image by self-triggering, and inputting the outdoor image into the classification recognition model for classification and recognition;
and if the target category in the obtained outdoor image with the labeled information is judged to be the key target and the behavior information of the target category is consistent with the future behavior information, triggering to send reminding information to the user terminal.
5. The method of claim 4, wherein after determining that the category of the target in the obtained annotated outdoor image is the key target and that the behavior information thereof corresponds to the future behavior information, the method further comprises:
executing a preset alarm program, triggering an audible and visual alarm to emit warning light according to the object type of the key object, and/or sending out alarm ring.
6. The method of claim 1, wherein after obtaining model parameters and initializing the original convolutional neural network according to the model parameters, the method further comprises:
periodically acquiring updated model parameters from the AI server, and reinitializing the original convolutional neural network according to the updated model parameters; and the updated model parameters are obtained after the AI server trains and optimizes the classification recognition model positioned in the AI server according to the outdoor images which cannot be recognized and/or are recognized wrongly and the artificial marking information corresponding to the outdoor images.
7. The method of claim 1, wherein after screening out target images containing key targets from the annotated outdoor image, the method further comprises:
and sending a target image containing a key target and the geographic position of an AI tracking camera to an AI server so that the AI server generates time axis information of the key target according to the target image, wherein the time axis information comprises time information, geographic information, scene category and behavior information.
8. An apparatus for behavior prediction for a key objective, comprising:
the communication module is used for acquiring model parameters and initializing an original convolutional neural network according to the model parameters to obtain a classification recognition model; wherein the raw convolutional neural network is pre-built in the AI tracking camera;
the neural network module is also used for classifying and identifying the outdoor image shot by the AI tracking camera according to the classification and identification model to obtain the outdoor image with the labeled information; the labeling information comprises scene category, target category and behavior information;
the key target screening module is used for screening out a target image containing a key target from the outdoor image with the labeled information according to the target category;
and the behavior prediction module is used for obtaining the future behavior information of the key target according to the target images after the number of the target images exceeds a preset numerical value.
9. An AI tracking camera comprising one or more processors, one or more image sensors, a radio frequency module, an audible and visual alarm, and a memory, wherein the memory is configured to store a computer program comprising program instructions, and wherein the one or more processors are configured to invoke the program instructions to perform the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any of claims 1-7.
CN202010280666.2A 2020-04-10 2020-04-10 Behavior prediction method for key target, AI tracking camera and storage medium Active CN111476179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010280666.2A CN111476179B (en) 2020-04-10 2020-04-10 Behavior prediction method for key target, AI tracking camera and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010280666.2A CN111476179B (en) 2020-04-10 2020-04-10 Behavior prediction method for key target, AI tracking camera and storage medium

Publications (2)

Publication Number Publication Date
CN111476179A (en) 2020-07-31
CN111476179B CN111476179B (en) 2023-02-14

Family

ID=71751947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010280666.2A Active CN111476179B (en) 2020-04-10 2020-04-10 Behavior prediction method for key target, AI tracking camera and storage medium

Country Status (1)

Country Link
CN (1) CN111476179B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112607542A (en) * 2020-12-10 2021-04-06 中科曙光国际信息产业有限公司 Elevator control method, elevator control device, computer equipment and storage medium
CN113763429A (en) * 2021-09-08 2021-12-07 广州市健坤网络科技发展有限公司 Pig behavior recognition system and method based on video
CN115100807A (en) * 2022-06-17 2022-09-23 贵州东彩供应链科技有限公司 System for realizing supervision of animal farm based on camera abnormity monitoring alarm
CN116089821A (en) * 2023-02-23 2023-05-09 中国人民解放军63921部队 Method for monitoring and identifying state of deep space probe based on convolutional neural network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699543A (en) * 2012-09-28 2014-04-02 南京理工大学 Information visualization method based on ground object classification of remote sensing image
US20190180119A1 (en) * 2017-03-30 2019-06-13 Hrl Laboratories, Llc System for real-time object detection and recognition using both image and size features
CN107103615A (en) * 2017-04-05 2017-08-29 合肥酷睿网络科技有限公司 A kind of monitor video target lock-on tracing system and track lock method
WO2019233341A1 (en) * 2018-06-08 2019-12-12 Oppo广东移动通信有限公司 Image processing method and apparatus, computer readable storage medium, and computer device
CN110533692A (en) * 2019-08-21 2019-12-03 深圳新视达视讯工程有限公司 A kind of automatic tracking method towards target mobile in unmanned plane video
CN110889324A (en) * 2019-10-12 2020-03-17 南京航空航天大学 Thermal infrared image target identification method based on YOLO V3 terminal-oriented guidance
CN110909803A (en) * 2019-11-26 2020-03-24 腾讯科技(深圳)有限公司 Image recognition model training method and device and computer readable storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112607542A (en) * 2020-12-10 2021-04-06 中科曙光国际信息产业有限公司 Elevator control method, elevator control device, computer equipment and storage medium
CN112607542B (en) * 2020-12-10 2022-07-26 中科曙光国际信息产业有限公司 Elevator control method, elevator control device, computer equipment and storage medium
CN113763429A (en) * 2021-09-08 2021-12-07 广州市健坤网络科技发展有限公司 Pig behavior recognition system and method based on video
CN115100807A (en) * 2022-06-17 2022-09-23 贵州东彩供应链科技有限公司 System for realizing supervision of animal farm based on camera abnormity monitoring alarm
CN116089821A (en) * 2023-02-23 2023-05-09 中国人民解放军63921部队 Method for monitoring and identifying state of deep space probe based on convolutional neural network
CN116089821B (en) * 2023-02-23 2023-08-15 中国人民解放军63921部队 Method for monitoring and identifying state of deep space probe based on convolutional neural network

Also Published As

Publication number Publication date
CN111476179B (en) 2023-02-14

Similar Documents

Publication Publication Date Title
CN111476179B (en) Behavior prediction method for key target, AI tracking camera and storage medium
US10936655B2 (en) Security video searching systems and associated methods
CN111354024B (en) Behavior prediction method of key target, AI server and storage medium
US10498955B2 (en) Commercial drone detection
US20200226360A1 (en) System and method for automatically detecting and classifying an animal in an image
US7916895B2 (en) Systems and methods for improved target tracking for tactical imaging
JP2017537357A (en) Alarm method and device
US8726324B2 (en) Method for identifying image capture opportunities using a selected expert photo agent
US20070177023A1 (en) System and method to provide an adaptive camera network
WO2017049612A1 (en) Smart tracking video recorder
US20150281568A1 (en) Content acquisition device, portable device, server, information processing device, and storage medium
CN115004269B (en) Monitoring device, monitoring method, and program
EP2892222A1 (en) Control device and storage medium
JP6704979B1 (en) Unmanned aerial vehicle, unmanned aerial vehicle system and unmanned aerial vehicle control system
JP2014045259A (en) Terminal device, server, and program
JP2017529029A (en) Camera control and image streaming
KR20220000226A (en) A system for providing a security surveillance service based on edge computing
KR20110042554A (en) Watching system
KR102492066B1 (en) Mobile preventive warning system
CN111738043A (en) Pedestrian re-identification method and device
JP7300958B2 (en) IMAGING DEVICE, CONTROL METHOD, AND COMPUTER PROGRAM
WO2022059123A1 (en) Monitoring system, camera, analyzing device, and ai model generating method
CN114495395A (en) Human shape detection method, monitoring and early warning method, device and system
Meek et al. A review of the ultimate camera trap for wildlife research and monitoring
US11922697B1 (en) Dynamically adjusting activation sensor parameters on security cameras using computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240507

Address after: 518000 Room 501, building 4, Antongda industrial plant, 68 Xingdong community, Xin'an street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN SIYUAN ELECTRONIC TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: SHENZHEN CITY FIVE TECHNOLOGY Co.,Ltd.

Country or region before: China
