CN113033529A - Early warning method and device based on image recognition, electronic equipment and medium - Google Patents

Early warning method and device based on image recognition, electronic equipment and medium

Info

Publication number
CN113033529A
CN113033529A (application CN202110581820.4A)
Authority
CN
China
Prior art keywords
image
target
early warning
sample
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110581820.4A
Other languages
Chinese (zh)
Inventor
谢东
陈冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Defeng New Journey Technology Co ltd
Original Assignee
Beijing Defeng New Journey Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Defeng New Journey Technology Co., Ltd.
Priority to CN202110581820.4A
Publication of CN113033529A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the disclosure disclose an early warning method and apparatus based on image recognition, an electronic device and a medium. One embodiment of the method comprises: acquiring a video shot in real time by a target camera, wherein the target camera is a camera installed in a gas station area; in response to determining that the video comprises a target image, inputting the target image to a pre-trained motion detection model to generate motion type information, wherein the target image is an image containing a portrait; determining early warning information corresponding to the motion type information; and controlling a device matched with the early warning information to perform an early warning operation. This embodiment greatly improves the safety of the gas station.

Description

Early warning method and device based on image recognition, electronic equipment and medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to an early warning method, an early warning device, electronic equipment and a medium based on image recognition.
Background
A gas station is a location where motor vehicles such as automobiles are refueled. Because the fuel stored at a gas station is flammable and explosive, the requirements of the gas station for safety are extremely high. At present, the prior art usually prevents behavior endangering the safety of a gas station through manual patrols.
However, this manner often suffers from the following technical problem:
manual patrols make it difficult to discover safety hazards in time. Moreover, manual inspection lags behind events: by the time the inspection personnel notice it, the behavior endangering the safety of the gas station may already have occurred, so the effect of advance prevention of dangerous behavior is lost, seriously compromising the safety of the gas station.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose an image recognition-based early warning method, apparatus, electronic device, and medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an early warning method based on image recognition, including: acquiring a video shot in real time by a target camera, wherein the target camera is a camera installed in a gas station area; in response to determining that the video comprises a target image, inputting the target image to a pre-trained motion detection model to generate motion type information, wherein the target image is an image containing a portrait; determining early warning information corresponding to the motion type information; and controlling a device matched with the early warning information to perform an early warning operation.
In some embodiments, the initial motion prediction model controls the current learning rate by the following formula:

$$\eta = \eta_{\min}^{(i)} + \frac{1}{2}\left(\eta_{\max}^{(i)} - \eta_{\min}^{(i)}\right)\left(1 + \cos\left(\frac{T_{cur}}{T_{i}}\pi\right)\right)$$

wherein $\eta$ denotes the current learning rate, $i$ denotes the number of the training run, $\eta_{\min}^{(i)}$ denotes the minimum learning rate in the $i$-th training, $\eta_{\max}^{(i)}$ denotes the maximum learning rate in the $i$-th training, $T_{cur}$ denotes the number of batches that have currently been completed, and $T_{i}$ denotes the number of iterations for training the initial motion prediction model on the training sample set during the $i$-th training.
In a second aspect, some embodiments of the present disclosure provide an early warning apparatus based on image recognition, the apparatus including: an acquisition unit configured to acquire a video shot in real time by a target camera, wherein the target camera is a camera installed in a gas station area; an input unit configured to, in response to determining that the video includes a target image, input the target image to a pre-trained motion detection model to generate motion type information, wherein the target image is an image including a portrait; a determining unit configured to determine early warning information corresponding to the motion type information; and a control unit configured to control a device matched with the early warning information to perform an early warning operation.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following beneficial effects: the image-recognition-based early warning method of some embodiments of the present disclosure improves the safety of the gas station. Specifically, the gas station's safety is low because manual patrols make it difficult to discover safety hazards in time. Based on this, the early warning method first acquires a video shot in real time by a target camera, where the target camera is a camera installed in the gas station area. Because a camera can work around the clock, it greatly improves the efficiency of perceiving dangerous behavior compared with manual patrols; moreover, because the camera records video in real time, dangerous behavior can be perceived in advance, achieving advance prevention. Then, in response to determining that the video includes a target image, the target image is input to a pre-trained motion detection model to generate motion type information, where the target image is an image containing a portrait. In practice, accidents such as gas station explosions are often caused by illegal operations or misbehavior of personnel in the gas station area, for example, using a lighter. Therefore, processing only the target images greatly reduces the amount of data to be processed, improves data processing efficiency, and further improves the efficiency of perceiving behavior that endangers the safety of the gas station. In addition, the early warning information corresponding to the motion type information is determined. Finally, a device matched with the early warning information is controlled to perform the early warning operation. In practice, different actions harm the safety of the gas station to different degrees, so controlling the device matched with the early warning information makes the early warning operation proportionate to the risk. In this way, the safety of the gas station can be greatly improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of an application scenario of an image recognition-based early warning method according to some embodiments of the present disclosure;
fig. 2 is a flow diagram of some embodiments of an image recognition based early warning method according to the present disclosure;
FIG. 3 is a flow diagram of further embodiments of an image recognition based early warning method according to the present disclosure;
fig. 4 is a schematic diagram of stitching the sample images included in a sample image group to generate a model input image;
FIG. 5 is a schematic diagram of randomly arranging the randomly scaled sample images in a randomly scaled sample image group to generate a model input image;
FIG. 6 is a schematic block diagram of some embodiments of an image recognition based early warning device according to the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of an image recognition-based early warning method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may obtain a video 102 captured by a target camera in real time, where the target camera is a camera installed in a gas station area; secondly, in response to determining that the video 102 includes a target image 103, the computing device 101 may input the target image 103 to a pre-trained motion detection model 104 to generate motion type information 105, where the target image 103 is an image including a portrait; then, the computing device 101 may determine the warning information 106 corresponding to the action type information 105; finally, the computing device 101 may control the device 107 matching the warning information 106 to perform the warning operation.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above and implemented, for example, as multiple pieces of software or software modules to provide distributed services, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of an image recognition based early warning method according to the present disclosure is shown. The early warning method based on image recognition comprises the following steps:
step 201, acquiring a video shot by a target camera in real time.
In some embodiments, a subject (e.g., the computing device 101 shown in fig. 1) executing the image recognition-based early warning method may acquire the video captured by the target camera in real time through a wired connection or a wireless connection. The target camera may be a camera installed in a gas station area.
As an example, the target camera may be a camera whose shooting angle has been adjusted in advance. For example, the camera may record footage of the refueling area, or footage of the oil storage area.
As another example, when the target camera is a camera with a networking function, the execution subject may access the IP address corresponding to the target camera to pull the video captured by the target camera in real time.
In response to determining that the video includes the target image, the target image is input to a pre-trained motion detection model to generate motion type information, step 202.
In some embodiments, the execution subject may, in response to determining that the video includes a target image, input the target image to a pre-trained motion detection model to generate motion type information. The target image may be an image including a portrait. First, the execution subject may determine whether the video includes a target image through a face detection model. Second, in response to determining that the video includes a target image, the target image is input to the pre-trained motion detection model to generate the motion type information. The motion type information may represent the action behavior type of the object corresponding to the portrait included in the target image. The face detection model may include, but is not limited to, any of the following: the RetinaNet detection model, the YOLOv2 detection model, the YOLOv3 detection model, and the FCN (Fully Convolutional Network) model. The motion detection model may be a CNN (Convolutional Neural Network) model or an RNN (Recurrent Neural Network) model.
By way of example, the action type information may be, but is not limited to, any of the following: "using a device that can generate an open flame", "operating an oil gun without a static-electricity-removal operation", "preparing to use a device that can generate an open flame", "smoking", and "preparing to smoke".
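As a minimal sketch of this two-stage check (a face detection model first decides whether a frame is a target image; the motion detection model then classifies the behavior), the following Python uses hypothetical `detect_portrait` and `classify_motion` stand-ins in place of the real detection models:

```python
def detect_portrait(frame):
    # Stand-in for the face detection model (e.g. RetinaNet, YOLOv3):
    # returns True when the frame contains a portrait.
    return "person" in frame["objects"]

def classify_motion(frame):
    # Stand-in for the pre-trained motion detection model: returns the
    # action type information for the portrait in the frame.
    return frame.get("action", "none")

def process_video(frames):
    """For each frame that is a target image, generate action type info."""
    results = []
    for frame in frames:
        if detect_portrait(frame):           # the frame is a target image
            results.append(classify_motion(frame))
    return results

frames = [
    {"objects": ["car"]},                          # no portrait: skipped
    {"objects": ["person"], "action": "smoking"},  # target image
]
print(process_video(frames))  # -> ['smoking']
```

Only frames containing a portrait reach the (more expensive) motion model, which is the data-reduction effect the disclosure describes.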
Step 203, determining early warning information corresponding to the action type information.
In some embodiments, the execution subject may determine the early warning information corresponding to the action type information. The early warning information may characterize the degree of risk of the action type information. The execution subject may determine the early warning information corresponding to the action type information by querying a matching table between action type information and early warning information. The matching table may be set manually.
As an example, the action type information may be "preparing to smoke", with corresponding early warning information "danger level: 1"; the action type information may also be "smoking", with corresponding early warning information "danger level: 3".
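A matching table of this kind can be represented as a simple lookup; the table below is a hypothetical illustration that merely mirrors the examples in the text (real tables are set manually):

```python
# Hypothetical matching table between action type information and early
# warning information; entries mirror the examples in the text.
WARNING_TABLE = {
    "preparing to smoke": "danger level: 1",
    "smoking": "danger level: 3",
}

def lookup_warning(action_type, default="danger level: 0"):
    """Determine the early warning information for an action type."""
    return WARNING_TABLE.get(action_type, default)

print(lookup_warning("smoking"))  # -> danger level: 3
```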
Step 204, controlling a device matched with the early warning information to perform an early warning operation.
In some embodiments, the execution subject may control a device matching with the warning information to perform the warning operation.
As an example, when the above early warning information is "danger level: 1", alarm information may be sent to the inspection personnel, notifying them to warn and stop the person performing the dangerous action.
As another example, when the above early warning information is "danger level: 3", a fire extinguishing device, such as a foam extinguisher, may be controlled to extinguish fire in the target area. The target area may be the area photographed by the target camera.
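The dispatch described in these examples can be sketched as follows; the returned actions are illustrative stand-ins, not actual device control interfaces:

```python
def perform_warning(warning_info):
    """Dispatch the early warning operation matched to the early warning
    information; the returned actions are illustrative stand-ins for the
    actual device controls."""
    if warning_info == "danger level: 1":
        # Low risk: notify inspection personnel to warn and stop the person.
        return "alarm sent to inspection personnel"
    if warning_info == "danger level: 3":
        # High risk: trigger fire extinguishing in the target area.
        return "fire extinguishing device activated"
    return "no early warning operation"
```

Keeping the mapping from danger level to device in one place makes it easy to add intermediate levels later.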
The above embodiments of the present disclosure have the following beneficial effects: the image-recognition-based early warning method of some embodiments of the present disclosure improves the safety of the gas station. Specifically, the gas station's safety is low because manual patrols make it difficult to discover safety hazards in time. Based on this, the early warning method first acquires a video shot in real time by a target camera, where the target camera is a camera installed in the gas station area. Because a camera can work around the clock, it greatly improves the efficiency of perceiving dangerous behavior compared with manual patrols; moreover, because the camera records video in real time, dangerous behavior can be perceived in advance, achieving advance prevention. Then, in response to determining that the video includes a target image, the target image is input to a pre-trained motion detection model to generate motion type information, where the target image is an image containing a portrait. In practice, accidents such as gas station explosions are often caused by illegal operations or misbehavior of personnel in the gas station area, for example, using a lighter. Therefore, processing only the target images greatly reduces the amount of data to be processed, improves data processing efficiency, and further improves the efficiency of perceiving behavior that endangers the safety of the gas station. In addition, the early warning information corresponding to the motion type information is determined. Finally, a device matched with the early warning information is controlled to perform the early warning operation. In practice, different actions harm the safety of the gas station to different degrees, so controlling the device matched with the early warning information makes the early warning operation proportionate to the risk. In this way, the safety of the gas station can be greatly improved.
With further reference to fig. 3, a flow 300 of further embodiments of an image recognition based early warning method is shown. The flow 300 of the image recognition-based early warning method includes the following steps:
step 301, acquiring a video shot by a target camera in real time.
In some embodiments, the specific implementation of step 301 and the technical effect thereof may refer to step 201 in those embodiments corresponding to fig. 2, and are not described herein again.
Step 302, in response to determining that the video includes the target image, inputting the target image to a pre-trained motion detection model to generate motion type information.
In some embodiments, the execution subject may, in response to determining that the video includes a target image, input the target image to a pre-trained motion detection model to generate motion type information. The motion detection model may be a YOLOv4 model, and can be obtained through the following training steps:
First, for each candidate video in a candidate video set, determining the images included in the candidate video that satisfy an annotation condition as candidate images, to obtain a candidate image group.
The candidate videos in the candidate video set may be historical videos collected by a camera in the gas station area. The annotation condition may be that an object included in the image is performing a preset operation. For each frame of image in the candidate video, the execution subject may determine the image as a candidate image in response to receiving annotation information, where the annotation information represents that the image satisfies the annotation condition.
As an example, the preset operation may be a "smoking" operation, or a "using a lighter" operation.
Second, for each candidate image in the candidate image group, determining the candidate image and the information of the object to be detected in the candidate image as a training sample, to obtain a training sample set.
The information of the object to be detected may include the position coordinates of the object to be detected in the candidate image and the classification label of the object to be detected. The object to be detected may be the object corresponding to the portrait included in the candidate image.
As an example, the classification label of the object to be detected may be "on the phone" or "smoking".
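One training sample from the second step might be laid out as below; the file name and coordinates are purely illustrative:

```python
# Hypothetical layout of one training sample: the candidate image plus the
# information of the object to be detected (file name and numbers are
# illustrative only).
training_sample = {
    "image": "frame_000123.jpg",        # candidate image
    "object_info": {
        "bbox": [120, 40, 260, 310],    # position coordinates in the image
        "label": "smoking",             # classification label
    },
}
training_sample_set = [training_sample]
```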
Third, training an initial motion prediction model on the training sample set to generate the motion detection model.
The execution subject may train the initial motion prediction model on the training sample set and, in response to determining that the accuracy of the trained initial motion prediction model satisfies a preset condition, determine the trained initial motion prediction model as the motion detection model. The preset condition may be an accuracy of 98% or more.
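The train-until-accurate loop can be sketched as follows, with `train_one_epoch` and `evaluate` as stand-ins for the actual training and evaluation routines:

```python
def train_until_accurate(train_one_epoch, evaluate, threshold=0.98,
                         max_rounds=100):
    """Train the initial motion prediction model until its accuracy
    satisfies the preset condition (here, >= 98%), then stop."""
    for round_no in range(1, max_rounds + 1):
        train_one_epoch()                # one pass over the training samples
        if evaluate() >= threshold:      # accuracy check on held-out data
            return round_no              # trained model is accepted
    return max_rounds
```

The `max_rounds` cap is an added safeguard so training terminates even if the threshold is never reached.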
Optionally, the execution subject training the initial motion prediction model on the training sample set to generate the motion detection model may include the following sub-steps:
The first substep: randomly selecting the candidate images included in a target number of training samples from the training sample set as sample images, to obtain a sample image group set.
Wherein the target number may be 4.
The second substep: stitching the sample images included in each sample image group in the sample image group set to generate a model input image, to obtain a model input image set.
The execution subject may stitch the sample images included in a sample image group together in a randomly selected order to generate a model input image.
As an example, fig. 4 shows a schematic diagram of stitching the sample images included in the sample image group 401 to generate a model input image 402.
Optionally, the execution subject stitching the sample images included in each sample image group in the sample image group set to generate a model input image, to obtain a model input image set, may include the following steps:
First, randomly cropping each sample image in the sample image group to generate a randomly cropped sample image, to obtain a randomly cropped sample image group.
Second, randomly scaling each randomly cropped sample image in the randomly cropped sample image group to generate a randomly scaled sample image, to obtain a randomly scaled sample image group.
Third, randomly arranging the randomly scaled sample images in the randomly scaled sample image group to generate the model input image.
As an example, fig. 5 shows a schematic diagram of randomly arranging the randomly scaled sample images included in the randomly scaled sample image group 501 to generate a model input image 502.
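The three sub-steps (random crop, random scale, random 2x2 arrangement, in the style of mosaic augmentation) can be sketched in pure Python on images represented as nested lists; the crop-size ranges and nearest-neighbour scaling here are simplifying assumptions:

```python
import random

def random_crop(img, ch, cw):
    """Randomly crop a ch x cw window from img (a list of pixel rows)."""
    top = random.randint(0, len(img) - ch)
    left = random.randint(0, len(img[0]) - cw)
    return [row[left:left + cw] for row in img[top:top + ch]]

def scale(img, nh, nw):
    """Nearest-neighbour scaling of img to nh x nw."""
    h, w = len(img), len(img[0])
    return [[img[i * h // nh][j * w // nw] for j in range(nw)]
            for i in range(nh)]

def mosaic(images, out_h=8, out_w=8):
    """Stitch four sample images into one 2x2 model input image:
    random crop, scale each crop to a tile, random arrangement."""
    assert len(images) == 4
    order = list(images)
    random.shuffle(order)                      # random arrangement
    half_h, half_w = out_h // 2, out_w // 2
    tiles = []
    for img in order:
        ch = random.randint(half_h, len(img))  # random crop size
        cw = random.randint(half_w, len(img[0]))
        cropped = random_crop(img, ch, cw)
        tiles.append(scale(cropped, half_h, half_w))
    top = [tiles[0][r] + tiles[1][r] for r in range(half_h)]
    bottom = [tiles[2][r] + tiles[3][r] for r in range(half_h)]
    return top + bottom

imgs = [[[v] * 8 for _ in range(8)] for v in range(4)]
model_input = mosaic(imgs)  # an 8 x 8 grid of pixels drawn from all four
```

A real implementation would also remap the bounding-box coordinates of each sample into the stitched image, which is omitted here.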
The third substep: training the initial motion prediction model based on the model input image set and the training sample set to generate the motion detection model.
The execution subject may input the model input images in the model input image set, together with the position coordinates corresponding to each model input image, into the initial motion prediction model, compare the labels output by the initial motion prediction model with the classification labels corresponding to the model input images, and adjust the parameters of the initial motion prediction model according to the comparison result, thereby training the initial motion prediction model.
Optionally, the initial motion prediction model controls the current learning rate by the following formula:

$$\eta = \eta_{\min}^{(i)} + \frac{1}{2}\left(\eta_{\max}^{(i)} - \eta_{\min}^{(i)}\right)\left(1 + \cos\left(\frac{T_{cur}}{T_{i}}\pi\right)\right)$$

wherein $\eta$ denotes the current learning rate, $i$ denotes the number of the training run, $\eta_{\min}^{(i)}$ denotes the minimum learning rate in the $i$-th training, $\eta_{\max}^{(i)}$ denotes the maximum learning rate in the $i$-th training, $T_{cur}$ denotes the number of batches that have currently been completed, and $T_{i}$ denotes the number of iterations for training the initial motion prediction model on the training sample set during the $i$-th training.
By introducing the cosine function, the formula keeps the learning rate large in the early stage of training, so that the model converges quickly, and small in the later stage of training, so that the model can converge to the optimal solution. This achieves rapid convergence and thus improves the training speed of the model.
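The schedule can be computed as follows (a sketch assuming the standard cosine-annealing form of the formula described above):

```python
import math

def current_learning_rate(eta_min, eta_max, t_cur, t_i):
    """Cosine learning-rate schedule: eta_min / eta_max are the minimum /
    maximum learning rates of the i-th training run, t_cur the number of
    batches completed so far, t_i the total iterations of the run."""
    return eta_min + 0.5 * (eta_max - eta_min) * (
        1.0 + math.cos(math.pi * t_cur / t_i))

# Starts at eta_max and decays smoothly toward eta_min by the end of the run.
start = current_learning_rate(0.001, 0.1, 0, 200)    # -> 0.1
end = current_learning_rate(0.001, 0.1, 200, 200)    # -> 0.001
```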
Step 303, determining the early warning information corresponding to the action type information.
In some embodiments, the specific implementation of step 303 and the technical effect thereof may refer to step 203 in those embodiments corresponding to fig. 2, which are not described herein again.
Step 304, in response to determining that the early warning information represents that the target object is operating a first target item, controlling the broadcasting device to broadcast the warning information.
In some embodiments, the execution subject may control the broadcasting device to broadcast the warning information in response to determining that the early warning information indicates that the target object is operating the first target item.
The execution subject may determine whether the target object is operating the first target item through an object detection model. The object detection model may be a CNN model.
As an example, the first target item may be a "cigarette". The alert message may be "Do not use an open-flame generating instrument or item in the fueling area!"
Step 305, in response to determining that the early warning information represents that the target object is operating the second target object, controlling the broadcasting device to broadcast the warning information; and controlling the sprinkling device to perform sprinkling operation on the area where the target object is located.
In some embodiments, the execution subject may determine, through an object detection model, whether the target object is operating the second target item. The object detection model may be a CNN model.
As an example, the second target item may be an instrument that can generate an open flame, such as a "lighter".
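The device matching of steps 304 and 305 can be sketched as a small dispatch routine; the device interfaces and dictionary keys are assumptions for illustration:

```python
def dispatch_warning(early_warning, broadcaster, sprinkler):
    """Route the early warning to the devices matched with it (a sketch).

    early_warning: dict whose "operating" key names the detected item class,
    either "first_target_item" (e.g. a cigarette) or "second_target_item"
    (e.g. an open-flame instrument such as a lighter).
    """
    actions = []
    if early_warning.get("operating") == "first_target_item":
        # smaller safety impact: warn the user via the broadcasting device
        broadcaster.broadcast(early_warning["message"])
        actions.append("broadcast")
    elif early_warning.get("operating") == "second_target_item":
        # larger safety impact: broadcast and sprinkle the area
        broadcaster.broadcast(early_warning["message"])
        actions.append("broadcast")
        sprinkler.sprinkle(early_warning["area"])
        actions.append("sprinkle")
    return actions
```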
Step 306, determining the face image included in the target image.
In some embodiments, the execution subject may determine the face image included in the target image through a face detection model. The face detection model may include, but is not limited to, any one of the following: RetinaNet detection model, YOLOv2 detection model, YOLOv3 detection model, and FCN (Fully Convolutional Network) model.
Step 307, storing the backup information to the target terminal.
In some embodiments, the execution subject may store the backup information to the target terminal through a wired connection or a wireless connection. The target terminal may be a terminal for information backup. The backup information may include: the target image, the face image, the action type information, and the acquisition time and acquisition place information corresponding to the target image.
As an example, the above-mentioned acquisition time may be "2021-05-12 15:23:43". The acquisition place information may be "XX gas station of XX street XX of XX district XX of XX city".
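The backup information can be pictured as a simple record; a sketch, where the field names and example values are assumptions for illustration:

```python
from dataclasses import dataclass, asdict

@dataclass
class BackupInfo:
    """Information stored to the target terminal for later tracing."""
    target_image: bytes   # the captured target image
    face_image: bytes     # the face image found in the target image
    action_type: str      # the detected action type information
    capture_time: str     # acquisition time of the target image
    capture_place: str    # acquisition place information

# hypothetical example record
record = BackupInfo(b"...", b"...", "smoking", "2021-05-12 15:23:43", "XX gas station")
```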
Optionally, the execution subject may train the motion detection model using, as training samples, the target image, the position coordinates of the object corresponding to the portrait included in the target image, and the classification label of the object.
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the present disclosure first distinguishes the item that the target object is operating. In practice, different items handled by the target object often affect the safety of the gas station differently. For example, when a user holds a lighter but has not produced a flame, the impact on the safety of the gas station is small; in this case it suffices to broadcast the alarm information through the broadcasting device to warn the user. When the user uses the lighter and produces a flame, the impact on the safety of the gas station is large; in this case the alarm information can be broadcast through the broadcasting device and the flame extinguished through the sprinkling device. Secondly, the backup information is stored in the target terminal, which facilitates subsequent tracing of responsibility. Then, in order to improve the detection capability and adaptability of the motion detection model, the target image, the position coordinates of the object corresponding to the portrait included in the target image, and the classification label of the object are used as training samples. This enriches the training samples and thus guarantees the detection capability and adaptability of the trained motion detection model. In addition, the randomly scaled sample images are randomly arranged to generate the model input image, which enriches the data set and improves the robustness of the motion detection model.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an early warning apparatus based on image recognition, which correspond to those shown in fig. 2, and which may be applied in various electronic devices.
As shown in fig. 6, the image recognition-based warning apparatus 600 of some embodiments includes: an acquisition unit 601, an input unit 602, a determination unit 603, and a control unit 604. The acquiring unit 601 is configured to acquire a video shot by a target camera in real time, wherein the target camera is a camera installed in a gas station area; an input unit 602 configured to input, in response to determining that the video includes a target image, the target image being an image including a portrait, to a motion detection model trained in advance to generate motion type information; a determining unit 603 configured to determine early warning information corresponding to the action type information; and a control unit 604 configured to control the devices matched with the warning information to perform warning operation.
It will be understood that the elements described in the apparatus 600 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 600 and the units included therein, and are not described herein again.
Referring now to FIG. 7, a block diagram of an electronic device (such as computing device 101 shown in FIG. 1) 700 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 may include a processing means (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from storage 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via communications means 709, or may be installed from storage 708, or may be installed from ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a video shot by a target camera in real time, wherein the target camera is a camera installed in a gas station area; in response to the fact that the video comprises a target image, inputting the target image to a pre-trained motion detection model to generate motion type information, wherein the target image is an image containing a portrait; determining early warning information corresponding to the action type information; and controlling a device matched with the early warning information to perform early warning operation.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, an input unit, a determination unit, and a control unit. The names of these units do not in some cases constitute a limitation on the units themselves, and for example, the acquisition unit may also be described as a "unit that acquires video captured by a target camera in real time".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept as defined above. For example, the above features and (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure are mutually replaced to form the technical solution.

Claims (10)

1. An early warning method based on image recognition comprises the following steps:
acquiring a video shot by a target camera in real time, wherein the target camera is a camera installed in a gas station area;
in response to determining that the video comprises a target image, inputting the target image to a pre-trained motion detection model to generate motion type information, wherein the target image is an image containing a portrait;
determining early warning information corresponding to the action type information;
and controlling a device matched with the early warning information to perform early warning operation.
2. The method of claim 1, wherein the method further comprises:
determining a face image included in the target image;
storing backup information to a target terminal, wherein the backup information comprises: the target image, the face image, the action type information, and acquisition time and acquisition place information corresponding to the target image.
3. The method of claim 1, wherein the controlling the device matched with the warning information to perform warning operation comprises:
and controlling a broadcasting device to broadcast warning information in response to the fact that the early warning information represents that a target object is operating a first target item, wherein the target object is an object corresponding to a portrait contained in the target image.
4. The method of claim 3, wherein the controlling the device matched with the warning information to perform the warning operation further comprises:
in response to determining that the early warning information indicates that the target object is operating a second target item, controlling the broadcasting device to broadcast the warning information; and controlling a sprinkling device to perform sprinkling operation on the area where the target object is located.
5. The method of claim 1, wherein the motion detection model is trained by:
for each candidate video in the candidate video set, determining an image which meets the labeling condition and is included in the candidate video as a candidate image to obtain a candidate image group;
determining to-be-detected object information corresponding to the to-be-detected objects and the candidate images included in each candidate image group in the candidate image group set as training samples to obtain a training sample set, wherein the to-be-detected object information includes: the position coordinates of the to-be-detected object in the candidate image and the classification label of the to-be-detected object;
and training an initial motion prediction model according to the training sample set to generate the motion detection model.
6. The method of claim 5, wherein training an initial motion prediction model to generate the motion detection model from the set of training samples comprises:
randomly selecting candidate images included by a target number of training samples from the training sample set as sample images to obtain a sample image group set, wherein the target number is 4;
splicing all sample images included in each sample image group in the sample image group set to generate a model input image, so as to obtain a model input image set;
and training the initial motion prediction model according to the model input image set and the training sample set to generate the motion detection model.
7. The method according to claim 6, wherein the stitching processing of the sample images included in each sample image group in the sample image group set to generate a model input image comprises:
carrying out random cropping processing on each sample image in the sample image group to generate a randomly cropped sample image, so as to obtain a randomly cropped sample image group;
carrying out random scaling processing on each randomly cropped sample image in the randomly cropped sample image group to generate a randomly scaled sample image, so as to obtain a randomly scaled sample image group;
and randomly arranging the randomly scaled sample images in the randomly scaled sample image group to generate the model input image.
8. An early warning device based on image recognition comprises:
the system comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is configured to acquire a video shot by a target camera in real time, and the target camera is a camera installed in a gas station area;
an input unit configured to input a target image to a pre-trained motion detection model to generate motion type information in response to a determination that the video includes the target image, wherein the target image is an image including a portrait;
a determining unit configured to determine early warning information corresponding to the action type information;
and the control unit is configured to control the device matched with the early warning information to perform early warning operation.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202110581820.4A 2021-05-27 2021-05-27 Early warning method and device based on image recognition, electronic equipment and medium Pending CN113033529A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110581820.4A CN113033529A (en) 2021-05-27 2021-05-27 Early warning method and device based on image recognition, electronic equipment and medium


Publications (1)

Publication Number Publication Date
CN113033529A true CN113033529A (en) 2021-06-25

Family

ID=76455950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110581820.4A Pending CN113033529A (en) 2021-05-27 2021-05-27 Early warning method and device based on image recognition, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN113033529A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985365A (en) * 2020-08-06 2020-11-24 合肥学院 Straw burning monitoring method and system based on target detection technology
CN111985385A (en) * 2020-08-14 2020-11-24 杭州海康威视数字技术股份有限公司 Behavior detection method, device and equipment
CN112668663A (en) * 2021-01-05 2021-04-16 南京航空航天大学 Aerial photography car detection method based on YOLOv4
US20210133468A1 (en) * 2018-09-27 2021-05-06 Beijing Sensetime Technology Development Co., Ltd. Action Recognition Method, Electronic Device, and Storage Medium
CN112818757A (en) * 2021-01-13 2021-05-18 上海应用技术大学 Gas station safety detection early warning method and system


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569671A (en) * 2021-07-13 2021-10-29 北京大数医达科技有限公司 Abnormal behavior alarm method and device
CN114511046A (en) * 2022-04-19 2022-05-17 阿里巴巴(中国)有限公司 Object recognition method and device
CN115375855A (en) * 2022-10-25 2022-11-22 四川公路桥梁建设集团有限公司 Visualization method and device for engineering project, electronic equipment and readable medium
CN117011787A (en) * 2023-07-12 2023-11-07 中关村科学城城市大脑股份有限公司 Information processing method and device applied to gas station and electronic equipment
CN117011787B (en) * 2023-07-12 2024-02-02 中关村科学城城市大脑股份有限公司 Information processing method and device applied to gas station and electronic equipment
CN118101926A (en) * 2024-02-29 2024-05-28 北京积加科技有限公司 Video generation method, device, equipment and medium based on monitoring camera adjustment

Similar Documents

Publication Publication Date Title
CN113033529A (en) Early warning method and device based on image recognition, electronic equipment and medium
WO2020006963A1 (en) Method and apparatus for generating image detection model
CN104137154B (en) Systems and methods for managing video data
US11461995B2 (en) Method and apparatus for inspecting burrs of electrode slice
CN114625241A (en) Augmented reality augmented context awareness
CN113569825B (en) Video monitoring method and device, electronic equipment and computer readable medium
CN110996066B (en) Accident backtracking method and device
JP2022106926A (en) Camera shielding detection method, device, electronic apparatus, storage medium and computer program
CN111612422A (en) Method and device for responding to emergency, storage medium and equipment
CN115379125B (en) Interactive information sending method, device, server and medium
CN112052911A (en) Method and device for identifying riot and terrorist content in image, electronic equipment and storage medium
CN115426350A (en) Image uploading method, image uploading device, electronic equipment and storage medium
CN115359391A (en) Inspection image detection method, inspection image detection device, electronic device and medium
CN115766401B (en) Industrial alarm information analysis method and device, electronic equipment and computer medium
CN108961098A (en) Vehicle supervision method, apparatus, system and computer readable storage medium
CN112307323B (en) Information pushing method and device
CN110414625B (en) Method and device for determining similar data, electronic equipment and storage medium
CN112232326A (en) Driving information generation method and device, electronic equipment and computer readable medium
CN111586295A (en) Image generation method and device and electronic equipment
CN111695069A (en) Police service resource visualization method, system, device and storage medium
CN112306788A (en) Program dynamic monitoring method and device
CN111898529B (en) Face detection method and device, electronic equipment and computer readable medium
CN113094272B (en) Application testing method, device, electronic equipment and computer readable medium
CN110222846B (en) Information security method and information security system for internet terminal
CN113556502A (en) Emergency state communication method and system of law enforcement recorder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210625