CN110363153B - Wading detection method and device, server and computer readable storage medium - Google Patents

Info

Publication number
CN110363153B
Authority
CN
China
Prior art keywords
image
deep learning
water area
learning model
training
Prior art date
Legal status
Active
Application number
CN201910643192.0A
Other languages
Chinese (zh)
Other versions
CN110363153A (en)
Inventor
王帅龙
Current Assignee
Guangzhou Tupu Network Technology Co ltd
Original Assignee
Guangzhou Tupu Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Tupu Network Technology Co., Ltd.
Priority to CN201910643192.0A
Publication of CN110363153A
Application granted
Publication of CN110363153B
Legal status: Active

Classifications

    • G06F 18/22 (Physics; Computing; Pattern recognition; Analysing): Matching criteria, e.g. proximity measures
    • G06N 3/045 (Computing arrangements based on biological models; Neural networks; Architecture): Combinations of networks
    • G06N 3/08 (Neural networks): Learning methods
    • G06T 7/11 (Image analysis; Segmentation): Region-based segmentation
    • G06V 20/10 (Scenes; Scene-specific elements): Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides a wading detection method and device, a server, and a computer-readable storage medium, relating to the technical field of image processing. The method comprises the following steps: acquiring an image to be processed, where the image to be processed is an image captured of a scene that includes a water area; determining the image of the area where the water area is located in the image to be processed as a target image; and detecting the target image through a trained deep learning model to obtain a detection result indicating whether a wading event exists in the water area. The method can alleviate the problem of low efficiency in monitoring water areas.

Description

Wading detection method and device, server and computer readable storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to a wading detection method and device, a server, and a computer-readable storage medium.
Background
In the field of security supervision, pedestrian wading is a focus of attention in public safety. With the improvement of environmental conditions, more and more people play in water areas such as rivers, reservoirs, and lakes. Playing in the water, however, carries risks; for example, a player may accidentally drown. At present, the safety supervision of a water area usually relies on manual patrol, but patrol personnel have limited energy and cannot watch the supervised water area at all times, so the supervision efficiency is low.
Disclosure of Invention
The application provides a wading detection method and device, a server, and a computer-readable storage medium, which can alleviate the problem of low efficiency in monitoring a water area.
In order to achieve the above purpose, the technical solutions provided in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides a wading detection method, where the method includes:
acquiring an image to be processed, wherein the image to be processed is an image acquired by acquiring a scene comprising a water area; determining an image of an area where a water area is located from the image to be processed as a target image; and detecting the target image through the trained deep learning model to obtain a detection result indicating whether the water area has the wading event or not.
In the above embodiment, the server can automatically detect whether a wading event exists in the water area, which makes continuous detection practical, improves the efficiency of monitoring the water area, and, because no manual patrol of the water area is needed, helps reduce the human resources required for water area supervision.
With reference to the first aspect, in some optional embodiments, detecting the target image through the trained deep learning model to obtain a detection result indicating whether the water area has the wading event includes:
determining whether a target object exists in the target image through the deep learning model; when the target object exists in the target image, determining that the wading event exists in the water area; and when the target object does not exist in the target image, determining that the wading event does not exist in the water area.
In the above embodiment, since the target image is a scene image of the water area, whether a wading event exists in the water area can be determined by detecting whether a target object exists in the target image. This facilitates accurate detection of wading events.
With reference to the first aspect, in some optional embodiments, determining whether a target object exists in the target image through the deep learning model includes:
extracting a first image feature of an object in the target image through the deep learning model; matching the first image features with pre-stored second image features, wherein the second image features are image features of a specified object extracted from a training image set containing the specified object by using the deep learning model; and when the first image characteristic is matched with the second image characteristic, determining that the target object exists in the target image.
In the above embodiment, since the second image feature is a feature of the designated object, and the designated object can be understood as the target object that needs to be identified, matching the first image feature against the second image feature can determine whether the target object exists in the target image.
With reference to the first aspect, in some optional embodiments, the designated object comprises a wading person.
In the above embodiment, when the designated object is a wading person, the method can detect whether a person is wading in the water area, which facilitates the safety supervision of wading.
With reference to the first aspect, in some optional embodiments, determining, from the to-be-processed image, an image of an area where a water area is located as a target image includes:
identifying a water area in the image to be processed through the deep learning model; and determining the image of the area where the water area is located in the image to be processed as the target image.
In the above embodiment, performing image segmentation through the deep learning model helps to quickly and accurately segment the target image from the image to be processed.
With reference to the first aspect, in some optional embodiments, before detecting the target image through the trained deep learning model, the method further includes:
acquiring a training image set comprising a specified object, wherein the training image set comprises a plurality of training images, and each training image comprises the specified object and a label corresponding to the specified object; and training a preset deep learning model according to the training image set and a preset deep learning algorithm to obtain the deep learning model after training.
In the above embodiment, the preset deep learning model is trained by using the training image set, which helps to improve the accuracy of the deep learning model in detecting the target object.
With reference to the first aspect, in some optional embodiments, the method further comprises:
and when the detection result shows that the wading event exists in the water area, sending prompt information to a specified user terminal.
In this embodiment, the prompt information sent to the user terminal helps the manager discover a wading event in the water area in time, which facilitates the safety supervision of the water area.
In a second aspect, an embodiment of the present application further provides a wading detection device, where the device includes:
the image acquisition unit is used for acquiring an image to be processed, wherein the image to be processed is an image acquired by acquiring a scene comprising a water area;
the image segmentation unit is used for determining an image of an area where a water area is located from the image to be processed to be used as a target image;
and the detection unit is used for detecting the target image through the trained deep learning model to obtain a detection result indicating whether the water area has the wading event or not.
In a third aspect, an embodiment of the present application further provides a server, including a memory and a processor coupled to each other, where the memory stores a computer program, and when the computer program is executed by the processor, the server is caused to perform the above method.
In a fourth aspect, the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the above method.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be appreciated that the following drawings depict only certain embodiments of the application and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic view of communication connection between a monitoring device, a user terminal, and a server according to an embodiment of the present application.
Fig. 2 is a block diagram of a server according to an embodiment of the present disclosure.
Fig. 3 is a schematic flow chart of the wading detection method provided in the embodiment of the present application.
Fig. 4a is a schematic diagram of an image to be processed according to an embodiment of the present application.
Fig. 4b is a schematic diagram of the target image obtained by segmenting the image to be processed shown in Fig. 4a.
Fig. 5 is a functional block diagram of a wading detection device according to an embodiment of the present application.
Icon: 10-a server; 11-a processing module; 12-a storage module; 13-a communication module; 20-a monitoring device; 30-a user terminal; 100-wading detection means; 110-an image acquisition unit; 120-an image segmentation unit; 130-detection unit.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It should be noted that the terms "first," "second," and the like are used merely to distinguish one description from another, and are not intended to indicate or imply relative importance.
Referring to fig. 1, the present embodiment provides a server 10, where the server 10 may establish a communication connection with a monitoring device 20 and a user terminal 30 through a network to perform data interaction. The monitoring device 20 and the user terminal 30 can establish a communication connection through a network for data interaction. The number of the monitoring devices 20 communicatively connected to the server 10 may be one or more. The number of the user terminals 30 communicatively connected to the server 10 may be one or more.
Understandably, the network may be, but is not limited to, a wired network or a wireless network. The number of the monitoring devices 20 and the number of the user terminals 30 communicatively connected to the server 10 may be set according to actual situations, and is not limited specifically here.
The server 10 may be used to perform or implement the wading detection method described below. For example, the server 10 may be configured to perform wading detection on an image captured by the monitoring device 20, detecting the image to determine whether a wading event exists in the actual scene corresponding to the image. A wading event can be understood as a person or another object entering the water area in the actual scene corresponding to the image; other objects include, but are not limited to, animals, garbage, and the like. Animals include, but are not limited to, dogs, cats, ducks, etc., and garbage includes, but is not limited to, plastic foam, rubber, etc. The server 10 may be, but is not limited to, a cluster server, a distributed server, a cloud server, and the like.
The monitoring device 20 includes at least one camera for capturing images; of course, the monitoring device 20 may further include a communication module for transmitting the captured images to the user terminal 30 or the server 10. The camera is used to monitor the water area environment, so, understandably, the scene captured by the camera includes a water area.
For example, when laying out cameras, a plurality of cameras can be placed at intervals along the edge of the water area that needs supervision. Suppose the object monitored for wading events is a person: if a person wades, the person usually enters from the edge of the water area. Therefore, the monitoring picture (image) collected by each camera can cover the edge of the water area (such as a lake shore, a river bank, or the edge of a reservoir), and the cameras do not need to capture other areas of the water, which reduces the amount of collected image data and thereby the computation required for image processing.
The number of cameras included in the monitoring device 20 may be set according to practical situations, and may be one or more, and is not limited specifically here.
The user terminal 30 may be used to display images captured by the monitoring device 20 for review by a manager. In addition, the user terminal 30 may be configured to receive prompt information sent by the server 10; such prompt information is sent when the server 10, after detecting an image acquired by the monitoring device 20, determines that a wading event exists in the actual scene corresponding to the image.
The user terminal 30 may be, but is not limited to, a smart phone, a Personal Computer (PC), a tablet PC, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), and the like. Understandably, when the monitoring device 20 has the function of information prompt or the user terminal 30 has the function of image acquisition, the monitoring device 20 and the user terminal 30 may be the same device or two independent devices respectively.
Referring to fig. 2, in the present embodiment, the server 10 may include a processing module 11 and a storage module 12. Of course, the server 10 may further include a communication module 13 and the wading detection device 100, and the processing module 11, the communication module 13, the storage module 12 and the wading detection device 100 are electrically connected directly or indirectly to implement data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The processing module 11 may be an integrated circuit chip having signal processing capabilities. The processing module 11 may be a general-purpose processor. For example, the Processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Network Processor (NP), or the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed.
The memory module 12 may be, but is not limited to, a random access memory, a read only memory, a programmable read only memory, an erasable programmable read only memory, an electrically erasable programmable read only memory, and the like. In this embodiment, the storage module 12 may be used to store a deep learning model, an image to be processed, and the like. Of course, the storage module 12 may also be used to store a program, and the processing module 11 executes the program after receiving the execution instruction.
The communication module 13 is configured to establish a communication connection between the server 10 and the monitoring device 20 and the user terminal 30 via a network, and to transmit and receive data via the network.
The wading detection device 100 includes at least one software function module which can be stored in the storage module 12 in the form of software or firmware (firmware) or is fixed in an Operating System (OS) of the server 10. The processing module 11 is used to execute executable modules stored in the storage module 12, such as software functional modules and computer programs included in the wading detection apparatus 100.
It is understood that the configuration shown in fig. 2 is merely a schematic diagram of the configuration of the server 10, and that the server 10 may include more components than those shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 3, an embodiment of the present application further provides a wading detection method, which may be applied to the server 10, and the server 10 executes or implements each step of the wading detection method.
In this embodiment, the wading detection method may include the following steps:
step S210, acquiring an image to be processed, wherein the image to be processed is an image acquired by acquiring a scene comprising a water area;
step S220, determining an image of an area where a water area is located from the image to be processed as a target image;
and step S230, detecting the target image through the trained deep learning model to obtain a detection result indicating whether the water area has the wading event or not.
In this embodiment, the method segments the image of the area where the water area is located from the image to be processed as the target image, and then detects whether a wading event exists in the target image. Because the server 10 performs the detection automatically, the water area can be checked for wading events at all times, which improves the efficiency of monitoring the water area and reduces the human resources required.
In addition, since the image to be processed usually contains both a water area scene and a non-water area scene, detecting only the segmented image area corresponding to the water area, rather than the area corresponding to the non-water area, reduces the region of the image to be recognized and thus the computation required for image processing. Moreover, this prevents scenes in the non-water area from interfering with the detection of wading events, which helps to improve the accuracy of wading detection.
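Read as code, steps S210 to S230 compose into a single pipeline. The following minimal Python sketch only fixes the shape of that pipeline; `segment_water_region` and `detect_wading` are hypothetical placeholders for the two deep learning models discussed below, not functions defined by the patent.

```python
import numpy as np

def segment_water_region(image: np.ndarray) -> np.ndarray:
    """Placeholder for the segmentation model of step S220: returns the
    image of the area where the water is located (here, the whole frame)."""
    return image

def detect_wading(target_image: np.ndarray) -> bool:
    """Placeholder for the detection model of step S230: returns True
    when a target object (e.g. a person) appears in the target image."""
    return False

def wading_detection_pipeline(image: np.ndarray) -> bool:
    # Step S210: the image to be processed is passed in directly here.
    target_image = segment_water_region(image)   # step S220
    return detect_wading(target_image)           # step S230

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy camera frame
print("wading event:", wading_detection_pipeline(frame))
```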
The steps of the wading detection method shown in figure 3 will now be described in detail:
step S210, acquiring an image to be processed, wherein the image to be processed is an image acquired by acquiring a scene including a water area.
In the present embodiment, the server 10 may directly or indirectly acquire the image to be processed from the monitoring apparatus 20. For example, the monitoring device 20 may directly send the image of the captured water area scene to the server 10, or the monitoring device 20 may send the image of the captured water area scene to the user terminal 30, and then the user terminal 30 sends the image to the server 10, so that the server 10 acquires the image to be processed.
Step S220, determining an image of the area where the water area is located from the image to be processed as a target image.
In this embodiment, the method may segment the image of the area where the water area is located from the image to be processed, and use the image of the area where the water area is located as the target image.
For example, the server 10 may segment an image of an area of a water area in the image to be processed from the image to be processed, and use the segmented image of the area of the water area as a target image, so as to detect the target image later.
Referring to fig. 4a and fig. 4b in combination, fig. 4a may be understood as a to-be-processed image acquired by the server 10, where the to-be-processed image includes a region where a water area is located and a region where a non-water area is located. Fig. 4b can be understood as the target image left after the image to be processed in fig. 4a is subjected to image segmentation. The target image is the image of the area where the water area is located in the image to be processed.
Understandably, after the target image is segmented from the image to be processed, the area of the image that the server 10 needs to detect is reduced, which reduces the computation required for detection and improves detection efficiency. In addition, after the non-water area is removed from the image to be processed, scenes in the non-water area can no longer interfere with the server 10 when it detects whether a wading event exists in the water area, which improves the accuracy of wading event detection.
In this embodiment, step S220 may include: identifying a water area in the image to be processed through the deep learning model; and determining the image of the area where the water area is located in the image to be processed as the target image.
In this embodiment, the server 10 may segment the image to be processed with the trained deep learning model to obtain the target image. Before step S220, the method may therefore include training a preset deep learning model to obtain the trained model.
The principle of training the preset deep learning model may be as follows. First, a training image set labeled for water segmentation is prepared. The training image set includes a plurality of training images; each training image includes a water area, a non-water area, and the segmentation line between them, with the water and non-water areas labeled with different types of information to distinguish them. The labeled training images are then used to train the preset deep learning model. For example, the preset deep learning model may include a DeepLab network: the DeepLab network performs semantic segmentation on an input image (the image to be processed), and its output matrix is compared with the annotation of the input image to obtain the difference between them; the parameters of the DeepLab network are continuously adjusted according to this difference until the output is close to the actual (labeled) result, which completes the training of the model.
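As a concrete illustration of this training loop, the sketch below fine-tunes torchvision's DeepLab v3 for two classes (water / non-water). The patent does not specify a framework, loss, or hyperparameters, so the choices here (torchvision, cross-entropy, Adam, learning rate) are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Two classes: 0 = non-water area, 1 = water area.
# weights/weights_backbone are None so the sketch runs without downloads;
# in practice a pretrained backbone would normally be used.
model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """One training step: compare the network output with the labeled
    water / non-water mask and adjust the DeepLab parameters accordingly.
    images: [N, 3, H, W] float; masks: [N, H, W] long (0 or 1)."""
    model.train()
    optimizer.zero_grad()
    out = model(images)["out"]      # [N, 2, H, W] per-pixel class scores
    loss = criterion(out, masks)    # difference from the annotation
    loss.backward()                 # propagate the difference
    optimizer.step()                # adjust the network parameters
    return loss.item()

# Dummy batch standing in for a labeled training image set.
images = torch.randn(2, 3, 256, 256)
masks = torch.randint(0, 2, (2, 256, 256))
print("loss:", train_step(images, masks))
```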
After the training of the model is completed, the to-be-processed image is input into the trained deep learning model, so that the water area and the non-water area in the to-be-processed image can be automatically segmented, and the image of the area where the water area is located is output to serve as a target image.
In this embodiment, performing the segmentation through the deep learning model helps to quickly and accurately segment the target image from the image to be processed, thereby improving the efficiency of image detection.
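Once trained, inference reduces to predicting a per-pixel class and masking out the non-water pixels. A minimal sketch, continuing the assumptions above (`model` is the two-class DeepLab from the previous snippet):

```python
import torch

@torch.no_grad()
def extract_target_image(model, image: torch.Tensor) -> torch.Tensor:
    """Step S220 as a function: segment the image to be processed and keep
    only the area where the water is located.  image: [3, H, W] float."""
    model.eval()
    out = model(image.unsqueeze(0))["out"]   # [1, 2, H, W] class scores
    water = (out.argmax(dim=1) == 1)         # [1, H, W], True where water
    return image * water.squeeze(0)          # non-water pixels zeroed out
```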
As an alternative embodiment, in step S220, the target image may be manually segmented from the image to be processed.
And step S230, detecting the target image through the trained deep learning model to obtain a detection result indicating whether the water area has the wading event or not.
In this embodiment, the deep learning model used for segmenting the image to be processed (referred to as the first deep learning model for ease of description) and the deep learning model used for detecting the target image (referred to as the second deep learning model) may be the same deep learning model or two mutually independent deep learning models.
Understandably, if the first deep learning model and the second deep learning model are the same model, that model can be used both to segment the image to be processed into the target image and to detect the target image. During model training, it then needs to be trained both on segmenting the image to be processed and on detecting the target image.
If the first deep learning model and the second deep learning model are not the same deep learning model, when the models are used, the first deep learning model is used for segmenting the image to be processed, and the second deep learning model is used for detecting the target image. In the model training, the training may be performed separately.
The obtained detection result may be a first result indicating that the water area has the wading event, or a second result indicating that the water area has no wading event.
In this embodiment, step S230 may include: determining whether a target object exists in the target image through the deep learning model; when the target object exists in the target image, determining that the wading event exists in the water area; and when the target object does not exist in the target image, determining that the wading event does not exist in the water area.
In this embodiment, the target object may be a person, another animal, garbage, or the like. For example, if the target object is a person, the deep learning model detects whether a person exists in the target image; when a person is detected in the target image, it can be understood that a person is in the water area, i.e., a wading event exists. If no person is detected in the target image, it can be understood that no person is in the water area, i.e., no wading event exists. Of course, the target object may also be an animal such as a dog or a duck, or the garbage described above; the specific type of the target object is not limited here.
In the above embodiment, since the target image is a scene image of a water area, it is possible to determine whether or not a wading event is present in the water area by detecting whether or not a target object is present in the target image. Therefore, the method is beneficial to accurately detecting the wading event, and the reliability of wading detection is improved.
As an optional implementation, determining whether a target object exists in the target image through the deep learning model includes: extracting a first image feature of an object in the target image through the deep learning model; matching the first image features with pre-stored second image features, wherein the second image features are image features of a specified object extracted from a training image set containing the specified object by using the deep learning model; and when the first image characteristic is matched with the second image characteristic, determining that the target object exists in the target image. The designated object can be understood as a preset target object to be identified.
In this embodiment, the deep learning model may perform feature extraction on the target image through a deep learning algorithm to obtain the first image feature of an object in the target image. For example, the deep learning model may include a Single Shot MultiBox Detector (SSD). The SSD is used to extract the first image feature of an object in the target image; the first image feature includes, but is not limited to, the contour shape of the object, the label type of the object, and the like. Using the SSD allows image features of objects in the target image to be extracted quickly and accurately, which helps to improve the efficiency and accuracy of wading event detection.
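The patent names SSD but not a particular implementation. As an illustration only, the sketch below uses torchvision's COCO-pretrained SSD (which downloads weights on first use) to check the target image for a person; a system following the patent would instead use a detector trained on its own wading image set, and the score threshold here is an assumption.

```python
import torch
from torchvision.models.detection import ssd300_vgg16

detector = ssd300_vgg16(weights="DEFAULT").eval()  # COCO-pretrained stand-in
PERSON_LABEL = 1  # 'person' in the COCO label map

@torch.no_grad()
def person_in_target_image(target_image: torch.Tensor,
                           score_threshold: float = 0.5) -> bool:
    """Return True when a person is detected in the target image.
    target_image: [3, H, W] float tensor with values in [0, 1]."""
    detections = detector([target_image])[0]
    confident = detections["scores"] >= score_threshold
    return bool((detections["labels"][confident] == PERSON_LABEL).any())

print(person_in_target_image(torch.rand(3, 300, 300)))  # random frame
```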
For example, the deep learning model may identify a first label (first image feature) of an object in the target image and then compare the first label to a second label (second image feature) of the specified object. And if the first label is the same as the second label, the target object is considered to exist in the target image. And if the first label is different from the second label, the target object does not exist in the target image.
Alternatively, the deep learning model may identify a first contour shape of an object in the target image and compare it with a second contour shape of the designated object; if the similarity between the two contour shapes is greater than or equal to a preset threshold, the target object is determined to exist in the target image, and if the similarity is smaller than the preset threshold, it does not. The preset threshold may be set according to the actual situation, for example, 95% or 90%.
Of course, the image feature (which may refer to the first image feature or the second image feature) may also be other features, such as Histogram of Oriented Gradient (HOG) feature, and the type of the image feature is not particularly limited herein.
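The patent does not fix a matching criterion for comparing the first and second image features. One plausible reading is a similarity score against a preset threshold, as in this sketch; the cosine similarity measure and the 0.90 threshold are assumptions, not taken from the patent.

```python
import numpy as np

def features_match(first: np.ndarray, second: np.ndarray,
                   threshold: float = 0.90) -> bool:
    """Return True when the first image feature matches the pre-stored
    second image feature under a cosine-similarity criterion."""
    sim = float(np.dot(first, second) /
                (np.linalg.norm(first) * np.linalg.norm(second) + 1e-12))
    return sim >= threshold  # preset threshold, e.g. 0.90 or 0.95

# Example: a stored feature of the designated object vs. a fresh extraction.
stored = np.array([0.20, 0.90, 0.40])
fresh = np.array([0.22, 0.88, 0.41])
print(features_match(fresh, stored))  # True: the target object is present
```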
In the above embodiment, since the second image feature is a feature of the designated object, and the designated object can be understood as a designated target object that needs to be identified, it can be used to determine whether the target object exists in the target image by matching the first image feature with the second image feature.
As an alternative embodiment, the designated object includes a wading person. Of course, the designated object may also be the above-mentioned animals, garbage, or the like.
Understandably, when the designated object is a person, the target object matched against it is also a person. The server 10 can then detect whether a person is wading in the water area, which facilitates the safety supervision of wading. If the designated object is an animal or garbage, the matched target object is the corresponding animal or garbage, and the server 10 can identify animals or garbage entering the water area and send a prompt.
Prior to step S230, the method may further comprise:
acquiring a training image set comprising a specified object, wherein the training image set comprises a plurality of training images, and each training image comprises the specified object and a label corresponding to the specified object; and training a preset deep learning model according to the training image set and a preset deep learning algorithm to obtain the trained deep learning model. The preset deep learning algorithm includes, but is not limited to, the R-CNN (Region-based Convolutional Neural Networks) algorithm, the Faster R-CNN algorithm, the SSD algorithm, and the like.
In this embodiment, before the deep learning model is used to detect the target object (i.e., before step S230), it is necessary to ensure that the model being used has completed training. If the model has already been trained, it need not be trained again; of course, an already trained model can be trained further to improve the accuracy of wading event detection. If the model has not completed training, it must be trained first.
The principle of training the deep learning model to detect the target object may be as follows. First, a training image set is prepared, each training image including a designated object and the label corresponding to that object. The training image set is then input into the deep learning model to be trained; the model outputs the label of an object found in a training image, the output label is compared with the actual label of the object, and the model is adjusted according to the difference until the output label is close to the actual label, at which point training is considered complete. Understandably, training the deep learning model to detect the target object is similar in principle to training it to segment the image to be processed.
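As a concrete, purely illustrative counterpart to this training principle, the sketch below runs one fine-tuning step of torchvision's SSD on a toy labeled batch. The two-class setup, optimizer, and hyperparameters are assumptions; the patent only describes the label-comparison loop.

```python
import torch
from torchvision.models.detection import ssd300_vgg16

# Hypothetical two-class setup: 0 = background, 1 = designated object.
# weights/weights_backbone are None so the sketch runs without downloads.
model = ssd300_vgg16(weights=None, weights_backbone=None, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, targets) -> float:
    """One training step on a labeled batch: each target carries the
    designated object's bounding box and the label corresponding to it."""
    model.train()
    loss_dict = model(images, targets)   # SSD returns its loss terms here
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()                      # adjust by the output/label difference
    optimizer.step()
    return float(loss)

# One dummy training image containing a single labeled designated object.
images = [torch.rand(3, 300, 300)]
targets = [{"boxes": torch.tensor([[50.0, 60.0, 120.0, 200.0]]),
            "labels": torch.tensor([1])}]
print("loss:", train_step(images, targets))
```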
In this embodiment, if the designated object is a person, each training image includes a portrait and a label representing that portrait. The portrait may be a whole-body portrait or a partial-body portrait (for example, a head portrait, an upper-body portrait, or a lower-body portrait). Training images can be chosen according to whether the depicted behavior satisfies the wading-event condition, thereby improving the accuracy of wading event detection.
The wading-event condition may be determined, for example, as follows: when a body part of a person is in the water (e.g., the person's feet or legs), the situation is considered a wading event, i.e., a wading event exists in the water area. If a person rides a boat in the water area, whether that behavior counts as a wading event can depend on the actual situation. For example, if boats are prohibited from entering the water area and a person rides a boat there, it is considered a wading event; if boats are allowed and a person rides one, the behavior may be determined not to be wading, and in that case no wading event exists in the water area.
Suppose the wading-event condition is that a person's feet immersed in the water constitute a wading event. During model training, the training images then include scene images of people's feet immersed in a water area; after training on such images, the deep learning model can accurately detect whether a wading event exists in the water area.
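The patent realizes the "body part in water" condition through training data rather than an explicit rule. Purely as an illustration, a geometric approximation of the same condition could intersect a detected person's bounding box with the segmented water mask; everything in this sketch (the overlap measure, the 5% threshold) is an assumption, not the patent's method.

```python
import numpy as np

def is_wading(person_box: tuple, water_mask: np.ndarray,
              min_overlap: float = 0.05) -> bool:
    """Approximate wading-event check: a person counts as wading when part
    of their bounding box overlaps the water area.
    person_box: (x1, y1, x2, y2) ints; water_mask: [H, W] boolean array."""
    x1, y1, x2, y2 = person_box
    region = water_mask[y1:y2, x1:x2]
    if region.size == 0:
        return False
    overlap = region.mean()           # fraction of box pixels over water
    return overlap >= min_overlap     # e.g. feet/legs entering the water

mask = np.zeros((480, 640), dtype=bool)
mask[300:, :] = True                   # lower part of the frame is water
print(is_wading((100, 280, 160, 340), mask))  # True: box dips into water
```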
As an optional implementation manner, after step S230, the method may further include: when the detection result shows that a wading event exists in the water area, sending prompt information to the specified user terminal 30.
In this embodiment, the server 10 can establish a communication connection with a specified user terminal 30. When a wading event is detected in the water area, the server 10 sends prompt information to the user terminal 30, and the user terminal 30 issues a prompt upon receipt. The prompting mode includes, but is not limited to, a network voice call, vibration, a light signal, etc.; the prompting mode is not specifically limited here.
Understandably, the prompt issued by the user terminal 30 helps the manager discover a wading event in the water area in time and facilitates the safety supervision of the water area. The manager does not need to watch the monitoring picture at all times, which reduces the manager's workload.
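The patent does not define a message format or transport for the prompt information. A minimal sketch, assuming a hypothetical HTTP endpoint exposed by the user terminal 30 (the URL, payload shape, and field names are all invented for illustration):

```python
import requests

def send_wading_alert(terminal_url: str, camera_id: str) -> None:
    """Send prompt information to the specified user terminal when a
    wading event is detected.  Endpoint and payload are hypothetical."""
    payload = {
        "event": "wading_detected",
        "camera_id": camera_id,
        "message": "A wading event was detected in the monitored water area.",
    }
    # Any reachable terminal endpoint would do; HTTP errors are raised.
    requests.post(terminal_url, json=payload, timeout=5).raise_for_status()

# send_wading_alert("http://user-terminal.example/alerts", camera_id="cam-01")
```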
Referring to fig. 5, a wading detection apparatus 100 is provided, which may be applied to the server 10. The wading detection apparatus 100 may be used to perform or implement the steps of the wading detection method described above, and may include an image acquisition unit 110, an image segmentation unit 120, and a detection unit 130.
The image acquiring unit 110 is configured to acquire an image to be processed, where the image to be processed is an image acquired by acquiring a scene including a water area.
An image segmentation unit 120, configured to determine, from the to-be-processed image, an image of an area where a water area is located as a target image.
And the detection unit 130 is configured to detect the target image through the trained deep learning model, and obtain a detection result indicating whether the water area has the wading event.
Optionally, the detection unit 130 may be further configured to: determining whether a target object exists in the target image through the deep learning model; when the target object exists in the target image, determining that the wading event exists in the water area; and when the target object does not exist in the target image, determining that the wading event does not exist in the water area.
Optionally, the detection unit 130 may be further configured to: extracting a first image feature of an object in the target image through the deep learning model; matching the first image features with pre-stored second image features, wherein the second image features are image features of a specified object extracted from a training image set containing the specified object by using the deep learning model; and when the first image characteristic is matched with the second image characteristic, determining that the target object exists in the target image. Wherein the designated object may include a person.
Optionally, the image segmentation unit 120 may be further configured to: identifying a water area in the image to be processed through the deep learning model; and determining the image of the area where the water area is located in the image to be processed as the target image.
Optionally, the wading detection apparatus 100 may further include a model training unit. Before the detection unit 130 detects the target image through the trained deep learning model, the image acquisition unit 110 may be further configured to: acquiring a training image set comprising a specified object, wherein the training image set comprises a plurality of training images, and each training image comprises the specified object and a label corresponding to the specified object. The model training unit is used for: and training a preset deep learning model according to the training image set and a preset deep learning algorithm to obtain the deep learning model after training.
Optionally, the wading detection apparatus 100 may further include a prompt unit. The prompting unit is used for: and when the detection result shows that the wading event exists in the water area, sending prompt information to the specified user terminal 30.
It should be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the wading detection apparatus 100 described above may refer to the corresponding process of each step in the foregoing method, and will not be described in too much detail herein.
The embodiment of the application also provides a computer readable storage medium. The readable storage medium has stored therein a computer program that, when run on a computer, causes the computer to execute the wading detection method as described in the above embodiments.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (a CD-ROM, a USB disk, a removable hard disk, etc.) and includes several instructions that enable a computer device (a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present application.
In summary, the present application provides a wading detection method and device, a server, and a computer-readable storage medium. The method comprises: acquiring an image to be processed, where the image to be processed is captured of a scene that includes a water area; determining the image of the area where the water area is located in the image to be processed as a target image; and detecting the target image through a trained deep learning model to obtain a detection result indicating whether a wading event exists in the water area. In this scheme, the server automatically detects whether a wading event exists in the water area, which enables continuous detection, improves the efficiency of monitoring the water area, and, because no manual patrol of the water area is needed, reduces the human resources required for water area detection.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, system, and method may be implemented in other ways. The apparatus, system, and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (7)

1. A wading detection method, the method comprising:
acquiring an image to be processed, wherein the image to be processed is an image acquired by acquiring a scene comprising a water area;
inputting the image to be processed into a deep learning model to obtain a target image, wherein the target image is an image of the area where the water area is located in the image to be processed;
extracting first image features of an object in the target image through a trained deep learning model, wherein the deep learning model comprises a Single Shot MultiBox Detector (SSD), and the first image features comprise the contour shape of the object and the label type of the object;
matching the first image features with pre-stored second image features, wherein the second image features are image features of a specified object extracted from a training image set containing the specified object by using the deep learning model;
when the first image feature is matched with the second image feature, determining that the target object exists in the target image;
when the target object exists in the target image, determining that the wading event exists in the water area, including:
determining that the water area has the wading event when a body part of a person is in water;
when the target object does not exist in the target image, determining that the wading event does not exist in the water area;
before extracting first image features of an object in the target image through the trained deep learning model, the method further comprises:
acquiring a training image set comprising a plurality of training images, wherein each training image in the plurality of training images comprises a specified object and a label corresponding to the specified object;
training a preset deep learning model according to the training image set and a preset deep learning algorithm to obtain the deep learning model after training is completed;
the preset deep learning model is a DeepLab network, and the training process of the DeepLab network comprises the following steps: performing semantic segmentation on the image using the DeepLab network, comparing the output matrix with the annotation of the input image to obtain the difference between them, and adjusting the parameters of the DeepLab network according to this difference, thereby completing the training of the DeepLab network.
2. The method of claim 1, wherein the designated object comprises a wading person.
3. The method of claim 1, wherein determining an image of an area of a water area from the image to be processed as a target image comprises:
identifying a water area in the image to be processed through the deep learning model;
and determining the image of the area where the water area is located in the image to be processed as the target image.
4. The method of claim 1, further comprising:
and when the detection result shows that the wading event exists in the water area, sending prompt information to a specified user terminal.
5. A wading detection device, the device comprising:
the image acquisition unit is used for acquiring an image to be processed, wherein the image to be processed is an image acquired by acquiring a scene comprising a water area;
the image segmentation unit is used for inputting the image to be processed into a deep learning model to obtain a target image, wherein the target image is an image of the area where the water area is located in the image to be processed;
the detection unit is used for extracting a first image feature of the object in the target image through a trained deep learning model, wherein the deep learning model comprises a Single Shot MultiBox Detector (SSD), and the first image feature comprises the contour shape of the object and the label type of the object; matching the first image feature with a pre-stored second image feature, wherein the second image feature is an image feature of a specified object extracted from a training image set containing the specified object by using the deep learning model; when the first image feature is matched with the second image feature, determining that the target object exists in the target image; when the target object exists in the target image, determining that the wading event exists in the water area, including: determining that the water area has the wading event when a body part of a person is in water; and when the target object does not exist in the target image, determining that the wading event does not exist in the water area;
the detection unit is further configured to acquire a training image set including a plurality of training images, where each of the plurality of training images includes a designated object and a label corresponding to the designated object;
training a preset deep learning model according to the training image set and a preset deep learning algorithm to obtain the deep learning model after training is completed;
the preset deep learning model is a DeepLab network, and the training process of the DeepLab network comprises the following steps: performing semantic segmentation on the image using the DeepLab network, comparing the output matrix with the annotation of the input image to obtain the difference between them, and adjusting the parameters of the DeepLab network according to this difference, thereby completing the training of the DeepLab network.
6. A server, comprising a memory and a processor coupled to each other, the memory storing a computer program which, when executed by the processor, causes the server to perform the method of any one of claims 1-4.
7. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to carry out the method according to any one of claims 1-4.
CN201910643192.0A 2019-07-16 2019-07-16 Wading detection method and device, server and computer readable storage medium Active CN110363153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910643192.0A CN110363153B (en) 2019-07-16 2019-07-16 Wading detection method and device, server and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910643192.0A CN110363153B (en) 2019-07-16 2019-07-16 Wading detection method and device, server and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110363153A CN110363153A (en) 2019-10-22
CN110363153B (en) 2022-08-02

Family

ID=68219973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910643192.0A Active CN110363153B (en) 2019-07-16 2019-07-16 Wading detection method and device, server and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110363153B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222421A (en) * 2019-12-26 2020-06-02 杭州宇泛智能科技有限公司 Method and device for detecting personnel state in water area and electronic equipment
CN113052048B (en) * 2021-03-18 2024-05-10 阿波罗智联(北京)科技有限公司 Traffic event detection method and device, road side equipment and cloud control platform
CN113469021A (en) * 2021-06-29 2021-10-01 深圳市商汤科技有限公司 Video processing apparatus, electronic device, and computer-readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413114A (en) * 2013-05-17 2013-11-27 浙江大学 Near-drowning behavior detection method based on support vector machine
CN109102678A (en) * 2018-08-30 2018-12-28 青岛联合创智科技有限公司 A kind of drowned behavioral value method of fusion UWB indoor positioning and video object detection and tracking technique
CN109637086A (en) * 2019-01-24 2019-04-16 北京工业大学 Alarm method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3408841A4 (en) * 2016-01-26 2019-08-14 Coral Detection Systems Ltd. Methods and systems for drowning detection
CN107729874B (en) * 2017-11-06 2020-10-23 青岛联合创智科技有限公司 Swimming pool drowning detection method based on space-time neighborhood characteristics
CN108122246A (en) * 2017-12-07 2018-06-05 中国石油大学(华东) Video monitoring intelligent identifying system
CN108600701B (en) * 2018-05-02 2020-11-24 广州飞宇智能科技有限公司 Monitoring system and method for judging video behaviors based on deep learning
CN109815805A (en) * 2018-12-18 2019-05-28 深圳壹账通智能科技有限公司 Automatic identification drowned method, apparatus, storage medium and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413114A (en) * 2013-05-17 2013-11-27 浙江大学 Near-drowning behavior detection method based on support vector machine
CN109102678A (en) * 2018-08-30 2018-12-28 青岛联合创智科技有限公司 A kind of drowned behavioral value method of fusion UWB indoor positioning and video object detection and tracking technique
CN109637086A (en) * 2019-01-24 2019-04-16 北京工业大学 Alarm method and system

Also Published As

Publication number Publication date
CN110363153A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110363153B (en) Wading detection method and device, server and computer readable storage medium
US20160335491A1 (en) Method and device for detecting gathering of objects based on stereo vision as well as non-transitory computer-readable medium
CN109815797B (en) Living body detection method and apparatus
CN109377694B (en) Monitoring method and system for community vehicles
CN112200884B (en) Lane line generation method and device
CN115100512A (en) Monitoring, identifying and catching method and system for marine economic species and storage medium
CN110688893A (en) Detection method for wearing safety helmet, model training method and related device
CN112613471B (en) Face living body detection method, device and computer readable storage medium
CN110991443A (en) Key point detection method, image processing method, key point detection device, image processing device, electronic equipment and storage medium
CN110781770A (en) Living body detection method, device and equipment based on face recognition
CN112183356A (en) Driving behavior detection method and device and readable storage medium
CN115223022A (en) Image processing method, device, storage medium and equipment
CN109241893B (en) Road selection method and device based on artificial intelligence technology and readable storage medium
CN114120090A (en) Image processing method, device, equipment and storage medium
CN110956165B (en) Intelligent community unbundling pet early warning method and system
CN112699842A (en) Pet identification method, device, equipment and computer readable storage medium
JP2016099900A (en) Device and method of searching images
CN110634120B (en) Vehicle damage judgment method and device
CN111241930A (en) Method and system for face recognition
KR102150661B1 (en) Method and Apparatus for Preprocessing Image for Detecting Objects
US20210019553A1 (en) Information processing apparatus, control method, and program
CN113483851B (en) Road surface ponding monitoring equipment and method
CN113420631A (en) Safety alarm method and device based on image recognition
CN114283364A (en) Detection method and detection device for pet tether and electronic equipment
CN113763408A (en) Method for rapidly identifying aquatic weeds in water through images in sailing process of unmanned ship

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant