CN113297918A - Visual detection method and device for river drift - Google Patents

Visual detection method and device for river drift

Info

Publication number
CN113297918A
Authority
CN
China
Prior art keywords
river
drift
region
image
extracting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110474103.1A
Other languages
Chinese (zh)
Inventor
张海刚
赵子豪
孟凡胜
易非凡
杨金锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Polytechnic
Original Assignee
Shenzhen Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Polytechnic filed Critical Shenzhen Polytechnic
Priority to CN202110474103.1A priority Critical patent/CN113297918A/en
Publication of CN113297918A publication Critical patent/CN113297918A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a visual detection method and device for river drift, comprising: acquiring an image of a river channel; extracting a region of interest from the river channel image based on a preset extraction method; performing difference feature extraction on the region of interest based on a twin neural network; and identifying the difference features extracted by the twin neural network based on an SSD target detection algorithm so as to determine the type and position of the drift. The invention has the beneficial effects that: from the perspective of computer vision, a twin-network-based target detection model is constructed, the appearance of river drift is regarded as a feature difference, and the detection of drift is converted into the identification and positioning of difference features, so that the category judgment, position localization, and area and flow-rate calculation of the drift are achieved accurately. River drift detection is realized in a visual manner, with low labor cost, weak dependence on hardware equipment, convenient maintenance and high detection precision.

Description

Visual detection method and device for river drift
Technical Field
The invention belongs to the technical field of intelligent identification, and particularly relates to a visual detection method and device for river drift objects.
Background
Intelligent detection of river drift is significant for improving the current situation of river pollution and promoting the construction of smart cities. Current river drift detection relies heavily on manual operation, mainly manual inspection and salvage and unmanned aerial vehicle patrol and alarm. Manual detection requires a large workforce at high labor cost and cannot cope with large-area or accidental application environments. Automatic, intelligent river drift detection is the future development direction of river pollution treatment. Current river drift treatment technology mainly comprises two aspects:
Construction of intelligent salvage ships: this aims to remedy the current lack of an efficient, intelligent water-garbage cleaning technology. Cleaning is still mainly manual, with a suitable ship structure carrying a gauze filtering device to screen garbage from the river surface. The approach depends heavily on manpower, covers a limited detection area, operates discontinuously and suffers serious missed detections.
River drift detection based on computer vision: cameras are arranged on both sides of the river channel to collect river-surface video data, and river drift target detection is achieved with video or image processing techniques. The method offers strong universality, low labor cost and continuous operation, and can realize drift classification, positioning, and area and flow-rate calculation within a unified technical framework. By comparison, a computer-vision-based river drift detection scheme is the main future development direction of river pollution source treatment.
However, current applications of computer vision to river drift detection rely mainly on traditional image processing, chiefly model matching, background difference, and regional feature extraction and identification. River drift detection also faces several difficulties: drift objects are relatively small; river video samples with and without drift are severely imbalanced; drift types are numerous and their postures vary widely; water-wave fluctuation caused by the flowing drift strongly affects the recognition model; and the scale difference between different drift objects is large. Drift detection is a multi-target, multi-scale detection problem that is further complicated by the river-bank background, which easily causes traditional image recognition techniques to fail and leads to a low recognition success rate.
Disclosure of Invention
In order to solve the problems of low recognition success rate and frequent errors in the prior art, the invention provides a visual detection method and device for river drift that achieve a high recognition success rate and are less prone to error.
The visual detection method for the river drift objects comprises the following steps:
acquiring a river channel image;
extracting an interested region of the river channel image based on a preset extraction method;
performing difference feature extraction on the region of interest based on a twin neural network;
and identifying the difference features extracted by the twin neural network based on an SSD target detection algorithm so as to determine the type and the position of the drift.
Further, the visual detection method for the river drift further comprises the following steps:
the method comprises the steps of determining the position of the drifter in a geodetic coordinate system based on a camera calibration technology, and further determining the speed of the drifter.
Further, the determining the velocity of the drifter comprises:
determining the flow velocity v of the drifter based on the target position change Δs over a preset time interval Δt:

v = Δs / Δt
further, the extracting of the region of interest of the river channel image based on a preset extraction method includes:
and extracting the region of interest of the river channel image based on a template matching method or a CNN feature extraction method.
Further, the extracting of the region of interest of the river channel image based on the template matching method includes:
performing an element-wise (dot) multiplication of the fixed template with the river channel image to extract the effective river-surface region of interest:

ROI = I ⊙ P

where ROI is the extracted region of interest, I is the river channel image, P is the template, and ⊙ denotes the element-wise matrix (dot) product.
Further, the extracting of the region of interest of the river channel image based on the CNN feature extraction method includes:
taking the river channel image as input, setting a grid division mechanism and establishing a binary discrimination database of river-surface and non-river-surface regions; then adopting a lightweight AlexNet network to perform the binary classification of the image grid regions and extract the region of interest.
Further, the acquiring the river channel image includes:
and taking river channel pictures or video stream image frames as river channel images.
According to the specific embodiment of the invention, the visual detection device for the river drift objects comprises:
the image acquisition module is used for acquiring a river channel image;
the interesting region extraction module is used for extracting an interesting region of the river channel image based on a preset extraction method;
the difference feature extraction module is used for carrying out difference feature extraction on the region of interest based on the twin neural network; and
and the target detection module is used for identifying the difference characteristics extracted by the twin neural network based on an SSD target detection algorithm so as to determine the drifter.
The invention has the beneficial effects that: a region of interest of the river channel image is extracted based on a preset extraction method; difference features are then extracted from the region of interest based on the twin neural network, so that the detection of drift is converted into the identification and positioning of difference features; the difference features extracted by the twin neural network are identified through an SSD-based target detection algorithm to determine the drift. Compared with the prior art, the method can identify drift of various sizes with higher identification precision, accuracy and efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without any creative effort.
FIG. 1 is a flow chart of a method for visual detection of river drift provided in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram of a river drift object target detection model under a deep learning framework provided in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram of twin network difference feature extraction provided in accordance with an exemplary embodiment;
fig. 4 is a schematic diagram of a river drift detection device according to an exemplary embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a method for visually detecting a river drift, which specifically includes the following steps:
101. Acquiring a river channel image. Image acquisition requires no complicated hardware: images captured by a camera or similar image acquisition equipment can be used as input, or video stream image frames can be read directly as input, so a wide range of image acquisition forms can be used and the method is not limited to any particular one, as sketched below.
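The following is a minimal illustrative sketch (not part of the patent text) of such acquisition with OpenCV; the file name and stream source are assumptions.

    import cv2

    def acquire_river_image(source="river.jpg"):
        """Return one BGR river-channel image from a still picture or a video stream."""
        if source.lower().endswith((".jpg", ".jpeg", ".png", ".bmp")):
            return cv2.imread(source)            # still picture used directly as input
        cap = cv2.VideoCapture(source)           # e.g. a video file path or RTSP URL
        ok, frame = cap.read()                   # read a single video-stream frame
        cap.release()
        if not ok:
            raise RuntimeError("could not read a frame from the video stream")
        return frame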
102. Extracting a region of interest of the river channel image based on a preset extraction method. River drift target detection is realized through computer vision technology and is not concerned with the scenes on the two banks of the river. In order to improve the robustness and stability of the detection algorithm, a river-surface ROI (region of interest) extraction framework needs to be constructed.
103. Performing difference feature extraction on the region of interest based on the twin neural network;
104. and identifying the difference features extracted by the twin neural network based on an SSD target detection algorithm so as to determine the type and the position of the drift.
A river drift target detection model is constructed under a deep learning framework. Because river drift varies widely in type, size and posture, this is a typical multi-label, multi-scale target detection problem; the accuracy of current vision-based target detection cannot meet the application requirements, and small-size drift suffers serious missed and false detections. Aiming at these problems of river drift detection, the invention provides a river drift target detection model combining a twin neural network with an SSD (Single Shot MultiBox Detector) target detection algorithm, so that drift can be identified more quickly and accurately.
In another specific embodiment of the present invention, the method further comprises:
the method comprises the steps of determining the position of the drifter in a geodetic coordinate system based on a camera calibration technology, and further determining the speed of the drifter.
Determining the velocity of the drift includes:
determining the flow velocity v of the drift based on the target position change Δs over a preset time interval Δt:

v = Δs / Δt

The position of the detected target in the geodetic coordinate system can thus be obtained accurately through the camera calibration technology. By comparing the target position change Δs over a certain time interval Δt, the flow velocity v of the drift is easily obtained, which assists the later salvage operation.
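A minimal sketch of this velocity computation follows, assuming the drift position has already been mapped to geodetic (world) coordinates by camera calibration; the positions and time interval in the usage line are illustrative.

    import math

    def drift_velocity(pos_t0, pos_t1, delta_t):
        """Flow velocity v = Δs / Δt from two world positions observed Δt seconds apart."""
        delta_s = math.hypot(pos_t1[0] - pos_t0[0], pos_t1[1] - pos_t0[1])  # displacement Δs (m)
        return delta_s / delta_t

    # Example: a drifting bottle moves 3.0 m downstream in 6 s -> v = 0.5 m/s
    v = drift_velocity((0.0, 0.0), (3.0, 0.0), 6.0)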
in some embodiments of the present invention, extracting the interesting region of the river channel image based on the preset extraction method includes:
and extracting the region of interest of the river channel image based on a template matching method or a CNN feature extraction method.
The method for extracting the region of interest of the river channel image based on the template matching method comprises the following steps:
performing an element-wise (dot) multiplication of the fixed template with the river channel image to extract the effective river-surface region of interest:

ROI = I ⊙ P

where ROI is the extracted region of interest, I is the river channel image, P is the template, and ⊙ denotes the element-wise matrix (dot) product.
The extraction of the region of interest of the river channel image based on the CNN feature extraction method comprises the following steps:
taking the river channel image as input, setting a grid division mechanism and establishing a binary discrimination database of river-surface and non-river-surface regions; then adopting a lightweight AlexNet network to perform the binary classification of the image grid regions and extract the region of interest.
Specifically, the river drift object target detection does not pay attention to scenes on two sides of a river bank. In order to improve the robustness and stability of the detection algorithm, a river surface ROI extraction framework needs to be constructed:
1) Template matching method: the river channel scene is relatively fixed, and under a fixed camera view angle the position of the river-surface ROI stays unchanged, which lays a feasible foundation for the template matching method. In a fixed scene, a fixed template is formed by manually dividing the river-surface and non-river-surface areas, and a dot-product operation with the original image extracts the effective river-surface ROI, according to the formula:

ROI = I ⊙ P

where ROI is the extracted region of interest, I is the input image, P is the template, and ⊙ denotes the element-wise matrix (dot) product.
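A minimal NumPy/OpenCV sketch of this template-based ROI extraction follows; the frame and mask file names are assumptions, and the template P is a manually drawn binary mask (1 on the river surface, 0 elsewhere).

    import cv2
    import numpy as np

    def extract_roi_by_template(image, mask):
        """ROI = I ⊙ P: zero out everything outside the manually drawn river-surface mask."""
        if mask.ndim == 2 and image.ndim == 3:
            mask = mask[:, :, None]              # broadcast the single-channel mask over colors
        return image * mask                      # element-wise (Hadamard) product

    frame = cv2.imread("river_frame.jpg")
    river_mask = (cv2.imread("river_mask.png", cv2.IMREAD_GRAYSCALE) > 0).astype(np.uint8)
    roi = extract_roi_by_template(frame, river_mask)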
2) CNN feature extraction method:
convolutional Neural Networks (CNN) are the most typical representation of deep learning models, and feature layer-by-layer extraction and abstraction are realized on input in a multi-layer Neural network structure in a data-driven and self-learning manner, so that perfect fitting of data features is finally realized. CNNs have now enjoyed great success in many areas of research, such as speech recognition, image segmentation, natural language processing, etc. Especially in the field of image processing, the effect of CNN has been recognized both academically and industrially. The method comprises the steps of realizing river ROI extraction through a CNN model, aiming at setting a grid division mechanism by taking river video pictures as input and establishing a binary discrimination database of river and non-river areas; and (3) realizing a binary task of image grid area input by adopting a lightweight Alexnet network.
It is understood that one skilled in the art may use other feature extraction methods to extract features, and the invention is not limited thereto.
Referring to fig. 2 and 3: because river drift varies widely in type, size and posture, this is a typical multi-label, multi-scale target detection problem; the accuracy of current vision-based target detection cannot meet the application requirements, and small-size drift suffers serious missed and false detections. Therefore, aiming at these problems of river drift detection, a river drift target detection model combining a twin neural network with an SSD (Single Shot MultiBox Detector) target detection algorithm is provided:
the twin network is first employed to enhance the differential characteristics. The river drift objects are small in target and uncertain in scene, and therefore challenges are brought to acquisition of a large amount of video image data containing the drift objects. The method constructs a twin neural network, takes two pictures with or without driftage in the same scene as input, and adopts parallel CNN feature extractors to extract the difference features of input sample pairs; the characteristic difference corresponds to the characteristic of the river drift objects, the drift object detection problem is converted into the characteristic difference identification problem, and the model training problem caused by small samples, unbalanced data and uncertain drift objects can be effectively solved. The parallel network is composed of CNN frames shared by weight values, and realizes differentiation processing on parallel output so as to strengthen the target characteristics of the drift objects.
SSD target detection is then performed: the differenced output of the twin network serves as the input of the SSD target detection model, which finally realizes target identification and positioning. During CNN feature extraction, small-target information is retained in the shallow features and gradually weakens as the number of network layers increases, so traditional target detection algorithms cannot cope effectively with small targets. SSD is a one-stage method in the field of target detection, and its greatest strength is using a shallow network to detect small targets and a deep network to detect large targets. This is mainly because shallow neurons carry more detailed information, which is more effective for small targets, while deep neurons have a larger receptive field and more abstract semantic information, which is more effective for large targets. A sketch of such multi-scale detection heads follows.
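This is a minimal sketch of SSD-style multi-scale heads fed by the twin network's difference features; the anchor count, class count and channel sizes are assumptions, and it omits the priors, matching and loss of a full SSD.

    import torch
    import torch.nn as nn

    class MiniSSDHeads(nn.Module):
        """Predict class scores and box offsets at three scales of the difference feature map."""
        def __init__(self, in_channels=128, num_classes=4, anchors_per_cell=4):
            super().__init__()
            self.down1 = nn.Sequential(nn.Conv2d(in_channels, 256, 3, stride=2, padding=1), nn.ReLU())
            self.down2 = nn.Sequential(nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU())
            out = anchors_per_cell * (num_classes + 4)       # per anchor: class scores + 4 box offsets
            self.heads = nn.ModuleList(
                nn.Conv2d(c, out, 3, padding=1) for c in (in_channels, 256, 256)
            )

        def forward(self, diff_feat):
            p1 = diff_feat                                   # shallow, high-resolution map: small drift
            p2 = self.down1(p1)
            p3 = self.down2(p2)                              # deeper map, larger receptive field: big drift
            return [head(p) for head, p in zip(self.heads, (p1, p2, p3))]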
Fig. 4 is a schematic structural diagram of a river drift visual detection apparatus provided in the second embodiment of the present invention, which is suitable for implementing a river drift visual detection method provided in the second embodiment of the present invention.
As shown in FIG. 4, the device comprises:
an image acquisition module 401, configured to acquire a river channel image;
the region-of-interest extraction module 402 is configured to extract a region of interest of the river channel image based on a preset extraction method;
a difference feature extraction module 403, configured to perform difference feature extraction on the region of interest based on the twin neural network; and
and the target detection module 404 is configured to identify the difference features extracted by the twin neural network based on an SSD target detection algorithm to determine the drift.
The river drift detection device provided by the embodiment of the invention can execute the river drift visual detection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
The visual detection method and device for the river drift objects provided by the embodiment of the invention are based on the computer vision angle, a target detection model based on a twin network is constructed, the appearance of the river drift objects is regarded as feature differences, and the detection of the drift objects is converted into the identification and positioning of the difference features, so that the category judgment, the position positioning and the area and flow rate calculation of the drift objects can be realized more accurately.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. A visual detection method for river drift is characterized by comprising the following steps:
acquiring a river channel image;
extracting an interested region of the river channel image based on a preset extraction method;
performing difference feature extraction on the region of interest based on a twin neural network;
and identifying the difference features extracted by the twin neural network based on an SSD target detection algorithm so as to determine the type and the position of the drift.
2. The visual detection method for river drifters according to claim 1, further comprising:
the method comprises the steps of determining the position of the drifter in a geodetic coordinate system based on a camera calibration technology, and further determining the speed of the drifter.
3. The visual detection method of river drift according to claim 2, wherein the determining the velocity of the drift comprises:
determining the flow velocity v of the drift based on the target position change Δs over a preset time interval Δt:

v = Δs / Δt
4. The visual detection method for river drift objects according to claim 1, wherein the extracting of the region of interest of the river image based on a preset extraction method comprises:
and extracting the region of interest of the river channel image based on a template matching method or a CNN feature extraction method.
5. The visual detection method of river drift according to claim 4, wherein the extracting of the region of interest of the river image based on the template matching method comprises:
performing an element-wise (dot) multiplication of the fixed template with the river channel image to extract the effective river-surface region of interest:

ROI = I ⊙ P

where ROI is the extracted region of interest, I is the river channel image, P is the template, and ⊙ denotes the element-wise matrix (dot) product.
6. The visual detection method for river drifters according to claim 4, wherein the extracting of the region of interest of the river image based on the CNN feature extraction method comprises:
setting a grid division mechanism with the river channel image as input, and establishing a binary discrimination database of river-surface and non-river-surface regions; and adopting a lightweight AlexNet network to perform the binary classification of the image grid regions and extract the region of interest.
7. The method for visually detecting the river drift according to any one of claims 1 to 6, wherein the acquiring the river image comprises:
and taking river channel pictures or video stream image frames as river channel images.
8. A visual detection device for river drift, characterized by comprising:
the image acquisition module is used for acquiring a river channel image;
the interesting region extraction module is used for extracting an interesting region of the river channel image based on a preset extraction method;
the difference feature extraction module is used for carrying out difference feature extraction on the region of interest based on the twin neural network; and
and the target detection module is used for identifying the difference characteristics extracted by the twin neural network based on an SSD target detection algorithm so as to determine the drifter.
CN202110474103.1A 2021-04-29 2021-04-29 Visual detection method and device for river drift Pending CN113297918A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110474103.1A CN113297918A (en) 2021-04-29 2021-04-29 Visual detection method and device for river drift

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110474103.1A CN113297918A (en) 2021-04-29 2021-04-29 Visual detection method and device for river drift

Publications (1)

Publication Number Publication Date
CN113297918A true CN113297918A (en) 2021-08-24

Family

ID=77320598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110474103.1A Pending CN113297918A (en) 2021-04-29 2021-04-29 Visual detection method and device for river drift

Country Status (1)

Country Link
CN (1) CN113297918A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120294482A1 (en) * 2011-05-19 2012-11-22 Fuji Jukogyo Kabushiki Kaisha Environment recognition device and environment recognition method
US10725438B1 (en) * 2019-10-01 2020-07-28 11114140 Canada Inc. System and method for automated water operations for aquatic facilities using image-based machine learning
CN111460999A (en) * 2020-03-31 2020-07-28 北京工业大学 Low-altitude aerial image target tracking method based on FPGA
CN112287986A (en) * 2020-10-16 2021-01-29 浪潮(北京)电子信息产业有限公司 Image processing method, device and equipment and readable storage medium
CN212270887U (en) * 2020-12-01 2021-01-01 华潍项目管理有限公司 Water surface floater detection device based on twin grid

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
G. SHEN等: "Infrared Multi-Pedestrian Tracking in Vertical View via Siamese Convolution Network", 《IEEE ACCESS》, vol. 7, pages 42718 - 42725, XP011719222, DOI: 10.1109/ACCESS.2019.2892469 *
唐小敏等: "基于SSD 深度网络的河道漂浮物检测技术研究", 《计算机技术与发展》, vol. 30, no. 9, pages 1 *
王超奇等: "基于孪生网络结构的单样本图例检测方法", 《计算机与现代化》, no. 12, pages 2 *

Similar Documents

Publication Publication Date Title
CN107123131B (en) Moving target detection method based on deep learning
CN109255317B (en) Aerial image difference detection method based on double networks
CN103164706B (en) Object counting method and device based on video signal analysis
CN111899227A (en) Automatic railway fastener defect acquisition and identification method based on unmanned aerial vehicle operation
CN109816689A (en) A kind of motion target tracking method that multilayer convolution feature adaptively merges
CN103886325B (en) Cyclic matrix video tracking method with partition
CN109341703A (en) A kind of complete period uses the vision SLAM algorithm of CNNs feature detection
CN111340855A (en) Road moving target detection method based on track prediction
CN107978110A (en) Fence intelligence identifying system in place and recognition methods based on images match
CN103593679A (en) Visual human-hand tracking method based on online machine learning
CN106778540A (en) Parking detection is accurately based on the parking event detecting method of background double layer
CN109086803A (en) A kind of haze visibility detection system and method based on deep learning and the personalized factor
CN114049572A (en) Detection method for identifying small target
CN104778699A (en) Adaptive object feature tracking method
CN112465854A (en) Unmanned aerial vehicle tracking method based on anchor-free detection algorithm
CN112232240A (en) Road sprinkled object detection and identification method based on optimized intersection-to-parallel ratio function
Tao et al. Smoky vehicle detection based on range filtering on three orthogonal planes and motion orientation histogram
CN105335985B (en) A kind of real-time capturing method and system of docking aircraft based on machine vision
CN113129336A (en) End-to-end multi-vehicle tracking method, system and computer readable medium
CN110334703B (en) Ship detection and identification method in day and night image
Xiang et al. A real-time vehicle traffic light detection algorithm based on modified YOLOv3
Xingxin et al. Adaptive auxiliary input extraction based on vanishing point detection for distant object detection in high-resolution railway scene
CN103258433B (en) Intelligent clear display method for number plates in traffic video surveillance
CN113379603B (en) Ship target detection method based on deep learning
CN116246096A (en) Point cloud 3D target detection method based on foreground reinforcement knowledge distillation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210824