CN114972730A - Large-wave backlight environment infrared target detection method combining multiple processing domains - Google Patents


Info

Publication number
CN114972730A
Authority
CN
China
Prior art keywords
target
similarity
frame
difference
interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210395430.2A
Other languages
Chinese (zh)
Inventor
董丽丽 (Dong Lili)
马冬冬 (Ma Dongdong)
张萌 (Zhang Meng)
高瑞丽 (Gao Ruili)
刘君琪 (Liu Junqi)
许文海 (Xu Wenhai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202210395430.2A
Publication of CN114972730A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting an infrared target in a large-wave backlight environment by combining multiple processing domains, comprising the following steps: splitting the image sequence into single frames, applying Gabor filtering and stride-2 downsampling, and constructing directional Gaussian pyramids at multiple scales; extracting regions of interest from the constructed directional Gaussian pyramids, performing a self-difference operation on the regions of interest, and filtering with a threshold to obtain a single-frame difference result; extracting regions of interest based on the obtained difference results, evaluating the mutual similarity of the regions of interest in two adjacent frames, and filtering with a threshold to obtain an adjacent-frame similarity result; and performing continuous mean-saliency matching based on the obtained multi-frame similarity results, judging a candidate to be a real target when the matching threshold is reached. The technical scheme of the invention addresses the lack of a reliable detection method for infrared backlight environments in the prior art.

Description

Large-wave backlight environment infrared target detection method combining multiple processing domains
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for detecting an infrared target in a large-wave backlight environment by combining multiple processing domains.
Background
Complex and changeable sea conditions bring challenges and difficulties to maritime infrared search-and-rescue systems. To safeguard the lives and property of personnel at sea, researching efficient offshore infrared target detection algorithms for different, and especially severe, sea conditions is therefore an urgent task. The challenges in a large-wave backlight environment include: locally highly salient waves, bright-dark alternating waves caused by sun glint, and waves that are similar to the target in shape but lower in intensity.
Existing infrared target detection methods generally fall into two categories, single-frame detection and sequence-frame detection, but neither can accurately detect a real target in a strong-wind-wave backlight environment.
Disclosure of Invention
To address the lack of a reliable detection method for infrared backlight environments in the prior art, the invention provides a method for detecting an infrared target in a large-wave backlight environment by combining multiple processing domains. First, directional Gaussian pyramids are constructed at multiple scales from Gabor-filtered single frames. Second, an intra-frame self-dissimilarity measure is applied, exploiting the difference between target image blocks and background image blocks within the same frame. Then, an inter-frame mutual-similarity scoring function is applied, exploiting the continuity of target image-block regions across frames. Finally, an anti-jitter multi-frame continuous matching strategy is used, exploiting the stability of the target saliency value across frames, to obtain an accurate target detection result.
The technical means adopted by the invention are as follows:
A method for detecting an infrared target in a large-wave backlight environment combining multiple processing domains comprises the following steps:
S1, splitting the image sequence into single frames, applying Gabor filtering and stride-2 downsampling, and constructing directional Gaussian pyramids at multiple scales;
S2, extracting regions of interest from the directional Gaussian pyramids constructed in step S1, performing a self-difference operation on the regions of interest, and filtering with a threshold to obtain a single-frame difference result;
S3, extracting regions of interest based on the difference result obtained in step S2, evaluating the mutual similarity of the regions of interest in two adjacent frames, and filtering with a threshold to obtain an adjacent-frame similarity result;
S4, performing continuous mean-saliency matching based on the multi-frame similarity results obtained in step S3, and judging a candidate to be a real target when the matching threshold is reached.
Further, in step S1, feature extraction and fusion are performed on the split single frames using the 0° and 90° Gabor filters of the visual attention model.
Further, the self-difference operation in step S2 applies a mean square error to measure target regions against sea-wave background regions. The specific formula is given as an image in the original publication, with the following notation: Self_DS(k) denotes the degree of difference of the k-th region pair; C_pi(x1, y1) and C_pj(x2, y2) denote the gray values at position (x1, y1) of the pi-th region and at position (x2, y2) of the pj-th region of the current frame; C̄_pi and C̄_pj denote the mean gray levels of the pi-th and pj-th regions of the current frame; and num(C_RVAM) denotes the number of connected components of the directional Gaussian pyramid.
Further, the mutual-similarity judgment in step S3 adopts a point-to-point difference operation to measure the similarity of image blocks in adjacent frames. The specific formula is given as an image in the original publication, with the following notation: Mutual_DS(k) denotes the degree of similarity of the k-th matched region pair; F_pi(x, y) is the gray value at position (x, y) of the pi-th region of the previous frame, and C_pj(x, y) is the gray value at position (x, y) of the pj-th region of the current frame; θ is a correlation factor controlling the final inter-frame similarity degree; and num(F_DS) and num(C_DS) denote the numbers of connected components in the difference results of the previous frame and the current frame, respectively.
Further, the continuous mean-saliency matching in step S4 is measured by the mean saliency of the target across multiple frames. The mean-saliency formula S_avg is given as an image in the original publication; the matching condition is

||S_avg(j) − S_avg(j+1)||_2 < 0.05 for 5 consecutive frames j.

When the saliency difference between adjacent frames stays within 0.05 for 5 consecutive frames, the region is judged to be a real target; otherwise it is judged to be a false target and removed.
Compared with the prior art, the invention has the following advantages:
the method for detecting the infrared target in the large-wave backlighting environment combining multiple processing areas can overcome the interference of light and shade alternate waves caused by local high-significance waves and sunlight flicker and waves with similar shapes and lower intensity as the target, and realizes stable target detection in the large-wave backlighting environment.
For the above reasons, the present invention can be widely applied to the fields of image processing and the like.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 shows a test image and the corresponding target detection result provided in an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments of the present invention may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; and it should be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
The relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. Any specific values in all examples shown and discussed herein are to be construed as exemplary only and not as limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In the description of the present invention, it is to be understood that orientation or positional relationships indicated by directional terms such as "front, rear, upper, lower, left, right", "lateral, vertical, horizontal", and "top, bottom" are generally based on the orientations or positional relationships shown in the drawings and are used only for convenience and simplicity of description. In the absence of any contrary indication, these directional terms do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore they should not be construed as limiting the scope of the present invention. The terms "inner" and "outer" refer to the inside and outside relative to the profile of the respective component itself.
Spatially relative terms, such as "above", "over", "on top of", "upper", and the like, may be used herein for ease of description to describe one device's or feature's spatial relationship to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, devices described as "above" or "over" other devices or configurations would then be oriented "below" or "under" the other devices or configurations. Thus, the exemplary term "above" can encompass both an orientation of "above" and one of "below". The device may be otherwise oriented (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein should be interpreted accordingly.
It should be noted that the terms "first", "second", and the like are used to define the components, and are only used for convenience of distinguishing the corresponding components, and the terms have no special meanings unless otherwise stated, and therefore, the scope of the present invention should not be construed as being limited.
As shown in Fig. 1, the invention provides a method for detecting an infrared target in a large-wave backlight environment by combining multiple processing domains, mainly addressing the interference that can occur in such an environment from locally highly salient waves, bright-dark alternating waves caused by sun glint, and waves similar to the target in shape but low in intensity. The method comprises the following steps:
s1, splitting the sequence image into single frame images, carrying out Gabor filtering and downsampling operation with the step length of 2, and constructing directional Gaussian pyramid images under multiple scales;
in the present embodiment, the sequence of high-wind-wave backlight images as in fig. 2(a1) is input, and the split single-frame images are subjected to feature extraction and fusion by using 0 ° and 90 ° Gabor filters in the visual attention model (existing model).
S2, extracting regions of interest from the directional Gaussian pyramids constructed in step S1, performing a self-difference operation on the regions of interest, and filtering with a threshold to obtain a single-frame difference result;
in specific implementation, as a preferred embodiment of the present invention, the self-difference operation in step S2 is to apply a mean square error to measure the target area and the sea wave background area, and a specific operation formula is as follows:
Figure BDA0003597172500000061
wherein, Self DS (k) Indicating the degree of difference between the k-th group of regions; c pi (x1, y1) and C pj (x2, y2) represents the gray values of the pi-th region (x1, y1) position and the pj-th region (x2, y2) position of the current frame;
Figure BDA0003597172500000062
and
Figure BDA0003597172500000063
respectively representing the average value of the gray levels of the pi area and the pj area of the current frame; num (C) RVAM ) The number of connected components of the directional gaussian pyramid is shown.
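Because the published formula is available only as an image, the following sketch is a hedged interpretation of the step-S2 measure: a mean square error between mean-centred region pairs, with regions that differ strongly from all others kept as the single-frame difference result. The function names, the requirement that regions be resized to a common patch size, and the threshold value are assumptions for illustration.

import numpy as np

def self_difference(region_i, region_j):
    """One plausible reading of Self_DS: mean square error between two
    current-frame regions after removing each region's mean gray level.
    Both regions are assumed resized to a common patch size."""
    ri = region_i.astype(np.float32) - region_i.mean()
    rj = region_j.astype(np.float32) - region_j.mean()
    return float(np.mean((ri - rj) ** 2))

def single_frame_difference(regions, threshold=100.0):
    """Keep indices of regions whose smallest pairwise difference from
    every other region exceeds the threshold, i.e. regions that do not
    resemble the repetitive wave background (threshold illustrative)."""
    kept = []
    for i, ri in enumerate(regions):
        scores = [self_difference(ri, rj)
                  for j, rj in enumerate(regions) if j != i]
        if scores and min(scores) > threshold:
            kept.append(i)
    return kept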
S3, extracting regions of interest based on the difference result obtained in step S2, evaluating the mutual similarity of the regions of interest in two adjacent frames, and filtering with a threshold to obtain an adjacent-frame similarity result;
in a specific implementation, as a preferred embodiment of the present invention, the similarity determination in step S3 measures the similarity of the image blocks in adjacent frames by using a point-to-point difference operation, where the operation formula is as follows:
Figure BDA0003597172500000064
wherein mutus (k) represents the similarity degree of the kth group of region matching, Fpi (x, y) is the gray value of the position of the pi-th region (x, y) of the previous frame, and Cpj (x, y) represents the gray value of the position of the pj-th region (x, y) of the current frame; theta represents a correlation factor controlling the degree of similarity between the final frames; num (F) DS ) And num (C) DS ) The number of connected fields representing the difference result of the previous frame and the current frame, respectively.
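The step-S3 scoring function is likewise published only as an image; the sketch below implements the stated idea, a point-to-point gray-level difference between a previous-frame region F_pi and a current-frame region C_pj, mapped through the correlation factor θ into a similarity score. The exponential form, the default θ, and the similarity threshold are assumed placements, not values from the patent.

import numpy as np

def mutual_similarity(f_region, c_region, theta=0.01):
    """Point-to-point difference between two same-sized patches mapped
    to a similarity score in (0, 1]; theta controls how fast the score
    decays with the mean squared difference (assumed form)."""
    diff = f_region.astype(np.float32) - c_region.astype(np.float32)
    return float(np.exp(-theta * np.mean(diff ** 2)))

def adjacent_frame_similarity(prev_regions, cur_regions, sim_threshold=0.8):
    """Keep indices of current-frame regions that have a sufficiently
    similar counterpart among the previous frame's difference results."""
    return [j for j, c in enumerate(cur_regions)
            if any(mutual_similarity(f, c) >= sim_threshold
                   for f in prev_regions)]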
S4, performing continuous mean-saliency matching based on the multi-frame similarity results obtained in step S3, and judging a candidate to be a real target when the matching threshold is reached.
In a specific implementation, as a preferred embodiment of the present invention, the continuous mean-saliency matching in step S4 is measured by the mean saliency of the target across multiple frames. The mean-saliency formula S_avg is given as an image in the original publication; the matching condition is

||S_avg(j) − S_avg(j+1)||_2 < 0.05 for 5 consecutive frames j.

When the saliency difference between adjacent frames stays within 0.05 for 5 consecutive frames, the region is judged to be a real target; otherwise it is judged to be a false target and removed. The final result is shown in Fig. 2(b1).
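The anti-jitter rule of step S4 can be stated directly in code. The sketch below follows the text: a candidate becomes a real target once five consecutive adjacent-frame differences of its mean saliency all fall within 0.05, and any larger jump resets the count. Only the function name and the per-candidate bookkeeping are assumptions.

def is_real_target(saliency_history, tol=0.05, run=5):
    """saliency_history: per-frame mean saliency S_avg of one candidate
    region, in frame order. Returns True once 'run' consecutive
    adjacent-frame differences all fall within 'tol'."""
    consecutive = 0
    for prev, cur in zip(saliency_history, saliency_history[1:]):
        if abs(prev - cur) < tol:
            consecutive += 1
            if consecutive >= run:
                return True
        else:
            consecutive = 0  # a saliency jump (jitter) resets the run
    return False

# A stable candidate passes; flickering sun glint does not.
print(is_real_target([0.61, 0.63, 0.62, 0.64, 0.63, 0.62]))  # True
print(is_real_target([0.61, 0.90, 0.30, 0.88, 0.35, 0.80]))  # False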
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A method for detecting an infrared target in a large-wave backlight environment combining multiple processing domains, characterized by comprising the following steps:
S1, splitting the image sequence into single frames, applying Gabor filtering and stride-2 downsampling, and constructing directional Gaussian pyramids at multiple scales;
S2, extracting regions of interest from the directional Gaussian pyramids constructed in step S1, performing a self-difference operation on the regions of interest, and filtering with a threshold to obtain a single-frame difference result;
S3, extracting regions of interest based on the difference result obtained in step S2, evaluating the mutual similarity of the regions of interest in two adjacent frames, and filtering with a threshold to obtain an adjacent-frame similarity result;
S4, performing continuous mean-saliency matching based on the multi-frame similarity results obtained in step S3, and judging a candidate to be a real target when the matching threshold is reached.
2. The method for detecting an infrared target in a large-wave backlight environment combining multiple processing domains according to claim 1, characterized in that in step S1, feature extraction and fusion are performed on the split single frames using the 0° and 90° Gabor filters of a visual attention model.
3. The method for detecting an infrared target in a large-wave backlight environment combining multiple processing domains according to claim 1, characterized in that the self-difference operation in step S2 applies a mean square error to measure target regions against sea-wave background regions. The specific formula is given as an image in the original publication, with the following notation: Self_DS(k) denotes the degree of difference of the k-th region pair; C_pi(x1, y1) and C_pj(x2, y2) denote the gray values at position (x1, y1) of the pi-th region and at position (x2, y2) of the pj-th region of the current frame; C̄_pi and C̄_pj denote the mean gray levels of the pi-th and pj-th regions of the current frame; and num(C_RVAM) denotes the number of connected components of the directional Gaussian pyramid.
4. The method for detecting an infrared target in a large-wave backlight environment combining multiple processing domains according to claim 1, characterized in that the mutual-similarity judgment in step S3 adopts a point-to-point difference operation to measure the similarity of image blocks in adjacent frames. The specific formula is given as an image in the original publication, with the following notation: Mutual_DS(k) denotes the degree of similarity of the k-th matched region pair; F_pi(x, y) is the gray value at position (x, y) of the pi-th region of the previous frame, and C_pj(x, y) is the gray value at position (x, y) of the pj-th region of the current frame; θ is a correlation factor controlling the final inter-frame similarity degree; and num(F_DS) and num(C_DS) denote the numbers of connected components in the difference results of the previous frame and the current frame, respectively.
5. The method for detecting an infrared target in a large-wave backlight environment combining multiple processing domains according to claim 1, characterized in that the continuous mean-saliency matching in step S4 is measured by the mean saliency of the target across multiple frames. The mean-saliency formula S_avg is given as an image in the original publication; the matching condition is

||S_avg(j) − S_avg(j+1)||_2 < 0.05 for 5 consecutive frames j.

When the saliency difference between adjacent frames stays within 0.05 for 5 consecutive frames, the region is judged to be a real target; otherwise it is judged to be a false target and removed.
CN202210395430.2A 2022-04-14 2022-04-14 Large-wave backlight environment infrared target detection method combining multiple processing domains Pending CN114972730A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210395430.2A 2022-04-14 2022-04-14 Large-wave backlight environment infrared target detection method combining multiple processing domains

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210395430.2A 2022-04-14 2022-04-14 Large-wave backlight environment infrared target detection method combining multiple processing domains

Publications (1)

Publication Number Publication Date
CN114972730A 2022-08-30

Family

ID=82977907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210395430.2A Large-wave backlight environment infrared target detection method combining multiple processing domains 2022-04-14 2022-04-14 (Pending)

Country Status (1)

Country Link
CN (1) CN114972730A

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090304231A1 (en) * 2008-06-09 2009-12-10 Arcsoft, Inc. Method of automatically detecting and tracking successive frames in a region of interesting by an electronic imaging device
CN103729848A (en) * 2013-12-28 2014-04-16 北京工业大学 Hyperspectral remote sensing image small target detection method based on spectrum saliency
US9196053B1 (en) * 2007-10-04 2015-11-24 Hrl Laboratories, Llc Motion-seeded object based attention for dynamic visual imagery
CN109993744A (en) * 2019-04-09 2019-07-09 大连海事大学 A kind of infrared target detection method under sea backlight environment


Similar Documents

Publication Publication Date Title
US11580647B1 (en) Global and local binary pattern image crack segmentation method based on robot vision
CN103871053B (en) Vision conspicuousness-based cloth flaw detection method
CN105405142B (en) A kind of the side defect inspection method and system of glass panel
CN103234976B (en) Based on the online visible detection method of tricot machine Fabric Defect of Gabor transformation
CN103729842B (en) Based on the fabric defect detection method of partial statistics characteristic and overall significance analysis
CN105913415A (en) Image sub-pixel edge extraction method having extensive adaptability
CN109146833A (en) A kind of joining method of video image, device, terminal device and storage medium
CN105354847A (en) Fruit surface defect detection method based on adaptive segmentation of sliding comparison window
CN103442209A (en) Video monitoring method of electric transmission line
CN107292879B (en) A kind of sheet metal surface method for detecting abnormality based on image analysis
CN107665348B (en) Digital identification method and device for digital instrument of transformer substation
CN109993744B (en) Infrared target detection method under offshore backlight environment
CN105225216A (en) Based on the Iris preprocessing algorithm of space apart from circle mark rim detection
CN111563896B (en) Image processing method for detecting abnormality of overhead line system
Gyimah et al. A robust completed local binary pattern (rclbp) for surface defect detection
Shi et al. A faster-rcnn based chemical fiber paper tube defect detection method
CN109117855A (en) Abnormal power equipment image identification system
CN105550703A (en) Image similarity calculating method suitable for human body re-recognition
CN107067595A (en) State identification method, device and the electronic equipment of a kind of indicator lamp
Bullkich et al. Moving shadow detection by nonlinear tone-mapping
CN110321855A (en) A kind of greasy weather detection prior-warning device
Wang et al. License plate location algorithm based on edge detection and morphology
Hashmani et al. A survey on edge detection based recent marine horizon line detection methods and their applications
Diao et al. Image sequence measures for automatic target tracking
CN114972730A Large-wave backlight environment infrared target detection method combining multiple processing domains

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination