CN107274391A - Underwater image target extraction system - Google Patents

Underwater image target extraction system

Info

Publication number
CN107274391A
Authority
CN
China
Prior art keywords
image
local area
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710424076.0A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuzhou Xing Neng Agriculture Science And Technology Co Ltd
Original Assignee
Wuzhou Xing Neng Agriculture Science And Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuzhou Xing Neng Agriculture Science And Technology Co Ltd
Priority to CN201710424076.0A
Publication of CN107274391A
Legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

An underwater image target extraction system, comprising a main control module, an underwater robot module, an image processing module, an information transmission module and a display module. The main control module controls the operation of the underwater robot module and the acquisition of raw underwater images; the image processing module processes the raw images so as to identify targets in them; and the information transmission module transfers the processed image information to the display module for display. Beneficial effects of the invention: the system acquires underwater images through the underwater robot module, filters the acquired images region by region to effectively remove noise and other interference from the raw image, and then applies edge processing and centroid extraction to the filtered image, thereby achieving rapid identification of underwater targets.

Description

Underwater image target extraction system
Technical field
The invention relates to the field of target detection, and in particular to an underwater target extraction system.
Background technology
In recent years, with the continuing development of the international situation, the ocean has increasingly become a new strategic focus for countries around the world. Research in ocean-related fields is of great significance for marine resource exploration and exploitation, marine environmental monitoring, and naval military applications. The present invention therefore provides an underwater image target extraction system that achieves effective detection of underwater targets and improves the accuracy of underwater target detection.
Summary of the invention
In view of the above problems, the present invention aims to provide an underwater target extraction system.
The object of the invention is achieved through the following technical solution:
An underwater image target extraction system, comprising a main control module, an underwater robot module, an image processing module, an information transmission module and a display module. The main control module controls the operation of the underwater robot and directs it to acquire raw underwater images; the image processing module processes the raw images so as to identify targets in them; and the information transmission module transfers the processed image information to the display module for display.
Beneficial effects of the invention: the system acquires underwater images through the underwater robot module, filters the acquired images region by region to effectively remove noise and other interference from the raw image, and then applies edge processing and centroid extraction to the filtered image, thereby achieving rapid identification of underwater targets.
Brief description of the drawings
The accompanying drawings are used to further illustrate the invention, but the embodiments shown in the drawings do not limit the invention in any way; for a person of ordinary skill in the art, other drawings can be obtained from the following drawings without inventive effort.
Fig. 1 is a schematic structural view of the invention;
Fig. 2 is a schematic structural view of the underwater robot module of the invention;
Fig. 3 is a schematic structural view of the image processing module of the invention.
Reference numerals:
Main control module 1; underwater robot module 2; image processing module 3; information transmission module 4; display module 5; controller unit 21; underwater camera unit 22; underwater lighting unit 23; Gaussian filtering unit 31; edge processing unit 32; centroid extraction unit 33.
Detailed description of the embodiments
The invention will be further described with reference to the following embodiments.
Referring to Fig. 1, Fig. 2 and Fig. 3, the underwater image target extraction system of this embodiment comprises a main control module 1, an underwater robot module 2, an image processing module 3, an information transmission module 4 and a display module 5. The main control module 1 controls the operation of the underwater robot module 2 and the acquisition of raw underwater images; the image processing module 3 processes the raw images so as to identify targets in them; and the information transmission module 4 transfers the processed image information to the display module 5 for display.
Preferably, the underwater robot module 2 comprises a controller unit 21, an underwater camera unit 22 and an underwater lighting unit 23. The controller unit 21 receives instructions from the main control module 1 through the information transmission module 4 and adjusts the operating state of the robot accordingly; the underwater camera unit 22 acquires raw underwater images; and the underwater lighting unit 23 provides illumination while the underwater camera unit 22 is acquiring images.
Preferably, the information transmission module 4 communicates with the main control module 1 via a waterproof cable.
In this preferred embodiment, underwater images are acquired by the underwater robot module and the acquired images are filtered region by region, effectively removing noise and other interference from the raw image; edge processing and centroid extraction are then applied to the filtered image, achieving rapid identification of underwater targets.
Preferably, the image processing module 3 comprises a Gaussian filtering unit 31, an edge processing unit 32 and a centroid extraction unit 33. The Gaussian filtering unit 31 processes the acquired raw image, removing noise and other interference and enabling accurate localization of image edges; the edge processing unit 32 processes the edges of the filtered image, removing residual pseudo-edges; and the centroid extraction unit 33 extracts the centroid of the target from the detected image contour, so as to achieve rapid identification of the target.
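The patent does not give formulas for the centroid extraction unit 33; the following Python sketch illustrates one plausible realization of that step, computing the centroid of a detected target contour from its image moments. The function name, the use of OpenCV 4.x, and the choice of the largest contour as the target are assumptions, not part of the patent.

```python
import cv2
import numpy as np

def extract_centroid(edge_map):
    """Illustrative sketch: take a binary edge map, find the largest contour,
    and return its centroid (cx, cy) from the contour's image moments.
    OpenCV 4.x is assumed (findContours returns two values)."""
    contours, _ = cv2.findContours(edge_map.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)   # assumption: the target is the largest contour
    m = cv2.moments(target)
    if m["m00"] == 0:
        return None
    cx = m["m10"] / m["m00"]                      # centroid x = M10 / M00
    cy = m["m01"] / m["m00"]                      # centroid y = M01 / M00
    return cx, cy
```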
Preferably, the Gaussian filtering unit 31 performs Gaussian filtering on the acquired raw underwater image, using an improved filter-scale algorithm to determine the Gaussian filtering scale parameter, which specifically comprises:
A. The original image is filtered region by region, and the combined complexity factor of each local region of the image is computed as follows:

$$E_x = \frac{\sum_{i=m-(s-1)/2}^{m+(s-1)/2}\ \sum_{j=n-(s-1)/2}^{n+(s-1)/2} X_{ij}}{s^2}$$

$$\rho_x = \frac{\sqrt{\sum_{i=m-(s-1)/2}^{m+(s-1)/2}\ \sum_{j=n-(s-1)/2}^{n+(s-1)/2} \left(X_{ij}-E_x\right)^2}}{s}$$

where $E_x$ is the pixel mean of the local image region $D_x$, $\rho_x$ is the pixel standard deviation of $D_x$, $(m, n)$ is the center point of $D_x$, $s$ is the number of rows and columns of $D_x$, and $X_{ij}$ is the pixel at position $(i, j)$ of $D_x$;

$$\delta_x = -\sum_{i=1}^{h} p_i \log p_i, \qquad p_i = \frac{z_i}{\sum_{i=1}^{h} z_i}$$

where $\delta_x$ is the information entropy of the local region $D_x$, $h$ is the total number of gray levels of the image, $z_i$ is the number of pixels of gray level $i$ in $D_x$, and $p_i$ is the probability of a pixel of gray level $i$ in $D_x$;

Let $f(m_i, n_i)$ denote the pixel at position $(m_i, n_i)$ of the local region $D_x$; the average gradient value in the direction $\theta$ of the pixels $f(m_i, n_i)$ of $D_x$ is $\omega_x$:

$$\omega_x = \frac{\sum_{i=1}^{g} \left( f_{m_i}\cos\theta + f_{n_i}\sin\theta \right)}{g}$$

where $g$ is the total number of pixels in $D_x$, $f_{m_i}$ is the abscissa (horizontal) component at the pixel at position $(m_i, n_i)$ of $D_x$, and $f_{n_i}$ is the ordinate (vertical) component at that pixel;

The combined complexity factor $f_x$ of the local region $D_x$ is then computed as:

$$f_x = \theta_1 \delta_x + \theta_2 \omega_x + \theta_3 \ln\left(1 + \rho_x\right)$$

where $\delta_x$ is the information entropy of the local region, $\rho_x$ is the pixel standard deviation of $D_x$, $\omega_x$ is the average gradient value of the local region, and $\theta_1$, $\theta_2$ and $\theta_3$ are the respective weights of the information entropy, standard deviation and average gradient in the combined complexity factor $f_x$;
B. The filter scale factor $p_x$ of each local region is computed from the combined complexity factor of the local region obtained above; the filter scale factor $p_x$ of the local region $D_x$ is:

$$p_x = \begin{cases} p_{\max}, & f_x = f_{\min} \\ p_{\max}\left(1 - \frac{f_x}{f_{\max}-f_{\min}}\right), & f_{\min} < f_x < f_{\max} \\ 1, & f_x = f_{\max} \end{cases}$$

$$p_{\max} = \log_2\left[\max(H, L)\right]$$

where $f_x$ is the local complexity of the region $D_x$, $f_{\max}$ is the maximum local complexity over the regions, $f_{\min}$ is the minimum local complexity of the image local regions, $p_{\max}$ is the filter scale factor for the region of minimum local complexity, and $(H, L)$ is the resolution of the local region corresponding to the minimum local complexity;

C. From the filter scale factor $p_x$ of each local region computed above, the Gaussian filtering parameter $\sigma_x$ of the corresponding region is computed as:

$$\sigma_x = \frac{p_x}{3}$$
In this preferred embodiment, the combination of the standard deviation, information entropy and average gradient of the local region is introduced as the measure of complexity in the computation of local image complexity, which improves the accuracy of the computed local complexity. The acquired raw underwater image is filtered region by region, the filter scale of each region being determined by its local complexity, so that different regions of the underwater image are filtered at different scales.
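To make the region-adaptive filtering step concrete, the following Python sketch computes the local statistics and the per-region Gaussian parameter as defined above (mean, standard deviation, entropy, directional average gradient, combined complexity $f_x$, scale factor $p_x$, and $\sigma_x = p_x/3$). The weights $\theta_1, \theta_2, \theta_3$, the gradient direction $\theta$ and the block size are free parameters in the patent; the values used here are placeholders, and the use of NumPy/SciPy is an assumption rather than something the source prescribes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def region_complexity(region, theta=0.0, w=(0.4, 0.3, 0.3), gray_levels=256):
    """Combined complexity factor f_x of one local region (2-D grayscale array)."""
    mean = region.mean()                                    # E_x (not used further, shown for completeness)
    std = region.std()                                      # rho_x
    hist, _ = np.histogram(region, bins=gray_levels, range=(0, gray_levels))
    p = hist[hist > 0] / region.size
    entropy = -np.sum(p * np.log(p))                        # delta_x
    gy, gx = np.gradient(region.astype(float))              # ordinate / abscissa components
    avg_grad = np.mean(gx * np.cos(theta) + gy * np.sin(theta))  # omega_x
    t1, t2, t3 = w                                          # placeholder weights theta_1..theta_3
    return t1 * entropy + t2 * avg_grad + t3 * np.log(1.0 + std)

def adaptive_gaussian_filter(image, block=32):
    """Filter each block x block region with its own sigma_x = p_x / 3."""
    h_img, w_img = image.shape
    blocks = [(r, c) for r in range(0, h_img, block) for c in range(0, w_img, block)]
    f = {rc: region_complexity(image[rc[0]:rc[0]+block, rc[1]:rc[1]+block]) for rc in blocks}
    f_min, f_max = min(f.values()), max(f.values())
    p_max = np.log2(block)                                  # p_max = log2[max(H, L)] for a square block
    out = np.zeros_like(image, dtype=float)
    for (r, c), fx in f.items():
        if np.isclose(fx, f_min):
            px = p_max
        elif np.isclose(fx, f_max):
            px = 1.0
        else:
            px = p_max * (1.0 - fx / (f_max - f_min))       # piecewise rule, taken literally from the patent
        sigma = max(px / 3.0, 1e-3)                         # sigma_x = p_x / 3, clipped to stay positive
        out[r:r+block, c:c+block] = gaussian_filter(
            image[r:r+block, c:c+block].astype(float), sigma=sigma)
    return out
```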
Preferably, the edge processing unit 32 processes the filtered image and removes residual pseudo-edges from it, which specifically comprises:
A. The gradient values of the filtered image are computed and a gradient histogram of the filtered image is built. Let $T_m$ be the gradient magnitude with the largest number of pixels in the gradient histogram, and let $y_m$ be the high threshold for edge linking; the threshold $y_m$ is computed as:

$$k_m = \frac{\sum_{i=0}^{N} \left| T_i - T_m \right|}{N}, \qquad y_m = T_m\left(1 + e^{k_m}\right)$$

where $T_i$ is a gradient magnitude of the image, $T_m$ is the gradient magnitude with the largest number of pixels in the gradient histogram, and $N$ is the number of gradient values in the image;
B. The pixels exceeding the high threshold $y_m$ obtained above are removed from the gradient histogram and a gradient histogram of the remaining pixels is built. Let $K_m$ be the gradient magnitude with the largest number of pixels in the histogram of the remaining pixels, and let $y_l$ be the low threshold for edge linking; the low threshold $y_l$ is computed as:

$$g_l = \frac{\sum_{i=0}^{M} \left| K_i - K_m \right|}{M}, \qquad y_l = K_m\left(1 + e^{g_l}\right)$$

where $K_i$ is a gradient value in the histogram of the remaining pixels, $M$ is the number of gradient magnitudes in that histogram, and $K_m$ is the gradient magnitude with the largest number of pixels in it;
C. The pixels exceeding the threshold $y_l$ are removed from the gradient histogram; the remaining pixels constitute the edges of the image.
In this preferred embodiment the thresholds adapt to the statistics of the image gradient histogram; compared with the traditional fixed-threshold method, this avoids losing local edges where the gray level changes slowly and ensures the accuracy of the edge detection results.
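A short Python sketch of the adaptive threshold computation described above follows; it builds the gradient histogram, derives the high threshold $y_m$ and low threshold $y_l$ from the histogram statistics, and keeps the pixels that step C designates as edges. The patent only specifies the threshold formulas; how the gradient is computed and normalized is an assumption of this sketch.

```python
import numpy as np

def adaptive_edge_thresholds(grad_mag, n_bins=256):
    """High/low edge-linking thresholds from gradient-histogram statistics.
    grad_mag is assumed normalized to a small range (e.g. [0, 1]) so e^{k_m} stays finite."""
    t = grad_mag.ravel()
    hist, edges = np.histogram(t, bins=n_bins)
    t_m = edges[np.argmax(hist)]                 # gradient magnitude with the most pixels
    k_m = np.mean(np.abs(t - t_m))               # k_m = sum |T_i - T_m| / N
    y_m = t_m * (1.0 + np.exp(k_m))              # high threshold y_m = T_m (1 + e^{k_m})

    remaining = t[t <= y_m]                      # drop pixels above the high threshold
    hist_r, edges_r = np.histogram(remaining, bins=n_bins)
    k_big = edges_r[np.argmax(hist_r)]           # K_m of the remaining-pixel histogram
    g_l = np.mean(np.abs(remaining - k_big))     # g_l = sum |K_i - K_m| / M
    y_l = k_big * (1.0 + np.exp(g_l))            # low threshold y_l = K_m (1 + e^{g_l})
    return y_m, y_l

def edge_pixels(grad_mag):
    """Step C as stated in the patent: remove pixels above y_l, keep the rest as edges."""
    y_m, y_l = adaptive_edge_thresholds(grad_mag)
    return grad_mag <= y_l

# Usage sketch (gradient operator is an assumption):
# gy, gx = np.gradient(filtered.astype(float))
# grad = np.hypot(gx, gy); grad /= grad.max()   # normalize so the exponentials stay finite
# mask = edge_pixels(grad)
```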
Finally, it should be noted that the above embodiments only illustrate the technical solution of the invention and do not limit its scope of protection. Although the invention has been explained with reference to preferred embodiments, a person of ordinary skill in the art should understand that the technical solution of the invention may be modified or replaced by equivalents without departing from the substance and scope of the technical solution of the invention.

Claims (6)

1. An underwater image target extraction system, characterized in that it comprises a main control module, an underwater robot module, an image processing module, an information transmission module and a display module; the main control module controls the operation of the underwater robot module and the acquisition of raw underwater images; the image processing module processes the raw images so as to identify targets in them; and the information transmission module transfers the processed image information to the display module for display.
2. The underwater image target extraction system according to claim 1, characterized in that the underwater robot module comprises a controller unit, an underwater camera unit and an underwater lighting unit; the controller unit receives instructions from the main control module through the information transmission module and adjusts the operating state of the robot accordingly; the underwater camera unit acquires raw underwater images; and the underwater lighting unit provides illumination while the underwater camera unit is acquiring images.
3. The underwater image target extraction system according to claim 2, characterized in that the information transmission module communicates with the main control module via a waterproof cable.
4. The underwater image target extraction system according to claim 3, characterized in that the image processing module comprises a Gaussian filtering unit, an edge processing unit and a centroid extraction unit; the Gaussian filtering unit processes the acquired raw image, removing noise and other interference and enabling accurate localization of image edges; the edge processing unit processes the edges of the filtered image, removing residual pseudo-edges; and the centroid extraction unit extracts the centroid of the target from the detected image contour, so as to achieve rapid identification of the target.
5. The underwater image target extraction system according to claim 4, characterized in that the Gaussian filtering unit performs Gaussian filtering on the acquired raw underwater image, using an improved filter-scale algorithm to determine the Gaussian filtering scale parameter, which specifically comprises:
A. The original image is filtered region by region, and the combined complexity factor of each local region of the image is computed as follows:
$$E_x = \frac{\sum_{i=m-(s-1)/2}^{m+(s-1)/2}\ \sum_{j=n-(s-1)/2}^{n+(s-1)/2} X_{ij}}{s^2}$$

$$\rho_x = \frac{\sqrt{\sum_{i=m-(s-1)/2}^{m+(s-1)/2}\ \sum_{j=n-(s-1)/2}^{n+(s-1)/2} \left(X_{ij} - E_x\right)^2}}{s}$$
where $E_x$ is the pixel mean of the local image region $D_x$, $\rho_x$ is the pixel standard deviation of $D_x$, $(m, n)$ is the center point of $D_x$, $s$ is the number of rows and columns of $D_x$, and $X_{ij}$ is the pixel at position $(i, j)$ of $D_x$;
$$\delta_x = -\sum_{i=1}^{h} p_i \log p_i$$

$$p_i = \frac{z_i}{\sum_{i=1}^{h} z_i}$$
where $\delta_x$ is the information entropy of the local region $D_x$, $h$ is the total number of gray levels of the image, $z_i$ is the number of pixels of gray level $i$ in $D_x$, and $p_i$ is the probability of a pixel of gray level $i$ in $D_x$;
Let $f(m_i, n_i)$ denote the pixel at position $(m_i, n_i)$ of the local region $D_x$; the average gradient value in the direction $\theta$ of the pixels $f(m_i, n_i)$ of $D_x$ is $\omega_x$:
$$\omega_x = \frac{\sum_{i=1}^{g} \left( f_{m_i}\cos\theta + f_{n_i}\sin\theta \right)}{g}$$
where $g$ is the total number of pixels in the local region $D_x$, $f_{m_i}$ is the abscissa (horizontal) component at the pixel at position $(m_i, n_i)$ of $D_x$, and $f_{n_i}$ is the ordinate (vertical) component at that pixel;
The combined complexity factor $f_x$ of the local region $D_x$ is then computed as:
$$f_x = \theta_1 \delta_x + \theta_2 \omega_x + \theta_3 \ln\left(1 + \rho_x\right)$$
where $\delta_x$ is the information entropy of the local region, $\rho_x$ is the pixel standard deviation of $D_x$, $\omega_x$ is the average gradient value of the local region, and $\theta_1$, $\theta_2$ and $\theta_3$ are the respective weights of the information entropy, standard deviation and average gradient in the combined complexity factor $f_x$;
B. The filter scale factor $p_x$ of each local region is computed from the combined complexity factor of the local region obtained above; the filter scale factor $p_x$ of the local region $D_x$ is:
$$p_x = \begin{cases} p_{\max}, & f_x = f_{\min} \\ p_{\max}\left(1 - \frac{f_x}{f_{\max} - f_{\min}}\right), & f_{\min} < f_x < f_{\max} \\ 1, & f_x = f_{\max} \end{cases}$$

$$p_{\max} = \log_2\left[\max(H, L)\right]$$
where $f_x$ is the local complexity of the region $D_x$, $f_{\max}$ is the maximum local complexity over the regions, $f_{\min}$ is the minimum local complexity of the image local regions, $p_{\max}$ is the filter scale factor for the region of minimum local complexity, and $(H, L)$ is the resolution of the local region corresponding to the minimum local complexity;
C. From the filter scale factor $p_x$ of each local region computed above, the Gaussian filtering parameter $\sigma_x$ of the corresponding region is computed as:
$$\sigma_x = \frac{p_x}{3}$$
6. The underwater image target extraction system according to claim 5, characterized in that the edge processing unit processes the filtered image and removes residual pseudo-edges from it, which specifically comprises:
A. The gradient values of the filtered image are computed and a gradient histogram of the filtered image is built; let $T_m$ be the gradient magnitude with the largest number of pixels in the gradient histogram, and let $y_m$ be the high threshold for edge linking; the threshold $y_m$ is computed as:
$$k_m = \frac{\sum_{i=0}^{N} \left| T_i - T_m \right|}{N}$$

$$y_m = T_m\left(1 + e^{k_m}\right)$$
where $T_i$ is a gradient magnitude of the image, $T_m$ is the gradient magnitude with the largest number of pixels in the gradient histogram, and $N$ is the number of gradient magnitudes in the image;
B. The pixels exceeding the high threshold $y_m$ obtained above are removed from the gradient histogram and a gradient histogram of the remaining pixels is built; let $K_m$ be the gradient magnitude with the largest number of pixels in the histogram of the remaining pixels, and let $y_l$ be the low threshold for edge linking; the low threshold $y_l$ is computed as:
$$g_l = \frac{\sum_{i=0}^{M} \left| K_i - K_m \right|}{M}$$

$$y_l = K_m\left(1 + e^{g_l}\right)$$
where $K_i$ is a gradient value in the histogram of the remaining pixels, $M$ is the number of gradient magnitudes in the histogram of the remaining pixels, and $K_m$ is the gradient magnitude with the largest number of pixels in that histogram;
C. The pixels exceeding the threshold $y_l$ are removed from the gradient histogram; the remaining pixels constitute the edges of the image.
CN201710424076.0A 2017-06-07 2017-06-07 Underwater image target extraction system Withdrawn CN107274391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710424076.0A CN107274391A (en) 2017-06-07 2017-06-07 Underwater image target extraction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710424076.0A CN107274391A (en) 2017-06-07 2017-06-07 Underwater image target extraction system

Publications (1)

Publication Number Publication Date
CN107274391A true CN107274391A (en) 2017-10-20

Family

ID=60066555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710424076.0A Withdrawn CN107274391A (en) 2017-06-07 2017-06-07 A kind of underwater picture Objective extraction system

Country Status (1)

Country Link
CN (1) CN107274391A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108540771A (en) * 2018-04-02 2018-09-14 深圳智达机械技术有限公司 A kind of efficient resource detection system
CN111246158A (en) * 2019-04-15 2020-06-05 桑尼环保(江苏)有限公司 Command distribution execution method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020947A (en) * 2011-09-23 2013-04-03 阿里巴巴集团控股有限公司 Image quality analysis method and device
CN104567820A (en) * 2015-01-24 2015-04-29 无锡桑尼安科技有限公司 Underwater target central position searching system
CN104700421A (en) * 2015-03-27 2015-06-10 中国科学院光电技术研究所 Edge detection algorithm based on canny self-adaptive threshold value
CN104999164A (en) * 2015-08-04 2015-10-28 李小春 Underwater robot based on multiple filtering processing
CN105070094A (en) * 2015-08-27 2015-11-18 上海仪电电子股份有限公司 Machine vision based parking space detection system and parking space detection method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020947A (en) * 2011-09-23 2013-04-03 阿里巴巴集团控股有限公司 Image quality analysis method and device
CN104567820A (en) * 2015-01-24 2015-04-29 无锡桑尼安科技有限公司 Underwater target central position searching system
CN104700421A (en) * 2015-03-27 2015-06-10 中国科学院光电技术研究所 Edge detection algorithm based on canny self-adaptive threshold value
CN104999164A (en) * 2015-08-04 2015-10-28 李小春 Underwater robot based on multiple filtering processing
CN105070094A (en) * 2015-08-27 2015-11-18 上海仪电电子股份有限公司 Machine vision based parking space detection system and parking space detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Zhi et al.: "Fast Adaptive Threshold for the Canny Edge Detector", Proc. of SPIE *
Zhang Wei: "Research on Target Detection and Localization of Underwater Images" (水下图像的目标检测与定位研究), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108540771A (en) * 2018-04-02 2018-09-14 深圳智达机械技术有限公司 A kind of efficient resource detection system
CN111246158A (en) * 2019-04-15 2020-06-05 桑尼环保(江苏)有限公司 Command distribution execution method
CN111246158B (en) * 2019-04-15 2020-11-06 杨丽 Command distribution execution method

Similar Documents

Publication Publication Date Title
CN104134209B (en) A kind of feature extracting and matching method and system in vision guided navigation
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN107133973B (en) Ship detection method in bridge collision avoidance system
CN109800735A (en) Accurate detection and segmentation method for ship target
CN110275153A (en) A kind of waterborne target detection and tracking based on laser radar
CN110084165A (en) The intelligent recognition and method for early warning of anomalous event under the open scene of power domain based on edge calculations
CN106780560B (en) Bionic robot fish visual tracking method based on feature fusion particle filtering
Selvakumar et al. The performance analysis of edge detection algorithms for image processing
CN104899866A (en) Intelligent infrared small target detection method
WO2018000252A1 (en) Oceanic background modelling and restraining method and system for high-resolution remote sensing oceanic image
CN109583442A (en) False detection method of license plate and device based on Line segment detection
CN106845410B (en) Flame identification method based on deep learning model
CN108564602A (en) Airplane detection method based on airport remote sensing image
CN108520203A (en) Multiple target feature extracting method based on fusion adaptive more external surrounding frames and cross pond feature
CN103020959B (en) Gravity model-based oceanic front information extraction method
CN102903108A (en) Edge detection method based on underwater image statistical property
CN106874912A (en) A kind of image object detection method based on improvement LBP operators
CN106650580A (en) Image processing based goods shelf quick counting method
CN104778707A (en) Electrolytic capacitor detecting method for improving general Hough transform
CN103020967A (en) Unmanned aerial vehicle aerial image accurate matching method based on island edge characteristics
CN105608689B (en) A kind of panoramic mosaic elimination characteristics of image error hiding method and device
CN107274391A (en) A kind of underwater picture Objective extraction system
CN105139391A (en) Edge detecting method for traffic image in fog-and-haze weather
CN107862262A (en) A kind of quick visible images Ship Detection suitable for high altitude surveillance
CN110717454B (en) Wheel type robot obstacle detection method in stage environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20171020