CN103226820B - The two-dimensional maximum entropy division night vision image fusion target detection algorithm improved - Google Patents

The two-dimensional maximum entropy division night vision image fusion target detection algorithm improved

Info

Publication number
CN103226820B
CN103226820B
Authority
CN
China
Prior art keywords
image
target
dimensional
gray
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310134194.XA
Other languages
Chinese (zh)
Other versions
CN103226820A (en)
Inventor
柏连发
张毅
陈钱
顾国华
韩静
岳江
刘颖彬
吴经纬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201310134194.XA priority Critical patent/CN103226820B/en
Publication of CN103226820A publication Critical patent/CN103226820A/en
Application granted granted Critical
Publication of CN103226820B publication Critical patent/CN103226820B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an improved two-dimensional maximum entropy segmentation night-vision image fusion target detection algorithm. First, the two-dimensional histogram is improved: it is built from the grey level paired with the weighted maximum grey level of the local region, and, with suitably chosen weights, this histogram is used to compute the maximum entropy segmentation of the infrared and low-light-level images. Compared with the traditional maximum entropy segmentation algorithm it is more successful in target detection, suppressing the background while extracting the target. The segmented infrared and low-light-level images are then fused at the feature level with the AND operation, verifying the validity of multi-dimensional features for target detection. The detection algorithm of the present invention performs well, and is broadly applicable, both for target detection against complex backgrounds and for multi-target detection.

Description

Improved two-dimensional maximum entropy segmentation night-vision image fusion target detection algorithm
Technical field
The invention belongs to the field of infrared and low-light-level (LLL) image processing, and in particular concerns an improved two-dimensional maximum entropy segmentation night-vision image fusion target detection algorithm.
Background technology
Night-vision image target detection technology is developing rapidly in both the military and civilian fields. In night-vision imaging a single-band image mainly reflects one physical quantity: an infrared-band image chiefly reflects the difference in heat radiation between the target and the scene, so its scene detail is limited and the imagery is not clear enough for the human eye to discriminate; a low-light-level image has low contrast, a limited grey-scale dynamic range, a low signal-to-noise ratio, and relatively severe noise. At present, target detection is generally realised through image segmentation and target extraction. The main segmentation algorithms fall into two classes, edge detection and region growing; threshold segmentation is also widely used, for example iterative threshold selection, Otsu threshold selection, and minimum-error threshold selection. Owing to the characteristics of infrared and low-light-level images, when a common segmentation algorithm is used for detection the target in the resulting binary image is often submerged in the background, the algorithm lacks generality, and the retained background area is large, so the target cannot be extracted accurately.
In infrared night-vision images a target such as a person usually occupies a region of relatively high brightness. Against a complex background, for example one containing sources of similar heat radiation, the segmentation result obtained by choosing a maximum entropy threshold on the traditional two-dimensional histogram often leaves the target masked by background of similar radiance. In a low-light-level image the target is usually a darker or brighter region; because the scene detail is relatively rich, many background grey levels are close to the target's, and the noise is heavy, the segmentation result contains many false targets and the true target is difficult to extract.
Summary of the invention
The object of the present invention is to provide an improved two-dimensional maximum entropy segmentation night-vision image fusion target detection algorithm with good segmentation quality and good target detection performance on night-vision images.
The technical solution realizing the object of the invention is:
An improved two-dimensional maximum entropy segmentation night-vision image fusion target detection algorithm comprises the following steps:
Step one: improve the construction of the two-dimensional histogram by selecting different weights λ, where λ takes values in (0.01, 3), the range over which the improved algorithm is effective. The weight λ acts as a grey-level scaling factor on the ordinate, i.e. the two-dimensional histogram is constructed from the original image together with a region-grey-level-enhanced image. According to the grey-level characteristics of targets in infrared and low-light-level images, λ is taken less than 1 when segmenting an infrared image and greater than 1 when segmenting a low-light-level image;
Step two: partition the histogram. The two-dimensional histogram of the image is partitioned by two thresholds, a pixel grey level equal to t and a neighbourhood average grey level equal to s. Starting from the region nearest the origin of coordinates and proceeding clockwise, the regions the histogram marks off represent the background, the noise, the target information and the edge information respectively. The grey level f(m, n) of the image and g(m, n) form a grey-level pair, where g(m, n) is the grey value at pixel (m, n); in the region division of the two-dimensional histogram this pair is denoted by the two-tuple (i, j), where i is the grey value of a point in the image and j its neighbourhood grey mean; k(i, j) is the total number of pixels with the corresponding grey-level pair in the two-dimensional histogram; M × N is the image size, M being the number of rows and N the number of columns; and L is the number of grey levels of the image. The joint probability density of the image is then:
p(i, j) = k(i, j) / (M × N),  i, j = 1, 2, …, L    (1)

The two-dimensional entropies of the background and target regions are defined as:

H_1 = −∑_{i=1}^{t} ∑_{j=1}^{s} (p(i, j)/p_1(t, s)) log(p(i, j)/p_1(t, s)) = log p_1(t, s) + h_1(t, s)/p_1(t, s)    (2)

H_2 = −∑_{i=t+1}^{L} ∑_{j=s+1}^{L} (p(i, j)/p_2(t, s)) log(p(i, j)/p_2(t, s)) = log p_2(t, s) + h_2(t, s)/p_2(t, s)    (3)

In these formulas p_1(t, s) = ∑_{i=1}^{t} ∑_{j=1}^{s} p(i, j) is the probability distribution of the background region and p_2(t, s) = ∑_{i=t+1}^{L} ∑_{j=s+1}^{L} p(i, j) that of the target region; the noise and edge regions are ignored, so p_1(t, s) + p_2(t, s) = 1, and the entropies of the target and background regions are:

h_1(t, s) = −∑_{i=1}^{t} ∑_{j=1}^{s} p(i, j) log p(i, j)    (4)

h_2(t, s) = −∑_{i=t+1}^{L} ∑_{j=s+1}^{L} p(i, j) log p(i, j)    (5)

The total information entropy is the sum over the target and background regions, namely

H(t, s) = log[p_1(t, s) p_2(t, s)] + h_1(t, s)/p_1(t, s) + h_2(t, s)/p_2(t, s)
        = log{p_1(t, s)[1 − p_1(t, s)]} + h_1(t, s)/p_1(t, s) + h_2(t, s)/[1 − p_1(t, s)]    (6)

The pair (t*, s*) at which H(t, s) attains its maximum is the optimal threshold found by the two-dimensional maximum entropy method; t* and s* are respectively the pixel grey level and the neighbourhood average grey level corresponding to the optimal threshold. The optimal threshold vector is expressed as:

(t*, s*)^T = arg max_{1<t<L} max_{1<s<L} H(t, s)    (7)
Step three: perform extraction on the segmented images, using the AND operation between the images to extract the clearer targets in the result;
Step four: finally, verify the feasibility of applying the AND operation to the segmented binary images. Let the low-light-level image after the improved maximum entropy segmentation be I_a(x_1, y_1) and the infrared image after segmentation by the same method be I_b(x_2, y_2). Feature-level information fusion is performed on them according to the following formula, extracting the target and obtaining the final detection result T(x, y), where x_1 is the row variable of the low-light-level image, y_1 the column variable of the low-light-level image, x_2 the row variable of the infrared image, y_2 the column variable of the infrared image, x the row variable of the final detection result and y the column variable of the final detection result:

T(x, y) = I_a(x_1, y_1) ∧ I_b(x_2, y_2)    (8)
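The fusion in step four is a plain pixel-wise AND of two registered binary maps. The following is a minimal illustrative sketch, not part of the patent text; the function name and the assumption that both images are already registered to the same grid are mine:

```python
import numpy as np

def fuse_and(seg_lowlight, seg_infrared):
    """Formula (8): T(x, y) = I_a(x1, y1) AND I_b(x2, y2).

    Both inputs are binary (0/255) uint8 images of the same size,
    assumed registered to a common grid. A pixel survives only if
    both bands classified it as target, which filters false targets
    that appear in just one band.
    """
    if seg_lowlight.shape != seg_infrared.shape:
        raise ValueError("segmented images must have the same size")
    # On 0/255 data the bitwise AND coincides with the logical AND.
    return np.bitwise_and(seg_lowlight, seg_infrared)
```

Because 255 is all ones in 8 bits, `bitwise_and` returns 255 exactly where both inputs are 255 and 0 elsewhere.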
The present invention compared with prior art, its remarkable advantage:
Compared with the prior art, the present invention has significant advantages. Owing to the characteristics of infrared and low-light-level images, when a common segmentation algorithm is used for detection the target in the resulting binary image is often submerged in the background, the algorithm lacks generality, and too much background is retained for the target to be extracted accurately. Aimed at this difficulty in infrared and low-light-level target detection, the present invention proposes an improved two-dimensional maximum entropy segmentation method that builds the two-dimensional histogram from the grey level and the weighted maximum grey level of the local region. By setting different weights for the segmentation of the infrared and low-light-level images, the target can be extracted well in each image while the background and noise are suppressed to a great extent, giving the algorithm practical value in target detection and better segmentation of the target than classical methods. The segmented image information is then fused, combining the infrared image's sensitivity to heat radiation with the rich detail of the low-light-level image, which improves detection accuracy and further suppresses noise, achieving the object of target detection.
The present invention is described in further detail below in conjunction with the accompanying drawings.
Accompanying drawing explanation
Fig. 1 is the flow chart of the algorithm of the present invention.
Fig. 2 is the block diagram of existing fusion target detection.
Fig. 3 is the division schematic diagram of the improved two-dimensional histogram of the present invention.
Fig. 4 is a schematic diagram of the improved two-dimensional maximum entropy segmentation of the present invention, in which (a), (b) and (c) are the original infrared image, the image after traditional two-dimensional maximum entropy processing, and the image after processing by the present algorithm, and (d), (e) and (f) are the corresponding original low-light-level image, the image after traditional two-dimensional maximum entropy processing, and the image after processing by the present algorithm.
Fig. 5 is a schematic diagram of the final detection result of the present invention.
Fig. 6 is a schematic diagram of multi-target detection by the present invention, in which (a), (b), (c), (d) and (e) are the original low-light-level image, the original infrared image, the low-light-level segmentation, the infrared segmentation, and the image after the AND operation.
Embodiment
As shown in Fig. 1, the improved two-dimensional maximum entropy segmentation night-vision image fusion target detection algorithm of the present invention comprises the following steps:
Step one: improve the construction of the two-dimensional histogram by selecting different weights λ, where λ takes values in (0.01, 3), the range over which the improved algorithm is effective. The weight λ acts as a grey-level scaling factor on the ordinate, i.e. the two-dimensional histogram is constructed from the original image together with a region-grey-level-enhanced image. According to the grey-level characteristics of targets in infrared and low-light-level images, λ is taken less than 1 when segmenting an infrared image and greater than 1 when segmenting a low-light-level image;
Step two: partition the histogram. The two-dimensional histogram of the image is partitioned by two thresholds, a pixel grey level equal to t and a neighbourhood average grey level equal to s. Starting from the region nearest the origin of coordinates and proceeding clockwise, the regions the histogram marks off represent the background, the noise, the target information and the edge information respectively. The grey level f(m, n) of the image and g(m, n) form a grey-level pair, where g(m, n) is the grey value at pixel (m, n); in the region division of the two-dimensional histogram this pair is denoted by the two-tuple (i, j), where i is the grey value of a point in the image and j its neighbourhood grey mean; k(i, j) is the total number of pixels with the corresponding grey-level pair in the two-dimensional histogram; M × N is the image size, M being the number of rows and N the number of columns; and L is the number of grey levels of the image. The joint probability density of the image is then:
p(i, j) = k(i, j) / (M × N),  i, j = 1, 2, …, L    (1)

The two-dimensional entropies of the background and target regions are defined as:

H_1 = −∑_{i=1}^{t} ∑_{j=1}^{s} (p(i, j)/p_1(t, s)) log(p(i, j)/p_1(t, s)) = log p_1(t, s) + h_1(t, s)/p_1(t, s)    (2)

H_2 = −∑_{i=t+1}^{L} ∑_{j=s+1}^{L} (p(i, j)/p_2(t, s)) log(p(i, j)/p_2(t, s)) = log p_2(t, s) + h_2(t, s)/p_2(t, s)    (3)

In these formulas p_1(t, s) = ∑_{i=1}^{t} ∑_{j=1}^{s} p(i, j) is the probability distribution of the background region and p_2(t, s) = ∑_{i=t+1}^{L} ∑_{j=s+1}^{L} p(i, j) that of the target region; the noise and edge regions are ignored, so p_1(t, s) + p_2(t, s) = 1, and the entropies of the target and background regions are:

h_1(t, s) = −∑_{i=1}^{t} ∑_{j=1}^{s} p(i, j) log p(i, j)    (4)

h_2(t, s) = −∑_{i=t+1}^{L} ∑_{j=s+1}^{L} p(i, j) log p(i, j)    (5)

The total information entropy is the sum over the target and background regions, namely

H(t, s) = log[p_1(t, s) p_2(t, s)] + h_1(t, s)/p_1(t, s) + h_2(t, s)/p_2(t, s)
        = log{p_1(t, s)[1 − p_1(t, s)]} + h_1(t, s)/p_1(t, s) + h_2(t, s)/[1 − p_1(t, s)]    (6)

The pair (t*, s*) at which H(t, s) attains its maximum is the optimal threshold found by the two-dimensional maximum entropy method; t* and s* are respectively the pixel grey level and the neighbourhood average grey level corresponding to the optimal threshold. The optimal threshold vector is expressed as:

(t*, s*)^T = arg max_{1<t<L} max_{1<s<L} H(t, s)    (7)
Step three: perform extraction on the segmented images, using the AND operation between the images to extract the clearer targets in the result;
Step four: finally, verify the feasibility of applying the AND operation to the segmented binary images. Let the low-light-level image after the improved maximum entropy segmentation be I_a(x_1, y_1) and the infrared image after segmentation by the same method be I_b(x_2, y_2). Feature-level information fusion is performed on them according to the following formula, extracting the target and obtaining the final detection result T(x, y), where x_1 is the row variable of the low-light-level image, y_1 the column variable of the low-light-level image, x_2 the row variable of the infrared image, y_2 the column variable of the infrared image, x the row variable of the final detection result and y the column variable of the final detection result:

T(x, y) = I_a(x_1, y_1) ∧ I_b(x_2, y_2)    (8)
Concrete steps are as follows:
One, the improved two-dimensional histogram:
The present invention improves the two-dimensional maximum entropy segmentation algorithm by proposing to build the two-dimensional histogram from the grey level and the weighted maximum grey level of the local region and to perform maximum entropy segmentation on it. By choosing suitable weights it can reasonably extract the required bright (or dark) targets in infrared and low-light-level images while simultaneously suppressing noise and background, which the traditional two-dimensional maximum entropy segmentation algorithm cannot do.
Two, the improved two-dimensional maximum entropy segmentation:
First, in an image of size M × N, take the maximum value in the 3 × 3 region G(p, q) centred on pixel (m, n), which is equivalent to extracting the detail in the image; here M and N are respectively the numbers of rows and columns of the image, m and n are the coordinates of the pixel, and G(p, q) denotes the 3 × 3 region centred on (m, n). This maximum is then weighted (the weight is denoted λ), i.e. the weighted grey level of the maximum within the 3 × 3 region G(p, q) replaces the neighbourhood average grey level as the ordinate. Denoting it by g(m, n) for pixel (m, n):

g(m, n) = λ max(G(p, q))    (10)
f(m, n), the grey value of pixel (m, n), and g(m, n) together form the new two-dimensional histogram. As can be seen from the formula above, the weight λ acts as a grey-level scaling factor on the ordinate, i.e. the two-dimensional histogram is constructed from the original image and a "region-grey-level-enhanced image". Selecting different weights realises different segmentation objectives and increases the applicability of the algorithm.
Repeated segmentation experiments on infrared and low-light-level images show that the algorithm is effective for λ between 0.01 and 3. According to the grey-level characteristics of targets in infrared and low-light-level images, λ in formula (10) is generally taken less than 1 when segmenting an infrared image and greater than 1 when segmenting a low-light-level image.
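Formula (10) can be sketched as follows. The helper name, the edge padding at the image border and the clipping back to [0, 255] are illustrative assumptions of mine; the patent does not specify border handling:

```python
import numpy as np

def weighted_local_max(image, lam):
    """g(m, n) = lam * max(G(p, q)), formula (10), with a 3x3 window.

    lam < 1 for infrared images, lam > 1 for low-light images,
    with lam in (0.01, 3) per the text. Values are clipped to
    [0, 255] so g stays a valid grey level for the 2-D histogram.
    """
    # Pad with edge values so border pixels also get a full 3x3 window.
    padded = np.pad(image.astype(np.float64), 1, mode="edge")
    M, N = image.shape
    g = np.empty((M, N))
    for m in range(M):
        for n in range(N):
            g[m, n] = lam * padded[m:m + 3, n:n + 3].max()
    return np.clip(np.round(g), 0, 255).astype(np.uint8)
```

With lam = 1 this reduces to a plain 3 × 3 maximum filter; lam < 1 compresses the ordinate and lam > 1 stretches it, which is exactly the scaling role the text assigns to λ.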
In the present invention the two-dimensional histogram of the infrared image is built with this new method, and the histogram is divided with a cross of lines parallel to the abscissa and ordinate, as in Fig. 3. The grey levels f(m, n) and g(m, n) of the image form the grey-level pair, i.e. i and j in formula (1); the maximum entropy is then computed through the calculation procedure of formulas (1) to (7) to obtain the corresponding threshold (t*, s*). Pixels whose g(m, n) is less than s* and whose f(m, n) is less than t* are assigned 0 and the remainder 255, producing the binary segmentation result; the concrete effect is shown in Fig. 4, and Fig. 5 shows the final detection result.
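The threshold search of formulas (1) to (7) can be sketched as below. Cumulative sums make each p_1, p_2, h_1, h_2 an O(1) lookup, and the noise and edge quadrants are dropped as the text prescribes. This is an illustrative reading with assumed names, not the patent's reference implementation:

```python
import numpy as np

def max_entropy_threshold(f, g, L=256, eps=1e-12):
    """Return (t*, s*) maximising H(t, s) of formula (6) over the 2-D histogram of (f, g).

    f: original grey image; g: weighted local-max image of formula (10).
    """
    hist = np.zeros((L, L))
    np.add.at(hist, (f.ravel().astype(int), g.ravel().astype(int)), 1)
    p = hist / f.size                       # formula (1): p(i, j) = k(i, j) / (M * N)
    P = p.cumsum(0).cumsum(1)               # P[t, s] = p1(t, s), background quadrant mass
    S = (p * np.log(p + eps)).cumsum(0).cumsum(1)
    best_H, best = -np.inf, (0, 0)
    for t in range(L - 1):                  # patent restricts 1 < t < L, 1 < s < L
        for s in range(L - 1):
            p1 = P[t, s]                                        # background: i <= t, j <= s
            p2 = P[-1, -1] - P[t, -1] - P[-1, s] + P[t, s]      # target: i > t, j > s
            if p1 < eps or p2 < eps:
                continue                                        # degenerate split, skip
            h1 = -S[t, s]                                       # formula (4)
            h2 = -(S[-1, -1] - S[t, -1] - S[-1, s] + S[t, s])   # formula (5)
            H = np.log(p1 * p2) + h1 / p1 + h2 / p2             # formula (6)
            if H > best_H:
                best_H, best = H, (t, s)
    return best
```

Note that p2 here is the actual mass of the target quadrant rather than 1 − p1; the two coincide when, as the text assumes, the noise and edge quadrants carry negligible probability.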
Three, the AND operation on multi-dimensional features
The AND operation between two images is widely used in image information fusion. In rough set theory an important notion is attribute reduction which, defined from the mathematical angle, reduces n-dimensional data x = (x_1, x_2, …, x_n) to a lower dimension by some means; the new data simplify computation, eliminate redundant information, and improve accuracy. The AND operation on multi-dimensional feature matrices applies the AND bit by bit to the feature matrices (here mainly two-dimensional feature matrices). Define two m × n matrices A_{m×n} and B_{m×n} as expressed below:
A_{m×n} = (a_{ij}), B_{m×n} = (b_{ij}), i = 1, 2, …, m, j = 1, 2, …, n
The bit-by-bit AND of the two matrices is then C = A_{m×n} ∧ B_{m×n}, with elements c_{ij} = a_{ij} ∧ b_{ij}.
A binary image after segmentation can be regarded as an M × N matrix whose elements consist of 0 and 255. As an example of the formula above, suppose two 3 × 3 binary image matrices A_{3×3} and B_{3×3} are respectively:
A_{3×3} =
[   0    0  255
    0  255  255
    0  255  255 ]

B_{3×3} =
[ 255    0    0
    0  255  255
    0  255  255 ]
Computing C = A_{3×3} ∧ B_{3×3} gives:
C =
[   0    0    0
    0  255  255
    0  255  255 ]
For binary map A_{3×3}, equivalent to one dimension of feature information, a_{13} is noise and the four grey values a_{22}, a_{23}, a_{32}, a_{33} form the target region; for B_{3×3}, equivalent to another dimension of feature information, b_{11} is noise and the four grey values b_{22}, b_{23}, b_{32}, b_{33} form the target region. The result C shows that the noise is finally filtered out while the target region is retained. Converting the two-dimensional features into a one-dimensional feature completes the function of attribute-reduction theory, namely removing redundancy and improving accuracy, and demonstrates that the AND operation on multi-dimensional feature matrices can realise feature-level fusion detection.
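The 3 × 3 worked example above can be checked directly; this is a small sketch using NumPy, whose element-wise bitwise AND coincides with the logical AND on 0/255 values:

```python
import numpy as np

# The example matrices from the text: a13 and b11 are isolated noise
# pixels, the lower-right 2x2 blocks are the common target region.
A = np.array([[0,   0, 255],
              [0, 255, 255],
              [0, 255, 255]], dtype=np.uint8)
B = np.array([[255,  0,   0],
              [0,  255, 255],
              [0,  255, 255]], dtype=np.uint8)

C = np.bitwise_and(A, B)   # c_ij = a_ij AND b_ij
# C keeps only the pixels that are 255 in both maps, so the single-band
# noise pixels a13 and b11 vanish while the shared target block survives.
```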
Four, multi-dimensional target detection
Having demonstrated theoretically the feasibility of applying the AND operation to the segmented binary images, the present invention, aimed at the characteristics of the target and background to be detected and combining the concept of information fusion, applies the AND operation to the infrared and low-light-level binary images obtained by the segmentation algorithm of the present invention, filtering out the residual noise, background and false targets in each binary image and improving the accuracy of target detection.
As in the block diagram of Fig. 2, suppose the low-light-level image after the improved maximum entropy segmentation is I_a(x, y) and the infrared image after segmentation by the same method is I_b(x, y). Feature-level information fusion is performed on them according to the following formula, extracting the target and obtaining the final detection result T(x, y):

T(x, y) = I_a(x, y) ∧ I_b(x, y)    (8)

The result combines the advantages of fusion technology: some of the background and noise left in the separate segmentations of the infrared-band image and the low-light-level image is filtered out and the target extracted, as shown in Fig. 6, which also improves the accuracy of detection against complex backgrounds and of multi-target detection.

Claims (2)

1. An improved two-dimensional maximum entropy segmentation night-vision image fusion target detection algorithm, characterised by comprising the following steps:
Step one: improve the construction of the two-dimensional histogram by selecting different weights λ, where λ takes values in (0.01, 3), the range over which the improved algorithm is effective; the weight λ acts as a grey-level scaling factor on the ordinate, i.e. the two-dimensional histogram is constructed from the original image together with a region-grey-level-enhanced image; according to the grey-level characteristics of targets in infrared and low-light-level images, λ is taken less than 1 when segmenting an infrared image and greater than 1 when segmenting a low-light-level image; wherein the ordinate is the grey level of the maximum within the 3 × 3 region G(p, q), and g(m, n) is obtained by the formula g(m, n) = λ max(G(p, q));
Step two: partition the histogram; the two-dimensional histogram of the image is partitioned by two thresholds, a pixel grey level equal to t and a neighbourhood average grey level equal to s; starting from the region nearest the origin of coordinates and proceeding clockwise, the regions the histogram marks off represent the background, the noise, the target information and the edge information respectively; the grey level f(m, n) of the image and g(m, n) form a grey-level pair, where g(m, n) is the grey value at pixel (m, n); in the region division of the two-dimensional histogram this pair is denoted by the two-tuple (i, j), where i is the grey value of a point in the image and j its neighbourhood grey mean; k(i, j) is the total number of pixels with the corresponding grey-level pair in the two-dimensional histogram; M × N is the image size, M being the number of rows and N the number of columns; and L is the number of grey levels of the image; the joint probability density of the image is then:

p(i, j) = k(i, j) / (M × N),  i, j = 1, 2, …, L    (1)

The two-dimensional entropies of the background and target regions are defined as:

H_1 = −∑_{i=1}^{t} ∑_{j=1}^{s} (p(i, j)/p_1(t, s)) log(p(i, j)/p_1(t, s)) = log p_1(t, s) + h_1(t, s)/p_1(t, s)    (2)

H_2 = −∑_{i=t+1}^{L} ∑_{j=s+1}^{L} (p(i, j)/p_2(t, s)) log(p(i, j)/p_2(t, s)) = log p_2(t, s) + h_2(t, s)/p_2(t, s)    (3)

In these formulas p_1(t, s) is the probability distribution of the background region and p_2(t, s) that of the target region; the noise and edge regions are ignored, so p_1(t, s) + p_2(t, s) = 1, and the entropies of the target and background regions are:

h_1(t, s) = −∑_{i=1}^{t} ∑_{j=1}^{s} p(i, j) log p(i, j)    (4)

h_2(t, s) = −∑_{i=t+1}^{L} ∑_{j=s+1}^{L} p(i, j) log p(i, j)    (5)

The total information entropy is the sum over the target and background regions, namely

H(t, s) = log[p_1(t, s) p_2(t, s)] + h_1(t, s)/p_1(t, s) + h_2(t, s)/p_2(t, s) = log{p_1(t, s)[1 − p_1(t, s)]} + h_1(t, s)/p_1(t, s) + h_2(t, s)/[1 − p_1(t, s)]    (6)

The pair (t*, s*) at which H(t, s) attains its maximum is the optimal threshold found by the two-dimensional maximum entropy method; t* and s* are respectively the pixel grey level and the neighbourhood average grey level corresponding to the optimal threshold; the optimal threshold vector is expressed as:

(t*, s*)^T = arg max_{1<t<L} max_{1<s<L} H(t, s)    (7)

wherein f(m, n) represents the grey value of pixel (m, n), and t and s are the optimal parameters to be solved;
Step three: perform extraction on the segmented images, using the AND operation between the images to extract the targets in the result;
Step four: finally, verify the feasibility of applying the AND operation to the segmented binary images; let the low-light-level image after the improved maximum entropy segmentation be I_a(x_1, y_1) and the infrared image after segmentation by the same method be I_b(x_2, y_2); feature-level information fusion is performed on them according to the following formula, extracting the target and obtaining the final detection result T(x, y), where x_1 is the row variable of the low-light-level image, y_1 the column variable of the low-light-level image, x_2 the row variable of the infrared image, y_2 the column variable of the infrared image, x the row variable of the final detection result and y the column variable of the final detection result:

T(x, y) = I_a(x_1, y_1) ∧ I_b(x_2, y_2)    (8).
2. The improved two-dimensional maximum entropy segmentation night-vision image fusion target detection algorithm according to claim 1, characterised in that the concrete steps of step three are: the AND operation on multi-dimensional feature matrices applies the AND bit by bit to the feature matrices; define two m × n matrices A_{m×n} = (a_{ij}) and B_{m×n} = (b_{ij}); the bit-by-bit AND of the two matrices is then C = A_{m×n} ∧ B_{m×n}, with elements c_{ij} = a_{ij} ∧ b_{ij}; a binary image after segmentation can be regarded as an M × N matrix whose elements consist of 0 and 255; as an example of the formula above, suppose two 3 × 3 binary image matrices A_{3×3} and B_{3×3} are respectively:

A_{3×3} =
[   0    0  255
    0  255  255
    0  255  255 ]

B_{3×3} =
[ 255    0    0
    0  255  255
    0  255  255 ]

Computing C = A_{3×3} ∧ B_{3×3} gives:

C =
[   0    0    0
    0  255  255
    0  255  255 ]

For binary map A_{3×3}, equivalent to one dimension of feature information, a_{13} is noise and the four grey values a_{22}, a_{23}, a_{32}, a_{33} form the target region; for B_{3×3}, equivalent to another dimension of feature information, b_{11} is noise and the four grey values b_{22}, b_{23}, b_{32}, b_{33} form the target region; the result C shows that the noise is finally filtered out while the target region is retained.
CN201310134194.XA 2013-04-17 2013-04-17 The two-dimensional maximum entropy division night vision image fusion target detection algorithm improved Expired - Fee Related CN103226820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310134194.XA CN103226820B (en) 2013-04-17 2013-04-17 The two-dimensional maximum entropy division night vision image fusion target detection algorithm improved

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310134194.XA CN103226820B (en) 2013-04-17 2013-04-17 The two-dimensional maximum entropy division night vision image fusion target detection algorithm improved

Publications (2)

Publication Number Publication Date
CN103226820A CN103226820A (en) 2013-07-31
CN103226820B true CN103226820B (en) 2016-01-20

Family

ID=48837255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310134194.XA Expired - Fee Related CN103226820B (en) 2013-04-17 2013-04-17 The two-dimensional maximum entropy division night vision image fusion target detection algorithm improved

Country Status (1)

Country Link
CN (1) CN103226820B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296671B (en) * 2016-08-03 2019-06-11 昆明理工大学 A kind of image partition method based on Gabor entropy of histogram
CN106529391B (en) * 2016-09-08 2019-06-18 长安大学 A kind of speed limit road traffic sign detection of robust and recognition methods
CN107194943A (en) * 2017-04-07 2017-09-22 北京工业大学 Image partition method and device, image partition method and device for slag piece
JP7114965B2 (en) * 2017-07-14 2022-08-09 富士通株式会社 Target detection method, device and image processing device
CN108782145B (en) * 2018-05-30 2020-11-13 宁夏悦丰生态农业科技股份有限公司 Intelligent orchard management system
CN108834667A (en) * 2018-05-30 2018-11-20 深圳市鑫汇达机械设计有限公司 A kind of greenhouse system based on Internet of Things
CN109035217B (en) * 2018-07-06 2021-02-02 同济大学 Quantitative evaluation method for soil body fracture based on image two-dimensional entropy
CN109272503B (en) * 2018-09-29 2019-11-19 西安工程大学 A kind of filoplume detection method blended based on maximum entropy and DBSCAN
CN109859245B (en) * 2019-01-22 2020-12-11 深圳大学 Multi-target tracking method and device for video target and storage medium
CN110033499B (en) * 2019-04-21 2021-01-15 南京林业大学 Landsat OLI image-based water body mapping method
CN110084818B (en) * 2019-04-29 2021-04-23 清华大学深圳研究生院 Dynamic down-sampling image segmentation method
CN111291762B (en) * 2020-03-10 2022-12-13 上海航天控制技术研究所 Multi-feature-point-difference-based multi-band image fusion detection method
CN112837335B (en) * 2021-01-27 2023-05-09 上海航天控制技术研究所 Medium-long wave infrared composite anti-interference method
CN118469994A (en) * 2024-06-07 2024-08-09 奈米科学仪器装备(杭州)有限公司 Chip defect detection method based on maximum entropy double-threshold image segmentation algorithm

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101059870A (en) * 2007-05-23 2007-10-24 华中科技大学 Image cutting method based on attribute histogram
CN102831604A (en) * 2012-07-30 2012-12-19 常州大学 Two-dimensional Renyi entropic threshold segmentation method for grayscale images

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101059870A (en) * 2007-05-23 2007-10-24 华中科技大学 Image cutting method based on attribute histogram
CN102831604A (en) * 2012-07-30 2012-12-19 常州大学 Two-dimensional Renyi entropic threshold segmentation method for grayscale images

Non-Patent Citations (5)

Title
A review on image segmentation techniques; Nikhil R. Pal, Sankar K. Pal; Pattern Recognition; 19931231; Vol. 26, No. 9; full text *
A new method of creating a two-dimensional histogram for automatic image segmentation; Zhang Yunfei et al.; Opto-Electronic Engineering (《光电工程》); 20070131; Vol. 34, No. 1; full text *
An improved method based on two-dimensional maximum entropy threshold image segmentation; Tan Guojun et al.; Software Guide (《软件导刊》); 20080131; Vol. 7, No. 1; full text *
Improved maximum entropy threshold segmentation and its fast implementation; Zhang Xinming et al.; Computer Science (《计算机科学》); 20110831; Vol. 38, No. 8; full text *
A heuristic algorithm for attribute reduction of granule matrices; Zhong Luo; Journal of Chinese Computer Systems (《小型微型计算机系统》); 20110331; Vol. 32, No. 3; full text *

Also Published As

Publication number Publication date
CN103226820A (en) 2013-07-31

Similar Documents

Publication Publication Date Title
CN103226820B (en) The two-dimensional maximum entropy division night vision image fusion target detection algorithm improved
CN109934200B (en) RGB color remote sensing image cloud detection method and system based on improved M-Net
CN109360232B (en) Indoor scene layout estimation method and device based on condition generation countermeasure network
CN112288008B (en) Mosaic multispectral image disguised target detection method based on deep learning
CN109460764B (en) Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method
CN109767439A (en) A kind of multiple dimensioned difference of self-adapting window and the object detection method of bilateral filtering
CN105701434A (en) Image correction method for two-dimensional code distorted image
CN111126412B (en) Image key point detection method based on characteristic pyramid network
CN107844795A (en) Convolutional neural network feature extraction method based on principal component analysis
CN110782477A (en) Moving target rapid detection method based on sequence image and computer vision system
CN103325120A (en) Rapid self-adaption binocular vision stereo matching method capable of supporting weight
CN104217436B (en) SAR image segmentation method based on multiple features combining sparse graph
CN102903111B (en) Large area based on Iamge Segmentation low texture area Stereo Matching Algorithm
CN112766089B (en) Cross-domain road extraction method based on global-local confrontation learning framework
CN101976436A (en) Pixel-level multi-focus image fusion method based on correction of differential image
CN106600613B (en) Improvement LBP infrared target detection method based on embedded gpu
CN106355608A (en) Stereoscopic matching method on basis of variable-weight cost computation and S-census transformation
CN104966054A (en) Weak and small object detection method in visible image of unmanned plane
CN103034983A (en) Defogging method based on anisotropic filtering
CN114612769A (en) Integrated sensing infrared imaging ship detection method integrated with local structure information
CN107609571A (en) A kind of adaptive target tracking method based on LARK features
CN110009670A (en) The heterologous method for registering images described based on FAST feature extraction and PIIFD feature
CN114926826A (en) Scene text detection system
CN104835142A (en) Vehicle queuing length detection method based on texture features
CN104143203A (en) Image editing and communication method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160120

CF01 Termination of patent right due to non-payment of annual fee