CN107168930A - A tight-frame Grouplet association domain calculation method - Google Patents

A tight-frame Grouplet association domain calculation method

Info

Publication number
CN107168930A
Authority
CN
China
Prior art keywords
grouplet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710356482.8A
Other languages
Chinese (zh)
Other versions
CN107168930B (en)
Inventor
闫敬文
袁振国
王宏志
陈宏达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shantou University
Original Assignee
Shantou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shantou University filed Critical Shantou University
Priority to CN201710356482.8A priority Critical patent/CN107168930B/en
Publication of CN107168930A publication Critical patent/CN107168930A/en
Application granted granted Critical
Publication of CN107168930B publication Critical patent/CN107168930B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/14 Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing

Abstract

The embodiment of the invention discloses a method for computing the association domain of the tight-frame Grouplet transform. By introducing line detection operators into the computation of the association domain, the time cost of the conventional tight-frame Grouplet transform is significantly reduced and the efficiency of the transform is improved, without noticeably degrading the sparsity of the coefficient matrix and while retaining the ability to perfectly reconstruct the original image. This broadens the application of the tight-frame Grouplet transform in image processing and has important practical significance.

Description

A tight-frame Grouplet association domain calculation method
Technical field
The present invention relates to an image processing method, and more particularly to a tight-frame Grouplet association domain calculation method.
Background technology
A few strokes are enough to sketch the salient shape or texture of an object. Natural images typically contain regions composed of local directional structures. Representing the complex geometric information in an image well and obtaining a sparse representation of the image are the keys to image processing.
The wavelet transform can adaptively adjust its resolution according to the regularity of the local image content, and is therefore particularly efficient for image representation. However, wavelet bases are not optimal for representing images with geometric structure, because their square supports cannot adaptively capture the directional attributes of the geometry. Curvelets, Contourlets, Bandelets and Wedgelets were successively proposed to address this problem and have achieved good results in specific image processing applications; however, their improvement on real images is not as good as the theory would suggest, possibly because the overly complex texture structure of such images cannot be represented well by their bases. To overcome this shortcoming, Mallat proposed the Grouplet transform in 2008. It is a completely new transform whose basis changes with the geometry of the image at different scales, so that the geometric properties of the image can be exploited to the greatest extent. Moreover, the Grouplet transform is simple to compute, and the transform itself is a fast algorithm.
The tight-frame Grouplet decomposition involves the computation of two layers: an association domain layer and a coefficient layer. In the tight-frame Grouplet transform, the search for the association domain has a large influence on the performance of the transform. The original tight-frame Grouplet transform uses a block matching algorithm to find the association domain; the advantage of this method is that it is discretized and can accurately reflect the change at each pixel, but since a grid must be delimited in advance, it cannot select directions adaptively according to the image structure and therefore cannot represent images containing complex textures well.
Summary of the invention
The technical problem to be solved by the embodiments of the present invention is to provide a tight-frame Grouplet association domain calculation method, which can reduce the high complexity introduced by the block matching algorithm in the original transform and improve the efficiency of the tight-frame Grouplet transform.
In order to solve the above technical problem, an embodiment of the invention provides a tight-frame Grouplet association domain calculation method, comprising the following steps:
1) converting the input image into a grayscale image;
2) performing the Grouplet decomposition of the grayscale image: computing the j-th layer of coefficient-layer coefficients, finding the best matching point with the line detection operators, and computing the j-th layer of association domain layer coefficients, where 1 ≤ j ≤ J;
3) repeating step 2) until all J layers of coefficients have been computed, where J is a given empirical value.
Further, the line detection operator templates are 3 × 3 matrices.
Further, the formula by which the line detection operators compute the best matching point is:

$$m = \arg\max\left\{\sum_{j=1}^{9} BP_{\tilde m}[j] \times OP_i[j]\right\},$$

where $BP_{\tilde m}$ is the binarized 3 × 3 image data matrix centered on $\tilde m$; $OP_i[j]$ is the line detection operator for detection direction i, with i denoting one direction in the set {+45°, 0°, −45°}; and the symbol j denotes the j-th position of the corresponding matrix.
Further, the steps by which the line detection operators compute the best matching point are:
1) in the j-th layer, taking the point $\tilde m$ as the center, select the 3 × 3 data matrix BP formed by the pixel values;
2) binarize the data matrix $BP_{3\times 3}$: let Av denote the mean of $BP_{3\times 3}$; when $BP_{x,y}$ is less than the mean Av, $BP_{x,y}$ takes the value 0, otherwise 1;
3) use the formula above for computing the best matching point with the line detection operators to obtain the best matching point m of the point $\tilde m$;
4) compute the association domain layer coefficients $A_j[\tilde m]$ and the coefficient-layer coefficients $\{d_j[m], a_J[m]\}_{1\le j\le J}$, where
Association domain layer coefficients:

$$A_j[\tilde m] = m - \tilde m$$

When j increases from 1 to J, the coefficient-layer coefficients are updated by the following equations:

$$\hat s = s[m] + s[\tilde m]$$
$$d_j[\tilde m] = (a[\tilde m] - a[m])\sqrt{\frac{s[m]\, s[\tilde m]}{\hat s}},$$
$$a[m] = \frac{s[m]\, a[m] + s[\tilde m]\, a[\tilde m]}{\hat s},$$
$$s[m] = \hat s$$

where a[m] denotes the mean coefficient at point m (when j = 1, a[m] is the pixel value at point m); $d_j[m]$ denotes the difference coefficient at point m; and s[m] denotes the size of the support at point m (when j = 1, s[m] = 1). When j = J, $a_J[m]$ is obtained from the corresponding formula.
5) repeat the above steps until all points in layers 1 to J have been matched.
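For illustration only, the following Python sketch shows one way to implement the best-matching-point search described above. The template values are the standard −45°, 0° and +45° line detection masks and are an assumption (the patent's own templates are those of Fig. 2), the selection follows the arg max formula given above (the detailed embodiment below instead selects by the minimum absolute template response), and boundary handling is omitted.

```python
import numpy as np

# Assumed standard 3x3 line detection masks for -45, 0 and +45 degrees;
# the actual templates are those of Fig. 2 and may differ from these values.
OP_N45 = np.array([[ 2, -1, -1],
                   [-1,  2, -1],
                   [-1, -1,  2]], dtype=float)
OP_0   = np.array([[-1, -1, -1],
                   [ 2,  2,  2],
                   [-1, -1, -1]], dtype=float)
OP_P45 = np.array([[-1, -1,  2],
                   [-1,  2, -1],
                   [ 2, -1, -1]], dtype=float)
TEMPLATES = [OP_N45, OP_0, OP_P45]  # top-to-bottom candidate order (assumption)

def best_match_point(image, row, col):
    """Return the best matching point of the interior point (row, col),
    chosen by the line detection operators (a sketch, not the patented code)."""
    # 3x3 data matrix BP centred on the current point
    bp = image[row - 1:row + 2, col - 1:col + 2].astype(float)
    # binarize against the neighbourhood mean Av: 0 below the mean, 1 otherwise
    bp = (bp >= bp.mean()).astype(float)
    # correlate BP with each template and keep the largest response (arg max)
    responses = [float(np.sum(bp * op)) for op in TEMPLATES]
    k = int(np.argmax(responses))
    # candidates Q(row-1, col-1), Q(row, col-1), Q(row+1, col-1) correspond
    # top-to-bottom to the three templates
    return (row + k - 1, col - 1)

# usage sketch on a random grayscale image
img = np.random.rand(8, 8)
print(best_match_point(img, 4, 4))
```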
Implementing the embodiments of the present invention has the following beneficial effects: the method of the present invention reduces the high complexity introduced by the block matching algorithm; when applied to the tight-frame Grouplet transform, it greatly reduces the transform time and improves efficiency.
Brief description of the drawings
Fig. 1 is a schematic flow diagram of the present invention;
Fig. 2 shows the −45°, 0° and +45° detection templates;
Fig. 3 shows the comparison data of the line detection algorithm and the block matching algorithm.
Embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
The flow structure is shown in Fig. 1.
A tight-frame Grouplet association domain calculation method according to an embodiment of the present invention comprises the following steps: 1. input an image and convert it into a grayscale image I; 2. perform the Grouplet decomposition: compute the j-th layer of coefficient-layer coefficients, find the best matching point with the line detection operators and compute the j-th layer of association domain layer coefficients; 3. repeat step 2 until all J layers of coefficients have been computed.
For a grayscale image I of size M × N, the line detection operator algorithm yields J layers of tight-frame Grouplet association domain layer coefficients $A_j$ and (J+1) layers of coefficient-layer coefficients $\{d_j[m], a_J[m]\}_{1\le j\le J}$.
Notation: (1) OB_N45 is the 3 × 3 detection template matrix for −45°; OB_0 is the 3 × 3 detection template matrix for 0°; OB_P45 is the 3 × 3 detection template matrix for +45°. (2) $Q_{m,n}$ denotes the point Q in row m and column n; assuming the current point is $P_{m,n}$, the points to be matched are $Q_{k,(n-1)}$ with k ∈ {m−1, m, m+1}, and these points correspond, from top to bottom, to the template matrices OB_N45, OB_0 and OB_P45 respectively. (3) J is the total number of decomposition layers of the tight-frame Grouplet transform; the set $\{d_j[m], a_J[m]\}_{1\le j\le J}$ stores the (J+1) layers of coefficient-layer values; the matrix $A_j$ stores the j-th layer of association domain values. The templates are shown in Fig. 2.
Specific steps:
(1) initialize j = 1, $\{d_j[m], a_J[m]\}_{1\le j\le J} = 0$, $A_j = 0$, m = n = 1;
(2) take $P_{m,n}$ as the center and select the 3 × 3 data matrix $BP_{3\times 3}$;
(3) binarize the matrix BP: let Av denote the mean of the matrix BP, then

$$BP_{x,y} = \begin{cases} 0, & BP_{x,y} < Av \\ 1, & BP_{x,y} \ge Av \end{cases}$$

where x and y are positive integers in [1, 3];
(4) multiply BP element-wise with each of the line detection template matrices OB_N45, OB_0 and OB_P45, accumulate the products and take the absolute value to obtain T1, T2 and T3; the candidate point corresponding to min{T1, T2, T3} is the matching point;
(5) compute the association domain layer coefficient $A_j(m, n)$ and the coefficient-layer coefficients $\{d_j[P], a_J[P]\}_{1\le j\le J}$ at the point $P_{m,n}$. They are obtained by the following equations, where the point Q is the best matching point of the point P (a code sketch of this update is given after these steps):
Association domain layer coefficients:

$$A_j[P] = Q - P$$

The coefficient-layer coefficients are updated by the following equations (the formulas above with P in place of $\tilde m$ and Q in place of m):

$$\hat s = s[Q] + s[P]$$
$$d_j[P] = (a[P] - a[Q])\sqrt{\frac{s[Q]\, s[P]}{\hat s}}$$
$$a[Q] = \frac{s[Q]\, a[Q] + s[P]\, a[P]}{\hat s}$$
$$s[Q] = \hat s$$

(6) if m < M, set m = m + 1; otherwise, if n < N, set n = n + 1 and m = 1; otherwise set m = 1, n = 1 and j = j + 1;
(7) if j > J, end the loop and obtain the association domain layer coefficients $A_j$ and the coefficient-layer coefficients $\{d_j[m], a_J[m]\}_{1\le j\le J}$; otherwise jump to step (2).
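The following Python sketch illustrates the coefficient update of step (5) and the index advance of steps (6) and (7); the dictionary-based bookkeeping and the function names are illustrative assumptions, not the patented implementation.

```python
import math

def update_point(a, s, d_j, P, Q):
    """Lifting-style update of step (5). P is the current point and Q its best
    matching point; a, s and d_j are dictionaries keyed by points that hold the
    mean coefficients, support sizes and j-th layer difference coefficients."""
    s_hat = s[Q] + s[P]
    # difference coefficient is stored at the current point P
    d_j[P] = (a[P] - a[Q]) * math.sqrt(s[Q] * s[P] / s_hat)
    # mean coefficient and support size are accumulated at the match point Q
    a[Q] = (s[Q] * a[Q] + s[P] * a[P]) / s_hat
    s[Q] = s_hat

def next_indices(m, n, j, M, N):
    """Index advance of steps (6) and (7): scan rows first, then columns,
    then move on to the next decomposition layer."""
    if m < M:
        return m + 1, n, j
    if n < N:
        return 1, n + 1, j
    return 1, 1, j + 1

# usage sketch with two points P and Q of unit support
a, s, d_1 = {'P': 10.0, 'Q': 6.0}, {'P': 1, 'Q': 1}, {}
update_point(a, s, d_1, 'P', 'Q')
print(d_1['P'], a['Q'], s['Q'])   # -> 2.828..., 8.0, 2
```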
The example was tested in the Matlab R2014a environment. The association domain layer coefficients computed by the proposed line detection algorithm are the same as those obtained with the block matching algorithm and can reveal the trend of the geometric flow in the original image. The comparison data in Fig. 3 clearly show that, without noticeably degrading the sparsity of the coefficient-layer coefficients and while still allowing perfect reconstruction of the original image, the time required by the proposed line detection operator matching algorithm is significantly lower than that required by the block matching algorithm.
The above discloses only a preferred embodiment of the present invention, which of course cannot be used to limit the scope of the rights of the present invention; therefore, equivalent variations made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (4)

1. A tight-frame Grouplet association domain calculation method, characterized by comprising the following steps:
1) converting the input image into a grayscale image;
2) performing the Grouplet decomposition of the grayscale image: computing the j-th layer of coefficient-layer coefficients, finding the best matching point with the line detection operators and computing the j-th layer of association domain layer coefficients, where 1 ≤ j ≤ J;
3) repeating step 2) until all J layers of coefficients have been computed, where J is a set value.
2. The tight-frame Grouplet association domain calculation method according to claim 1, characterized in that the line detection operator templates are 3 × 3 matrices.
3. The tight-frame Grouplet association domain calculation method according to claim 2, characterized in that the formula by which the line detection operators compute the best matching point is:
$$m = \arg\max\left\{\sum_{j=1}^{9} BP_{\tilde m}[j] \times OP_i[j]\right\},$$
where $BP_{\tilde m}$ is the binarized 3 × 3 image data matrix centered on $\tilde m$; $OP_i[j]$ is the line detection operator for detection direction i, with i denoting a direction in the set {+45°, 0°, −45°}; and j denotes the j-th position of the corresponding matrix.
4. The tight-frame Grouplet association domain calculation method according to claim 3, characterized in that the steps by which the line detection operators compute the best matching point are:
1) in the j-th layer, taking the point $\tilde m$ as the center, selecting the 3 × 3 data matrix BP formed by the pixel values;
2) binarizing the data matrix $BP_{3\times 3}$: letting Av denote the mean of the matrix $BP_{3\times 3}$, when $BP_{x,y}$ is less than the mean Av, $BP_{x,y}$ takes the value 0, otherwise 1;
3) using the formula for computing the best matching point with the line detection operators to obtain the best matching point m of the point $\tilde m$;
4) computing the association domain layer coefficients $A_j[\tilde m]$ and the coefficient-layer coefficients $\{d_j[m], a_J[m]\}_{1\le j\le J}$,
Association domain layer coefficients:
$$A_j[\tilde m] = m - \tilde m$$
when j increases from 1 to J, the coefficient-layer coefficients are updated by the following equations:
$$\hat s = s[m] + s[\tilde m]$$
$$d_j[\tilde m] = (a[\tilde m] - a[m])\sqrt{\frac{s[m]\, s[\tilde m]}{\hat s}},$$
$$a[m] = \frac{s[m]\, a[m] + s[\tilde m]\, a[\tilde m]}{\hat s}.$$
$$s[m] = \hat s$$
where a[m] denotes the mean coefficient at point m (when j = 1, a[m] is the pixel value at point m); $d_j[m]$ denotes the difference coefficient at point m; and s[m] denotes the size of the support at point m (when j = 1, s[m] = 1); when j = J, $a_J[m]$ is obtained from the corresponding formula;
5) repeating the above steps until all points in layers 1 to J have been matched.
CN201710356482.8A 2017-05-19 2017-05-19 Tight-frame Grouplet association domain calculation method Active CN107168930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710356482.8A CN107168930B (en) 2017-05-19 2017-05-19 Close frame group association domain calculation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710356482.8A CN107168930B (en) 2017-05-19 2017-05-19 Close frame group association domain calculation method

Publications (2)

Publication Number Publication Date
CN107168930A true CN107168930A (en) 2017-09-15
CN107168930B CN107168930B (en) 2020-11-17

Family

ID=59815683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710356482.8A Active CN107168930B (en) 2017-05-19 2017-05-19 Close frame group association domain calculation method

Country Status (1)

Country Link
CN (1) CN107168930B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8699852B2 (en) * 2011-10-10 2014-04-15 Intellectual Ventures Fund 83 Llc Video concept classification using video similarity scores
CN104240201A (en) * 2014-09-04 2014-12-24 南昌航空大学 Fracture image denoising and enhancing method based on group-contour wavelet transformation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8699852B2 (en) * 2011-10-10 2014-04-15 Intellectual Ventures Fund 83 Llc Video concept classification using video similarity scores
CN104240201A (en) * 2014-09-04 2014-12-24 南昌航空大学 Fracture image denoising and enhancing method based on group-contour wavelet transformation

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ALDO MAALOUF 等: "A grouplet-based reduced reference image quality assessment", 《2009 INTERNATIONAL WORKSHOP ON QUALITY OF MULTIMEDIA EXPERIENCE》 *
ALDO MAALOUF 等: "Grouplet-based color image super-resolution", 《2009 17TH EUROPEAN SIGNAL PROCESSING CONFERENCE》 *
周志宇: "Research on metal fracture image processing methods based on the Grouplet transform", China Master's Theses Full-text Database, Information Science and Technology *
孙熠 等: "Research on a fracture image recognition method based on Grouplet entropy and the relevance vector machine", Failure Analysis and Prevention *
张玉庆: "Research on license plate localization in intelligent transportation systems", China Doctoral and Master's Theses Full-text Database (Master's), Engineering Science and Technology II *

Also Published As

Publication number Publication date
CN107168930B (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN110189253A (en) A kind of image super-resolution rebuilding method generating confrontation network based on improvement
CN104182954B (en) Real-time multi-modal medical image fusion method
Kothari et al. Trumpets: Injective flows for inference and inverse problems
CN109727195B (en) Image super-resolution reconstruction method
CN106934766A (en) A kind of infrared image super resolution ratio reconstruction method based on rarefaction representation
CN101980284A (en) Two-scale sparse representation-based color image noise reduction method
CN105631807A (en) Single-frame image super resolution reconstruction method based on sparse domain selection
CN107341765A (en) A kind of image super-resolution rebuilding method decomposed based on cartoon texture
CN110322404B (en) Image enhancement method and system
CN106910179A (en) Multimode medical image fusion method based on wavelet transformation
CN111080591A (en) Medical image segmentation method based on combination of coding and decoding structure and residual error module
CN114170088A (en) Relational reinforcement learning system and method based on graph structure data
CN108090885A (en) For handling the method and apparatus of image
Mo et al. The research of image inpainting algorithm using self-adaptive group structure and sparse representation
CN115829876A (en) Real degraded image blind restoration method based on cross attention mechanism
CN114897694A (en) Image super-resolution reconstruction method based on mixed attention and double-layer supervision
Wang et al. JPEG artifacts removal via compression quality ranker-guided networks
CN108898568A (en) Image composition method and device
CN109741258B (en) Image super-resolution method based on reconstruction
CN107292855A (en) A kind of image de-noising method of the non local sample of combining adaptive and low-rank
CN105488754B (en) Image Feature Matching method and system based on local linear migration and affine transformation
CN107169498A (en) It is a kind of to merge local and global sparse image significance detection method
CN107240059A (en) The modeling method of image digital watermark embedment strength regressive prediction model
CN107168930A (en) A kind of tight frame Grouplet associated domain computational methods
CN109448031A (en) Method for registering images and system based on Gaussian field constraint and manifold regularization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant