CN105160679A - Local three-dimensional matching algorithm based on combination of adaptive weighting and image segmentation - Google Patents


Info

Publication number
CN105160679A
CN105160679A
Authority
CN
China
Prior art keywords
pixel
point
window
census
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510557317.XA
Other languages
Chinese (zh)
Inventor
孙爱娟
顾国华
周玉蛟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201510557317.XA priority Critical patent/CN105160679A/en
Publication of CN105160679A publication Critical patent/CN105160679A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a local stereo matching algorithm based on the combination of adaptive weighting and image segmentation. The algorithm comprises: selecting one of the left and right views as the reference image; applying the Census transform to all pixels of the left and right views to obtain the matching cost of each pixel of the reference image under each disparity d; establishing support windows, building a reference segmentation unit for each pixel of the reference image within a support window centered on that pixel, building a non-reference segmentation unit for each pixel of the non-reference image within a support window centered on that pixel, and computing the weights of the pixels in each support window with respect to its central pixel; and obtaining the optimal disparity between the left and right views from the weights.

Description

Local stereo matching algorithm based on the combination of adaptive weighting and image segmentation
Technical field
The present invention relates to stereo matching technology in binocular stereo vision, and in particular to a local stereo matching algorithm based on improved adaptive weighting.
Background art
Stereo matching is the most important and also the most intractable problem in binocular vision, and has long been a focus of researchers worldwide; its goal is to find the optimal disparity between the left and right views. Real-time performance and accuracy are the standards by which a stereo matching algorithm is judged, and the problem with existing algorithms is that those with good real-time performance cannot achieve high accuracy, while highly accurate algorithms pay for it with a large amount of running time. A stereo matching algorithm can be decomposed into two key steps: matching cost computation and matching cost aggregation. According to how the disparity is computed, stereo matching algorithms can further be divided into local and global algorithms. Global stereo matching is more accurate than local stereo matching, but it is hard for it to meet real-time requirements, and the global algorithms in current use rely on high-performance hardware to gain speed. Local stereo matching algorithms therefore find wider application.
Summary of the invention
The object of the present invention is to provide a local stereo matching algorithm based on the combination of adaptive weighting and image segmentation, comprising:
selecting one of the left and right views as the reference image;
applying the Census transform to all pixels of the left and right views to obtain the matching cost of each pixel of the reference image under each disparity d;
establishing support windows: a reference segmentation unit is built for each pixel of the reference image within a support window centered on that pixel, a non-reference segmentation unit is built for each pixel of the non-reference image within a support window centered on that pixel, and the weights of the pixels in each support window are computed with respect to its central pixel;
obtaining the optimal disparity between the left and right views from the weights.
Compared with the prior art, the present invention has the following advantage: on the basis of a local stereo matching algorithm, it uses an improved adaptive-weight cost scheme to aggregate the matching costs, achieving real-time performance and high accuracy.
The present invention is described further below in conjunction with the accompanying drawings.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 is the flow chart of the Census transform of the present invention.
Embodiment
Two calibrated cameras of the same model are placed in parallel at the same height, one on the left and one on the right, facing the target scene to be captured, and the left and right views are acquired. One of the two views is chosen as the reference image; in the present invention the right view serves as the reference image. The optimal disparity of the local stereo matching algorithm is then obtained as follows, as shown in Fig. 1:
Step S101: the Census transform is applied to all pixels of the left and right views, and the matching cost of each pixel of the reference image under each disparity d is obtained;
Step S102: support windows are established; a reference segmentation unit is built for each pixel of the reference image within a support window centered on that pixel, a non-reference segmentation unit is built for each pixel of the non-reference image within a support window centered on that pixel, and the weights of the pixels in each support window are computed with respect to its central pixel;
Step S103: the optimal disparity between the left and right views is obtained from the weights.
With reference to Fig. 2, step S101 is implemented as follows:
Step S1011: the right view is chosen as the reference image. A Census window is established centered on a pixel of the right view, the pixel value of every pixel in the Census window is compared with that of the central pixel, and the comparison is binarized by the δ function
$$\delta(m_c^r, \bar{m}_a^r) = \begin{cases} 0, & m_c^r \le \bar{m}_a^r \\ 1, & m_c^r > \bar{m}_a^r \end{cases}$$
where $m_c^r$ is the pixel value of the central pixel of the Census window in the right view, $\bar{m}_a^r$ is the pixel value of the a-th remaining pixel of the Census window, and a indexes the remaining pixels.
Step S1012: a sliding Census window is established centered on a pixel of the left view, and the pixel value of every pixel in the sliding Census window is compared with that of the central pixel of this window,
$$\delta(m_c^l, \bar{m}_a^l) = \begin{cases} 0, & m_c^l \le \bar{m}_a^l \\ 1, & m_c^l > \bar{m}_a^l \end{cases}$$
where the sliding Census window has the same size as the Census window of step S1011, $m_c^l$ is the pixel value of the central pixel of the Census window in the left view, $\bar{m}_a^l$ is the pixel value of the a-th remaining pixel of this window, and a indexes the remaining pixels.
Step S1013: the Census transform is applied to the left and right views, yielding a Census image for each. The Census transform is a non-parametric transform based on mathematical statistics: a window is built centered on a reference pixel, the pixels in the window are compared in magnitude with the reference pixel, and the resulting value characterizes the numerical relationship between the reference pixel and its neighborhood. The Census transform is given by formula (1):
$$C(x,y) = \operatorname*{Bitstring}_{i \in [-\frac{M}{2}, \frac{M}{2}],\; j \in [-\frac{N}{2}, \frac{N}{2}]} \delta\big(I(x,y),\, I(x+i, y+j)\big) \qquad (1)$$
In formula (1), Bitstring arranges the binarized values of the window into a bit string, $I(x,y)$ is the gray value of the pixel at coordinate $(x,y)$, $I(x+i, y+j)$ is the gray value of the pixel at offset $(i,j)$ from the central pixel, $C(x,y)$ is the Census transform value of the pixel at $(x,y)$, and $M \times N$ is the size of the rectangular window centered on $(x,y)$, called the Census window.
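As an illustration, a minimal NumPy sketch of this transform might look as follows; the 7 × 7 window size and the function name are assumptions of this sketch, and np.roll wraps around at the image borders, where a real implementation would pad or crop:

```python
import numpy as np

def census_transform(img, M=7, N=7):
    """Census transform of a grayscale image (sketch; odd-sized M x N window assumed)."""
    census = np.zeros(img.shape, dtype=np.uint64)
    for i in range(-(M // 2), M // 2 + 1):
        for j in range(-(N // 2), N // 2 + 1):
            if i == 0 and j == 0:
                continue  # the central pixel is not compared with itself
            # shifted[x, y] == img[x + i, y + j]  (wraps at the borders)
            shifted = np.roll(np.roll(img, -i, axis=0), -j, axis=1)
            # delta function: 1 where the neighbor exceeds the center, else 0
            bit = (shifted > img).astype(np.uint64)
            census = (census << np.uint64(1)) | bit
    return census
```

Scanning the window row by row from left to right, as step 13 of the claims prescribes, packs the δ values of each pixel into a single integer, so that two Census codes can later be compared bit-wise.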
Step S1014: the Hamming distance between the reference bit string and each comparison bit string is computed
$$S(p,q) = \operatorname{HAMMING}\big(C(p), C(q)\big) = C(p) \oplus C(q)$$
where p is the central pixel of a Census window in the right view, q is the point of the left view matching p under disparity d, $C(\cdot)$ is the Census transform value, and $S(p,q)$ is the Hamming distance between p and q under disparity d;
Step S1015: the value of d is changed, i.e. the left view is scanned with the sliding Census window, and steps S1012 to S1014 are repeated until all pixels of the left view have been scanned;
Step S1016: all pixels of the right view are traversed, repeating steps S1011 to S1015.
Step S1017: the matching cost of the Census image is computed. The matching cost is the criterion by which it is judged whether a pixel of the right view matches a pixel of the left view; in general the judgment rests on features common to the two views at the candidate match.
$$e(p,q)_d = S(p,q)_d + \sum_{p_c,\, q_c \in \text{window}_{Census}} S(p_c, q_c)_d$$
This matching cost is computed for all feature points of the right view, giving one set of matching cost values. Here $p_c$ denotes a non-central point of the Census window centered on p, and $q_c$ denotes the point matching $p_c$ under disparity d.
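A sketch of this cost for one right-view pixel, under two assumptions of this illustration: the views are rectified, so the point matching the right-view pixel at row x, column y under disparity d lies at column y + d of the left view, and (x, y) lies far enough from the border for the window to fit:

```python
def hamming(a, b):
    """Hamming distance of two Census codes: the popcount of their XOR."""
    return bin(int(a) ^ int(b)).count("1")

def matching_cost(census_r, census_l, x, y, d, M=7, N=7):
    """e(p, q)_d for the right-view pixel p = (x, y) (sketch).

    Summing over the whole Census window, central point included, equals
    S(p, q)_d plus the sum of S(p_c, q_c)_d over the non-central points,
    exactly as in the formula above.
    """
    cost = 0
    for i in range(-(M // 2), M // 2 + 1):
        for j in range(-(N // 2), N // 2 + 1):
            cost += hamming(census_r[x + i, y + j], census_l[x + i, y + j + d])
    return cost
```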
Step S102 is implemented as follows:
Step S1021: with the right view as the reference image, a support window of size $M' \times N'$ is established in the right view centered on pixel p, and the points of this window other than p are denoted $\bar p$; in the left view a support window of the same size $M' \times N'$ is established centered on pixel q, and the points of this window other than q are denoted $\bar q$, where q is the point of the left view matching p under disparity d;
Step S1022: the reference segmentation unit $S_p$ is built on the reference image using the following condition:
$$\begin{cases} D_c(p, \bar p) < \tau_1 & \text{if } 0 < D_s(p, \bar p) < l_1 \\ D_c(p, \bar p) < \tau_2 & \text{if } l_1 < D_s(p, \bar p) < l_2 \end{cases}$$
The pixels $\bar p$ of the support window that satisfy the above condition are assigned to the same segmentation unit as the central pixel p. Here $D_c(p, \bar p)$ is the color difference between pixels p and $\bar p$, $D_s(p, \bar p)$ is the distance between them, $\tau_1$, $\tau_2$ are color thresholds and $l_1$, $l_2$ are distance thresholds, with $\tau_1 > \tau_2$ and $l_1 < l_2$; the value range of $\tau_1$, $\tau_2$ is [20, 50], and that of $l_1$, $l_2$ is [1/4, 1/2] of the support window size. The color difference and the distance are computed as follows:
$$D_c(p, \bar p) = I_p - I_{\bar p}$$
$$D_s(p, \bar p) = \sqrt{(x_p - x_{\bar p})^2 + (y_p - y_{\bar p})^2}$$
where $(x_p, y_p)$ is the pixel coordinate of point p and $(x_{\bar p}, y_{\bar p})$ is the pixel coordinate of point $\bar p$;
Step S1023: the non-reference segmentation unit $S_q$ is built on the non-reference image using the following condition:
$$\begin{cases} D_c(q, \bar q) < \tau_1 & \text{if } 0 < D_s(q, \bar q) < l_1 \\ D_c(q, \bar q) < \tau_2 & \text{if } l_1 < D_s(q, \bar q) < l_2 \end{cases}$$
The pixels $\bar q$ of the support window that satisfy the above condition are assigned to the same segmentation unit as the central pixel q. Here $D_c(q, \bar q)$ is the color difference between pixels q and $\bar q$, $D_s(q, \bar q)$ is the distance between them, $\tau_1$, $\tau_2$ are color thresholds and $l_1$, $l_2$ are distance thresholds, with $\tau_1 > \tau_2$ and $l_1 < l_2$; the value range of $\tau_1$, $\tau_2$ is [20, 50], and that of $l_1$, $l_2$ is [1/4, 1/2] of the support window size. The color difference and the distance are computed as follows:
$$D_c(q, \bar q) = I_q - I_{\bar q}$$
$$D_s(q, \bar q) = \sqrt{(x_q - x_{\bar q})^2 + (y_q - y_{\bar q})^2}$$
where $(x_q, y_q)$ is the pixel coordinate of point q and $(x_{\bar q}, y_{\bar q})$ is the pixel coordinate of point $\bar q$.
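A sketch of the dual-threshold segmentation test for a single grayscale support window. The concrete parameter values are assumptions chosen within the ranges stated above ($\tau_1 > \tau_2$ in [20, 50]; $l_1 < l_2$ of the order of 1/4 to 1/2 of the window size), and the color difference is taken as an absolute gray-level difference:

```python
import numpy as np

def segmentation_unit(img, cx, cy, Mp=15, Np=15, tau1=40, tau2=25, l1=4.0, l2=7.0):
    """Boolean mask of the window pixels belonging to the central pixel's unit (sketch)."""
    mi, ni = Mp // 2, Np // 2
    member = np.zeros((Mp, Np), dtype=bool)
    for i in range(-mi, mi + 1):
        for j in range(-ni, ni + 1):
            dc = abs(float(img[cx + i, cy + j]) - float(img[cx, cy]))  # color difference D_c
            ds = np.hypot(i, j)                                        # spatial distance D_s
            if 0 < ds < l1:
                member[i + mi, j + ni] = dc < tau1  # near the center: looser color threshold
            elif l1 < ds < l2:
                member[i + mi, j + ni] = dc < tau2  # farther out: stricter color threshold
    member[mi, ni] = True  # the central pixel belongs to its own unit
    return member
```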
Step S103 proceeds as follows:
Step S1031: the weights of the pixels in the support windows are computed
$$w(q, \bar q) = \begin{cases} 1, & \bar q \in S_q \\ \exp\left(-\dfrac{D_c(q, \bar q)}{\gamma_c}\right), & \text{otherwise} \end{cases}$$
$$w(p, \bar p) = \begin{cases} 1, & \bar p \in S_p \\ \exp\left(-\dfrac{D_c(p, \bar p)}{\gamma_c}\right), & \text{otherwise} \end{cases}$$
where a support window of size $M' \times N'$ is established in the reference image centered on pixel p, the points of this window other than p being denoted $\bar p$, and in the left view a support window of the same size $M' \times N'$ is established centered on pixel q, the points of this window other than q being denoted $\bar q$, q being the point of the left view matching p under disparity d; $w(p, \bar p)$ is the weight of pixel $\bar p$ in the reference image, $w(q, \bar q)$ is the weight of pixel $\bar q$ in the non-reference image, $S_p$ is the reference segmentation unit, $S_q$ is the non-reference segmentation unit, $D_c(p, \bar p)$ and $D_c(q, \bar q)$ are the color differences of $\bar p$ from p and of $\bar q$ from q, and $\gamma_c$ is a similarity threshold whose value range is [10, 20];
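A sketch of the weight computation for one support window, reusing the membership mask from the segmentation sketch above; $\gamma_c = 15$ is an assumed value within the stated range [10, 20]:

```python
import numpy as np

def support_weights(img, cx, cy, member, gamma_c=15.0):
    """Adaptive weights of one support window relative to its central pixel (sketch).

    Pixels inside the central pixel's segmentation unit get weight 1; the rest
    are down-weighted exponentially by their color difference from the center.
    """
    Mp, Np = member.shape
    mi, ni = Mp // 2, Np // 2
    w = np.ones((Mp, Np))
    for i in range(-mi, mi + 1):
        for j in range(-ni, ni + 1):
            if not member[i + mi, j + ni]:
                dc = abs(float(img[cx + i, cy + j]) - float(img[cx, cy]))
                w[i + mi, j + ni] = np.exp(-dc / gamma_c)
    return w
```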
Step S1032: to improve the precision of the final matching cost, the weights of the left-view and right-view windows are taken into account simultaneously, and the matching cost $e(p,q)_d$ is optimized by the following formula
$$E(p,q)_d = \frac{\sum w(p, \bar p)\, w(q, \bar q)\, e(\bar p, \bar q)_d}{\sum w(p, \bar p)\, w(q, \bar q)}$$
where, following the pixel cost formula of step S101, the matching cost of a point $\bar p$ of the support window other than p is
$$e(\bar p, \bar q)_d = S(\bar p, \bar q)_d + \sum_{\bar p_c,\, \bar q_c \in \text{window}_{Census}} S(\bar p_c, \bar q_c)_d$$
Step S1033: step S1032 is repeated, so that the matching cost of every point of the reference view is optimized under each disparity d;
Step S1034: for each point of the reference view, the optimized matching cost values obtained under the different disparities are sorted, and the disparity corresponding to the maximum value is the optimal disparity of that point.
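Putting steps S1032 to S1034 together for a single reference pixel, a sketch of the weighted aggregation and the final selection; the array shapes are assumptions of this illustration, and the maximum aggregated value is selected as the text specifies (a cost built on Hamming distances is often minimized instead, so this follows the text rather than convention):

```python
import numpy as np

def best_disparity(weights_r, weights_l_stack, raw_costs):
    """Aggregate raw costs with the product of both windows' weights, then select d (sketch).

    weights_r       : (Mp, Np)     weights of the reference (right-view) window
    weights_l_stack : (D, Mp, Np)  left-view window weights, one slice per disparity
    raw_costs       : (D, Mp, Np)  e(p_bar, q_bar)_d over the window, per disparity
    """
    D = raw_costs.shape[0]
    E = np.empty(D)
    for d in range(D):
        w = weights_r * weights_l_stack[d]           # w(p, p_bar) * w(q, q_bar)
        E[d] = np.sum(w * raw_costs[d]) / np.sum(w)  # E(p, q)_d of step S1032
    return int(np.argmax(E))  # the disparity of the extreme aggregated value
```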

Claims (5)

1. A local stereo matching algorithm based on the combination of adaptive weighting and image segmentation, characterized in that it comprises:
selecting one of the left and right views as the reference image;
applying the Census transform to all pixels of the left and right views to obtain the matching cost of each pixel of the reference image under each disparity d;
establishing support windows: a reference segmentation unit is built for each pixel of the reference image within a support window centered on that pixel, a non-reference segmentation unit is built for each pixel of the non-reference image within a support window centered on that pixel, and the weights of the pixels in each support window are computed with respect to its central pixel;
obtaining the optimal disparity between the left and right views from the weights.
2. The local stereo matching algorithm according to claim 1, characterized in that the Census transform of each pixel comprises:
Step 11: the right view is chosen as the reference image, a Census window is established centered on a pixel of the right view, the pixel value of every pixel in the Census window is compared with that of the central pixel, and the comparison is binarized by the δ function
$$\delta(m_c^r, \bar{m}_a^r) = \begin{cases} 0, & m_c^r \le \bar{m}_a^r \\ 1, & m_c^r > \bar{m}_a^r \end{cases}$$
where $m_c^r$ is the pixel value of the central pixel of the Census window in the right view, $\bar{m}_a^r$ is the pixel value of the a-th remaining pixel of the Census window, and a indexes the remaining pixels;
Step 12: a sliding Census window is established centered on a pixel of the left view, the pixel value of every pixel in the sliding Census window is compared with that of the central pixel of this window, and the comparison is binarized by the δ function
$$\delta(m_c^l, \bar{m}_a^l) = \begin{cases} 0, & m_c^l \le \bar{m}_a^l \\ 1, & m_c^l > \bar{m}_a^l \end{cases}$$
where the sliding Census window has the same size as the Census window of step 11, $m_c^l$ is the pixel value of the central pixel of the Census window in the left view, $\bar{m}_a^l$ is the pixel value of the a-th remaining pixel of this window, and a indexes the remaining pixels;
Step 13: the binarized values of each Census window of the left and right views are arranged, in order from left to right and from top to bottom, into a bit string according to the following formula; the bit string of the right view is the reference bit string and that of the left view is the comparison bit string
$$C(x,y) = \operatorname*{Bitstring}_{i \in [-\frac{M}{2}, \frac{M}{2}],\; j \in [-\frac{N}{2}, \frac{N}{2}]} \delta\big(I(x,y),\, I(x+i, y+j)\big)$$
where $I(x,y)$ is the gray value of the pixel at coordinate $(x,y)$, $I(x+i, y+j)$ is the gray value of the pixel at offset $(i,j)$ from the central pixel, $C(x,y)$ is the Census transform value of the pixel at $(x,y)$, and $M \times N$ is the size of the Census window centered on $(x,y)$;
Step 14: the Hamming distance between the reference bit string and each comparison bit string is computed
$$S(p,q) = \operatorname{HAMMING}\big(C(p), C(q)\big) = C(p) \oplus C(q)$$
where p is the central pixel of a Census window in the right view, q is the point of the left view matching p under disparity d, $C(\cdot)$ is the Census transform value, and $S(p,q)$ is the Hamming distance between p and q under disparity d;
Step 15: the value of d is changed, i.e. the left view is scanned with the sliding Census window, and steps 12 to 14 are repeated until all pixels of the left view have been scanned;
Step 16: all pixels of the right view are traversed, repeating steps 11 to 15.
3. The local stereo matching algorithm according to claim 2, characterized in that the matching cost of each pixel of the reference image under each disparity d is obtained by the following formula:
$$e(p,q)_d = S(p,q)_d + \sum_{p_c,\, q_c \in \text{window}_{Census}} S(p_c, q_c)_d$$
where $p_c$ denotes a non-central point of the Census window centered on p, and $q_c$ denotes the point matching $p_c$ under disparity d.
4. The local stereo matching algorithm according to claim 1, characterized in that building the segmentation units centered on each pixel of the reference image comprises:
Step 21: with the right view as the reference image, a support window of size $M' \times N'$ is established in the right view centered on pixel p, and the points of this window other than p are denoted $\bar p$; in the left view a support window of the same size $M' \times N'$ is established centered on pixel q, and the points of this window other than q are denoted $\bar q$, where q is the point of the left view matching p under disparity d;
Step 22: the reference segmentation unit $S_p$ is built on the reference image using the following condition:
$$\begin{cases} D_c(p, \bar p) < \tau_1 & \text{if } 0 < D_s(p, \bar p) < l_1 \\ D_c(p, \bar p) < \tau_2 & \text{if } l_1 < D_s(p, \bar p) < l_2 \end{cases}$$
The pixels $\bar p$ of the support window that satisfy the above condition are assigned to the same segmentation unit as the central pixel p. Here $D_c(p, \bar p)$ is the color difference between pixels p and $\bar p$, $D_s(p, \bar p)$ is the distance between them, $\tau_1$, $\tau_2$ are color thresholds and $l_1$, $l_2$ are distance thresholds, with $\tau_1 > \tau_2$ and $l_1 < l_2$; the value range of $\tau_1$, $\tau_2$ is [20, 50], and that of $l_1$, $l_2$ is [1/4, 1/2] of the support window size. The color difference and the distance are computed as follows:
$$D_c(p, \bar p) = I_p - I_{\bar p}$$
$$D_s(p, \bar p) = \sqrt{(x_p - x_{\bar p})^2 + (y_p - y_{\bar p})^2}$$
where $(x_p, y_p)$ is the pixel coordinate of point p and $(x_{\bar p}, y_{\bar p})$ is the pixel coordinate of point $\bar p$;
Step 23: the non-reference segmentation unit $S_q$ is built on the non-reference image using the following condition:
$$\begin{cases} D_c(q, \bar q) < \tau_1 & \text{if } 0 < D_s(q, \bar q) < l_1 \\ D_c(q, \bar q) < \tau_2 & \text{if } l_1 < D_s(q, \bar q) < l_2 \end{cases}$$
The pixels $\bar q$ of the support window that satisfy the above condition are assigned to the same segmentation unit as the central pixel q. Here $D_c(q, \bar q)$ is the color difference between pixels q and $\bar q$, $D_s(q, \bar q)$ is the distance between them, $\tau_1$, $\tau_2$ are color thresholds and $l_1$, $l_2$ are distance thresholds, with $\tau_1 > \tau_2$ and $l_1 < l_2$; the value range of $\tau_1$, $\tau_2$ is [20, 50], and that of $l_1$, $l_2$ is [1/4, 1/2] of the support window size. The color difference and the distance are computed as follows:
$$D_c(q, \bar q) = I_q - I_{\bar q}$$
$$D_s(q, \bar q) = \sqrt{(x_q - x_{\bar q})^2 + (y_q - y_{\bar q})^2}$$
where $(x_q, y_q)$ is the pixel coordinate of point q and $(x_{\bar q}, y_{\bar q})$ is the pixel coordinate of point $\bar q$.
5. The local stereo matching algorithm according to claim 1, characterized in that the optimal disparity is found as follows:
Step 31: the weights of the pixels in the support windows are computed
$$w(q, \bar q) = \begin{cases} 1, & \bar q \in S_q \\ \exp\left(-\dfrac{D_c(q, \bar q)}{\gamma_c}\right), & \text{otherwise} \end{cases}$$
$$w(p, \bar p) = \begin{cases} 1, & \bar p \in S_p \\ \exp\left(-\dfrac{D_c(p, \bar p)}{\gamma_c}\right), & \text{otherwise} \end{cases}$$
where a support window of size $M' \times N'$ is established in the reference image centered on pixel p, the points of this window other than p being denoted $\bar p$, and in the left view a support window of the same size $M' \times N'$ is established centered on pixel q, the points of this window other than q being denoted $\bar q$, q being the point of the left view matching p under disparity d; $w(p, \bar p)$ is the weight of pixel $\bar p$ in the reference image, $w(q, \bar q)$ is the weight of pixel $\bar q$ in the non-reference image, $S_p$ is the reference segmentation unit, $S_q$ is the non-reference segmentation unit, $D_c(p, \bar p)$ and $D_c(q, \bar q)$ are the color differences of $\bar p$ from p and of $\bar q$ from q, and $\gamma_c$ is a similarity threshold whose value range is [10, 20];
Step 32: the matching cost $e(p,q)_d$ is optimized by the following formula
$$E(p,q)_d = \frac{\sum w(p, \bar p)\, w(q, \bar q)\, e(\bar p, \bar q)_d}{\sum w(p, \bar p)\, w(q, \bar q)}$$
where, following the foregoing pixel cost formula, the matching cost of a point $\bar p$ of the support window other than p is
$$e(\bar p, \bar q)_d = S(\bar p, \bar q)_d + \sum_{\bar p_c,\, \bar q_c \in \text{window}_{Census}} S(\bar p_c, \bar q_c)_d$$
Step 33: step 32 is repeated, so that the matching cost of every point of the reference view is optimized under each disparity d;
Step 34: for each point of the reference view, the optimized matching cost values obtained under the different disparities are sorted, and the disparity corresponding to the maximum value is the optimal disparity of that point.
CN201510557317.XA 2015-09-01 2015-09-01 Local three-dimensional matching algorithm based on combination of adaptive weighting and image segmentation Pending CN105160679A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510557317.XA CN105160679A (en) 2015-09-01 2015-09-01 Local three-dimensional matching algorithm based on combination of adaptive weighting and image segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510557317.XA CN105160679A (en) 2015-09-01 2015-09-01 Local three-dimensional matching algorithm based on combination of adaptive weighting and image segmentation

Publications (1)

Publication Number Publication Date
CN105160679A true CN105160679A (en) 2015-12-16

Family

ID=54801521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510557317.XA Pending CN105160679A (en) 2015-09-01 2015-09-01 Local three-dimensional matching algorithm based on combination of adaptive weighting and image segmentation

Country Status (1)

Country Link
CN (1) CN105160679A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355608A (en) * 2016-09-09 2017-01-25 南京信息工程大学 Stereoscopic matching method on basis of variable-weight cost computation and S-census transformation
CN106846440A (en) * 2017-01-06 2017-06-13 厦门美图之家科技有限公司 A kind of image intelligent area-selecting method, device and computing device
CN106991693A (en) * 2017-03-17 2017-07-28 西安电子科技大学 Binocular solid matching process based on fuzzy support weight
CN109544611A (en) * 2018-11-06 2019-03-29 深圳市爱培科技术股份有限公司 A kind of binocular vision solid matching method and system based on bit feature

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070140549A1 (en) * 2003-03-10 2007-06-21 Cranial Technologies, Inc. Three-dimensional image capture system
CN101841730A (en) * 2010-05-28 2010-09-22 浙江大学 Real-time stereoscopic vision implementation method based on FPGA

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070140549A1 (en) * 2003-03-10 2007-06-21 Cranial Technologies, Inc. Three-dimensional image capture system
CN101841730A (en) * 2010-05-28 2010-09-22 浙江大学 Real-time stereoscopic vision implementation method based on FPGA

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
顾骋: "基于双目视觉的立体匹配算法研究与应用" (Research and Application of Stereo Matching Algorithms Based on Binocular Vision), 《中国优秀硕士学位论文全文数据库》 (China Master's Theses Full-text Database) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355608A (en) * 2016-09-09 2017-01-25 南京信息工程大学 Stereoscopic matching method on basis of variable-weight cost computation and S-census transformation
CN106355608B (en) * 2016-09-09 2019-03-26 南京信息工程大学 The solid matching method with S-census transformation is calculated based on Changeable weight cost
CN106846440A (en) * 2017-01-06 2017-06-13 厦门美图之家科技有限公司 A kind of image intelligent area-selecting method, device and computing device
CN106846440B (en) * 2017-01-06 2020-09-01 厦门美图之家科技有限公司 Intelligent image region selection method and device and computing equipment
CN106991693A (en) * 2017-03-17 2017-07-28 西安电子科技大学 Binocular solid matching process based on fuzzy support weight
CN106991693B (en) * 2017-03-17 2019-08-06 西安电子科技大学 Based on the fuzzy binocular solid matching process for supporting weight
CN109544611A (en) * 2018-11-06 2019-03-29 深圳市爱培科技术股份有限公司 A kind of binocular vision solid matching method and system based on bit feature
CN109544611B (en) * 2018-11-06 2021-05-14 深圳市爱培科技术股份有限公司 Binocular vision stereo matching method and system based on bit characteristics

Similar Documents

Publication Publication Date Title
CN111192292B (en) Target tracking method and related equipment based on attention mechanism and twin network
KR101622344B1 (en) A disparity caculation method based on optimized census transform stereo matching with adaptive support weight method and system thereof
CN104036479B (en) Multi-focus image fusion method based on non-negative matrix factorization
CN103455991B (en) A kind of multi-focus image fusing method
CN112132023A (en) Crowd counting method based on multi-scale context enhanced network
CN106127688B (en) A kind of super-resolution image reconstruction method and its system
CN105160679A (en) Local three-dimensional matching algorithm based on combination of adaptive weighting and image segmentation
CN104077742B (en) Human face sketch synthetic method and system based on Gabor characteristic
CN106650615A (en) Image processing method and terminal
CN102567973A (en) Image denoising method based on improved shape self-adaptive window
CN102231788A (en) Method and apparatus for high-speed and low-complexity piecewise geometric transformation of signals
CN105809182B (en) Image classification method and device
CN113408577A (en) Image classification method based on attention mechanism
CN104376565A (en) Non-reference image quality evaluation method based on discrete cosine transform and sparse representation
CN105631469A (en) Bird image recognition method by multilayer sparse coding features
CN111553296B (en) Two-value neural network stereo vision matching method based on FPGA
CN112149526B (en) Lane line detection method and system based on long-distance information fusion
CN102737380B (en) Stereo image quality objective evaluation method based on gradient structure tensor
CN104143203A (en) Image editing and communication method
CN104751470A (en) Image quick-matching method
CN109447952B (en) Semi-reference image quality evaluation method based on Gabor differential box weighting dimension
CN116188778A (en) Double-sided semantic segmentation method based on super resolution
CN105321175A (en) Structure texture sparse representation based objective assessment method for stereoscopic image quality
EP3076370B1 (en) Method and system for selecting optimum values for parameter set for disparity calculation
CN114693951A (en) RGB-D significance target detection method based on global context information exploration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20151216
