CN108537739A - Non-reference video enhancement effect evaluation method based on feature matching degree - Google Patents

Non-reference video enhancement effect evaluation method based on feature matching degree

Info

Publication number
CN108537739A
CN108537739A
Authority
CN
China
Prior art keywords
frame
video
feature
component
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810127313.1A
Other languages
Chinese (zh)
Other versions
CN108537739B (en)
Inventor
刘浩
刘洋
孙嘉曈
邓开连
孙韶媛
魏国林
廖荣生
黄震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Donghua University
National Dong Hwa University
Original Assignee
Donghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN201810127313.1A priority Critical patent/CN108537739B/en
Publication of CN108537739A publication Critical patent/CN108537739A/en
Application granted granted Critical
Publication of CN108537739B publication Critical patent/CN108537739B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a non-reference video enhancement effect evaluation method based on feature matching degree, comprising the following steps: apply a video enhancement algorithm E to an original video V containing N frames to perform quality enhancement, obtaining an enhanced video Ve; obtain the primary-color component images of the enhanced video frames in frame order, and extract the feature vector of each component image using an invariant feature operator; match the feature vectors of the same primary-color component in consecutive frames, and count the successfully matched feature points of the two component images; judge whether the current frame is the last frame of the enhanced video Ve; if not, return to the above steps, otherwise proceed to the next step; accumulate the feature matching points of all primary-color components over all consecutive frame pairs, and take the per-frame, per-component average of all feature matching points as the feature matching degree of the video enhancement algorithm. The present invention provides an objective evaluation criterion for improving non-reference video enhancement algorithms.

Description

Non-reference video enhancement effect evaluation method based on feature matching degree
Technical field
The present invention relates to the technical field of image processing, and more particularly to a non-reference video enhancement effect evaluation method based on feature matching degree.
Background art
In machine vision applications, image feature extraction is a fundamental problem of image processing, and accurate, efficient feature operators provide a solid foundation for solving related problems. A scale-space feature operator applies scale transformations to the original image to obtain a multi-scale representation sequence of the image, extracts the main scale-space contours from these sequences, and uses those contours as a class of invariant features. Such invariant feature operators require no prior knowledge of the image and enable feature extraction on edges, corner detection, and operation across resolutions. In recent years, invariant feature operators represented by the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) have been gradually and widely applied in machine vision and pattern recognition. The features extracted by invariant feature operators are local image features that remain invariant to rotation, scaling, and brightness changes.
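The scale-space extremum detection mentioned here can be illustrated with a deliberately small NumPy sketch: build a difference-of-Gaussians (DoG) stack and keep pixels that are extrema over a 3x3x3 scale-space neighbourhood. This is a toy reduction of what SIFT-style operators do, not the full operator (no orientation assignment, descriptor, or sub-pixel refinement); all function names and the sigma schedule are our own assumptions.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, truncated at roughly 3 sigma and normalised."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur of a 2-D float image (zero-padded borders)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, k, mode="same")

def dog_extrema(img, sigmas=(1.0, 1.6, 2.6, 4.2)):
    """Return (row, col) pixels that are extrema of the middle DoG layer
    over their 3x3x3 scale-space neighbourhood."""
    g = [blur(img, s) for s in sigmas]
    d = np.stack([g[i + 1] - g[i] for i in range(len(g) - 1)])
    points = []
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = d[1, r, c]
            cube = d[:, r - 1:r + 2, c - 1:c + 2]   # 3 scales x 3x3 pixels
            if (v == cube.max() or v == cube.min()) and abs(v) > 1e-3:
                points.append((r, c))
    return points
```

A bright Gaussian blob, for example, produces a strong DoG extremum at its centre at the matching scale, which is the mechanism SIFT uses to localise keypoints in both position and scale.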
Machine vision applications may face low-quality original video and have no access to additional scene information, which calls for a non-reference video enhancement mechanism. A non-reference video enhancement algorithm adds information to, or transforms, the original video by some means, selectively highlighting features of interest in the video or suppressing unwanted features, and enlarging the differences between object features in the video so as to meet particular application demands. At present, the effect evaluation mechanisms for non-reference video enhancement algorithms are not perfect enough, and it is difficult to provide an objective evaluation criterion suitable for machine vision applications that can demonstrate that a given video enhancement algorithm is more robust and effective; this limits the rational selection of video enhancement algorithms in related applications.
The inventors of the present invention found that, because the picture content of consecutive video frames is usually strongly correlated and changes little between adjacent frames, the matching degree of adjacent-frame image features can well reflect the effect of a video enhancement algorithm. The change of an enhanced video image relative to the original image is mainly an increase in local image feature points. Using an invariant feature operator, identical or similar targets in two images can be found quickly, which provides a new approach to video enhancement effect evaluation. How to measure video enhancement effect by an objective criterion, without any reference video, is the technical problem urgently awaiting a solution.
Summary of the invention
The technical problem to be solved by the present invention is to provide a non-reference video enhancement effect evaluation method based on feature matching degree that can evaluate the application effect of non-reference video enhancement algorithms in machine vision.
The technical solution adopted by the present invention to solve the technical problem is to provide a non-reference video enhancement effect evaluation method based on feature matching degree, comprising the following steps:
(1) apply a video enhancement algorithm E to an original video V containing N frames to perform quality enhancement, obtaining an enhanced video Ve;
(2) obtain the primary-color component images of the enhanced video frame in frame order, and extract the feature vector of each component image using an invariant feature operator;
(3) match the feature vectors of the same primary-color component in consecutive frames, and count the successfully matched feature points of the two component images;
(4) judge whether the current frame is the last frame of the enhanced video Ve; if not, go to step (2); otherwise, enter step (5);
(5) accumulate the feature matching points of all primary-color components over all consecutive frame pairs, and take the per-frame, per-component average of all feature matching points as the feature matching degree of the video enhancement algorithm.
Step (2) is specifically: read the enhanced video Ve and index its frames; obtain the R, G, B primary-color component images of the i-th frame of the enhanced video Ve in frame order; locate and describe the main contours of the three component images; extract the features of the three component images of the i-th frame using the invariant feature operator; and obtain the position and scale information of the initial feature points of each component image by scale-space extremum detection, thereby obtaining the feature vectors of each component image of the i-th frame.
The invariant feature operator is the SIFT operator or the SURF operator.
Step (3) is specifically: let the (i-1)-th frame be the previous frame image and the i-th frame be the following frame image, so that the (i-1)-th frame and the i-th frame form the j-th consecutive frame pair; perform feature matching using the feature vectors of the same primary-color component in the (i-1)-th frame and the i-th frame, i.e., match the obtained feature vectors of the R components, of the G components, and of the B components one by one; count the successfully matched feature points of each pair of component images; and, after removing erroneous matches, obtain the feature matching point counts M_R^j, M_G^j, M_B^j of each primary-color component for the j-th consecutive frame pair. If a primary-color component has no successfully matched feature points, the corresponding count for that component is M_R^j = 0, M_G^j = 0 or M_B^j = 0.
The erroneous matched point pairs are eliminated using the random sample consensus (RANSAC) algorithm; feature points are matched by comparing the nearest-neighbor distance and the second-nearest-neighbor distance of the feature vectors: when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is less than a threshold ε, the pair is considered a correct matched point pair.
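The nearest-neighbor ratio test described here can be written in a few lines (a NumPy sketch under the assumption of plain Euclidean descriptor distances; it omits the RANSAC stage, assumes at least two candidate descriptors, and the function name is our own):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, eps=0.5):
    """Return index pairs (i, j): descriptor i of desc_a matches descriptor j
    of desc_b when nearest-neighbor distance < eps * second-nearest distance.
    desc_a, desc_b are (n, d) arrays; desc_b must have at least two rows."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        order = np.argsort(dist)
        nearest, second = dist[order[0]], dist[order[1]]
        if nearest < eps * second:                  # the ratio test
            matches.append((i, int(order[0])))
    return matches
```

The number of returned pairs for one component image pair corresponds to the matched-point count accumulated by the method.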
Step (5) is specifically: accumulate the feature matching points of the three primary-color components over all consecutive frame pairs, and finally compute the per-frame, per-component average of all feature matching points, obtaining the feature matching degree of the video enhancement algorithm E on the enhanced video Ve: D = (1 / (3(N-1))) Σ_{j=1}^{N-1} (M_R^j + M_G^j + M_B^j), where M_R^j, M_G^j and M_B^j are the matched-point counts of the R, G and B components for the j-th frame pair. The feature matching degree D is the basis for evaluating video enhancement algorithms; a higher D value indicates a better effect of the video enhancement algorithm.
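The procedure of steps (1) to (5) can be sketched as a small driver loop (a schematic sketch, not the patented implementation; the frame/channel layout and the `extract` and `match` callables are illustrative stand-ins for the invariant-feature extraction and matching stages):

```python
def feature_matching_degree(frames, extract, match):
    """Per-frame, per-component average of matched points over a video.

    frames  -- list of dicts mapping component name ('R','G','B') to image
    extract -- callable: component image -> feature set
    match   -- callable: (features_prev, features_cur) -> matched-point count
    """
    total = 0
    prev = None
    for frame in frames:                          # step (2): frame order
        feats = {ch: extract(img) for ch, img in frame.items()}
        if prev is not None:                      # step (3): consecutive pair
            for ch in ("R", "G", "B"):
                total += match(prev[ch], feats[ch])
        prev = feats                              # step (4): advance frame
    pairs = len(frames) - 1                       # N frames -> N-1 pairs
    return total / (3 * pairs)                    # step (5): D
```

With trivial stand-ins (e.g. `extract = lambda x: x` and a dummy `match`) the loop already reproduces the D = total / (3(N-1)) bookkeeping.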
Advantageous effect
Due to the adoption of the above technical solution, the present invention has the following advantages and positive effects compared with the prior art: the present invention uses an invariant feature operator to obtain the feature matching points of consecutive-frame component images in the enhanced video, and then evaluates the relative merits of various non-reference video enhancement algorithms through the feature matching degree. The proposed criterion does not require the original reference video; it provides a finely quantized feature matching degree index for non-reference evaluation of video enhancement effects, avoids parameter tuning during testing, requires no manual intervention, and is suitable for non-reference video enhancement application scenarios, especially various machine vision applications.
Description of the drawings
Fig. 1 is the flow chart of the non-reference video enhancement effect evaluation criterion based on feature matching degree.
Fig. 2 shows examples of the effects of various enhancement algorithms using the SIFT feature matching degree.
Fig. 3 shows examples of the effects of various enhancement algorithms using the SURF feature matching degree.
Detailed description of embodiments
The present invention will be further explained below with reference to specific embodiments. It should be understood that these embodiments are merely illustrative of the present invention and do not limit its scope. In addition, it should be understood that, after reading the teachings of the present invention, those skilled in the art can make various changes or modifications to the present invention, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.
An embodiment of the present invention relates to a non-reference video enhancement effect evaluation method based on feature matching degree, comprising the following steps: apply a video enhancement algorithm E to an original video V containing N frames to perform quality enhancement, obtaining an enhanced video Ve; obtain the primary-color component images of the enhanced video frames in frame order, and extract the feature vector of each component image using an invariant feature operator; match the feature vectors of the same primary-color component in consecutive frames, and count the successfully matched feature points of the two component images; judge whether the current frame is the last frame of the enhanced video Ve; if not, return to the above steps, otherwise go to the next step; accumulate the feature matching points of all primary-color components over all consecutive frame pairs, and take the per-frame, per-component average of all feature matching points as the feature matching degree of the video enhancement algorithm.
The present invention is further illustrated below by a specific embodiment.
In the present embodiment, the six non-reference video enhancement algorithms are the DeepDetail, Derain, MRF, NLD, DCP and SP algorithms. Each is applied to the same original video HazyVideo_riverside.avi for quality enhancement, and the above method is then used to compute the feature matching degree of each enhanced video; the enhancement algorithm with the higher feature matching degree is judged to have the better enhancement effect. For convenience, the video enhancement algorithm E represents any one of the above six video enhancement algorithms. Fig. 1 gives the flow chart of the video enhancement effect evaluation criterion based on feature matching degree; the proposed criterion is realized by the following detailed steps:
Step 1: apply the video enhancement algorithm E to the video HazyVideo_riverside.avi containing 300 frames; after all image frames in HazyVideo_riverside.avi have been enhanced, the enhanced video Ve is obtained.
Step 2: read the enhanced video Ve and index its frames. First obtain the R, G, B primary-color component images of the 1st frame of the video Ve; locate and describe the main contours of the R, G and B component images of this frame; extract the SIFT or SURF features of the three component images of the 1st frame using the invariant feature operator; obtain the position and scale information of the initial feature points of each primary-color component by scale-space extremum detection; and obtain the feature vectors of each component image.
Step 3: obtain the R, G, B primary-color component images of the i-th frame (i ≥ 2) of the video Ve; extract the SIFT or SURF features of the three component images of the i-th frame using the invariant feature operator; and obtain the position and scale information of the initial feature points of each primary-color component of this frame by scale-space extremum detection, thereby obtaining the feature vectors of each component image.
Step 4: the (i-1)-th frame image and the i-th frame image form the j-th consecutive frame pair (j = i-1); the primary-color components of the obtained consecutive frames are processed in turn, and feature matching is performed using the feature vectors of the same primary-color component in the two frames. The proposed criterion uses the SIFT or SURF feature vectors extracted from each component image by the invariant feature operator. The Euclidean distance between any two feature vectors is obtained by the formula ρ(x, y) = sqrt(Σ_{k=1}^{n} (x_k - y_k)²), where ρ(x, y) denotes the Euclidean distance between feature vectors x and y, and n denotes the dimension of the feature vectors. The proposed criterion matches the obtained feature vectors of the R, G and B components one by one. The resulting matched point pairs generally contain erroneous matches, so the RANSAC algorithm is used to remove the erroneous matched point pairs: the nearest-neighbor distance and the second-nearest-neighbor distance between feature vectors are computed, and if the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is less than the threshold ε = 0.5, the pair is considered a correct matched point pair, so that correct matched point pairs of high credibility are extracted. The criterion counts the successfully matched feature points of the same primary-color component in the two frames, obtaining the feature matching point counts M_R^j, M_G^j, M_B^j of each primary-color component for the j-th frame pair. Each pair of successfully matched feature points indicates that the two feature points belong to the same content in the actual scene.
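The RANSAC stage can be illustrated with a deliberately simplified model: a pure 2-D translation between the matched point coordinates of two frames, rather than the full homography or fundamental-matrix fitting RANSAC is usually paired with (the function name, iteration count and tolerance are illustrative assumptions):

```python
import numpy as np

def ransac_translation_inliers(pts_a, pts_b, iters=200, tol=2.0, seed=0):
    """pts_a, pts_b -- (n, 2) arrays of tentatively matched coordinates.
    Repeatedly hypothesise a translation t from one random pair (model:
    b = a + t), keep the hypothesis explaining the most matches, and
    return the indices of its inliers (residual below tol)."""
    rng = np.random.default_rng(seed)
    best = np.array([], dtype=int)
    for _ in range(iters):
        i = rng.integers(len(pts_a))
        t = pts_b[i] - pts_a[i]                        # candidate translation
        residual = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = np.flatnonzero(residual < tol)
        if len(inliers) > len(best):
            best = inliers
    return best
```

Matches rejected as outliers here correspond to the "erroneous matched point pairs" removed before counting M_R^j, M_G^j and M_B^j.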
Step 5: if the last frame of the enhanced video Ve has not been reached, continue to obtain the next frame in frame order, i.e., i = i + 1, and go to step 3; otherwise, enter the final step 6.
Step 6: accumulate the feature matching points of all consecutive frame pairs, taking the total number of matching points as the basis of the effect score; compute the per-frame, per-component average of all feature matching points using the formula D = (1 / (3(N-1))) Σ_{j=1}^{N-1} (M_R^j + M_G^j + M_B^j), where M_R^j, M_G^j and M_B^j are the matched-point counts of the R, G and B components for the j-th frame pair and N is the number of frames, to obtain the SIFT or SURF feature matching degree of the video enhancement algorithm E on the video Ve. Fig. 2 gives examples of the effects of the various enhancement algorithms using the SIFT feature matching degree; the experimental results show that the NLD video enhancement algorithm has the best effect. Fig. 3 gives examples of the effects of the various enhancement algorithms using the SURF feature matching degree; the experimental results show that the DCP video enhancement algorithm has the best effect. In this example, the NLD and DCP algorithms obtain higher D values and therefore have relatively better enhancement effects.
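Given the per-pair matched-point counts, the final score and the comparison between enhancement algorithms reduce to a few lines (a sketch; the counts below are made-up placeholders, not the experimental values behind Figs. 2 and 3):

```python
def matching_degree(per_pair_counts):
    """per_pair_counts -- list, one entry per consecutive frame pair, each a
    dict of matched-point counts {'R': ..., 'G': ..., 'B': ...}.
    Returns D = total matches / (3 * number of frame pairs)."""
    total = sum(c for pair in per_pair_counts for c in pair.values())
    return total / (3 * len(per_pair_counts))

# Illustrative (made-up) counts for two hypothetical enhancement algorithms:
results = {
    "alg_x": [{"R": 120, "G": 130, "B": 110}, {"R": 124, "G": 126, "B": 116}],
    "alg_y": [{"R": 90, "G": 95, "B": 85}, {"R": 92, "G": 93, "B": 87}],
}
scores = {name: matching_degree(counts) for name, counts in results.items()}
best = max(scores, key=scores.get)   # higher D is judged the better enhancer
```

The ranking step mirrors how the embodiment compares the six algorithms: whichever enhanced video yields the higher D is judged to have the better enhancement effect.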
It is not difficult to see that the present invention provides an objective evaluation criterion for improving non-reference video enhancement algorithms. The present invention uses an invariant feature operator to obtain the feature matching points of consecutive-frame component images in the enhanced video, and then evaluates the relative merits of various non-reference video enhancement algorithms through the feature matching degree. The proposed criterion does not require the original reference video; it provides a finely quantized feature matching degree index for non-reference evaluation of video enhancement effects, avoids parameter tuning during testing, requires no manual intervention, and is suitable for non-reference video enhancement application scenarios, especially various machine vision applications.

Claims (6)

1. A non-reference video enhancement effect evaluation method based on feature matching degree, characterized by comprising the following steps:
(1) applying a video enhancement algorithm E to an original video V containing N frames to perform quality enhancement, obtaining an enhanced video Ve;
(2) obtaining the primary-color component images of the enhanced video frame in frame order, and extracting the feature vector of each component image using an invariant feature operator;
(3) matching the feature vectors of the same primary-color component in consecutive frames, and counting the successfully matched feature points of the two component images;
(4) judging whether the current frame is the last frame of the enhanced video Ve; if not, going to step (2); otherwise, entering step (5);
(5) accumulating the feature matching points of all primary-color components over all consecutive frame pairs, and taking the per-frame, per-component average of all feature matching points as the feature matching degree of the video enhancement algorithm.
2. The non-reference video enhancement effect evaluation method based on feature matching degree according to claim 1, characterized in that step (2) is specifically: reading the enhanced video Ve and indexing its frames; obtaining the R, G, B primary-color component images of the i-th frame of the enhanced video Ve in frame order; locating and describing the main contours of the three component images; extracting the features of the three component images of the i-th frame using the invariant feature operator; and obtaining the position and scale information of the initial feature points of each component image by scale-space extremum detection, thereby obtaining the feature vectors of each component image of the i-th frame.
3. The non-reference video enhancement effect evaluation method based on feature matching degree according to claim 2, characterized in that the invariant feature operator is the SIFT operator or the SURF operator.
4. The non-reference video enhancement effect evaluation method based on feature matching degree according to claim 1, characterized in that step (3) is specifically: letting the (i-1)-th frame be the previous frame image and the i-th frame be the following frame image, so that the (i-1)-th frame and the i-th frame form the j-th consecutive frame pair; performing feature matching using the feature vectors of the same primary-color component in the (i-1)-th frame and the i-th frame, i.e., matching the obtained feature vectors of the R components, of the G components, and of the B components one by one; counting the successfully matched feature points of each pair of component images; and, after removing erroneous matches, obtaining the feature matching point counts M_R^j, M_G^j, M_B^j of each primary-color component for the j-th consecutive frame pair, wherein, if a primary-color component has no successfully matched feature points, the corresponding count for that component is M_R^j = 0, M_G^j = 0 or M_B^j = 0.
5. The non-reference video enhancement effect evaluation method based on feature matching degree according to claim 4, characterized in that the erroneous matched point pairs are eliminated using the random sample consensus (RANSAC) algorithm, and feature points are matched by comparing the nearest-neighbor distance and the second-nearest-neighbor distance of the feature vectors: when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is less than a threshold ε, the pair is considered a correct matched point pair.
6. The non-reference video enhancement effect evaluation method based on feature matching degree according to claim 1, characterized in that step (5) is specifically: accumulating the feature matching points of the three primary-color components over all consecutive frame pairs, and finally computing the per-frame, per-component average of all feature matching points, obtaining the feature matching degree of the video enhancement algorithm E on the enhanced video Ve: D = (1 / (3(N-1))) Σ_{j=1}^{N-1} (M_R^j + M_G^j + M_B^j), where M_R^j, M_G^j and M_B^j are the matched-point counts of the R, G and B components for the j-th frame pair; the feature matching degree D is the basis for evaluating video enhancement algorithms, and a higher D value indicates a better effect of the video enhancement algorithm.
CN201810127313.1A 2018-02-08 2018-02-08 Non-reference video enhancement effect evaluation method based on feature matching degree Active CN108537739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810127313.1A CN108537739B (en) 2018-02-08 2018-02-08 Non-reference video enhancement effect evaluation method based on feature matching degree


Publications (2)

Publication Number Publication Date
CN108537739A true CN108537739A (en) 2018-09-14
CN108537739B CN108537739B (en) 2021-11-09

Family

ID=63485436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810127313.1A Active CN108537739B (en) 2018-02-08 2018-02-08 Non-reference video enhancement effect evaluation method based on feature matching degree

Country Status (1)

Country Link
CN (1) CN108537739B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102868907A (en) * 2012-09-29 2013-01-09 西北工业大学 Objective evaluation method for quality of segmental reference video
CN103674968A (en) * 2013-12-20 2014-03-26 纪钢 Method and device for evaluating machine vision original-value detection of exterior corrosion appearance characteristics of material
CN103996192A (en) * 2014-05-12 2014-08-20 同济大学 Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
US20150213585A1 (en) * 2014-01-30 2015-07-30 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN105006001A (en) * 2015-08-19 2015-10-28 常州工学院 Quality estimation method of parametric image based on nonlinear structural similarity deviation


Also Published As

Publication number Publication date
CN108537739B (en) 2021-11-09

Similar Documents

Publication Publication Date Title
Wen et al. COVERAGE—A novel database for copy-move forgery detection
Nafchi et al. Efficient no-reference quality assessment and classification model for contrast distorted images
CN104850850B (en) A kind of binocular stereo vision image characteristic extracting method of combination shape and color
Chen et al. Image splicing detection via camera response function analysis
CN104462381A (en) Trademark image retrieval method
CN109472770B (en) Method for quickly matching image characteristic points in printed circuit board detection
US20080292200A1 (en) Visual enhancement for reduction of visual noise in a text field
Shahroudnejad et al. Copy-move forgery detection in digital images using affine-SIFT
CN115244542A (en) Method and device for verifying authenticity of product
CN110136125B (en) Image copying and moving counterfeiting detection method based on hierarchical feature point matching
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN111126412A (en) Image key point detection method based on characteristic pyramid network
Khalid et al. Bhattacharyya Coefficient in Correlation of Gray-Scale Objects.
CN103080979A (en) System and method for synthesizing portrait sketch from photo
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
CN106682679A (en) Significance detection method based on level-set super pixel and Bayesian framework
US20110085026A1 (en) Detection method and detection system of moving object
CN109635679B (en) Real-time target paper positioning and loop line identification method
CN114913463A (en) Image identification method and device, electronic equipment and storage medium
Julliand et al. Automated image splicing detection from noise estimation in raw images
CN108537739A (en) A kind of feature based matching degree without reference video enhancing effect evaluating method
CN106056575A (en) Image matching method based on object similarity recommended algorithm
CN112184533B (en) Watermark synchronization method based on SIFT feature point matching
CN111402281B (en) Book edge detection method and device
CN108876849B (en) Deep learning target identification and positioning method based on auxiliary identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant