CN108549872A - Visual attention fusion method suitable for quality evaluation of redirected image - Google Patents

Visual attention fusion method suitable for quality evaluation of redirected image Download PDF

Info

Publication number
CN108549872A
Authority
CN
China
Prior art keywords
saliency maps
fusion
significance value
equalization
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810342794.8A
Other languages
Chinese (zh)
Other versions
CN108549872B (en)
Inventor
牛玉贞
张帅
林嘉雯
陈俊豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201810342794.8A priority Critical patent/CN108549872B/en
Publication of CN108549872A publication Critical patent/CN108549872A/en
Application granted granted Critical
Publication of CN108549872B publication Critical patent/CN108549872B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a visual attention fusion method suitable for quality evaluation of redirected (retargeted) images, comprising the following steps: 1. read the original image and generate two saliency maps using two salient object detection algorithms; 2. reduce the distribution difference between the two saliency maps with an equalization operation, generating two equalized saliency maps; 3. fuse the two equalized saliency maps by averaging the saliency values at corresponding points, and generate a fused saliency map through a normalization operation; 4. detect the faces and lines in the original image; 5. under the constraint of a maximum amplified value, adaptively amplify the saliency values of the face rectangles and line regions in the fused saliency map, generating a fused saliency map that incorporates face and line information; 6. increase the contrast of the fused saliency map with a saliency enhancement model, and generate the visual attention fusion saliency map through a normalization operation. The method improves the consistency between objective quality assessment results and subjective perception.

Description

Visual attention fusion method suitable for quality evaluation of redirected images
Technical field
The present invention relates to the fields of image and video processing and computer vision, and in particular to a visual attention fusion method suitable for quality evaluation of redirected (retargeted) images.
Background art
Image retargeting algorithms use a series of image mapping operations to change the size and aspect ratio of an original image so that it fits different display devices, while preserving visually important content and structure, that is, reducing content loss and structural distortion. Adapting image content to different display devices is therefore of great significance for image display. A variety of image retargeting algorithms have been proposed, but research on quality evaluation methods for retargeted images remains a challenging task. Many existing objective quality assessment methods for retargeted images use saliency information loss, or saliency-weighted image content similarity, as part of their evaluation index. Although these methods achieve good performance, the consistency between their objective quality assessment results and subjective scores is still not high. One reason is that they typically adopt a single salient object detection algorithm, ignoring the influence of the saliency detection results on the final assessment method.
Early quality assessment methods for retargeted images were simple in design, evaluating the visual quality of a retargeted image only by computing image distances. Classical methods include the edge histogram (EH), color layout (CL), and earth mover's distance (EMD). Owing to their simple design, these methods cannot maintain good consistency with human subjective assessment results. With the growing understanding of the human visual system (HVS), quality assessment methods for retargeted images began to introduce visual saliency information, and saliency-based assessment methods were proposed that bring assessment results closer to human subjective perception.
Saliency-based quality assessment methods for retargeted images take into account intrinsic characteristics of the human visual system when perceiving retargeted-image quality, such as its different sensitivity to deformation in different image regions. A typical method is PGDIL, which uses local variations of the image's SIFT-flow vector field together with saliency maps to simulate human perception of geometric distortion, and additionally measures the content loss of the retargeted image with saliency information loss. Liu et al. establish local pixel correspondences between the retargeted image and the original image using SIFT flow, and then assess retargeted-image quality with a saliency-weighted similarity measure. Liang et al. assess the objective score of a retargeted image with five indices: salient-region preservation, artifact analysis, preservation of the global image structure, adherence to aesthetic rules, and symmetry preservation. Zhang et al. divide the image into uniform grids, simulate the geometric transformation undergone during retargeting by pairing with an image registration method, and finally define the saliency-weighted aspect-ratio similarity as the objective quality of the retargeted image. Compared with early methods, these methods introduce visual saliency information and achieve great improvement; however, because a single salient object detection algorithm can hardly simulate human visual attention in retargeted-image quality evaluation, the consistency between objective assessment results and subjective scores is still not high.
Summary of the invention
The purpose of the present invention is to provide a visual attention fusion method suitable for quality evaluation of redirected images, which can improve the consistency between objective quality assessment results and subjective perception.
To achieve the above purpose, the technical solution of the present invention is a visual attention fusion method suitable for quality evaluation of redirected images, comprising the following steps:
Step S1: read the original image and generate two saliency maps using two salient object detection algorithms;
Step S2: reduce the distribution difference between the two saliency maps with an equalization operation, generating two equalized saliency maps;
Step S3: fuse the two equalized saliency maps by averaging the saliency values at corresponding points, and generate a fused saliency map with a normalization operation;
Step S4: detect the faces and lines in the original image with a face detection algorithm and a line detection algorithm;
Step S5: according to the areas of the face rectangles detected in step S4 and the lengths of the line regions, and under the constraint of a maximum amplified value, adaptively amplify the saliency values of the face rectangles and line regions in the fused saliency map obtained in step S3, generating a fused saliency map that incorporates face and line information;
Step S6: increase the contrast of the fused saliency map obtained in step S5 with a saliency enhancement model, and generate the visual attention fusion saliency map with a normalization operation.
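For readers who want to experiment with the overall flow of steps S1–S6, the sketch below chains the operations for two precomputed saliency maps (step S1's detectors and step S4's detections are taken as given inputs). The equalization, amplification, and enhancement formulas appear in the patent only as figures, so the stand-ins used here (min-max rescaling, an area-proportional gain with an assumed weight c1, and a logistic curve with an assumed weight c3) are illustrative assumptions, not the patented formulas.

```python
import numpy as np

def visual_attention_fusion(sal_a, sal_b, face_boxes=(), c1=2.0, c3=10.0):
    """End-to-end sketch of steps S1-S6 for two precomputed saliency maps.

    The equalization (S2), amplification (S5) and enhancement (S6)
    formulas are illustrative stand-ins for formulas the patent gives
    only as figures; c1 and c3 are assumed weight names.
    """
    eps = 1e-12
    # S2 stand-in: rescale each map to [0, 1] to reduce distribution gaps.
    eq_a = (sal_a - sal_a.min()) / (sal_a.max() - sal_a.min() + eps)
    eq_b = (sal_b - sal_b.min()) / (sal_b.max() - sal_b.min() + eps)
    # S3: average corresponding points, then normalize to [0, 1].
    fused = (eq_a + eq_b) / 2.0
    fused = (fused - fused.min()) / (fused.max() - fused.min() + eps)
    # S5: boost detected face rectangles (S4 detections given), capped at 1.
    h, w = fused.shape
    for x, y, bw, bh in face_boxes:  # (x, y, width, height) boxes, assumed format
        gain = 1.0 + c1 * (bw * bh) / float(h * w)
        fused[y:y + bh, x:x + bw] = np.minimum(fused[y:y + bh, x:x + bw] * gain, 1.0)
    # S6: logistic contrast enhancement, then a final normalization.
    out = 1.0 / (1.0 + np.exp(-c3 * (fused - fused.mean())))
    return (out - out.min()) / (out.max() - out.min() + eps)
```

Line regions (also boosted in step S5) are omitted here for brevity; the per-step sketches that follow the detailed description cover them.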
Further, in step S2, the equalization operation reduces the distribution difference between the two saliency maps while preserving the overall distribution of each original saliency map, yielding the equalized saliency maps; the calculation formula is:
Wherein S_p and S′_p are the saliency values at pixel p before and after equalization, respectively; the parameters t and b are adaptive thresholds, whose calculation formula is:
Wherein w and h denote the width and height of the original image, respectively; p_t and p_b denote the positions in the sorted sequence from which t and b are taken, and ⌊·⌋ denotes rounding down; S_d is the sequence obtained by sorting the saliency values of the saliency map S from step S1 in descending order, and S_d(p_t) and S_d(p_b) denote the values of S_d at positions p_t and p_b; k is a parameter controlling the degree of equalization.
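The equalization formula itself is supplied as an image in the patent; only its ingredients survive in the text (the descending sort S_d, the positions p_t and p_b, the thresholds t and b, and the parameter k). The sketch below is therefore a loose reconstruction under stated assumptions: p_t and p_b are taken at assumed rank fractions k and 1−k of the sorted sequence, and the transfer function is assumed to clip to [b, t] and rescale.

```python
import numpy as np

def equalize_saliency(sal, k=0.1):
    """Rank-based equalization sketch (step S2).

    The thresholds t and b come from the saliency values sorted in
    descending order (S_d), as in the patent; the exact positions
    p_t, p_b and the clip-and-rescale transfer function are
    illustrative assumptions.
    """
    h, w = sal.shape
    flat = np.sort(sal.ravel())[::-1]           # S_d: values in descending order
    p_t = int(np.floor(k * w * h))              # assumed position for threshold t
    p_b = int(np.floor((1 - k) * w * h)) - 1    # assumed position for threshold b
    t, b = flat[p_t], flat[p_b]                 # t >= b since the sort is descending
    out = np.clip(sal, b, t)                    # assumed transfer function:
    return (out - b) / (t - b + 1e-12)          # clip to [b, t], rescale to [0, 1]
```

Larger k pulls the thresholds toward the middle of the distribution and flattens the map more aggressively, which matches the text's description of k as controlling the degree of equalization.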
Further, in step S3, fusing the two equalized saliency maps by averaging the saliency values at corresponding points and generating the fused saliency map with a normalization operation comprises the following steps:
Step S31: fuse the two equalized saliency maps by averaging the saliency values at corresponding points; the calculation formula is:
S_m(p) = (S_D′(p) + S_B′(p)) / 2
Wherein S_D′(p) and S_B′(p) denote the saliency values at pixel p in the two equalized saliency maps S_D′ and S_B′, respectively; S_m(p) denotes the averaged saliency value at pixel p, so that averaging the saliency values at the corresponding points of S_D′ and S_B′ yields the result map S_m.
Step S32: to ensure that the value range of the fused saliency map is [0, 1], apply a normalization operation to the averaged result; the calculation formula is:
S_F(p) = (S_m(p) − min(S_m)) / (max(S_m) − min(S_m))
Wherein min(S_m) and max(S_m) denote the minimum and maximum of all pixel values in the result map S_m, respectively; S_F(p) denotes the saliency value at pixel p of the resulting fused saliency map S_F.
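Steps S31 and S32 are fully specified by the text: a pointwise average of the two equalized maps, followed by min-max normalization to [0, 1]. A minimal NumPy sketch (the small epsilon guarding against a constant map is an added implementation detail, not part of the patent):

```python
import numpy as np

def fuse_and_normalize(sal_d, sal_b):
    """Steps S31-S32: average two equalized saliency maps at each pixel,
    then min-max normalize the result to the range [0, 1]."""
    s_m = (sal_d + sal_b) / 2.0                     # S31: pointwise average
    s_min, s_max = s_m.min(), s_m.max()
    return (s_m - s_min) / (s_max - s_min + 1e-12)  # S32: normalize to [0, 1]
```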
Further, in step S5, according to the areas of the face rectangles detected in step S4 and the lengths of the line regions, the saliency values of the face rectangles and line regions are adaptively amplified on the basis of the fused saliency map, and the amplified saliency values are constrained not to exceed the maximum value 1; the calculation formula is as follows:
Wherein a_i and l_j denote the area of the i-th detected face rectangle and the length of the j-th line region, respectively; A and L denote the area and the diagonal length of the original image, respectively; C_1 and C_2 are the two corresponding weights.
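The amplification formula is given as an image; the text fixes only its inputs (a_i, l_j, A, L, C_1, C_2) and the cap at 1. The sketch below assumes a simple multiplicative gain proportional to a_i/A for faces and l_j/L for lines; the gain form, the (x, y, width, height) box format, and the boolean-mask representation of line regions are all assumptions.

```python
import numpy as np

def amplify_regions(sal_f, face_boxes, line_masks, c1=2.0, c2=1.0):
    """Step S5 sketch: adaptively boost face/line saliency, capped at 1.

    face_boxes: iterable of (x, y, width, height) rectangles.
    line_masks: iterable of (boolean mask, line length) pairs.
    The gain terms 1 + c1*a_i/A and 1 + c2*l_j/L are illustrative;
    the patent's exact formula is not reproduced in the text.
    """
    out = sal_f.copy()
    h, w = out.shape
    A = float(h * w)             # area of the original image
    L = float(np.hypot(h, w))    # diagonal length of the original image
    for x, y, bw, bh in face_boxes:
        gain = 1.0 + c1 * (bw * bh) / A   # larger faces get a larger boost (assumed)
        out[y:y + bh, x:x + bw] = np.minimum(out[y:y + bh, x:x + bw] * gain, 1.0)
    for mask, length in line_masks:
        gain = 1.0 + c2 * length / L      # longer lines get a larger boost (assumed)
        out[mask] = np.minimum(out[mask] * gain, 1.0)
    return out
```

The `np.minimum(..., 1.0)` cap implements the stated constraint that amplified saliency values never exceed the maximum value 1.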
Further, in step S6, increasing the contrast of the fused saliency map obtained in step S5 with a saliency enhancement model and generating the visual attention fusion saliency map with a normalization operation comprises the following steps:
Step S61: increase the contrast of the fused saliency map with the saliency enhancement model to simulate human visual attention in retargeted-image quality assessment; the calculation formula is as follows:
Wherein the left-hand side denotes the enhanced saliency value at pixel p of the fused saliency map, which yields the enhanced saliency map; C_3 is a preset weight.
Step S62: apply a normalization operation to the enhanced saliency map to obtain the final visual attention fusion saliency map.
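The enhancement model is likewise given as an image; only the weight C_3 and the goal of increasing contrast are stated in the text. The sketch below substitutes a logistic curve centred at the map's mean as a plausible contrast-enhancement stand-in, followed by the same min-max normalization as step S32.

```python
import numpy as np

def enhance_contrast(sal, c3=10.0):
    """Step S6 sketch: increase saliency-map contrast, then renormalize.

    A logistic curve centred at the map mean is an assumed stand-in for
    the patent's enhancement model; c3 plays the role of the weight C_3,
    steepening the curve and hence the contrast as it grows.
    """
    enhanced = 1.0 / (1.0 + np.exp(-c3 * (sal - sal.mean())))
    e_min, e_max = enhanced.min(), enhanced.max()
    return (enhanced - e_min) / (e_max - e_min + 1e-12)  # as in step S32
```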
Compared with the prior art, the beneficial effects of the invention are as follows. First, by fusing the saliency maps generated by two salient object detection algorithms, the invention reduces the limitations of any single salient object detection algorithm. Second, by adaptively amplifying the saliency values of face and line regions, it generates a fused saliency map that incorporates the face and line information to which human vision is more sensitive. Finally, a saliency enhancement model is designed to increase the contrast of the saliency map, simulating human visual attention in retargeted-image quality assessment. In summary, the method of the invention is well suited to saliency-based quality assessment of retargeted images, makes objective assessment results more consistent with human subjective scores, and can be applied in fields such as retargeted-image quality assessment and the optimization of image retargeting methods.
Description of the drawings
Fig. 1 is a schematic flowchart of the method of the present invention.
Fig. 2 is the implementation flowchart of the overall method in an embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
The present invention provides a visual attention fusion method suitable for quality evaluation of redirected images which, as shown in Fig. 1 and Fig. 2, comprises the following steps:
Step S1: read the original image and generate two saliency maps using two salient object detection algorithms. In this embodiment, the two salient object detection algorithms are the DCT and BSCA algorithms.
Step S2: reduce the distribution difference between the two saliency maps with an equalization operation, generating two equalized saliency maps.
Specifically, the equalization operation reduces the distribution difference between the two saliency maps while preserving the overall distribution of each original saliency map, yielding the equalized saliency maps; the calculation formula is:
Wherein S_p and S′_p are the saliency values at pixel p before and after equalization, respectively; the parameters t and b are adaptive thresholds, whose calculation formula is:
Wherein w and h denote the width and height of the original image, respectively; p_t and p_b denote the positions in the sorted sequence from which t and b are taken, and ⌊·⌋ denotes rounding down; S_d is the sequence obtained by sorting the saliency values of the saliency map S from step S1 in descending order, and S_d(p_t) and S_d(p_b) denote the values of S_d at positions p_t and p_b; k is a parameter controlling the degree of equalization, and different k values can be set for the saliency maps computed by the two salient object detection algorithms.
Step S3: fuse the two equalized saliency maps by averaging the saliency values at corresponding points, and generate the fused saliency map with a normalization operation. This specifically comprises the following steps:
Step S31: fuse the two equalized saliency maps by averaging the saliency values at corresponding points; the calculation formula is:
S_m(p) = (S_D′(p) + S_B′(p)) / 2
Wherein S_D′(p) and S_B′(p) denote the saliency values at pixel p in the two equalized saliency maps S_D′ and S_B′, respectively; S_m(p) denotes the averaged saliency value at pixel p, so that averaging the saliency values at the corresponding points of S_D′ and S_B′ yields the result map S_m.
Step S32: to ensure that the value range of the fused saliency map is [0, 1], apply a normalization operation to the averaged result; the calculation formula is:
S_F(p) = (S_m(p) − min(S_m)) / (max(S_m) − min(S_m))
Wherein min(S_m) and max(S_m) denote the minimum and maximum of all pixel values in the result map S_m, respectively; S_F(p) denotes the saliency value at pixel p of the resulting fused saliency map S_F.
Step S4: detect the faces and lines in the original image with a face detection algorithm and a line detection algorithm.
Step S5: according to the areas of the face rectangles detected in step S4 and the lengths of the line regions, and under the constraint of a maximum amplified value, adaptively amplify the saliency values of the face rectangles and line regions in the fused saliency map obtained in step S3, generating a fused saliency map that incorporates face and line information.
Specifically, according to the areas of the face rectangles detected in step S4 and the lengths of the line regions, the saliency values of the face rectangles and line regions are adaptively amplified on the basis of the fused saliency map, and the amplified saliency values are constrained not to exceed the maximum value 1; the calculation formula is as follows:
Wherein a_i and l_j denote the area of the i-th detected face rectangle and the length of the j-th line region, respectively; A and L denote the area and the diagonal length of the original image, respectively; C_1 and C_2 are the two corresponding weights.
Step S6: increase the contrast of the fused saliency map obtained in step S5 with a saliency enhancement model, and generate the visual attention fusion saliency map with a normalization operation. This specifically comprises the following steps:
Step S61: increase the contrast of the fused saliency map with the saliency enhancement model to simulate human visual attention in retargeted-image quality assessment; the calculation formula is as follows:
Wherein the left-hand side denotes the enhanced saliency value at pixel p of the fused saliency map, which yields the enhanced saliency map; C_3 is a preset weight.
Step S62: apply a normalization operation to the enhanced saliency map, the normalization being the same as in step S32, to obtain the final visual attention fusion saliency map.
The visual attention fusion framework of the present invention, suitable for retargeted-image quality assessment, first reduces the limitations of any single salient object detection algorithm by fusing the saliency maps generated by two salient object detection algorithms; second, by adaptively amplifying the saliency values of face and line regions, it generates a fused saliency map that incorporates the face and line information to which human vision is more sensitive; finally, a saliency enhancement model is designed to increase the contrast of the saliency map, simulating human visual attention in retargeted-image quality assessment. The method is well suited to saliency-based quality assessment of retargeted images, makes objective assessment results more consistent with human subjective scores, and can be applied in fields such as retargeted-image quality assessment and the optimization of image retargeting methods.
The above are preferred embodiments of the present invention. Any changes made according to the technical solution of the present invention whose resulting functions and effects do not depart from the scope of the technical solution of the present invention shall fall within the protection scope of the present invention.

Claims (5)

1. A visual attention fusion method suitable for quality evaluation of redirected images, characterized in that it comprises the following steps:
Step S1: read the original image and generate two saliency maps using two salient object detection algorithms;
Step S2: reduce the distribution difference between the two saliency maps with an equalization operation, generating two equalized saliency maps;
Step S3: fuse the two equalized saliency maps by averaging the saliency values at corresponding points, and generate a fused saliency map with a normalization operation;
Step S4: detect the faces and lines in the original image with a face detection algorithm and a line detection algorithm;
Step S5: according to the areas of the face rectangles detected in step S4 and the lengths of the line regions, and under the constraint of a maximum amplified value, adaptively amplify the saliency values of the face rectangles and line regions in the fused saliency map obtained in step S3, generating a fused saliency map that incorporates face and line information;
Step S6: increase the contrast of the fused saliency map obtained in step S5 with a saliency enhancement model, and generate the visual attention fusion saliency map with a normalization operation.
2. The visual attention fusion method suitable for quality evaluation of redirected images according to claim 1, characterized in that, in step S2, the equalization operation reduces the distribution difference between the two saliency maps while preserving the overall distribution of each original saliency map, yielding the equalized saliency maps; the calculation formula is:
Wherein S_p and S′_p are the saliency values at pixel p before and after equalization, respectively; the parameters t and b are adaptive thresholds, whose calculation formula is:
Wherein w and h denote the width and height of the original image, respectively; p_t and p_b denote the positions in the sorted sequence from which t and b are taken, and ⌊·⌋ denotes rounding down; S_d is the sequence obtained by sorting the saliency values of the saliency map S from step S1 in descending order, and S_d(p_t) and S_d(p_b) denote the values of S_d at positions p_t and p_b; k is a parameter controlling the degree of equalization.
3. The visual attention fusion method suitable for quality evaluation of redirected images according to claim 1, characterized in that, in step S3, fusing the two equalized saliency maps by averaging the saliency values at corresponding points and generating the fused saliency map with a normalization operation comprises the following steps:
Step S31: fuse the two equalized saliency maps by averaging the saliency values at corresponding points; the calculation formula is:
S_m(p) = (S_D′(p) + S_B′(p)) / 2
Wherein S_D′(p) and S_B′(p) denote the saliency values at pixel p in the two equalized saliency maps S_D′ and S_B′, respectively; S_m(p) denotes the averaged saliency value at pixel p, so that averaging the saliency values at the corresponding points of S_D′ and S_B′ yields the result map S_m;
Step S32: to ensure that the value range of the fused saliency map is [0, 1], apply a normalization operation to the averaged result; the calculation formula is:
S_F(p) = (S_m(p) − min(S_m)) / (max(S_m) − min(S_m))
Wherein min(S_m) and max(S_m) denote the minimum and maximum of all pixel values in the result map S_m, respectively; S_F(p) denotes the saliency value at pixel p of the resulting fused saliency map S_F.
4. The visual attention fusion method suitable for quality evaluation of redirected images according to claim 1, characterized in that, in step S5, according to the areas of the face rectangles detected in step S4 and the lengths of the line regions, the saliency values of the face rectangles and line regions are adaptively amplified on the basis of the fused saliency map, and the amplified saliency values are constrained not to exceed the maximum value 1; the calculation formula is as follows:
Wherein a_i and l_j denote the area of the i-th detected face rectangle and the length of the j-th line region, respectively; A and L denote the area and the diagonal length of the original image, respectively; C_1 and C_2 are the two corresponding weights.
5. The visual attention fusion method suitable for quality evaluation of redirected images according to claim 1, characterized in that, in step S6, increasing the contrast of the fused saliency map obtained in step S5 with a saliency enhancement model and generating the visual attention fusion saliency map with a normalization operation comprises the following steps:
Step S61: increase the contrast of the fused saliency map with the saliency enhancement model to simulate human visual attention in retargeted-image quality assessment; the calculation formula is as follows:
Wherein the left-hand side denotes the enhanced saliency value at pixel p, which yields the enhanced saliency map; C_3 is a preset weight;
Step S62: apply a normalization operation to the enhanced saliency map to obtain the final visual attention fusion saliency map.
CN201810342794.8A 2018-04-17 2018-04-17 Visual attention fusion method suitable for quality evaluation of redirected image Active CN108549872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810342794.8A CN108549872B (en) 2018-04-17 2018-04-17 Visual attention fusion method suitable for quality evaluation of redirected image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810342794.8A CN108549872B (en) 2018-04-17 2018-04-17 Visual attention fusion method suitable for quality evaluation of redirected image

Publications (2)

Publication Number Publication Date
CN108549872A true CN108549872A (en) 2018-09-18
CN108549872B CN108549872B (en) 2022-03-22

Family

ID=63515381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810342794.8A Active CN108549872B (en) 2018-04-17 2018-04-17 Visual attention fusion method suitable for quality evaluation of redirected image

Country Status (1)

Country Link
CN (1) CN108549872B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978858A (en) * 2019-03-27 2019-07-05 华南理工大学 A kind of double frame thumbnail image quality evaluating methods based on foreground detection
CN111242916A (en) * 2020-01-09 2020-06-05 福州大学 Image display adaptation evaluation method based on registration confidence measurement
CN111311486A (en) * 2018-12-12 2020-06-19 北京沃东天骏信息技术有限公司 Method and apparatus for processing image

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050069206A1 (en) * 2003-09-30 2005-03-31 Yu-Fei Ma Contrast-based image attention analysis framework
CN102509299A (en) * 2011-11-17 2012-06-20 西安电子科技大学 Image salient area detection method based on visual attention mechanism
US20130084013A1 (en) * 2011-09-29 2013-04-04 Hao Tang System and method for saliency map generation
US20130121619A1 (en) * 2008-08-28 2013-05-16 Chintan Intwala Seam Carving Using Seam Energy Re-computation in Seam Neighborhood
CN103679173A (en) * 2013-12-04 2014-03-26 清华大学深圳研究生院 Method for detecting image salient region
US20150117783A1 (en) * 2013-10-24 2015-04-30 Adobe Systems Incorporated Iterative saliency map estimation
CN104992403A (en) * 2015-07-07 2015-10-21 方玉明 Hybrid operator image redirection method based on visual similarity measurement
CN106506901A (en) * 2016-09-18 2017-03-15 昆明理工大学 A kind of hybrid digital picture halftoning method of significance visual attention model
CN107578395A (en) * 2017-08-31 2018-01-12 中国地质大学(武汉) The image quality evaluating method that a kind of view-based access control model perceives


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOFEI ZHOU ET AL.: "Adaptive saliency fusion based on quality assessment", 《MULTIMED TOOLS APPL》 *
郑阳 (ZHENG YANG): "Image semantic hierarchy management based on salient region detection", 《China Masters' Theses Full-text Database, Information Science & Technology》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311486A (en) * 2018-12-12 2020-06-19 北京沃东天骏信息技术有限公司 Method and apparatus for processing image
CN109978858A (en) * 2019-03-27 2019-07-05 华南理工大学 A kind of double frame thumbnail image quality evaluating methods based on foreground detection
CN111242916A (en) * 2020-01-09 2020-06-05 福州大学 Image display adaptation evaluation method based on registration confidence measurement
CN111242916B (en) * 2020-01-09 2022-06-14 福州大学 Image display adaptation evaluation method based on registration confidence measurement

Also Published As

Publication number Publication date
CN108549872B (en) 2022-03-22

Similar Documents

Publication Publication Date Title
Wang et al. An imaging-inspired no-reference underwater color image quality assessment metric
US7480419B2 (en) Image signal processing
CN102722868B (en) Tone mapping method for high dynamic range image
US11375922B2 (en) Body measurement device and method for controlling the same
CN106127688B (en) A kind of super-resolution image reconstruction method and its system
CN108549872A (en) A kind of vision attention fusion method being suitable for redirecting image quality measure
JP5031877B2 (en) Image processing apparatus and image processing method
CN101853286B (en) Intelligent selection method of video thumbnails
CN110264426A (en) Image distortion correction method and apparatus
CN101601287A (en) Produce the equipment and the method for photorealistic image thumbnails
CN107481236A (en) A kind of quality evaluating method of screen picture
CN108537758B (en) Image contrast enhancement method based on display and human eye visual characteristics
US10909709B2 (en) Body measurement device and method for controlling the same
CN104700405B (en) A kind of foreground detection method and system
CN111709914B (en) Non-reference image quality evaluation method based on HVS characteristics
CN103400367A (en) No-reference blurred image quality evaluation method
WO2019071734A1 (en) Method for enhancing local contrast of image
CN103226824B (en) Maintain the video Redirectional system of vision significance
CN104159104B (en) Based on the full reference video quality appraisal procedure that multistage gradient is similar
CN107689039A (en) Estimate the method and apparatus of image blur
JP4962159B2 (en) Measuring apparatus and method, program, and recording medium
Trivedi et al. A novel HVS based image contrast measurement index
CN109214367A (en) A kind of method for detecting human face of view-based access control model attention mechanism
Shen et al. Color enhancement algorithm based on Daltonization and image fusion for improving the color visibility to color vision deficiencies and normal trichromats
Toprak et al. A new full-reference image quality metric based on just noticeable difference

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant