CN102354388A - Method for carrying out adaptive computing on importance weights of low-level features of image
Abstract
The invention discloses a method for adaptively computing the importance weights of the low-level features of an image, intended to preserve the visual integrity of sensitive targets during image compression. The weight parameters are computed adaptively from the image itself over four low-level features: color, gradient, brightness, and center distance (position). To compute the color importance weight, a color histogram of the image to be processed is computed, and a weight function over the bin frequencies yields the color weight. To compute the gradient importance weight, the image is divided into blocks, the gradient of every pixel in each block is computed, a histogram of the gradient orientations is built, and the inter-block variation of orientations determines the weight. To compute the brightness importance weight, the image is split into two parts, the brightness of each part is computed, and the part with the larger value serves as the main basis for the weight. The position importance weight is assigned a fixed value. Finally, the corresponding weight parameter of each low-level feature is obtained.
Description
Technical field
The present invention is a method for automatically computing the important regions of an image, and in particular relates to the adaptive computation of the weight coefficients of multiple low-level image features during the identification of important image regions. It belongs to the technical field of computer vision.
Background art
When observing and understanding an image, people instinctively divide it into important regions (regions of visual interest, such as a person in a picture, the main building, birds and flowers, or the subject of a portrait) and non-important regions. The subjective visual quality of the whole image usually depends on the visual quality of the important regions, while degradation of the non-important regions is rarely noticed and has little influence on the overall visual quality. Methods for extracting the important regions of an image are therefore of great significance in applications such as image compression and video analysis.
Under conditions of limited transmission bandwidth and storage space, the quality of the compressed image can be best preserved by applying different compression strategies to different regions of the image, based on the extracted important regions, so as to minimize the information loss in the important regions. This guarantees the visual quality of the reconstructed image while also improving compression efficiency.
In shot detection and clustering for video analysis, comparing and analyzing only the similarity of the important regions of each frame, based on the extracted important regions, improves both the efficiency and the accuracy of shot detection and clustering.
Early methods for extracting the important regions of an image were manual: the important regions were marked by hand and then processed according to the specific application. However, as the amount of image data grows and the real-time requirements of image processing rise, manual marking of important regions can no longer keep up, and a number of automatic extraction methods have been proposed.
Conventional extraction methods generally analyze a single low-level feature of the image (for example color, texture, or brightness), derive a set of empirical statistics from a large number of experiments, and then use these empirical values to locate the important regions automatically. Although such methods achieve automatic extraction, they have two shortcomings: 1) because only a single low-level feature is used, the extracted regions are sometimes inaccurate; 2) because fixed coefficients are used, the methods do not generalize well across different kinds of images.
Patent application 201010185241.X also involves the computation of important image regions; the method there combines multiple low-level features of the image, such as color, gradient, brightness, and position. Although this overcomes the limitation of using only a single low-level feature, the weight coefficients of the features are fixed empirical values, so the method cannot adapt well to variations in texture, color, and other properties across images, and its generality remains to be improved.
Summary of the invention
The purpose of the invention is to provide a method for adaptively computing the important regions of an image. The method combines multiple kinds of low-level feature information, such as the color, gradient, brightness, and position of the image, exploits the physical meaning of these features, and considers the image globally, so that it can compute the important regions more accurately. The weights of the various low-level cues are determined dynamically, which gives the method general applicability to images of different content.
To achieve the above purpose, the present invention adopts the following technical scheme, characterized by comprising the following steps:
Step 1, compute the color importance weight: convert the image to be processed to the HSV space, build a row histogram of the H channel with step size n and accumulate the counts, and record the maximum and minimum of each row histogram; compute the weight w_c that characterizes color importance in the image via formula (1), where f_min is the mean of the minima of all row histograms and f_max is the mean of the maxima of all row histograms;
Step 2, compute the gradient importance weight: divide the image to be processed into blocks, compute the gradient of every pixel in each block, record the gradient directions and build a gradient-direction histogram, then compute the variance of the histogram counts within each block, and compute the weight w_g that characterizes gradient importance in the image via formula (2), where the image is divided into blocks, grad_max is the maximum gradient value over all blocks, grad_min is the minimum gradient value over all blocks, and M is the number of intervals into which the range 0 to 2π is divided at a predetermined spacing. In formula (3), which divides the interval 0 to 2π into M directions at the predetermined spacing, D(S_i) is the variance of the counts of the block's gradient histogram falling into each direction, and patch_i denotes any one of the divided blocks;
Step 3, compute the brightness importance weight: since the regions of higher brightness in an image are often its sensitive regions, and the L component in LAB space represents the brightness of the image, convert the image to be processed to the LAB space, divide the converted image into two regions of equal area in a concentric ('回'-shaped) layout, normalize and sum the L components of the two regions, and compute the weight w_i that characterizes brightness importance in the image via formulas (4) and (5), where w_i is the brightness importance weight, S_i is the brightness importance, I_(x,y) denotes the brightness value at pixel coordinate (x, y) in Lab space, region1 is the central region of the concentric layout, region2 is the remaining area, and S_region1 is the area of the central region;
Step 4, set the position importance weight w_p to a fixed value, where w_p is in the range (0, 1);
Step 5: normalize the four weights w_c, w_g, w_i, w_p obtained above according to the following formula to obtain the final low-level-feature importance weight values; w'_c, w'_g, w'_i, w'_p are the final weights of the four low-level features after normalization.
In the aforesaid method, in said step 1, values whose count is 0 are excluded when computing the minimum of a row histogram.
In the aforesaid method, in said step 4, the position importance weight w_p is preferably 0.1.
The method for adaptively computing the important regions of an image provided by the present invention can compute the important regions accurately according to the characteristics of each image. Experimental results show that the method performs well on images of different types. The method can also be used in the inventors' earlier patent application, "An image scaling method that preserves the visual quality of sensitive targets", to improve the scaling quality of that method.
Description of drawings
Fig. 1 is the flowchart of the adaptive parameter generation method of the present invention.
Embodiment
As noted above, the present invention comprehensively analyzes low-level image features such as the color, gradient, and brightness of the image, and computes the weight coefficient of each low-level feature used in the important-region computation, so that the computation of important regions adapts to the image content.
The implementation of the invention is described below with reference to the drawings; Fig. 1 clearly shows the process of the invention. First, the color importance weight of the image is computed; second, the gradient importance weight; then, the brightness importance weight; finally, the position importance weight is determined.
It should be noted that the following is only an exemplary embodiment of the present invention.

Step 1: compute the color importance weight of the image
Although the RGB representation of the three primary colors is direct, the numerical values of R, G, and B are not directly tied to the three attributes of color and cannot reveal the relationships between colors. The HSV color model, developed from the CIE three-dimensional color space, adopts an intuitive way of describing color and is close to the HVC color solid of the Munsell color system, so the HSV space is more convenient for studying color importance.
In the present invention, row histograms of the image are built over the H channel of the HSV space. This makes the frequency of each color apparent, so that larger weights can be assigned to the colors that occur less often.
An exemplary implementation of step 1 is as follows:

Take the values of the hue channel of the input image and build a row hue histogram with a bin size of 10, accumulating each bin. Then remove the bins whose count is less than 10 (regarded as the hues of noisy pixels) and record the maximum bin value f_max and the minimum bin value f_min of the row. Every row of pixels is processed in this way, and the f_max values of all rows and the f_min values of all rows are averaged. The ratio f_min/f_max represents the balance of the color distribution: when the ratio is large, the colors are distributed evenly, the color categories are more likely to belong to the foreground, and the importance is high; conversely, when the ratio is small, the color distribution is uneven, the sparsely represented colors are more likely to be background, and the importance is low. The color importance weight is then computed by formula (1).
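The row-histogram statistic above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the patented implementation: formula (1) is not reproduced in this text, so the sketch simply returns the f_min/f_max ratio that the prose identifies as the basis of the color weight, and the function name and argument layout are hypothetical.

```python
def color_weight(hue_rows, bin_size=10, noise_thresh=10):
    """Sketch of step 1. hue_rows: list of image rows, each a list of
    hue values in [0, 360). Returns the f_min/f_max ratio described in
    the text (formula (1) itself is not reproduced in the patent text)."""
    n_bins = 360 // bin_size
    row_maxima, row_minima = [], []
    for row in hue_rows:
        hist = [0] * n_bins
        for h in row:
            hist[int(h) // bin_size % n_bins] += 1
        # discard bins with fewer than `noise_thresh` pixels: these are
        # treated as the hues of noisy pixels (zero bins drop out too)
        kept = [c for c in hist if c >= noise_thresh]
        if not kept:
            continue
        row_maxima.append(max(kept))
        row_minima.append(min(kept))
    if not row_maxima:
        return 0.0
    f_max = sum(row_maxima) / len(row_maxima)  # mean of row maxima
    f_min = sum(row_minima) / len(row_minima)  # mean of row minima
    # a large f_min/f_max ratio indicates a balanced colour distribution
    return f_min / f_max
```

For example, a row with a single dominant hue yields a ratio of 1.0, while a row split 40/20 between two hues yields 0.5, reflecting a less balanced distribution.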
Step 2: compute the gradient importance weight of the image

The image gradient is the change of the gray value of the image. Because the gradient reflects structural information, it is widely used for feature extraction and edge detection in computer vision; the edges of an image mark the regions where the gray value changes most significantly, and they receive more attention than the other regions of the image. First the input image is divided into blocks of m × m pixels; at the border, any part that cannot form a whole block is either cropped, or completed into a block by padding with pixels whose values are set to zero. Within each block the gradient of every pixel is computed, and a histogram of the gradient directions is built; during this statistic, the range 0 to 2π is divided into M bins at a predetermined spacing, and the variance of the bin counts of each block is computed. If the pixel count of every direction differs from the mean by no more than ±2, the direction distribution of the block is considered uniform and the block is likely background; if the count of some direction differs from the mean by more than 2, the direction distribution of the region is uneven, the gradient varies more, and the block is likely foreground. The variance D(s_i) of block i is therefore computed: if D(s_i) is less than 4M, the block is considered strongly textured (its directions evenly distributed) and of low importance, and its gradient importance is set to 0; otherwise the block is considered important, and the normalized gradient value of each block is then used as its gradient importance weight. The computation is shown in formula (2).
Here the image is divided into blocks; grad_max is the maximum gradient value over all blocks, grad_min is the minimum gradient value over all blocks, and M is the number of intervals into which the range 0 to 2π is divided at the predetermined spacing. In formula (3), D(S_i) is the variance of the counts of the block's gradient histogram falling into each direction, where the interval 0 to 2π is divided into M bins at the predetermined spacing, x_j is the number of pixels falling into each bin, x̄ is the mean number of pixels per bin, and patch_i denotes any one of the divided blocks.
Step 3: compute the brightness importance weight of the image

In general, the regions of higher brightness in an image are often its sensitive regions. Since the L component in LAB space represents the brightness of the image, the image can be converted from the RGB color space to the LAB color space. Extensive experiments show that, in ordinary cases, the important information of an image is generally distributed around its center. The image is therefore divided in a concentric ('回'-shaped) layout such that the two parts have equal area; the sums of the L component of the two parts are then computed, and the part with the larger value serves as the main basis for the brightness weight. When performing the concentric split, if both the length and the width of the image are odd, it cannot be divided into two parts of equal area; in that case the rightmost column of pixels is duplicated to extend the image. When computing brightness importance, the central region of the concentric layout is denoted region1 and the surrounding region region2; the brightness importance S_i is then obtained from formula (5). Since no prior knowledge can tell from brightness alone whether a part of an image is important, the brightness weight must be bounded when considering the brightness importance weight, which gives the computation formula (4) for w_i.

Here w_i is the brightness importance weight and S_i is the brightness importance; I_(x,y) denotes the brightness value at pixel coordinate (x, y), I_region1 is the central region of the concentric layout, I_region2 is the remaining area, and S_region1 is the area of the central region. S_i denotes the brightness importance value of the entire image.
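The concentric split can be sketched as follows. Formulas (4) and (5) are not reproduced in this text, so the sketch only computes the normalized L-component sums of the two equal-area regions; region1 is approximated by a centered rectangle with half the total area (side scale 1/√2), and all names are hypothetical.

```python
def brightness_split(L):
    """Sketch of step 3. L: 2-D list of LAB L-channel values.
    Splits the image into a centred inner rectangle (region1) and the
    surrounding ring (region2) of roughly equal area, and returns the
    normalised L sums (s1, s2) of the two regions."""
    h, w = len(L), len(L[0])
    # inner rectangle with about half the total area, centred
    ih, iw = round(h / 2 ** 0.5), round(w / 2 ** 0.5)
    y0, x0 = (h - ih) // 2, (w - iw) // 2
    total = sum(sum(row) for row in L) or 1.0
    inner = sum(L[y][x] for y in range(y0, y0 + ih)
                        for x in range(x0, x0 + iw))
    s1 = inner / total      # normalised brightness of region1 (centre)
    s2 = 1.0 - s1           # region2 (surround)
    # the brighter of the two regions drives the brightness importance
    return s1, s2
```

On a uniformly bright 10 × 10 image the 7 × 7 centered rectangle holds 49 of the 100 pixels, so the split is 0.49 / 0.51; a brighter center pushes s1 above s2.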
Step 4: compute the position importance weight of the image

In combination with the three low-level features above, the position of the image is given a fixed importance weight. Because the important content of most images is concentrated around the center, the position importance weight can simply be set to a fixed value w_p. The three preceding steps yield w_c, w_g, and w_i respectively, but their sum is not guaranteed to be fixed within the range (0, 1). The final weights of color, gradient, brightness, and position importance are therefore denoted w'_c, w'_g, w'_i, and w'_p, and are obtained by normalization.
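Step 5's normalization, read as a simple sum-to-one scaling (the normalization formula itself is not reproduced in this text, so this reading is an assumption), can be sketched as:

```python
def normalize_weights(w_c, w_g, w_i, w_p=0.1):
    """Sketch of step 5: divide the four feature weights by their sum so
    the final weights w'_c, w'_g, w'_i, w'_p add to 1. The default
    w_p = 0.1 is the preferred fixed value named in claim 3."""
    s = w_c + w_g + w_i + w_p
    return tuple(w / s for w in (w_c, w_g, w_i, w_p))
```

With inputs that already sum to 1 the weights pass through essentially unchanged; otherwise they are rescaled proportionally, which keeps their relative importance intact.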
What is disclosed above is only a specific embodiment of the present invention; based on the ideas provided by the invention, variations conceivable to those skilled in the art shall all fall within the protection scope of the present invention.
Claims (3)
1. the method for a self-adaptation computed image low-level image feature weights of importance may further comprise the steps:
Step 1, compute the color importance weight: convert the image to be processed to the HSV space, build for every row of pixels of the H channel a row histogram with step size n and accumulate the counts, and record the maximum and minimum of each row histogram; compute the weight w_c that characterizes color importance in the image via formula (1), where f_min is the mean of the minima of all row histograms and f_max is the mean of the maxima of all row histograms;
Step 2, compute the gradient importance weight: divide the image to be processed into blocks, compute the gradient of every pixel in each block, record the gradient directions and build a gradient-direction histogram, then compute the variance of the histogram counts of each block, and compute the weight w_g that characterizes gradient importance in the image via formula (2), where the image is divided into blocks, grad_max is the maximum gradient value over all blocks, grad_min is the minimum gradient value over all blocks, and M is the number of intervals into which the range 0 to 2π is divided at a predetermined spacing; in formula (3), which divides the interval 0 to 2π into M directions at the predetermined spacing, D(S_i) is the variance of the counts of the block's gradient histogram falling into each direction, and patch_i denotes any one of the divided blocks;
Step 3, compute the brightness importance weight: since the regions of higher brightness in an image are often its sensitive regions, and the L component in LAB space represents the brightness of the image, convert the image to be processed to the LAB space, divide the converted image into two regions of equal area in a concentric ('回'-shaped) layout, normalize and sum the L components of the two regions, and compute the weight w_i that characterizes brightness importance in the image via formulas (4) and (5), where w_i is the brightness importance weight, S_i is the brightness importance, I_(x,y) denotes the brightness value at pixel coordinate (x, y) in Lab space, region1 is the central region of the concentric layout, region2 is the remaining area, and S_region1 is the area of the central region;
Step 4, set the position importance weight w_p to a fixed value, where w_p is in the range (0, 1);
Step 5: normalize the four weights w_c, w_g, w_i, w_p obtained above according to the following formula to obtain the final low-level-feature importance weight values, where w'_c, w'_g, w'_i, w'_p are the final weights of the four low-level features after normalization.
2. The method of claim 1, characterized in that in said step 1, values whose count is 0 are excluded when computing the minimum of a row histogram.
3. The method of claim 1, characterized in that in said step 4, the position importance weight w_p is preferably 0.1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110284551 CN102354388B (en) | 2011-09-22 | 2011-09-22 | Method for carrying out adaptive computing on importance weights of low-level features of image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102354388A true CN102354388A (en) | 2012-02-15 |
CN102354388B CN102354388B (en) | 2013-03-20 |
Family
ID=45577949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110284551 Expired - Fee Related CN102354388B (en) | 2011-09-22 | 2011-09-22 | Method for carrying out adaptive computing on importance weights of low-level features of image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102354388B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103455818A (en) * | 2013-04-28 | 2013-12-18 | 南京理工大学 | Multi-level description method for extracting human body features |
CN104599263A (en) * | 2014-12-23 | 2015-05-06 | 安科智慧城市技术(中国)有限公司 | Image detecting method and device |
CN104751167A (en) * | 2013-12-31 | 2015-07-01 | 西门子医疗保健诊断公司 | Method and device for classifying urine visible components |
CN105496459A (en) * | 2016-01-15 | 2016-04-20 | 飞依诺科技(苏州)有限公司 | Automatic adjustment method and system for ultrasonic imaging equipment |
CN110505412A (en) * | 2018-05-18 | 2019-11-26 | 杭州海康威视数字技术股份有限公司 | A kind of calculation method and device of area-of-interest brightness value |
CN112099217A (en) * | 2020-08-18 | 2020-12-18 | 宁波永新光学股份有限公司 | Automatic focusing method for microscope |
CN114782432A (en) * | 2022-06-20 | 2022-07-22 | 南通电博士自动化设备有限公司 | Edge detection method of improved canny operator based on textural features |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101290760A (en) * | 2008-06-06 | 2008-10-22 | 清华大学 | Computation method of video pixel scalability |
US20090087088A1 (en) * | 2007-09-28 | 2009-04-02 | Samsung Electronics Co., Ltd. | Image forming system, apparatus and method of discriminative color features extraction thereof |
CN101872468A (en) * | 2010-05-27 | 2010-10-27 | 北京航空航天大学 | Image scaling method for keeping visual quality of sensitive target |
CN101923703A (en) * | 2010-08-27 | 2010-12-22 | 北京工业大学 | Semantic-based image adaptive method by combination of slit cropping and non-homogeneous mapping |
Also Published As
Publication number | Publication date |
---|---|
CN102354388B (en) | 2013-03-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130320 Termination date: 20130922 |