CN105787966A - An aesthetic evaluation method for computer pictures - Google Patents

An aesthetic evaluation method for computer pictures

Info

Publication number
CN105787966A
Authority
CN
China
Prior art keywords
image
candidate frame
feature
follows
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610157571.5A
Other languages
Chinese (zh)
Other versions
CN105787966B (en)
Inventor
路红
朱志斌
姚泽平
瞿鹏亮
杨博弘
白云汉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University
Priority to CN201610157571.5A
Publication of CN105787966A
Application granted
Publication of CN105787966B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/40 Analysis of texture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of computer image processing and specifically relates to an aesthetic evaluation method for computer pictures. The method is based on image composition features in object regions and comprises the following steps: detecting image objects with the BING method, clustering and optimizing the candidate boxes obtained by the BING method to obtain the image object regions, and then performing aesthetic evaluation on the image object regions using image composition features. The method integrates and improves composition-related image features, thereby improving aesthetic evaluation performance, and can perform effective aesthetic evaluation of computer pictures.

Description

An aesthetic evaluation method for computer pictures
Technical field
The invention belongs to the technical field of computer image processing and specifically relates to an aesthetic evaluation method for computer pictures.
Background art
With the continuing increase in image acquisition devices, the amount of image data is growing explosively. How to perform aesthetic evaluation of large numbers of images quickly, effectively and accurately is therefore one of the current research hotspots. Pictures of high aesthetic value bring people greater visual enjoyment and improve their quality of life.
Performing aesthetic evaluation with computer vision techniques is a very challenging task. First, image features relevant to aesthetic evaluation must be chosen automatically from a still image, and these features must be further transformed and combined before the aesthetic value of the image can be assessed accurately. Evaluating image aesthetics accurately by computer makes it possible to filter out pictures of low aesthetic value and retain those of high aesthetic value.
The usual workflow for evaluating the aesthetic value of computer pictures is to choose image features relevant to image aesthetic evaluation, combine them according to certain rules to describe the aesthetic value of the image, and then train a model with tools such as a support vector machine (SVM) to classify the aesthetic value of target images. In research on computer aesthetic-value assessment, the choice and design of image aesthetic features has shown a trend from low-level to high-level features. Low-level features usually do not contain semantic information relevant to image aesthetic value; common examples include color features, texture features and shape features. High-level features are usually combinations of low-level features constructed with basic aesthetic rules in mind.
To date, many researchers have proposed aesthetic assessment methods based on computer vision. For example, some researchers propose three categories of features: composition features, picture content and image background environment. The composition features include the aesthetic rule known as the rule of thirds. Picture content mainly concerns the detection of typical objects such as people, animals and plants, while the image background environment mainly concerns the surroundings of the scene, for example whether it is indoors and, if outdoors, the weather conditions. These high-level features can be obtained by training on a series of relevant low-level features, and using such high-level features, which carry some semantic information, has achieved fairly good results. Regarding the region from which features are extracted, existing related work includes extracting global features over the whole picture, extracting local features on local regions of the picture, and further combining the extracted local features.
Summary of the invention
The object of the present invention is to provide a robust, accurate and adaptable aesthetic assessment algorithm based on computer vision.
The computer-vision-based aesthetic evaluation method provided by the invention uses the composition of the image to design high-level aesthetic features, that is, it provides a new combination of aesthetic features, thereby obtaining a robust, accurate and adaptable aesthetic assessment algorithm based on computer vision.
The computer-vision-based aesthetic assessment algorithm proposed by the invention is based on the composition features of object regions and applies aesthetic rules to regional high-level features to perform aesthetic assessment, so that computer pictures can be evaluated effectively. Under a variety of images and complex backgrounds, the method can reflect the aesthetic value of an image relatively accurately. The specific steps are as follows:
(1) extract object regions using the BING method;
(2) perform aesthetic assessment on the object regions using composition features.
Wherein:
The extraction of object regions in step (1) proceeds as follows:
(11) for each input image, sample a number of candidate windows and resize each of them to 8 × 8;
(12) vectorize the 8 × 8 windows to obtain 64-dimensional vectors and train a classifier on them (see the sketch after this list);
(13) model the ternary relation among the window size, the classifier output score, and whether the window finally contains an object, then use a further classifier to select, for each window size, the highest-scoring windows as the target boxes;
(14) cluster and optimize the obtained candidate boxes.
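As an illustration of steps (11) and (12), the sketch below resizes sampled candidate windows to 8 × 8 and flattens them into 64-dimensional vectors for classifier training. It is a minimal sketch and not the BING implementation itself; the use of OpenCV resizing and the (x, y, w, h) window format are assumptions made only for illustration.

```python
import cv2
import numpy as np

def windows_to_features(gray_image, windows):
    """Resize each candidate window to 8x8 and flatten it to a 64-dim vector.

    gray_image: 2-D numpy array (grayscale image).
    windows:    list of (x, y, w, h) candidate windows (assumed format).
    Returns an (n, 64) feature matrix suitable for training a classifier.
    """
    features = []
    for (x, y, w, h) in windows:
        patch = gray_image[y:y + h, x:x + w]
        small = cv2.resize(patch, (8, 8))                   # normalize window size to 8x8
        features.append(small.astype(np.float32).ravel())   # 64-dimensional vector
    return np.vstack(features)
```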
The aesthetic assessment on the object regions using composition features in step (2) proceeds as follows:
(21) the composition of an image mainly considers the rule of thirds, the diagonal rule and the principle of balanced proportion; the BING method is used to obtain the image object regions, and features are designed from the spatial information of these local regions;
(22) a classifier is trained on the designed features to obtain the aesthetic assessment model (a small training sketch follows).
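A minimal sketch of step (22): the designed composition features are stacked into one feature vector per image and a classifier is trained on them. The patent's background section mentions support vector machines as a common tool, so the scikit-learn SVC below, its parameters, and the feature and label arrays are assumptions used only to illustrate the training step, not the patent's exact classifier.

```python
import numpy as np
from sklearn.svm import SVC

def train_aesthetic_model(feature_vectors, labels):
    """Train an aesthetic assessment model on composition features.

    feature_vectors: (n_images, n_features) array, e.g. the five features
                     (location, angle, area, shape, depth of field) per image.
    labels:          (n_images,) array of aesthetic classes (e.g. high / low).
    """
    model = SVC(kernel="rbf", C=1.0)   # assumed classifier choice
    model.fit(feature_vectors, labels)
    return model

# usage sketch (compute_features is a hypothetical helper that returns the
# five composition features of one image):
# X = np.stack([compute_features(img) for img in images])
# model = train_aesthetic_model(X, y)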
The clustering and optimization of the obtained candidate boxes in step (1) proceeds as follows:
(141) the similarity measure used for clustering is defined as the ratio of the intersection to the union of two candidate boxes: S(B_i, B_j) = |B_i ∩ B_j| / |B_i ∪ B_j|, where B_i and B_j denote image object regions (candidate boxes);
(142) preset a number of cluster centres, compute the similarity of every pair of candidate boxes, and store these values in a two-dimensional matrix;
(143) for each candidate box, accumulate its similarity with all candidate boxes, and take the candidate box with the highest accumulated value as the first initial cluster centre;
(144) among the other candidate boxes whose similarity to this initial cluster centre is below a manually set threshold, again take the candidate box with the highest accumulated similarity as the next cluster centre;
(145) repeat this process: among the candidate boxes whose similarity to every existing initial cluster centre is below the threshold, choose the candidate box with the highest accumulated similarity and add it as a new initial cluster centre, until no candidate box remains whose similarity to all initial cluster centres is below the threshold; this yields a set of cluster centres;
(146) for each cluster, mark the region covered by x% of its candidate boxes, where x is a manually set value;
(147) for the marked part, find closed regions, perform a noise judgment on their edges, and take the candidate boxes with the least edge noise as the final result (a clustering sketch follows these steps).
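The sketch below illustrates steps (141)-(145): the intersection-over-union similarity of candidate boxes and the greedy selection of initial cluster centres by accumulated similarity. The (x, y, w, h) box format, the threshold name `tau`, and the loop structure are assumptions for illustration rather than the patent's exact implementation.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def select_cluster_centres(boxes, tau):
    """Greedy choice of initial cluster centres (steps 142-145)."""
    n = len(boxes)
    sim = np.array([[iou(boxes[i], boxes[j]) for j in range(n)] for i in range(n)])
    accumulated = sim.sum(axis=1)                 # step 143: accumulated similarity
    centres = [int(np.argmax(accumulated))]       # first centre: highest accumulation
    while True:
        # candidates whose similarity to every existing centre is below the threshold
        far = [i for i in range(n)
               if i not in centres and all(sim[i, c] < tau for c in centres)]
        if not far:
            break
        centres.append(max(far, key=lambda i: accumulated[i]))
    return centres
```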
The design of features from the spatial information of these local regions in step (2) proceeds as follows:
(211) Image object location feature. A harmoniously composed image often satisfies the "rule of thirds". The feature is designed from the centroid (c_x, c_y) of each image object region R (the subscripts x and y denote the coordinates in the x and y directions) and from the one of the four golden-section points of the image that is nearest to this centroid, denoted (g_x, g_y). The feature value represents the distance of the image object region from the golden-section point: it attains its maximum of 1 when the centroid is nearest and its minimum of -1 when it is farthest, so the image object location feature is mapped into the interval [-1, 1], which is convenient for further study.
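The exact formula for the location feature appears only as an image in the source, so the sketch below is an assumption: it computes the distance from an object centroid to the nearest of the four thirds intersections and maps it linearly into [-1, 1] (1 when the centroid coincides with a thirds point, -1 at the maximum possible distance), which matches the behaviour described above but is not guaranteed to be the patent's exact expression.

```python
import math

def thirds_location_feature(cx, cy, W, H):
    """Rule-of-thirds location feature for an object centroid (cx, cy).

    W, H: image width and height. Returns a value in [-1, 1]:
    1 when the centroid lies on a thirds intersection, -1 when it is as far
    from every thirds intersection as geometrically possible (assumed mapping).
    """
    thirds = [(W / 3, H / 3), (2 * W / 3, H / 3),
              (W / 3, 2 * H / 3), (2 * W / 3, 2 * H / 3)]
    d = min(math.hypot(cx - gx, cy - gy) for gx, gy in thirds)
    # the farthest any image point can be from its nearest thirds intersection
    # is a corner's distance to that intersection (assumed normalization)
    d_max = math.hypot(W / 3, H / 3)
    return 1.0 - 2.0 * d / d_max
```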
(212) Image object angle feature. Composing along a diagonal (the diagonal rule) is another important way to improve the aesthetic quality of a composition; two lines parallel to the diagonal are drawn to delimit a diagonal region. For the diagonal from the top-left to the bottom-right corner, and for convenience of description, let coordinate (0, 0) denote the top-left corner of the image and (w, h) the bottom-right corner, where w is the image width and h is the image height. The first parallel line starts at (w/6, 0) and ends at (w, 5h/6); the second parallel line starts at (0, h/6) and ends at (5w/6, h). The region between these two parallel lines is taken as the diagonal region and denoted T.
The feature is then designed on the basis of this diagonal region T (see the sketch below).
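The diagonal-rule feature formula is likewise shown only as an image in the source. One plausible reading, sketched below under that assumption, is the fraction of a candidate box's area that falls inside the diagonal band T defined by the two parallel lines above; the band test uses the line equations derived from the endpoints given in the text, and the grid-sampling approximation is purely for illustration.

```python
def diagonal_band_feature(box, W, H, samples=50):
    """Approximate fraction of a candidate box lying inside the diagonal band T.

    box: (x, y, w, h). The band T is the set of points with
    |y - (H/W) * x| <= H/6, i.e. the region between the two parallels
    (W/6, 0)-(W, 5H/6) and (0, H/6)-(5W/6, H). Estimated by sampling a grid
    of points inside the box (assumed formulation).
    """
    x, y, w, h = box
    inside = 0
    for i in range(samples):
        for j in range(samples):
            px = x + (i + 0.5) * w / samples
            py = y + (j + 0.5) * h / samples
            if abs(py - (H / W) * px) <= H / 6:
                inside += 1
    return inside / (samples * samples)
```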
(213) Image object area feature. To satisfy the principle of balanced proportion, the area of the image object must occupy a reasonable proportion of the whole image; the area of an image object is approximated by the area of its candidate box.
The feature is therefore designed as:
f_area = ((1/n) Σ_i w_i · h_i) / (W · H)
where w_i is the width of the i-th candidate box, h_i is its height, and n is the number of candidate boxes; the numerator is the average area of the candidate boxes containing the image objects, W is the total image width, H is the total image height, and their product W · H is the total image area. The formula thus represents the fraction of the total image area occupied by the image object regions.
(214) Image object shape feature. The shape of the image object should likewise be kept within a reasonable range; the shape of an image object is approximated by the aspect ratio of its candidate box, so the feature is designed as:
f_shape = ((1/n) Σ_i w_i / h_i) / (W / H)
where, as before, w_i is the width of the i-th candidate box, h_i is its height, and n is the number of candidate boxes; the numerator is the average aspect ratio of the candidate boxes containing the image objects, W is the total image width, H is the total image height, and their ratio W / H is the aspect ratio of the whole image. The formula thus represents how similar the aspect ratio of the image object regions is to that of the whole image; the closer its value is to 1, the higher the similarity.
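A minimal sketch of the area feature (213) and shape feature (214) as reconstructed above: the average candidate-box area over the image area, and the average candidate-box aspect ratio over the image aspect ratio. The (x, y, w, h) box format is an assumption.

```python
def area_and_shape_features(boxes, W, H):
    """Composition features (213) and (214) for candidate boxes (x, y, w, h).

    Returns (f_area, f_shape):
    f_area  = mean(w_i * h_i) / (W * H)   -- average share of the image covered
    f_shape = mean(w_i / h_i) / (W / H)   -- closeness of box and image aspect ratios
    """
    n = len(boxes)
    mean_area = sum(w * h for (_, _, w, h) in boxes) / n
    mean_ratio = sum(w / h for (_, _, w, h) in boxes) / n
    f_area = mean_area / (W * H)
    f_shape = mean_ratio / (W / H)
    return f_area, f_shape
```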
(215) Image object depth-of-field feature. Pictures whose image objects are rendered with higher sharpness obtain higher aesthetic evaluations. The gray-level change rate is used to represent sharpness. Let g(i, j) denote the gray value of the pixel at coordinate (i, j); its neighbouring pixels are considered as follows:
the gray-level change at this pixel is computed from the differences between its gray value and those of its neighbours; since the horizontal and vertical neighbours lie at distance 1 while the diagonal neighbours lie at distance sqrt(2), the diagonal differences are weighted to account for their larger distance.
The feature is then computed as the average of this gray-level change over the pixels of the object region. The larger the feature value, the larger the average gray-level change per pixel, and hence the higher the sharpness, which makes the feature applicable to aesthetic evaluation.
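The neighbourhood formula for the depth-of-field feature is also shown only as an image in the source; the sketch below assumes the reading given above: the per-pixel gray-level change is the sum of absolute differences to the eight neighbours, with the four diagonal differences weighted by 1/sqrt(2) to account for their larger distance, averaged over the object region. The weight value and the box format are assumptions.

```python
import math
import numpy as np

def sharpness_feature(gray, box):
    """Average gray-level change inside a candidate box (x, y, w, h).

    gray: 2-D numpy array of gray values. Horizontal/vertical neighbour
    differences get weight 1, diagonal ones weight 1/sqrt(2) (assumed weights).
    """
    x, y, w, h = box
    region = gray[y:y + h, x:x + w].astype(np.float64)
    diag_w = 1.0 / math.sqrt(2.0)
    total = 0.0
    count = 0
    for i in range(1, region.shape[0] - 1):
        for j in range(1, region.shape[1] - 1):
            c = region[i, j]
            change = (abs(c - region[i - 1, j]) + abs(c - region[i + 1, j]) +
                      abs(c - region[i, j - 1]) + abs(c - region[i, j + 1]) +
                      diag_w * (abs(c - region[i - 1, j - 1]) + abs(c - region[i - 1, j + 1]) +
                                abs(c - region[i + 1, j - 1]) + abs(c - region[i + 1, j + 1])))
            total += change
            count += 1
    return total / count if count else 0.0
```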
Compared with the prior art, the invention has the following benefits:
1. The composition-related features based on object regions are integrated and improved, improving the performance of this task in several respects;
2. Aesthetic assessment is performed on the results of image objectness detection, and an existing data set is used to train the classifier effectively; because the composition features based on the detected image objects have different sensitivities for pictures with different aesthetic scores, choosing a training set suited to this sensitivity improves the classification performance and yields a fairly robust classifier;
3. The invention can automatically filter out pictures of low aesthetic value and present pictures of high aesthetic value to the user.
Brief description of the drawings
Fig. 1 is the overall flow chart of the aesthetic assessment algorithm of the present invention.
Fig. 2 is the flow chart of obtaining image objects in step (1) of Fig. 1.
Fig. 3 shows the result of the cluster optimization in step (14) of Fig. 2.
Fig. 4 is the flow chart of the aesthetic assessment on composition-relevant regions in step (2) of Fig. 1.
Fig. 5 shows the result of the angle feature in step (212) of Fig. 4.
Detailed description of the invention
The present invention is described in further detail below in conjunction with the drawings and embodiments.
Referring to Fig. 1, the aesthetic assessment algorithm of the present invention specifically comprises the following steps:
(1) extract object regions using the BING method, as shown in Fig. 2, which comprises the following steps:
(11) for each input image, sample a number of candidate windows and resize each of them to 8 × 8;
(12) vectorize the 8 × 8 windows to obtain 64-dimensional vectors and train a classifier on them;
(13) model the ternary relation among the window size, the classifier output score, and whether the window finally contains an object, then use a further classifier to select, for each window size, the highest-scoring windows as the target boxes;
(14) cluster and optimize the obtained candidate boxes. Fig. 3 shows the clustering and optimization results, where panels a, c and b, d correspond pairwise;
(2) perform aesthetic assessment on the object regions using composition features; the concrete steps are shown in Fig. 4;
(21) the composition of an image mainly considers the rule of thirds, the diagonal rule and the principle of balanced proportion; the BING method is used to obtain the image object regions, and features are designed from the spatial information of these local regions.
(211) Design the image object location feature, computed from the rule of thirds of the composition: the feature is designed from the centroid (c_x, c_y) of each image object region R and the nearest of the four golden-section points (g_x, g_y) of the image to this centroid. The feature value represents the distance of the image object region from the golden-section point; it attains its maximum of 1 when the centroid is nearest and its minimum of -1 when it is farthest.
(212) Design the image object angle feature, that is, compute the corresponding feature with the diagonal-rule composition method: two lines parallel to the diagonal are drawn to delimit the diagonal region. For the diagonal from the top-left to the bottom-right corner, and for convenience of description, let coordinate (0, 0) denote the top-left corner of the image and (w, h) the bottom-right corner, where w is the image width and h is the image height. The first parallel line starts at (w/6, 0) and ends at (w, 5h/6); the second parallel line starts at (0, h/6) and ends at (5w/6, h); the concrete result is shown in Fig. 5. The region between these two parallel lines is taken as the diagonal region and denoted T.
The feature is designed on the basis of this diagonal region T.
(213) Design the image object area feature. To satisfy the principle of balanced proportion, the area of the image object must occupy a reasonable proportion of the whole image; the area of an image object is approximated by the area of its candidate box.
The feature is therefore designed as f_area = ((1/n) Σ_i w_i · h_i) / (W · H), where w_i is the width of the i-th candidate box, h_i is its height, n is the number of candidate boxes, the numerator is the average area of the candidate boxes containing the image objects, and W · H is the total image area (W is the total image width and H the total image height); the formula represents the fraction of the total image area occupied by the image object regions.
(214) Design the image object shape feature. The shape of the image object should be kept within a reasonable range; the shape of an image object is approximated by the aspect ratio of its candidate box, so the feature is designed as f_shape = ((1/n) Σ_i w_i / h_i) / (W / H), where, as before, w_i is the width of the i-th candidate box, h_i is its height, n is the number of candidate boxes, the numerator is the average aspect ratio of the candidate boxes containing the image objects, and W / H is the aspect ratio of the whole image; the formula represents how similar the aspect ratio of the image object regions is to that of the whole image, and the closer its value is to 1, the higher the similarity.
(215) Image object depth-of-field feature. Pictures whose image objects are rendered with higher sharpness obtain higher aesthetic evaluations. We use the gray-level change rate to represent sharpness: g(i, j) denotes the gray value of the pixel at coordinate (i, j), and its neighbouring pixels are considered as described above.
The gray-level change at this pixel is computed from the differences between its gray value and those of its neighbours; since the horizontal and vertical neighbours lie at distance 1 while the diagonal neighbours lie at distance sqrt(2), the diagonal differences are weighted to account for their larger distance.
The feature is then computed as the average of this gray-level change over the pixels of the object region. This feature represents the sharpness of the image and can therefore be applied to aesthetic evaluation.
(22) A classifier is trained on the designed features to obtain the aesthetic assessment model.

Claims (3)

1. An aesthetic assessment method for computer pictures, characterized in that it specifically comprises the following steps:
(1) extracting object regions using the BING method;
(2) performing aesthetic assessment on the object regions using composition features;
wherein:
the extraction of the object regions in step (1) proceeds as follows:
(11) for each input image, sampling a number of candidate windows and resizing each of them to 8 × 8;
(12) vectorizing the 8 × 8 windows to obtain 64-dimensional vectors and training a classifier on them;
(13) modelling the ternary relation among the window size, the classifier output score, and whether the window finally contains an object, then using a further classifier to select, for each window size, the highest-scoring windows as the target-box output;
(14) clustering and optimizing the obtained candidate boxes;
the aesthetic assessment on the object regions using composition features in step (2) proceeds as follows:
(21) according to the rule of thirds, the diagonal rule and the principle of balanced proportion of image composition, obtaining the image object regions with the BING method and designing features from the spatial information of these local regions;
(22) training a classifier on the designed features to obtain an aesthetic assessment model.
2. The aesthetic assessment method for computer pictures according to claim 1, characterized in that the clustering and optimization of the obtained candidate boxes in step (1) proceeds as follows:
(141) defining the ratio of the intersection to the union of two candidate boxes as the similarity measure for clustering: S(B_i, B_j) = |B_i ∩ B_j| / |B_i ∪ B_j|;
(142) presetting a number of cluster centres, computing the similarity of every pair of candidate boxes, and storing these values in a two-dimensional matrix;
(143) for each candidate box, accumulating its similarity with all candidate boxes, and taking the candidate box with the highest accumulated value as the first initial cluster centre;
(144) among the other candidate boxes whose similarity to this initial cluster centre is below a threshold, again taking the candidate box with the highest accumulated similarity as the next cluster centre;
(145) repeating this process: among the candidate boxes whose similarity to every existing initial cluster centre is below the threshold, choosing the candidate box with the highest accumulated similarity and adding it as a new initial cluster centre, until no candidate box remains whose similarity to all initial cluster centres is below the threshold, thereby obtaining a set of cluster centres; the threshold is a manually set value;
(146) for each cluster, marking the region covered by x% of its candidate boxes;
(147) for the marked part, finding closed regions, performing a noise judgment on their edges, and taking the candidate boxes with the least edge noise as the final result;
wherein B_i and B_j denote image object regions (candidate boxes).
3. The aesthetic assessment method for computer pictures according to claim 2, characterized in that the design of features from the spatial information of these local regions in step (2) proceeds as follows:
(211) image object location feature: an image with harmonious composition satisfies the "rule of thirds", and the feature is designed from the centroid (c_x, c_y) of each image object region R and the nearest of the four golden-section points (g_x, g_y) of the image to this centroid; the feature value represents the distance of the image object region from the golden-section point, attaining its maximum of 1 when the centroid is nearest and its minimum of -1 when it is farthest, so that the image object location feature is mapped into the interval [-1, 1];
(212) image object angle feature: two lines parallel to the diagonal are drawn to delimit the diagonal region; for the diagonal from the top-left to the bottom-right corner, coordinate (0, 0) denotes the top-left corner of the image and (w, h) the bottom-right corner, where w is the image width and h is the image height; the first parallel line starts at (w/6, 0) and ends at (w, 5h/6);
the second parallel line starts at (0, h/6) and ends at (5w/6, h); the region between these two parallel lines is taken as the diagonal region and denoted T;
the feature is designed on the basis of this diagonal region T;
(213) image object area feature, designed as f_area = ((1/n) Σ_i w_i · h_i) / (W · H),
wherein w_i is the width of the i-th candidate box, h_i is its height, and n is the number of candidate boxes; the numerator is the average area of the candidate boxes containing the image objects, and the denominator W · H is the total image area, W being the total image width and H the total image height; the formula represents the fraction of the total image area occupied by the image object regions;
(214) image object shape feature, designed as f_shape = ((1/n) Σ_i w_i / h_i) / (W / H),
wherein, likewise, w_i is the width of the i-th candidate box, h_i is its height, and n is the number of candidate boxes; the numerator is the average aspect ratio of the candidate boxes containing the image objects, and the denominator W / H is the aspect ratio of the whole image; the formula represents how similar the aspect ratio of the image object regions is to that of the whole image, and the closer its value is to 1, the higher the similarity;
(215) image object depth-of-field feature: pictures whose image objects are rendered with higher sharpness obtain higher aesthetic evaluations, and the gray-level change rate is used to represent sharpness; g(i, j) denotes the gray value of the pixel at coordinate (i, j), and its neighbouring pixels are considered as follows:
the gray-level change at this pixel is calculated from the differences between its gray value and those of its neighbouring pixels;
the depth-of-field feature is then calculated as the average of this gray-level change over the pixels of the object region;
the larger the feature value, the larger the average gray-level change per pixel, and the higher the sharpness.
CN201610157571.5A 2016-03-21 2016-03-21 An aesthetic evaluation method for computer pictures Expired - Fee Related CN105787966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610157571.5A CN105787966B (en) 2016-03-21 2016-03-21 An aesthetic evaluation method for computer pictures

Publications (2)

Publication Number Publication Date
CN105787966A true CN105787966A (en) 2016-07-20
CN105787966B CN105787966B (en) 2019-05-31

Family

ID=56393133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610157571.5A Expired - Fee Related CN105787966B (en) 2016-03-21 2016-03-21 An aesthetic evaluation method for computer pictures

Country Status (1)

Country Link
CN (1) CN105787966B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102576461A (en) * 2009-09-25 2012-07-11 伊斯曼柯达公司 Estimating aesthetic quality of digital images
US8311364B2 (en) * 2009-09-25 2012-11-13 Eastman Kodak Company Estimating aesthetic quality of digital images
CN103218619A (en) * 2013-03-15 2013-07-24 华南理工大学 Image aesthetics evaluating method
CN104346801A (en) * 2013-08-02 2015-02-11 佳能株式会社 Image-composition evaluating device, information processing device and method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MING-MING CHENG et al.: "BING: Binarized Normed Gradients for Objectness Estimation at 300fps", 2014 IEEE Conference on Computer Vision and Pattern Recognition *
陈仁杰: "Research on Content-Sensitive Image Retargeting Algorithms" (内容敏感的图像重映射算法研究), China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018090355A1 (en) * 2016-11-21 2018-05-24 中国科学院自动化研究所 Method for auto-cropping of images
CN106650737B (en) * 2016-11-21 2020-02-28 中国科学院自动化研究所 Automatic image cutting method
CN106650737A (en) * 2016-11-21 2017-05-10 中国科学院自动化研究所 Image automatic cutting method
CN106778788A (en) * 2017-01-13 2017-05-31 河北工业大学 The multiple features fusion method of aesthetic evaluation is carried out to image
CN106778788B (en) * 2017-01-13 2019-11-12 河北工业大学 The multiple features fusion method of aesthetic evaluation is carried out to image
CN107146198A (en) * 2017-04-19 2017-09-08 中国电子科技集团公司电子科学研究院 A kind of intelligent method of cutting out of photo and device
CN107146198B (en) * 2017-04-19 2022-08-16 中国电子科技集团公司电子科学研究院 Intelligent photo cutting method and device
CN107145905A (en) * 2017-05-02 2017-09-08 重庆大学 The image recognizing and detecting method that elevator fastening nut loosens
CN107145905B (en) * 2017-05-02 2020-04-21 重庆大学 Image recognition detection method for looseness of elevator fastening nut
CN107590445B (en) * 2017-08-25 2019-05-21 西安电子科技大学 Aesthetic images quality evaluating method based on EEG signals
CN107590445A (en) * 2017-08-25 2018-01-16 西安电子科技大学 Aesthetic images quality evaluating method based on EEG signals
CN110796663A (en) * 2019-09-17 2020-02-14 北京迈格威科技有限公司 Picture clipping method, device, equipment and storage medium
CN110796663B (en) * 2019-09-17 2022-12-02 北京迈格威科技有限公司 Picture clipping method, device, equipment and storage medium
CN111046953A (en) * 2019-12-12 2020-04-21 电子科技大学 Image evaluation method based on similarity comparison
CN111046953B (en) * 2019-12-12 2022-06-21 电子科技大学 Image evaluation method based on similarity comparison

Also Published As

Publication number Publication date
CN105787966B (en) 2019-05-31

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190531