CN106650737A - Image automatic cutting method - Google Patents

Image automatic cutting method

Info

Publication number
CN106650737A
CN106650737A CN201611041091.9A
Authority
CN
China
Prior art keywords
image
cutting
candidate
aesthetic feeling
cutting image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611041091.9A
Other languages
Chinese (zh)
Other versions
CN106650737B (en)
Inventor
黄凯奇
赫然
考月英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201611041091.9A priority Critical patent/CN106650737B/en
Publication of CN106650737A publication Critical patent/CN106650737A/en
Application granted granted Critical
Publication of CN106650737B publication Critical patent/CN106650737B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an automatic image cropping method. The method comprises the following steps: extracting an aesthetic response map and a gradient energy map of the image to be cropped; densely extracting candidate crops from the image to be cropped; screening the candidate crops based on the aesthetic response map; and, based on the aesthetic response map and the gradient energy map, estimating the composition scores of the screened candidate crops and determining the highest-scoring candidate crop as the cropped image. The scheme uses the aesthetic response map to probe the aesthetically influential regions of the picture and to determine which aesthetically valuable portion to preserve, so that the high aesthetic quality of the cropped image is retained to the greatest extent; at the same time, the gradient energy map is used to analyse the distribution of gradients, and the composition score of a candidate crop is evaluated based on both maps. The method provided by the embodiment of the invention compensates for the weakness of existing image-composition representations and solves the problem of how to improve the robustness and precision of automatic image cropping.

Description

Automatic image cropping method
Technical field
The present invention relates to the technical fields of pattern recognition, machine learning and computer vision, and more particularly to an automatic image cropping method.
Background art
With the rapid development of computer technology and digital media technology, people's demands and expectations in fields such as computer vision, artificial intelligence and machine perception keep growing. Automatic cropping, one of the most important and common tasks in automatic image editing, is therefore attracting increasing attention. Automatic image cropping removes unnecessary regions and emphasises the region of interest, thereby improving the overall composition and aesthetic quality of the image. An effective automatic image cropping method not only frees people from tedious work but can also offer professional editing suggestions to laymen.
Because image cropping is a highly subjective task, it is difficult for fixed rules to account for all influencing factors. Traditional automatic cropping methods usually use saliency maps to identify the main region or region of interest of the image, and find the crop region either by minimising an energy function built from hand-crafted rules or by learning a classifier. These hand-crafted rules, however, are not comprehensive enough for so subjective a task, and their precision also struggles to meet users' requirements.
In view of this, the present invention is proposed.
Summary of the invention
To solve the above problems of the prior art, namely the technical problem of how to improve the robustness and precision of automatic image cropping, an automatic image cropping method is provided.
To achieve this goal, the following technical scheme is provided:
An automatic image cropping method, the method comprising:
extracting an aesthetic response map and a gradient energy map of the image to be cropped;
densely extracting candidate crops from the image to be cropped;
screening the candidate crops based on the aesthetic response map;
based on the aesthetic response map and the gradient energy map, estimating the composition scores of the screened candidate crops, and determining the highest-scoring candidate crop as the cropped image.
Further, extracting the aesthetic response map and the gradient energy map of the image to be cropped specifically comprises:
using a deep convolutional neural network and the class activation mapping method, extracting the aesthetic response map of the image to be cropped with the following formula:
M(x, y) = Σ_{k=1}^{K} w_k · f_k(x, y)
where M(x, y) is the aesthetic response at spatial position (x, y); K is the total number of channels of the feature maps of the last convolutional layer of the deep convolutional neural network; k indexes the channels; f_k(x, y) is the feature value of the k-th channel at position (x, y); and w_k is the weight from the pooled result of the k-th feature map to the high-aesthetics class;
smoothing the image to be cropped and computing the gradient magnitude of each pixel, thereby obtaining the gradient energy map.
Further, the deep convolutional neural network is trained as follows:
convolutional layers are placed at the bottom of the deep convolutional neural network structure;
after the last convolutional layer, global average pooling reduces each feature map to a single value;
a fully connected layer with as many outputs as there are aesthetic-quality classes is connected, followed by a loss function.
Further, screening the candidate crops based on the aesthetic response map specifically comprises:
computing the aesthetic retention score of each candidate crop by the following formula:
S_a(C) = ( Σ_{(i,j)∈C} A_{(i,j)} ) / ( Σ_{(i,j)∈I} A_{(i,j)} )
where S_a(C) is the aesthetic retention score of candidate crop C; (i, j) is a pixel position; I is the original image; and A_{(i,j)} is the aesthetic response at position (i, j);
sorting all candidate crops by aesthetic retention score in descending order;
keeping the top-scoring portion of the candidate crops.
Further, estimating the composition scores of the screened candidate crops based on the aesthetic response map and the gradient energy map, and determining the highest-scoring candidate crop as the cropped image, specifically comprises:
building a composition model based on the aesthetic response map and the gradient energy map;
estimating the composition scores of the screened candidate crops with the composition model, and determining the highest-scoring candidate crop as the cropped image.
Further, the composition model is obtained as follows:
a training image set is built, based on the aesthetic response map and the gradient energy map;
the training images are annotated with aesthetic-quality classes;
a deep convolutional neural network is trained with the annotated training images;
for the annotated training images, spatial pyramid features of the aesthetic response map and the gradient energy map are extracted with the trained deep convolutional neural network;
the extracted spatial pyramid features are concatenated;
a classifier is trained on them to learn composition rules automatically, yielding the composition model.
An embodiment of the present invention provides an automatic image cropping method. The method comprises: extracting an aesthetic response map and a gradient energy map of the image to be cropped; densely extracting candidate crops from the image to be cropped; screening the candidate crops based on the aesthetic response map; and, based on the aesthetic response map and the gradient energy map, estimating the composition scores of the screened candidate crops and determining the highest-scoring candidate crop as the cropped image. The scheme uses the aesthetic response map to probe the aesthetically influential regions of the picture and to determine which aesthetically valuable portion to preserve, so that the high aesthetic quality of the cropped image is retained to the greatest extent; at the same time, the gradient energy map is used to analyse the distribution of gradients, and the composition score of a candidate crop is evaluated based on both maps. The embodiment of the present invention compensates for the weakness of existing image-composition representations and solves the technical problem of how to improve the robustness and precision of automatic image cropping. It can be applied in the many fields that involve automatic image cropping, including image editing, photography and image retargeting.
Description of the drawings
Fig. 1 is a schematic flowchart of the automatic image cropping method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the deep convolutional neural network according to an embodiment of the present invention;
Fig. 3a is a schematic diagram of an image to be cropped according to an embodiment of the present invention;
Fig. 3b is a schematic diagram of the cropped image according to an embodiment of the present invention.
Detailed description of the embodiments
The technical problems solved by the embodiments of the present invention, the technical schemes adopted and the technical effects achieved are described clearly and completely below with reference to the accompanying drawings and specific embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the application. Based on the embodiments in the application, all other equivalent or obviously modified embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention. The embodiments of the present invention can be realised in many different ways defined and covered by the claims.
Deep learning has developed rapidly and achieved good results in many fields. The embodiments of the present invention use deep learning to automatically learn the image regions that matter for cropping and to learn cropping rules comprehensively, so that as much of the high-aesthetics region as possible is retained when cropping.
To this end, an embodiment of the present invention provides an automatic image cropping method. Fig. 1 schematically shows the flow of the method. As shown in Fig. 1, the method can include:
S100: extracting the aesthetic response map and the gradient energy map of the image to be cropped.
Specifically, this step can include:
S101: using a deep convolutional neural network and the class activation mapping method, extracting the aesthetic response map of the image to be cropped with the following formula:
M(x, y) = Σ_{k=1}^{K} w_k · f_k(x, y)
where M(x, y) is the aesthetic response at spatial position (x, y); K is the total number of channels of the feature maps f of the last convolutional layer of the trained deep convolutional neural network; k indexes the channels; f_k(x, y) is the feature value of the k-th channel at position (x, y); and w_k is the weight from the pooled result of the k-th feature map to the high-aesthetics class.
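As an illustrative sketch only (the array names, shapes and toy values below are assumptions, not the patented implementation), the per-pixel weighted sum over channels can be computed with NumPy:

```python
import numpy as np

def aesthetic_response_map(feature_maps, class_weights):
    """M(x, y) = sum_k w_k * f_k(x, y): weight each channel of the last
    convolutional layer by its weight for the high-aesthetics class and
    sum over the channels.

    feature_maps : (K, H, W) array, channels of the last conv layer.
    class_weights: (K,) array, weights w_k for the high-aesthetics class.
    """
    # tensordot contracts the channel axis, leaving an (H, W) response map
    return np.tensordot(class_weights, feature_maps, axes=(0, 0))

# toy example: K = 3 constant channels on a 4 x 4 grid
f = np.stack([np.full((4, 4), v) for v in (1.0, 2.0, 3.0)])
w = np.array([0.5, 0.25, 0.25])
M = aesthetic_response_map(f, w)  # every entry is 0.5*1 + 0.25*2 + 0.25*3
```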
Before extracting the aesthetic response map, the above step can train the deep convolutional neural network according to actual needs. The training can proceed as follows:
Step 1: place convolutional layers at the bottom of the deep convolutional neural network structure.
Step 2: after the last convolutional layer, apply global average pooling so that each feature map is pooled to a single value.
Step 3: connect a fully connected layer with as many outputs as there are aesthetic-quality classes, followed by a loss function.
Fig. 2 schematically shows such a deep convolutional neural network structure.
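Steps 2 and 3 can be sketched minimally as below (NumPy stands in for the unspecified deep-learning framework; the shapes and values are illustrative assumptions):

```python
import numpy as np

def gap_classifier_head(feature_maps, fc_weights, fc_bias):
    """Global average pooling followed by a fully connected layer.

    feature_maps: (K, H, W) output of the last convolutional layer.
    fc_weights  : (num_classes, K) fully connected weights; the row for
                  the high-aesthetics class supplies the w_k used in the
                  response-map formula.
    """
    pooled = feature_maps.mean(axis=(1, 2))   # each map pooled to one point
    return fc_weights @ pooled + fc_bias      # (num_classes,) class scores

# K = 2 feature maps of size 2 x 2, two aesthetic-quality classes
f = np.arange(8, dtype=float).reshape(2, 2, 2)
scores = gap_classifier_head(f, np.eye(2), np.zeros(2))
# the pooled values are 1.5 and 5.5, so scores == [1.5, 5.5]
```

Training such a head against a classification loss is what ties each w_k to the high-aesthetics class.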
Steps 1-3 train a deep convolutional neural network model for the aesthetic-quality classification task. Then, using the network trained for that task together with the class activation mapping method and the above formula, the aesthetic response map M of the image to be cropped under the high-aesthetics class can be computed.
S102: smooth the image to be cropped and compute the gradient magnitude of each pixel, thereby obtaining the gradient energy map.
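A sketch of S102, assuming a simple 3x3 box filter for the smoothing (the patent does not specify which smoother is used) and central differences for the gradients:

```python
import numpy as np

def gradient_energy_map(image):
    """Smooth the image, then take the per-pixel gradient magnitude."""
    img = image.astype(float)
    # 3x3 box smoothing with edge padding (an assumed, minimal smoother)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    smoothed = sum(padded[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)) / 9.0
    gy, gx = np.gradient(smoothed)        # central differences per axis
    return np.hypot(gx, gy)               # gradient magnitude per pixel

E = gradient_energy_map(np.eye(8) * 10.0)  # energy concentrates near the diagonal
```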
S110: densely extract candidate crops from the image to be cropped.
Here, sliding windows of all sizes smaller than the image can be used to densely extract candidate crop windows from the image to be cropped; the candidate crops are taken from these windows.
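The dense extraction can be sketched as follows (the window-size fractions and the default stride are assumptions; the patent only requires windows of all sizes smaller than the image, and the concrete embodiment mentions a 30-pixel stride):

```python
def candidate_windows(img_w, img_h, stride=30, fracs=(0.5, 0.7, 0.9)):
    """Enumerate candidate crop windows (x, y, w, h) by sliding windows
    of several sizes over the image at a fixed stride.

    The size fractions and the stride are illustrative choices.
    """
    windows = []
    for frac in fracs:
        w, h = int(img_w * frac), int(img_h * frac)
        for y in range(0, img_h - h + 1, stride):
            for x in range(0, img_w - w + 1, stride):
                windows.append((x, y, w, h))
    return windows

wins = candidate_windows(300, 300)  # every window fits inside the image
```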
S120: screen the candidate crops based on the aesthetic response map.
Specifically, this step can include:
S121: compute the aesthetic retention score of each candidate crop by the following formula:
S_a(C) = ( Σ_{(i,j)∈C} A_{(i,j)} ) / ( Σ_{(i,j)∈I} A_{(i,j)} )
where S_a(C) is the aesthetic retention score of candidate crop C; (i, j) is a pixel position; I is the original image; and A_{(i,j)} is the aesthetic response at (i, j).
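A direct NumPy transcription of the score (the response map A and the toy windows below are assumptions for illustration):

```python
import numpy as np

def aesthetic_retention_score(A, window):
    """S_a(C): fraction of the whole image's aesthetic response that
    falls inside candidate crop C = (x, y, w, h)."""
    x, y, w, h = window
    return A[y:y + h, x:x + w].sum() / A.sum()

A = np.zeros((10, 10))
A[2:6, 2:6] = 1.0   # all aesthetic response sits in one 4x4 block
s_hit = aesthetic_retention_score(A, (2, 2, 4, 4))    # covers the block fully
s_miss = aesthetic_retention_score(A, (6, 6, 4, 4))   # misses it entirely
```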
This step builds the aesthetic retention model; the model screens out the candidate crop windows with the higher aesthetic retention scores.
S122: sort all candidate crops by aesthetic retention score in descending order.
S123: keep the top-scoring portion of the candidate crops.
For example, in practice the candidate crops in the top 10000 candidate crop windows can be retained.
S130: estimate the composition scores of the screened candidate crops based on the aesthetic response map and the gradient energy map, and determine the highest-scoring candidate crop as the cropped image.
Specifically, this step can be realised through steps S131 to S133.
S131: build the composition model based on the aesthetic response map and the gradient energy map.
When building the composition model, this step can train it according to the actual situation. During training, images with good composition can serve as positive samples and images with composition defects as negative samples.
The composition model can be trained as follows:
Step a: build a training image set, based on the aesthetic response map and the gradient energy map.
Step b: annotate the training images with aesthetic-quality classes.
Step c: train a deep convolutional neural network with the annotated training images.
The training process of this step may follow the above steps 1 to 3 and is not repeated here.
Step d: for the annotated training images, extract spatial pyramid features of the aesthetic response map and the gradient energy map with the trained deep convolutional neural network.
Step e: concatenate the extracted spatial pyramid features.
Step f: train a classifier on them to learn composition rules automatically, yielding the composition model.
The classifier can, for example, be a support vector machine.
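Steps d and e can be sketched as below (the pyramid levels 1x1, 2x2 and 4x4 are assumptions, since the patent does not fix them; the classifier of step f, e.g. an SVM, would then be trained on vectors like `feat`):

```python
import numpy as np

def spatial_pyramid(feat_map, levels=(1, 2, 4)):
    """Average-pool a 2-D map over successively finer grids and
    concatenate all cell averages into one feature vector."""
    H, W = feat_map.shape
    cells = []
    for n in levels:
        ys = np.linspace(0, H, n + 1).astype(int)
        xs = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cells.append(feat_map[ys[i]:ys[i + 1],
                                      xs[j]:xs[j + 1]].mean())
    return np.array(cells)

def composition_features(response_map, energy_map):
    # step e: concatenate the two pyramids into a single descriptor
    return np.concatenate([spatial_pyramid(response_map),
                           spatial_pyramid(energy_map)])

feat = composition_features(np.ones((8, 8)), np.zeros((8, 8)))
# 1 + 4 + 16 = 21 cells per map, 42 dimensions in total
```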
S132: estimate the composition scores of the screened candidate crops with the composition model, and determine the highest-scoring candidate crop as the cropped image.
Fig. 3a schematically shows an image to be cropped; Fig. 3b schematically shows the image after cropping.
The present invention is further illustrated below with a preferred embodiment.
Step A: feed an image data set annotated with aesthetic-quality classes into the deep convolutional neural network and train the aesthetic-quality classification model.
Step B: input an image data set annotated with composition classes into the trained deep convolutional neural network, extract the feature maps of the last convolutional layer, compute the aesthetic response map and the gradient energy map, and then train the composition model with a support vector machine classifier.
Step C: extract the aesthetic response map and the gradient energy map of the test image.
The extraction in this step follows the method of the training stage.
Step D: densely collect candidate crop windows from the test image.
For example, on a 1000 × 1000 test image, windows can be collected with a sliding window at a stride of 30 pixels.
Step E: screen the candidate crop windows with the aesthetic retention model.
In this step the aesthetic retention scores of the densely collected candidate crop windows are computed with the aesthetic retention model, and the part with the highest scores is kept, for example the top 10000 candidate crop windows.
Step F: evaluate the screened candidate crop windows with the composition model.
In this step the composition model trained in the training stage is used to score the composition of the screened candidate crop windows; the highest-scoring window becomes the final crop window, from which the cropped image is obtained.
In summary, the method provided by the embodiments of the present invention makes good use of the aesthetic response map and the gradient energy map to retain the aesthetic quality and the composition rules of the image to the greatest extent, achieving more robust and more precise automatic image cropping; this also verifies the effectiveness of the aesthetic response map and the gradient energy map for automatic image cropping.
Although the method provided by the embodiments of the present invention is described above in a particular order, those skilled in the art will understand that, to realise the effects of the embodiments, the steps may be performed in parallel or in a changed order; such simple variations all fall within the protection scope of the present invention.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable to a person familiar with the art within the technical scope disclosed herein shall be covered by the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (6)

1. An automatic image cropping method, characterised in that the method comprises:
extracting an aesthetic response map and a gradient energy map of the image to be cropped;
densely extracting candidate crops from the image to be cropped;
screening the candidate crops based on the aesthetic response map;
based on the aesthetic response map and the gradient energy map, estimating the composition scores of the screened candidate crops, and determining the highest-scoring candidate crop as the cropped image.
2. The method according to claim 1, characterised in that extracting the aesthetic response map and the gradient energy map of the image to be cropped specifically comprises:
using a deep convolutional neural network and the class activation mapping method, extracting the aesthetic response map of the image to be cropped with the following formula:
M(x, y) = Σ_{k=1}^{K} w_k · f_k(x, y)
where M(x, y) is the aesthetic response at spatial position (x, y); K is the total number of channels of the feature maps of the last convolutional layer of the deep convolutional neural network; k indexes the channels; f_k(x, y) is the feature value of the k-th channel at position (x, y); and w_k is the weight from the pooled result of the k-th feature map to the high-aesthetics class;
smoothing the image to be cropped and computing the gradient magnitude of each pixel, thereby obtaining the gradient energy map.
3. The method according to claim 2, characterised in that the deep convolutional neural network is trained as follows:
convolutional layers are placed at the bottom of the deep convolutional neural network structure;
after the last convolutional layer, global average pooling reduces each feature map to a single value;
a fully connected layer with as many outputs as there are aesthetic-quality classes is connected, followed by a loss function.
4. The method according to claim 1, characterised in that screening the candidate crops based on the aesthetic response map specifically comprises:
computing the aesthetic retention score of each candidate crop by the following formula:
S_a(C) = ( Σ_{(i,j)∈C} A_{(i,j)} ) / ( Σ_{(i,j)∈I} A_{(i,j)} )
where S_a(C) is the aesthetic retention score of candidate crop C; (i, j) is a pixel position; I is the original image; and A_{(i,j)} is the aesthetic response at position (i, j);
sorting all candidate crops by aesthetic retention score in descending order;
keeping the top-scoring portion of the candidate crops.
5. The method according to claim 1, characterised in that estimating the composition scores of the screened candidate crops based on the aesthetic response map and the gradient energy map, and determining the highest-scoring candidate crop as the cropped image, specifically comprises:
building a composition model based on the aesthetic response map and the gradient energy map;
estimating the composition scores of the screened candidate crops with the composition model, and determining the highest-scoring candidate crop as the cropped image.
6. The method according to claim 5, characterised in that the composition model is obtained as follows:
a training image set is built, based on the aesthetic response map and the gradient energy map;
the training images are annotated with aesthetic-quality classes;
a deep convolutional neural network is trained with the annotated training images;
for the annotated training images, spatial pyramid features of the aesthetic response map and the gradient energy map are extracted with the trained deep convolutional neural network;
the extracted spatial pyramid features are concatenated;
a classifier is trained on them to learn composition rules automatically, yielding the composition model.
CN201611041091.9A 2016-11-21 2016-11-21 Automatic image cutting method Active CN106650737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611041091.9A CN106650737B (en) 2016-11-21 2016-11-21 Automatic image cutting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611041091.9A CN106650737B (en) 2016-11-21 2016-11-21 Automatic image cutting method

Publications (2)

Publication Number Publication Date
CN106650737A true CN106650737A (en) 2017-05-10
CN106650737B CN106650737B (en) 2020-02-28

Family

ID=58811471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611041091.9A Active CN106650737B (en) 2016-11-21 2016-11-21 Automatic image cutting method

Country Status (1)

Country Link
CN (1) CN106650737B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107317962A (en) * 2017-05-12 2017-11-03 广东网金控股股份有限公司 A kind of intelligence, which is taken pictures, cuts patterning system and application method
CN107392244A (en) * 2017-07-18 2017-11-24 厦门大学 The image aesthetic feeling Enhancement Method returned based on deep neural network with cascade
CN107545576A (en) * 2017-07-31 2018-01-05 华南农业大学 Image edit method based on composition rule
CN108154464A (en) * 2017-12-06 2018-06-12 中国科学院自动化研究所 The method and device of picture automatic cutting based on intensified learning
CN108566512A (en) * 2018-03-21 2018-09-21 珠海市魅族科技有限公司 A kind of intelligence image pickup method, device, computer equipment and readable storage medium storing program for executing
CN109518446A (en) * 2018-12-21 2019-03-26 季华实验室 Intelligent cutting method of cutting machine
CN109523503A (en) * 2018-09-11 2019-03-26 北京三快在线科技有限公司 A kind of method and apparatus of image cropping
CN109886317A (en) * 2019-01-29 2019-06-14 中国科学院自动化研究所 General image aesthetics appraisal procedure, system and equipment based on attention mechanism
CN110062173A (en) * 2019-03-15 2019-07-26 北京旷视科技有限公司 Image processor and image processing method, equipment, storage medium and intelligent terminal
WO2020186385A1 (en) * 2019-03-15 2020-09-24 深圳市大疆创新科技有限公司 Image processing method, electronic device, and computer-readable storage medium
WO2020232672A1 (en) * 2019-05-22 2020-11-26 深圳市大疆创新科技有限公司 Image cropping method and apparatus, and photographing apparatus
CN112839167A (en) * 2020-12-30 2021-05-25 Oppo(重庆)智能科技有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN113436224A (en) * 2021-06-11 2021-09-24 华中科技大学 Intelligent image clipping method and device based on explicit composition rule modeling
WO2023093851A1 (en) * 2021-11-29 2023-06-01 维沃移动通信有限公司 Image cropping method and apparatus, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104717413A (en) * 2013-12-12 2015-06-17 北京三星通信技术研究有限公司 Shooting assistance method and equipment
CN105488758A (en) * 2015-11-30 2016-04-13 河北工业大学 Image scaling method based on content awareness
CN105528786A (en) * 2015-12-04 2016-04-27 小米科技有限责任公司 Image processing method and device
CN105787966A (en) * 2016-03-21 2016-07-20 复旦大学 An aesthetic evaluation method for computer pictures
CN105894025A (en) * 2016-03-30 2016-08-24 中国科学院自动化研究所 Natural image aesthetic feeling quality assessment method based on multitask deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hou Danhong: "Layout Optimization System for Important Objects in Photographs", China Master's Theses Full-text Database, Information Science and Technology Series *
Wang Weining et al.: "Image Aesthetic Classification Based on Parallel Deep Convolutional Neural Networks", Acta Automatica Sinica *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107317962A (en) * 2017-05-12 2017-11-03 广东网金控股股份有限公司 Intelligent photographing, cropping and composition system and usage method
CN107317962B (en) * 2017-05-12 2019-11-08 广东网金控股股份有限公司 Intelligent photographing, cropping and composition system and usage method
CN107392244A (en) * 2017-07-18 2017-11-24 厦门大学 Image aesthetic feeling enhancement method based on deep neural network and cascade regression
CN107392244B (en) * 2017-07-18 2020-08-28 厦门大学 Image aesthetic feeling enhancement method based on deep neural network and cascade regression
CN107545576A (en) * 2017-07-31 2018-01-05 华南农业大学 Image editing method based on composition rules
CN108154464A (en) * 2017-12-06 2018-06-12 中国科学院自动化研究所 The method and device of picture automatic cutting based on intensified learning
CN108154464B (en) * 2017-12-06 2020-09-22 中国科学院自动化研究所 Method and device for automatically clipping picture based on reinforcement learning
CN108566512A (en) * 2018-03-21 2018-09-21 珠海市魅族科技有限公司 Intelligent photographing method and device, computer equipment and readable storage medium
WO2020052523A1 (en) * 2018-09-11 2020-03-19 北京三快在线科技有限公司 Method and apparatus for cropping image
CN109523503A (en) * 2018-09-11 2019-03-26 北京三快在线科技有限公司 Method and apparatus for image cropping
CN109518446A (en) * 2018-12-21 2019-03-26 季华实验室 Intelligent cutting method of cutting machine
CN109886317A (en) * 2019-01-29 2019-06-14 中国科学院自动化研究所 General image aesthetic evaluation method, system and equipment based on attention mechanism
CN109886317B (en) * 2019-01-29 2021-04-27 中国科学院自动化研究所 General image aesthetic evaluation method, system and equipment based on attention mechanism
CN110062173A (en) * 2019-03-15 2019-07-26 北京旷视科技有限公司 Image processor and image processing method, equipment, storage medium and intelligent terminal
WO2020186385A1 (en) * 2019-03-15 2020-09-24 深圳市大疆创新科技有限公司 Image processing method, electronic device, and computer-readable storage medium
WO2020232672A1 (en) * 2019-05-22 2020-11-26 深圳市大疆创新科技有限公司 Image cropping method and apparatus, and photographing apparatus
CN112839167A (en) * 2020-12-30 2021-05-25 Oppo(重庆)智能科技有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN113436224A (en) * 2021-06-11 2021-09-24 华中科技大学 Intelligent image clipping method and device based on explicit composition rule modeling
CN113436224B (en) * 2021-06-11 2022-04-26 华中科技大学 Intelligent image clipping method and device based on explicit composition rule modeling
WO2023093851A1 (en) * 2021-11-29 2023-06-01 维沃移动通信有限公司 Image cropping method and apparatus, and electronic device

Also Published As

Publication number Publication date
CN106650737B (en) 2020-02-28

Similar Documents

Publication Publication Date Title
CN106650737A (en) Image automatic cutting method
CN109800736B (en) Road extraction method based on remote sensing image and deep learning
CN104616274B (en) Multi-focus image fusion method based on salient region extraction
EP3819859B1 (en) Sky filter method for panoramic images and portable terminal
CN110376198B (en) Cervical liquid-based cell slice quality detection system
CN107665492B (en) Colorectal panoramic digital pathological image tissue segmentation method based on depth network
CN109241967B (en) Thyroid ultrasound image automatic identification system based on deep neural network, computer equipment and storage medium
EP3343440A1 (en) Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring
WO2018090355A1 (en) Method for auto-cropping of images
CN105989347A (en) Intelligent marking method and system of objective questions
CN106651872A (en) Prewitt operator-based pavement crack recognition method and system
CN107240096A (en) Infrared and visible image fusion quality evaluation method
CN104766097B (en) Aluminum plate surface defect classification method based on BP neural network and support vector machine
CN103206208A (en) Method for macroscopically quantifying microscopic remaining oil in different occurrence states
US20110002516A1 (en) Method and device for dividing area of image of particle in urine
CN102750538A (en) Go competition result analysis method based on image processing technique
CN107341790A (en) Image processing method for environment cleanliness detection
CN111008647B (en) Sample extraction and image classification method based on dilated convolution and residual connection
CN109978771A (en) Cell image rapid fusion method based on content analysis
CN106780514A (en) Computation method for accumulated water depth in rainstorm waterlogging areas based on surveillance video images
Bobbe et al. A primer on mapping vegetation using remote sensing
CN110874824B (en) Image restoration method and device
CN105760878A (en) Method and device for selecting urinary sediment microscope image with optimal focusing performance
CN114419465B (en) Method, device and equipment for detecting change of remote sensing image and storage medium
EP4261774A1 (en) Object classification device, object classification system, and object classification program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant