CN106650737B - Automatic image cutting method - Google Patents
Automatic image cutting method
- Publication number
- CN106650737B CN106650737B CN201611041091.9A CN201611041091A CN106650737B CN 106650737 B CN106650737 B CN 106650737B CN 201611041091 A CN201611041091 A CN 201611041091A CN 106650737 B CN106650737 B CN 106650737B
- Authority
- CN
- China
- Prior art keywords
- image
- aesthetic
- candidate
- response
- graph
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 36
- 230000004044 response Effects 0.000 claims abstract description 55
- 238000012216 screening Methods 0.000 claims abstract description 7
- 238000012549 training Methods 0.000 claims description 25
- 238000013527 convolutional neural network Methods 0.000 claims description 24
- 230000014759 maintenance of location Effects 0.000 claims description 12
- 238000010586 diagram Methods 0.000 claims description 10
- 238000011176 pooling Methods 0.000 claims description 9
- 238000013528 artificial neural network Methods 0.000 claims description 4
- 230000006870 function Effects 0.000 claims description 4
- 238000013507 mapping Methods 0.000 claims description 4
- 238000009499 grossing Methods 0.000 claims description 3
- 238000001914 filtration Methods 0.000 claims 1
- 238000009966 trimming Methods 0.000 abstract description 9
- 230000007547 defect Effects 0.000 abstract description 3
- 230000000694 effects Effects 0.000 description 3
- 238000013135 deep learning Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000012706 support-vector machine Methods 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to an automatic image cropping method. The method comprises the following steps: extracting an aesthetic response map and a gradient energy map of the image to be cropped; densely extracting candidate crops from the image to be cropped; screening the candidate crops based on the aesthetic response map; and estimating composition scores of the screened candidate crops based on the aesthetic response map and the gradient energy map, the highest-scoring candidate being determined as the final crop. The scheme uses the aesthetic response map to locate the regions that most influence the image's aesthetic quality and to decide which parts to retain, so that the crop preserves the high aesthetic quality of the original as far as possible, while the gradient energy map is used to analyze the distribution of gradients; the composition score of each candidate crop is then evaluated from both maps. The embodiment of the invention remedies the weakness of existing image-composition representations and solves the technical problem of improving the robustness and precision of automatic image cropping.
Description
Technical Field
The invention relates to the technical fields of pattern recognition, machine learning, and computer vision, and in particular to an automatic image cropping method.
Background
With the rapid development of computer technology and digital media technology, people's demands on and expectations of computer vision, artificial intelligence, machine perception, and related fields keep rising. As a very important and common task in automatic image editing, automatic image cropping is accordingly receiving increasing attention. Automatic cropping is expected to remove redundant regions and emphasize regions of interest, thereby improving the overall composition and aesthetic quality of an image. An effective automatic cropping method not only frees people from tedious manual work but also offers editing suggestions to non-professional users.
Since image cropping is a highly subjective task, hand-crafted rules can hardly cover all the influencing factors. Conventional automatic cropping typically uses a saliency map to identify the main or interesting regions of the image, and then finds the crop by minimizing an energy function or by learning a classifier under a set of fixed rules. Such fixed rules, however, treat this subjective task too narrowly, and the resulting precision struggles to meet users' requirements.
In view of the above, the present invention is particularly proposed.
Disclosure of Invention
The present invention aims to solve the above problems in the prior art, namely the technical problem of improving the robustness and precision of automatic image cropping.
In order to achieve this purpose, the following technical scheme is provided:
An automatic image cropping method, the method comprising:
extracting an aesthetic response map and a gradient energy map of an image to be cropped;
densely extracting candidate crops from the image to be cropped;
screening the candidate crops based on the aesthetic response map;
and estimating composition scores of the screened candidate crops based on the aesthetic response map and the gradient energy map, and determining the highest-scoring candidate crop as the final crop.
Further, extracting the aesthetic response map and the gradient energy map of the image to be cropped specifically comprises:
extracting the aesthetic response map of the image to be cropped with a deep convolutional neural network and the class response mapping method, using the following formula:
M(x, y) = Σ_{k=1}^{K} w_k · f_k(x, y)
where M(x, y) denotes the aesthetic response value at spatial location (x, y); K denotes the total number of channels of the feature maps of the last convolutional layer of the deep convolutional neural network; k denotes the k-th channel; f_k(x, y) denotes the feature value of the k-th channel at spatial location (x, y); and w_k denotes the weight from the globally pooled feature map of the k-th channel to the high-aesthetic-quality category;
and smoothing the image to be cropped and computing the gradient value at every pixel to obtain the gradient energy map.
Further, the deep convolutional neural network is trained as follows:
stacking convolutional layers in the lower part of the deep convolutional neural network structure;
after the last convolutional layer, pooling each feature map into a single value by global average pooling;
and connecting a fully connected layer, with as many outputs as there are aesthetic quality categories, followed by a loss function.
Further, screening the candidate crops based on the aesthetic response map specifically comprises:
calculating an aesthetic retention score for each candidate crop with the following formula:
S_a(C) = Σ_{(i,j)∈C} A_{(i,j)} / Σ_{(i,j)∈I} A_{(i,j)}
where S_a(C) denotes the aesthetic retention score of the candidate crop C; (i, j) denotes a pixel position; I denotes the original image; and A_{(i,j)} denotes the aesthetic response value at position (i, j);
sorting all candidate crops by aesthetic retention score in descending order;
and keeping the highest-scoring part of the candidate crops.
Further, estimating the composition scores of the screened candidate crops based on the aesthetic response map and the gradient energy map and determining the highest-scoring candidate crop as the final crop specifically comprises:
establishing a composition model based on the aesthetic response map and the gradient energy map;
and estimating the composition score of each screened candidate crop with the composition model, and determining the highest-scoring candidate crop as the final crop.
Further, the composition model is obtained as follows:
establishing a training image set based on the aesthetic response map and the gradient energy map;
labeling the training images with aesthetic quality categories;
training a deep convolutional neural network with the labeled training images;
extracting, for each labeled training image, the spatial pyramid features of its aesthetic response map and gradient energy map with the trained deep convolutional neural network;
concatenating the extracted spatial pyramid features;
and training a classifier on them, automatically learning the composition rules to obtain the composition model.
To summarize, the embodiment of the invention provides an automatic image cropping method that uses the aesthetic response map to locate the regions most responsible for an image's aesthetic quality and to decide which parts to retain, and uses the gradient energy map to analyze the distribution of gradients, evaluating a composition score for each candidate crop from both maps. The embodiment remedies the weakness of existing image-composition representations, improves the robustness and precision of automatic cropping, and can be applied in many fields involving automatic image cropping, including image editing, photography, and image retargeting.
Drawings
FIG. 1 is a flow chart of an automatic image cropping method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a deep convolutional neural network according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of an image to be cropped according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a cropped image according to an embodiment of the present invention.
Detailed Description
The technical problems solved, the technical solutions adopted, and the technical effects achieved by the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings and specific embodiments. It should be understood that the described embodiments are merely some, not all, of the embodiments of the present application. All other embodiments that a person skilled in the art obtains from the embodiments of this application without inventive effort fall within the scope of protection of the invention. The embodiments of the invention may be carried out in many different ways as defined and covered by the claims.
Deep learning has developed rapidly and achieved good results in many fields. The embodiments of the invention therefore use deep learning to automatically identify the regions that matter most for cropping and to learn the cropping rules comprehensively, so that the regions of highest aesthetic quality are preserved as far as possible during cropping.
To this end, an embodiment of the invention provides an automatic image cropping method. Fig. 1 exemplarily shows the flow of the method. As shown in Fig. 1, the method may comprise:
s100: and extracting an aesthetic response graph and a gradient energy graph of the image to be cut.
Specifically, the step may include:
s101: extracting an aesthetic feeling response graph of the image to be cut by utilizing a deep convolutional neural network and a category response mapping method and adopting the following formula:
wherein M (x, y) represents an aesthetic response value at the spatial location (x, y); k represents the total channel number of the feature diagram f of the last convolutional layer of the trained deep convolutional neural network; k represents the kth channel; f. ofk(x, y) represents a characteristic value of the kth channel at a spatial position (x, y); w is akAnd representing the result of pooling the feature map of the kth channel to a weight of a high aesthetic class.
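By way of illustration, the class response mapping above is simply a channel-weighted sum of the last convolutional layer's feature maps. A minimal NumPy sketch (the function name and array layout are assumptions for illustration, not part of the patent):

```python
import numpy as np

def aesthetic_response_map(feature_maps, weights):
    """Class response mapping: M(x, y) = sum_k w_k * f_k(x, y).

    feature_maps: array of shape (K, H, W), the K channels f_k of the
                  last convolutional layer.
    weights:      array of shape (K,), the weights w_k from the pooled
                  channels to the high-aesthetic-quality category.
    """
    # Contract the channel axis, leaving an (H, W) response map.
    return np.tensordot(weights, feature_maps, axes=([0], [0]))
```

The same weighted sum is what a fully connected layer after global average pooling computes per class, which is why the learned class weights can be reused spatially.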
The deep convolutional neural network used to extract the aesthetic response map is trained beforehand according to actual needs. Training may proceed as follows:
step 1: and arranging a convolution layer at the bottom layer of the deep convolutional neural network structure.
Step 2: and pooling each feature map into one point by a global average pooling method after the last convolution layer of the deep convolutional neural network structure.
And step 3: a fully connected layer and a loss function are connected, the number of which is the same as the number of aesthetic quality classification categories.
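The global average pooling of step 2 reduces each (H, W) feature map to one number, giving a K-vector for the fully connected layer. A one-line NumPy sketch (function name assumed for illustration):

```python
import numpy as np

def global_average_pool(feature_maps):
    """Average each of the K feature maps of shape (H, W) to a single
    value, producing a (K,) vector for the fully connected layer."""
    return feature_maps.mean(axis=(1, 2))
```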
Fig. 2 schematically shows a deep convolutional neural network structure.
Through steps 1 to 3, a deep convolutional neural network model for the aesthetic quality classification task can be trained. Using this trained network together with the class response mapping method and the formula above, the aesthetic response map M of the image to be cropped under the high-aesthetic category can then be computed.
S102: smooth the image to be cropped and compute the gradient value at every pixel to obtain the gradient energy map.
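A minimal sketch of this step, assuming a simple box filter for the smoothing (the patent does not specify the filter, and the `smooth_radius` parameter is illustrative):

```python
import numpy as np

def gradient_energy_map(image, smooth_radius=1):
    """Smooth a grayscale image, then take the gradient magnitude at
    every pixel.  The box filter is an assumed choice of smoother."""
    img = image.astype(float)
    k = 2 * smooth_radius + 1
    # Box smoothing: average the k*k shifted copies of the padded image.
    pad = np.pad(img, smooth_radius, mode='edge')
    smoothed = sum(
        pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(k) for dx in range(k)
    ) / (k * k)
    # Per-pixel gradient magnitude.
    gy, gx = np.gradient(smoothed)
    return np.hypot(gx, gy)
```

In practice a Gaussian blur and Sobel gradients would serve the same purpose; any smoother-plus-gradient pair fits the step as described.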
S110: densely extract candidate crops from the image to be cropped.
Here, candidate crop windows may be extracted densely with sliding windows of every size smaller than the image, and the candidate crops are taken from these windows.
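Dense window extraction can be sketched as a loop over window sizes and positions. The `min_size` and `stride` parameters are illustrative assumptions; the patent only requires dense extraction with windows smaller than the image:

```python
def dense_candidate_windows(img_h, img_w, min_size=64, stride=30):
    """Enumerate candidate crop windows as (x, y, w, h) tuples by
    sliding windows of many sizes over the image."""
    windows = []
    for h in range(min_size, img_h + 1, stride):
        for w in range(min_size, img_w + 1, stride):
            if h == img_h and w == img_w:
                continue  # skip the full image itself
            for y in range(0, img_h - h + 1, stride):
                for x in range(0, img_w - w + 1, stride):
                    windows.append((x, y, w, h))
    return windows
```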
S120: and screening candidate cutting images based on the aesthetic feeling response graph.
Specifically, the step may include:
s121: calculating an aesthetic retention score of the candidate cropped image by the following formula:
wherein S isa(C) An aesthetic retention score representing a candidate cropped image; c represents a candidate cropped image; (i, j) represents the position of the pixel; i represents an original image; a. the(i,j)Indicating the aesthetic response value at (i, j).
This step constructs an aesthetic retention model; screening the candidate crop windows with it yields the candidates with the higher aesthetic retention scores.
S122: sort all candidate crops by aesthetic retention score in descending order.
S123: keep the highest-scoring part of the candidate crops.
For example, in practical application the candidate crops in the top 10000 candidate windows may be retained.
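The screening above can be sketched as follows, reading the retention score as the fraction of the image's total aesthetic response that falls inside the window (an assumption consistent with the defined terms). An integral image, an added efficiency trick not mentioned in the patent, makes each window sum O(1):

```python
import numpy as np

def screen_by_aesthetic_retention(response_map, windows, keep=10000):
    """Score each (x, y, w, h) window by the fraction of total
    aesthetic response it retains and keep the `keep` best windows.
    Returns (score, window) pairs sorted by descending score."""
    A = response_map.astype(float)
    total = A.sum()
    # Integral image with a zero top row and left column.
    ii = np.zeros((A.shape[0] + 1, A.shape[1] + 1))
    ii[1:, 1:] = A.cumsum(0).cumsum(1)
    scored = []
    for (x, y, w, h) in windows:
        s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
        scored.append((s / total, (x, y, w, h)))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:keep]
```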
S130: estimate the composition scores of the screened candidate crops based on the aesthetic response map and the gradient energy map, and determine the highest-scoring candidate as the final crop.
Specifically, this step may be realized by step S131 to step S133.
S131: establish a composition model based on the aesthetic response map and the gradient energy map.
In this step, the composition model is trained according to the actual situation. The training data may use well-composed images as positive samples and images with composition defects as negative samples.
The composition model may be trained by:
step a: a training image set is established based on the aesthetic response map and the gradient energy map.
Step b: and marking the training image in an aesthetic quality category.
Step c: and training the deep convolutional neural network by using the marked training image.
The training process in this step may refer to steps 1 to 3, which are not described herein again.
Step d: and aiming at the marked training image, extracting the spatial pyramid characteristics of the aesthetic response image and the gradient energy image by using the trained deep convolution neural network.
Step e: and splicing the extracted spatial pyramid features together.
Step f: and training by using a classifier, and automatically learning a composition rule to obtain a composition model.
The classifier may be, for example, a support vector machine classifier.
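The spatial pyramid features of steps d and e can be sketched as pooling a response map over progressively finer grids and concatenating the cell averages. The grid levels (1, 2, 4) and the function name are illustrative assumptions; the patent does not fix them:

```python
import numpy as np

def spatial_pyramid_features(response_map, levels=(1, 2, 4)):
    """Average-pool a 2-D map over a pyramid of n x n grids and
    concatenate all cell means into one feature vector."""
    H, W = response_map.shape
    feats = []
    for n in levels:
        # Cell boundaries for an n x n grid over the map.
        ys = np.linspace(0, H, n + 1).astype(int)
        xs = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = response_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                feats.append(cell.mean())
    return np.array(feats)
```

Running this on both the aesthetic response map and the gradient energy map and concatenating the two vectors gives the joint feature that the classifier is trained on.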
S132: estimate the composition score of each screened candidate crop with the composition model, and determine the highest-scoring candidate as the final crop.
Fig. 3a exemplarily shows an image to be cropped; Fig. 3b exemplarily shows the cropped image.
The invention is further illustrated below with a preferred embodiment.
Step A: feed an image data set labeled with aesthetic quality categories into the deep convolutional neural network to train the aesthetic quality classification model.
Step B: feed an image data set labeled with composition categories into the trained deep convolutional neural network, extract the feature maps of the last convolutional layer, compute the aesthetic response map and the gradient energy map, and then train the composition model with a support vector machine classifier.
Step C: extract the aesthetic response map and the gradient energy map of the image under test.
The extraction here may follow the method of the training stage.
Step D: densely collect candidate crop windows from the image under test.
For example, on a 1000 × 1000 test image, windows are collected with a sliding window at a stride of 30 pixels.
Step E: screen the candidate crop windows with the aesthetic retention model.
In this step, the aesthetic retention model computes the aesthetic retention score of every densely collected candidate window, and the highest-scoring part is kept, for example the top 10000 candidate windows.
Step F: evaluate the screened candidate windows with the composition model.
In this step, the composition model trained in the training stage evaluates the composition score of every screened candidate window, and the highest-scoring window is taken as the final crop window, yielding the cropped image.
In summary, the method provided by the embodiment of the present invention uses the aesthetic response map and the gradient energy map to preserve the aesthetic quality and composition rules of the image as far as possible, achieving more robust and more precise automatic cropping and thereby demonstrating the effectiveness of the two maps for automatic image cropping.
Although the method provided by the embodiment of the present invention is described in the foregoing sequence, those skilled in the art will understand that, in order to achieve the effect of the embodiment, the method may also be performed in different sequences, such as in parallel or in reverse order, and these simple changes are all within the protection scope of the present invention.
The above description is only an embodiment of the present invention, and the scope of the present invention is not limited thereto; any modification or substitution readily conceivable by a person skilled in the art within the technical scope disclosed herein falls within the scope of the present invention, which should therefore be defined by the protection scope of the claims.
Claims (5)
1. An automatic image cropping method, characterized in that the method comprises:
extracting an aesthetic response map and a gradient energy map of an image to be cropped;
densely extracting candidate crops from the image to be cropped;
screening the candidate crops based on the aesthetic response map;
estimating composition scores of the screened candidate crops based on the aesthetic response map and the gradient energy map, and determining the highest-scoring candidate crop as the final crop;
wherein extracting the aesthetic response map and the gradient energy map of the image to be cropped specifically comprises:
extracting the aesthetic response map of the image to be cropped with a deep convolutional neural network and the class response mapping method, using the following formula:
M(x, y) = Σ_{k=1}^{K} w_k · f_k(x, y)
where M(x, y) denotes the aesthetic response value at spatial location (x, y); K denotes the total number of channels of the feature maps of the last convolutional layer of the deep convolutional neural network; k denotes the k-th channel; f_k(x, y) denotes the feature value of the k-th channel at spatial location (x, y); and w_k denotes the weight from the globally pooled feature map of the k-th channel to the high-aesthetic-quality category;
and smoothing the image to be cropped and computing the gradient value at every pixel to obtain the gradient energy map.
2. The method of claim 1, wherein the deep convolutional neural network is trained by:
stacking convolutional layers in the lower part of the deep convolutional neural network structure;
after the last convolutional layer, pooling each feature map into a single value by global average pooling;
and connecting a fully connected layer, with as many outputs as there are aesthetic quality categories, followed by a loss function.
3. The method of claim 1, wherein screening the candidate crops based on the aesthetic response map comprises:
calculating an aesthetic retention score for each candidate crop with the following formula:
S_a(C) = Σ_{(i,j)∈C} A_{(i,j)} / Σ_{(i,j)∈I} A_{(i,j)}
where S_a(C) denotes the aesthetic retention score of the candidate crop C; (i, j) denotes a pixel position; I denotes the original image; and A_{(i,j)} denotes the aesthetic response value at position (i, j);
sorting all candidate crops by aesthetic retention score in descending order;
and keeping the highest-scoring part of the candidate crops.
4. The method of claim 1, wherein estimating the composition scores of the screened candidate crops based on the aesthetic response map and the gradient energy map and determining the highest-scoring candidate crop as the final crop comprises:
establishing a composition model based on the aesthetic response map and the gradient energy map;
and estimating the composition score of each screened candidate crop with the composition model, and determining the highest-scoring candidate crop as the final crop.
5. The method of claim 4, wherein the composition model is obtained by:
establishing a training image set based on the aesthetic response map and the gradient energy map;
labeling the training images with aesthetic quality categories;
training a deep convolutional neural network with the labeled training images;
extracting, for each labeled training image, the spatial pyramid features of its aesthetic response map and gradient energy map with the trained deep convolutional neural network;
concatenating the extracted spatial pyramid features;
and training a classifier on them, automatically learning the composition rules to obtain the composition model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611041091.9A CN106650737B (en) | 2016-11-21 | 2016-11-21 | Automatic image cutting method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611041091.9A CN106650737B (en) | 2016-11-21 | 2016-11-21 | Automatic image cutting method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106650737A CN106650737A (en) | 2017-05-10 |
CN106650737B true CN106650737B (en) | 2020-02-28 |
Family
ID=58811471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611041091.9A Active CN106650737B (en) | 2016-11-21 | 2016-11-21 | Automatic image cutting method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106650737B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107317962B (en) * | 2017-05-12 | 2019-11-08 | 广东网金控股股份有限公司 | A kind of intelligence, which is taken pictures, cuts patterning system and application method |
CN107392244B (en) * | 2017-07-18 | 2020-08-28 | 厦门大学 | Image aesthetic feeling enhancement method based on deep neural network and cascade regression |
CN107545576A (en) * | 2017-07-31 | 2018-01-05 | 华南农业大学 | Image edit method based on composition rule |
CN108154464B (en) * | 2017-12-06 | 2020-09-22 | 中国科学院自动化研究所 | Method and device for automatically clipping picture based on reinforcement learning |
CN108566512A (en) * | 2018-03-21 | 2018-09-21 | 珠海市魅族科技有限公司 | A kind of intelligence image pickup method, device, computer equipment and readable storage medium storing program for executing |
CN109523503A (en) * | 2018-09-11 | 2019-03-26 | 北京三快在线科技有限公司 | A kind of method and apparatus of image cropping |
CN109518446B (en) * | 2018-12-21 | 2021-01-01 | 季华实验室 | Intelligent cutting method of cutting machine |
CN109886317B (en) * | 2019-01-29 | 2021-04-27 | 中国科学院自动化研究所 | General image aesthetic evaluation method, system and equipment based on attention mechanism |
CN111316319A (en) * | 2019-03-15 | 2020-06-19 | 深圳市大疆创新科技有限公司 | Image processing method, electronic device, and computer-readable storage medium |
CN110062173A (en) * | 2019-03-15 | 2019-07-26 | 北京旷视科技有限公司 | Image processor and image processing method, equipment, storage medium and intelligent terminal |
WO2020232672A1 (en) * | 2019-05-22 | 2020-11-26 | 深圳市大疆创新科技有限公司 | Image cropping method and apparatus, and photographing apparatus |
CN112839167B (en) * | 2020-12-30 | 2023-06-30 | Oppo(重庆)智能科技有限公司 | Image processing method, device, electronic equipment and computer readable medium |
CN113436224B (en) * | 2021-06-11 | 2022-04-26 | 华中科技大学 | Intelligent image clipping method and device based on explicit composition rule modeling |
CN114119373A (en) * | 2021-11-29 | 2022-03-01 | 广东维沃软件技术有限公司 | Image cropping method and device and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104717413A (en) * | 2013-12-12 | 2015-06-17 | 北京三星通信技术研究有限公司 | Shooting assistance method and equipment |
CN105488758A (en) * | 2015-11-30 | 2016-04-13 | 河北工业大学 | Image scaling method based on content awareness |
CN105528786A (en) * | 2015-12-04 | 2016-04-27 | 小米科技有限责任公司 | Image processing method and device |
CN105787966A (en) * | 2016-03-21 | 2016-07-20 | 复旦大学 | An aesthetic evaluation method for computer pictures |
CN105894025A (en) * | 2016-03-30 | 2016-08-24 | 中国科学院自动化研究所 | Natural image aesthetic feeling quality assessment method based on multitask deep learning |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104717413A (en) * | 2013-12-12 | 2015-06-17 | 北京三星通信技术研究有限公司 | Shooting assistance method and equipment |
CN105488758A (en) * | 2015-11-30 | 2016-04-13 | 河北工业大学 | Image scaling method based on content awareness |
CN105528786A (en) * | 2015-12-04 | 2016-04-27 | 小米科技有限责任公司 | Image processing method and device |
CN105787966A (en) * | 2016-03-21 | 2016-07-20 | 复旦大学 | An aesthetic evaluation method for computer pictures |
CN105894025A (en) * | 2016-03-30 | 2016-08-24 | 中国科学院自动化研究所 | Natural image aesthetic feeling quality assessment method based on multitask deep learning |
Non-Patent Citations (2)
Title |
---|
Image aesthetic classification based on parallel deep convolutional neural networks; Wang Weining et al.; Acta Automatica Sinica; June 30, 2016; vol. 42, no. 6; pp. 904-913 *
Layout optimization system for important objects in photos; Hou Danhong; China Master's Theses Full-text Database, Information Science and Technology; June 15, 2011; no. 6; pp. 17-28 *
Also Published As
Publication number | Publication date |
---|---|
CN106650737A (en) | 2017-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106650737B (en) | Automatic image cutting method | |
CN106960195B (en) | Crowd counting method and device based on deep learning | |
CN109801256B (en) | Image aesthetic quality assessment method based on region of interest and global features | |
CN107665492B (en) | Colorectal panoramic digital pathological image tissue segmentation method based on depth network | |
CN105096259B (en) | The depth value restoration methods and system of depth image | |
CN106611160B (en) | Image hair identification method and device based on convolutional neural network | |
WO2018090355A1 (en) | Method for auto-cropping of images | |
WO2020007307A1 (en) | Sky filter method for panoramic images and portable terminal | |
CN104504365A (en) | System and method for smiling face recognition in video sequence | |
CN107358141B (en) | Data identification method and device | |
CN110647875A (en) | Method for segmenting and identifying model structure of blood cells and blood cell identification method | |
IL172480A (en) | Method for automatic detection and classification of objects and patterns in low resolution environments | |
CN112668594A (en) | Unsupervised image target detection method based on antagonism domain adaptation | |
CN108710893A (en) | A kind of digital image cameras source model sorting technique of feature based fusion | |
CN111008647B (en) | Sample extraction and image classification method based on void convolution and residual linkage | |
CN107743225A (en) | It is a kind of that the method for carrying out non-reference picture prediction of quality is characterized using multilayer depth | |
CN111382766A (en) | Equipment fault detection method based on fast R-CNN | |
CN110309789A (en) | Video monitoring human face clarity evaluation method and device based on deep learning | |
CN113781510A (en) | Edge detection method and device and electronic equipment | |
CN106529441A (en) | Fuzzy boundary fragmentation-based depth motion map human body action recognition method | |
CN114596316A (en) | Road image detail capturing method based on semantic segmentation | |
JP7300027B2 (en) | Image processing device, image processing method, learning device, learning method, and program | |
CN109325434A (en) | A kind of image scene classification method of the probability topic model of multiple features | |
CN112784854B (en) | Clothing color segmentation extraction method, device and equipment based on mathematical statistics | |
CN109741351A (en) | A kind of classification responsive type edge detection method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||