CN103996186B - Image cropping method and device - Google Patents


Info

Publication number
CN103996186B
CN103996186B (application CN201410178276.9A)
Authority
CN
China
Prior art keywords
significance
image
region
target
value
Prior art date
Application number
CN201410178276.9A
Other languages
Chinese (zh)
Other versions
CN103996186A (en)
Inventor
王琳
秦秋平
陈志军
Original Assignee
小米科技有限责任公司 (Xiaomi Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司 (Xiaomi Technology Co., Ltd.)
Priority to CN201410178276.9A priority Critical patent/CN103996186B/en
Publication of CN103996186A publication Critical patent/CN103996186A/en
Application granted granted Critical
Publication of CN103996186B publication Critical patent/CN103996186B/en


Abstract

The present disclosure provides an image cropping method and device, belonging to the field of image processing. The image cropping method includes: establishing a face significance model of an image, the face significance model characterizing the influence of each superimposed face on the significance value of each pixel in the image; linearly superimposing a pre-built color significance model and the face significance model to obtain a target significance model; and cropping the image using the target significance model. Combining the color significance model and the face significance model yields accurate cropping. This solves the problem in the related art that image cropping based on color-significance analysis considers only the color information in the image and easily mis-crops images containing other important features, and achieves the effect that faces in the image are also cropped effectively.

Description

Image cropping method and device
Technical field
The present disclosure relates to the field of image processing, and in particular to an image cropping method and device.
Background
An image usually contains some redundant information, and this redundant information occupies part of the image's capacity. To reduce this overhead, the image usually needs to be cropped.
In a related image cropping process, a color significance model of the original image is first established; the color significance value of each pixel in the original image is determined according to this model, and a color significance map of the original image is obtained from these values. The significance map is then framed with a rectangular frame of a specified size, and the framed region containing the most significance is selected. Finally, the original image is cropped to cut out that framed region.
In the course of realizing the present disclosure, the inventors found that the related art has at least the following defect: image cropping based on color-significance analysis considers only the color information in the image, and easily mis-crops images that contain other important features, such as faces or specified objects.
Summary
To solve the problem in the related art that image cropping based on color-significance analysis considers only the color information in the image and easily mis-crops images containing other important features, the present disclosure provides an image cropping method and device. The technical scheme is as follows.
According to a first aspect of the embodiments of the present disclosure, an image cropping method is provided, including:
establishing a face significance model of an image, the face significance model characterizing the influence of each superimposed face on the significance value of each pixel in the image;
linearly superimposing a pre-built color significance model and the face significance model to obtain a target significance model; and
cropping the image using the target significance model.
Optionally, the face significance model is given by a formula (shown as an image in the original publication) in which: the significance value of the i-th pixel in the image is computed from Wp, the width of the minimum enclosing rectangle of the p-th face in the image; Hp, the length of the minimum enclosing rectangle of the p-th face in the image; and the position of the center point of the face region corresponding to the p-th face in the image.
Optionally, linearly superimposing the pre-built color significance model and the face significance model to obtain the target significance model includes:
multiplying the color significance model by a first weight to obtain a first product;
multiplying the face significance model by a second weight to obtain a second product; and
adding the first product and the second product to obtain the target significance model;
wherein the sum of the first weight and the second weight is 1.
Optionally, cropping the image using the target significance model includes:
framing the image with a predetermined crop box to obtain at least one framed region;
calculating the total significance value of each framed region using the target significance model;
selecting the framed region with the largest total significance value; and
cropping out the selected framed region.
Optionally, calculating the total significance value of each framed region using the target significance model includes:
for each framed region, calculating the significance value of each pixel in the framed region using the target significance model; and
adding the significance values of all pixels in the framed region to obtain the total significance value of the framed region.
Optionally, the method further includes:
detecting whether a face exists in the image; and
if the detection result is that a face exists in the image, executing the step of establishing the face significance model of the image.
According to a second aspect of the embodiments of the present disclosure, an image cropping device is provided, including: an establishing module, configured to establish a face significance model of an image, the face significance model characterizing the influence of each superimposed face on the significance value of each pixel in the image; a superposition module, configured to linearly superimpose a pre-built color significance model and the face significance model to obtain a target significance model; and a cropping module, configured to crop the image using the target significance model.
Optionally, the face significance model is given by a formula (shown as an image in the original publication) in which: the significance value of the i-th pixel in the image is computed from Wp, the width of the minimum enclosing rectangle of the p-th face in the image; Hp, the length of the minimum enclosing rectangle of the p-th face in the image; and the position of the center point of the face region corresponding to the p-th face in the image.
Optionally, the superposition module includes:
a first multiplying unit, configured to multiply the color significance model by a first weight to obtain a first product;
a second multiplying unit, configured to multiply the face significance model by a second weight to obtain a second product; and
an addition unit, configured to add the first product obtained by the first multiplying unit and the second product obtained by the second multiplying unit to obtain the target significance model;
wherein the sum of the first weight and the second weight is 1.
Optionally, the cropping module includes:
a framing unit, configured to frame the image with a predetermined crop box to obtain at least one framed region;
a computing unit, configured to calculate the total significance value of each framed region using the target significance model obtained by the superposition module;
a selecting unit, configured to select the framed region with the largest total significance value; and
a cropping unit, configured to crop out the selected framed region.
Optionally, the computing unit includes:
a computing subunit, configured to calculate, for each framed region, the significance value of each pixel in the framed region using the target significance model; and
an adding subunit, configured to add the significance values of all pixels in the framed region to obtain the total significance value of the framed region.
Optionally, the device further includes:
a detection module, configured to detect whether a face exists in the image;
wherein the establishing module is further configured to establish the face significance model of the image when the detection result of the detection module is that a face exists in the image.
According to a third aspect of the embodiments of the present disclosure, an image cropping device is provided, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
establish a face significance model of an image, the face significance model characterizing the influence of each superimposed face on the significance value of each pixel in the image;
linearly superimpose a pre-built color significance model and the face significance model to obtain a target significance model; and
crop the image using the target significance model.
The technical scheme provided by the embodiments of the present disclosure can include the following beneficial effects:
The image is cropped accurately by combining the color significance model and the face significance model. Because the influence of faces on the pixels of the image is taken into account during cropping, when the image contains a face, the cropped image retains the important face information while highlighting the salient objects. This solves the problem in the related art that image cropping based on color-significance analysis considers only the color information in the image and easily mis-crops images containing other important features, and achieves the effect that faces in the image are also cropped effectively.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and serve, together with the description, to explain the principles of the invention.
Fig. 1 is a flow chart of an image cropping method according to an exemplary embodiment;
Fig. 2A is a flow chart of an image cropping method according to another exemplary embodiment;
Fig. 2B is a schematic diagram of an image containing a face according to an exemplary embodiment;
Fig. 2C is a schematic diagram of framing an image with a rectangular frame according to an exemplary embodiment;
Fig. 2D is a schematic diagram of repeatedly framing an image with a rectangular frame according to an exemplary embodiment;
Fig. 2E is a schematic diagram of cropping an image containing a face according to an exemplary embodiment;
Fig. 3A is a flow chart of an image cropping method according to yet another exemplary embodiment;
Fig. 3B is a schematic diagram of cropping an image that does not contain a face according to an exemplary embodiment;
Fig. 4 is a block diagram of an image cropping device according to an exemplary embodiment;
Fig. 5 is a block diagram of an image cropping device according to another exemplary embodiment;
Fig. 6 is a block diagram of an image cropping device according to yet another exemplary embodiment.
Detailed description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. In the following description, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the invention as detailed in the appended claims.
The "electronic equipment" mentioned herein may be a smart phone, a tablet computer, a smart television, an e-book reader, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a laptop portable computer, a desktop computer, or the like.
Fig. 1 is a flow chart of an image cropping method according to an exemplary embodiment. As shown in Fig. 1, the image cropping method is applied to an electronic device and includes the following steps.
In step 101, a face significance model of an image is established, the face significance model characterizing the influence of each superimposed face on the significance value of each pixel in the image.
In step 102, a pre-built color significance model and the face significance model are linearly superimposed to obtain a target significance model.
In step 103, the image is cropped using the target significance model.
In summary, the image cropping method provided in the embodiments of the present disclosure crops the image accurately by combining a color significance model and a face significance model. Because the influence of faces on the pixels of the image is taken into account during cropping, when the image contains a face, the cropped image retains the important face information while highlighting the salient objects. This solves the problem in the related art that image cropping based on color-significance analysis considers only the color information in the image and easily mis-crops images containing other important features, and achieves the effect that faces in the image are also cropped effectively.
Fig. 2A is a flow chart of an image cropping method according to another exemplary embodiment. As shown in Fig. 2A, the image cropping method is applied to an electronic device and includes the following steps.
In step 201, whether a face exists in the image is detected.
In some application scenarios, when a user obtains information from an image that contains a face, the user usually also needs the face information in the image. Therefore, before cropping, it is first necessary to detect whether a face exists in the image.
In practice, whether a face exists in an image can be detected by face recognition technology, which can be implemented by those skilled in the art and is not described here again.
In step 202, if the detection result is that a face exists in the image, a face significance model of the image is established.
Generally, the face significance model characterizes the influence of each superimposed face on the significance value of each pixel in the image. For example, when one face exists in the image, that face usually influences the significance value of every pixel in the image; when multiple faces exist in the image, these faces influence the significance value of every pixel simultaneously. Typically, the significance value of each pixel within a face region is relatively high; pixels close to a face are influenced strongly by the face, while pixels far from a face are influenced only slightly.
In one possible implementation, the established face significance model is given by a formula (shown as an image in the original publication) in which: the significance value of the i-th pixel in the image is computed from Wp, the width of the minimum enclosing rectangle of the p-th face in the image; Hp, the length of the minimum enclosing rectangle of the p-th face in the image; the position of the center point of the face region corresponding to the p-th face; and (xi, yi), the coordinates of the i-th pixel.
In practice, the width WF of a face is the width of the minimum enclosing rectangle corresponding to that face, and the length HF of a face is the length of that rectangle. Referring to Fig. 2B, a schematic diagram of an image containing a face according to an exemplary embodiment: the image contains a face T1, the length of whose minimum enclosing rectangle b is h1 and whose width is w1; h1 can then be taken as the length of face T1 and w1 as the width of face T1.
The face significance value of any pixel in the image can be calculated according to the above face significance model.
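The exact closed form of the face significance model is rendered as an image in the original publication and is not recoverable from this text. A form commonly used for such models, and consistent with the variable list above (one term per face, centered on the face, scaled by its minimum enclosing rectangle), is a sum of 2-D Gaussians. The sketch below uses that assumed form; the function and parameter names are illustrative, not the patent's.

```python
import math

def face_significance(x, y, faces):
    """Assumed face significance of pixel (x, y): a sum of one 2-D
    Gaussian per face, centered on the face's center point (cx, cy)
    and spread by its minimum-enclosing-rectangle size (w, h)."""
    total = 0.0
    for (cx, cy, w, h) in faces:
        total += math.exp(-((x - cx) ** 2 / (2 * w ** 2) +
                            (y - cy) ** 2 / (2 * h ** 2)))
    return total

# A pixel at the face center scores higher than a distant pixel.
faces = [(10.0, 10.0, 4.0, 6.0)]
print(face_significance(10, 10, faces) > face_significance(40, 40, faces))  # True
```

Under this form, pixels near a face receive a large contribution and pixels far from any face receive almost none, matching the qualitative behavior the description attributes to the model.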
In step 203, the color significance model is multiplied by a first weight to obtain a first product.
The color significance model characterizes the influence of color on the significance value of each pixel in the image.
In practice, a color significance model can be established in various ways, which can be implemented by those skilled in the art and is not described here again.
In step 204, the face significance model is multiplied by a second weight to obtain a second product.
In step 205, the first product and the second product are added to obtain the target significance model.
That is, the target significance model can be:
S(i) = w1 · S_color(i) + w2 · S_face(i)
where S(i) is the significance value of the i-th pixel in the image, S_color(i) is the color significance model, S_face(i) is the face significance model, w1 is the first weight, and w2 is the second weight, with w1 + w2 = 1.
In practice, the ratio of the first weight w1 to the second weight w2 can be set according to the actual situation. If the influences of color and of faces on pixel significance are to be weighed equally, both w1 and w2 can be set to 0.5. If the influence of faces on pixel significance needs to outweigh that of color, w2 can be set larger than w1; conversely, w1 can be set larger than w2.
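Steps 203 to 205 amount to a per-pixel convex combination of the two significance maps. A minimal sketch, assuming the maps are plain nested lists of floats (the function and variable names are illustrative):

```python
def blend_saliency(color_map, face_map, w1=0.5, w2=0.5):
    """Target significance model: w1 * color + w2 * face per pixel,
    with the constraint w1 + w2 = 1 from the claims."""
    assert abs(w1 + w2 - 1.0) < 1e-9, "the two weights must sum to 1"
    return [[w1 * c + w2 * f for c, f in zip(crow, frow)]
            for crow, frow in zip(color_map, face_map)]

color = [[0.2, 0.8], [0.4, 0.6]]
face = [[1.0, 0.0], [0.0, 1.0]]
print(blend_saliency(color, face))  # [[0.6, 0.4], [0.2, 0.8]]
```

Raising w2 above 0.5 shifts the blended map toward the face regions, which is exactly the tuning the paragraph above describes.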
In step 206, the image is framed with a predetermined crop box to obtain at least one framed region.
The crop box here can have any predetermined shape and size. For example, the crop box can be a rectangular frame, a circular frame, an elliptical frame, a polygonal frame, or the like.
Referring to Fig. 2C, a schematic diagram of framing an image with a rectangular frame according to an exemplary embodiment: the image is framed with a predetermined rectangular frame c1, and the region framed by c1 is c2.
In practice, every possible framed region in the image needs to be framed in turn with the crop box. For example, the center point of the crop box can traverse every pixel in the image in turn, and at each pixel the region enclosed by the crop box is one framed region. Alternatively, in order to obtain only framed regions that lie completely inside the image, the center point of the crop box can traverse every qualifying pixel in turn, where a qualifying pixel is one whose distance to each edge of the image (left, right, top, and bottom) is greater than or equal to the distance from the center point of the crop box to the corresponding edge of the crop box.
For example, taking a rectangular frame, refer to Fig. 2D, a schematic diagram of repeatedly framing an image with a rectangular frame according to an exemplary embodiment. The rectangular frame R has height h2 and width w2. For the first framing, R is placed at the upper-left corner of the image, giving a framed region m1; that is, the center point D of R initially lies at pixel p1, whose coordinates are (h2/2, w2/2). For the second framing, R is moved one column to the right; that is, the center point D moves from pixel p1 to the adjacent pixel p2 in the same row, and the region framed by R this time is m2. Within a row, each new framing moves the center point D one pixel to the right, until the right edge of R coincides with the rightmost edge of the image. The center point D is then moved to the pixel in the next row adjacent to p1, framing continues in that row, and so on.
Obviously, if only framed regions lying completely inside the image are needed, and the first-type edges of the crop box (when one pair of opposite edges is of the first type, the other pair is of the second type) have the same length as the corresponding edges of the image, then the crop box only needs to be moved step by step along the direction in which the second-type edges of the image extend; that is, the center point of the crop box only needs to be moved gradually along that direction.
Obviously, in practice the image can also be framed in other ways or in a different order, which is not described one by one here.
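The traversal described above (slide the crop box one pixel at a time, left to right and then top to bottom, keeping it fully inside the image) can be sketched as follows for a rectangular crop box. Coordinates are the (row, column) of the box's top-left corner, and the names are illustrative:

```python
def candidate_regions(img_h, img_w, box_h, box_w):
    """Yield (top, left) for every position of a box_h x box_w crop box
    that lies completely inside an img_h x img_w image, scanning
    left-to-right and then top-to-bottom, as in Fig. 2D."""
    for top in range(img_h - box_h + 1):
        for left in range(img_w - box_w + 1):
            yield (top, left)

# A 3x3 box inside a 4x5 image has 2 * 3 = 6 valid positions.
print(len(list(candidate_regions(4, 5, 3, 3))))  # 6
```

When one box dimension equals the corresponding image dimension, the inner or outer loop collapses to a single position, which is the one-direction scan mentioned above.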
In step 207, the total significance value of each framed region is calculated using the target significance model.
In one possible implementation, calculating the total significance value of each framed region using the target significance model can include:
first, for each framed region, calculating the significance value of each pixel in the framed region using the target significance model;
second, adding the significance values of all pixels in the framed region to obtain the total significance value of the framed region.
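Summing the per-pixel significance inside every framed region can be done naively, but when the crop box slides one pixel at a time an integral image (summed-area table) makes each region sum O(1) after one pass over the map. The integral image is an efficiency choice of this sketch, not something the patent specifies; names are illustrative:

```python
def integral_image(sal):
    """Summed-area table: integ[r][c] = sum of sal[0..r-1][0..c-1]."""
    h, w = len(sal), len(sal[0])
    integ = [[0.0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        for c in range(w):
            integ[r + 1][c + 1] = (sal[r][c] + integ[r][c + 1]
                                   + integ[r + 1][c] - integ[r][c])
    return integ

def region_sum(integ, top, left, box_h, box_w):
    """Total significance value of the framed region at (top, left)."""
    b, r = top + box_h, left + box_w
    return integ[b][r] - integ[top][r] - integ[b][left] + integ[top][left]

sal = [[1.0, 2.0],
       [3.0, 4.0]]
integ = integral_image(sal)
print(region_sum(integ, 0, 0, 2, 2))  # 10.0
print(region_sum(integ, 1, 1, 1, 1))  # 4.0
```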
In step 208, the framed region with the largest total significance value is selected.
In step 209, the selected framed region is cropped out.
That is, the framed region with the largest total significance value is cropped out.
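Steps 206 to 209 together pick the framed region whose total significance value is largest and cut it out. A minimal end-to-end sketch over a nested-list significance map (names illustrative; in a real pipeline the returned position would be used to crop the original image, not the map):

```python
def best_crop(sal, box_h, box_w):
    """Return ((top, left), cropped) for the box_h x box_w framed
    region with the largest total significance value."""
    h, w = len(sal), len(sal[0])
    best_pos, best_total = None, float("-inf")
    for top in range(h - box_h + 1):
        for left in range(w - box_w + 1):
            total = sum(sal[r][c]
                        for r in range(top, top + box_h)
                        for c in range(left, left + box_w))
            if total > best_total:
                best_pos, best_total = (top, left), total
    top, left = best_pos
    cropped = [row[left:left + box_w] for row in sal[top:top + box_h]]
    return best_pos, cropped

sal = [[0.0, 0.1, 0.0],
       [0.1, 0.9, 0.8],
       [0.0, 0.7, 0.9]]
pos, crop = best_crop(sal, 2, 2)
print(pos)  # (1, 1) -- the bottom-right 2x2 block is most significant
```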
For example, refer to Fig. 2E, a schematic diagram of cropping an image containing a face according to an exemplary embodiment. Fig. 2E contains a face T1. When the image is cropped according to the target significance model, the resulting cropped region (the region marked by the shaded part) contains both the face T1 and another object T2 of high significance.
It should be added that, in practice, the framed region to be cropped can also be selected in other ways. For example, for each framed region, the significance value of each of its pixels can be calculated, the number of pixels whose significance value exceeds a predetermined significance threshold can be determined, and that number can be taken as the effective number of the framed region; the framed region with the largest effective number is then selected and cropped out.
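The alternative selection rule just described (count, per framed region, the pixels whose significance value exceeds a predetermined threshold, and pick the region with the most) can be sketched as follows; names are illustrative:

```python
def effective_number(sal, top, left, box_h, box_w, threshold):
    """Number of pixels in the framed region at (top, left) whose
    significance value exceeds the predetermined threshold."""
    return sum(1
               for r in range(top, top + box_h)
               for c in range(left, left + box_w)
               if sal[r][c] > threshold)

sal = [[0.0, 0.2],
       [0.6, 0.9]]
print(effective_number(sal, 0, 0, 2, 2, 0.5))  # 2
```

Compared with summing raw significance, this rule is less sensitive to a single extremely bright pixel dominating the choice of region.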
In summary, the image cropping method provided in the embodiments of the present disclosure crops the image accurately by combining a color significance model and a face significance model. Because the influence of faces on the pixels of the image is taken into account during cropping, when the image contains a face, the cropped image retains the important face information while highlighting the salient objects. This solves the problem in the related art that image cropping based on color-significance analysis considers only the color information in the image and easily mis-crops images containing other important features, and achieves the effect that faces in the image are also cropped effectively.
In practice, when a large number of images need to be cropped with faces taken into account, some of the images may contain faces while others do not. Before each image is cropped, it is then necessary to judge whether a face exists in it: for an image without a face, cropping can be realized directly with the color significance model, while for an image containing a face, in order to avoid mis-cropping the face, cropping can be realized by combining the color significance model and the face significance model. The cropping process for each image may refer to the following description of Fig. 3A.
Fig. 3A is a flow chart of an image cropping method according to yet another exemplary embodiment. As shown in Fig. 3A, the image cropping method is applied to an electronic device and includes the following steps.
In step 301, whether a face exists in the image is detected.
In step 302, if the detection result is that a face exists in the image, a face significance model of the image is established.
Generally, the face significance model characterizes the influence of each superimposed face on the significance value of each pixel in the image. For example, when one face exists in the image, that face usually influences the significance value of every pixel in the image; when multiple faces exist, they influence the significance value of every pixel simultaneously. Typically, the significance value of each pixel within a face region is relatively high; pixels close to a face are influenced strongly by the face, while pixels far from a face are influenced only slightly.
In one possible implementation, the established face significance model is given by a formula (shown as an image in the original publication) in which: the significance value of the i-th pixel in the image is computed from Wp, the width of the minimum enclosing rectangle of the p-th face in the image; Hp, the length of that rectangle; the position of the center point of the face region corresponding to the p-th face; and (xi, yi), the coordinates of the i-th pixel.
In practice, the width WF of a face is the width of the minimum enclosing rectangle corresponding to that face, and the length HF of a face is the length of that rectangle.
In step 303, the pre-built color significance model and the face significance model are linearly superimposed to obtain a target significance model.
In one possible implementation, linearly superimposing the pre-built color significance model and the face significance model to obtain the target significance model can include: multiplying the color significance model by a first weight to obtain a first product; multiplying the face significance model by a second weight to obtain a second product; and adding the first product and the second product to obtain the target significance model, wherein the sum of the first weight and the second weight is 1.
The color significance model characterizes the influence of color on the significance value of each pixel in the image. In practice, a color significance model can be established in various ways, which can be implemented by those skilled in the art and is not described here again.
That is, the target significance model can be:
S(i) = w1 · S_color(i) + w2 · S_face(i)
where S(i) is the significance value of the i-th pixel in the image, S_color(i) is the color significance model, S_face(i) is the face significance model, w1 is the first weight, and w2 is the second weight, with w1 + w2 = 1.
In practice, the ratio of the first weight w1 to the second weight w2 can be set according to the actual situation. If the influences of color and of faces on pixel significance are to be weighed equally, both w1 and w2 can be set to 0.5. If the influence of faces on pixel significance needs to outweigh that of color, w2 can be set larger than w1; conversely, w1 can be set larger than w2.
In step 304, if the detection result is that no face exists in the image, the pre-built color significance model is determined to be the target significance model.
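Steps 302 to 304 choose the target model per image: a weighted blend of color and face significance when a face is detected, and the color significance alone otherwise. A sketch of that branch at the level of a single pixel (the face-detection result is passed in as a flag, since the patent leaves detection itself to standard techniques; names are illustrative):

```python
def target_saliency(color_val, face_val, has_face, w1=0.5, w2=0.5):
    """Per-pixel target significance: a w1/w2 blend when the image
    contains a face (step 303), the color value alone otherwise (step 304)."""
    if has_face:
        return w1 * color_val + w2 * face_val
    return color_val

print(target_saliency(0.5, 1.0, has_face=True))   # 0.75
print(target_saliency(0.5, 1.0, has_face=False))  # 0.5
```

This keeps the color-only path of the related art for face-free images while switching to the combined model exactly when a face would otherwise risk being cropped away.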
In step 305, the image is framed with a predetermined crop box to obtain at least one framed region.
The crop box here can have any predetermined shape and size. For example, the crop box can be a rectangular frame, a circular frame, an elliptical frame, a polygonal frame, or the like.
In practice, every possible framed region in the image needs to be framed in turn with the crop box. For example, the center point of the crop box can traverse every pixel in the image in turn, and at each pixel the region enclosed by the crop box is one framed region. Alternatively, in order to obtain only framed regions that lie completely inside the image, the center point of the crop box can traverse every qualifying pixel in turn, where a qualifying pixel is one whose distance to each edge of the image (left, right, top, and bottom) is greater than or equal to the distance from the center point of the crop box to the corresponding edge of the crop box.
Obviously, in practice the image can also be framed in other ways or in a different order, which is not described one by one here.
Within step 306, using target significance model, each overall significance value for confining region is calculated.
In a kind of possible implementation, using target significance model, each overall significance for confining region is calculated Value, can include:
First, for each confines region, calculated using target significance model and confine the aobvious of each pixel in region Work property value;
Second, the significance value that confines in region corresponding to each pixel is added, obtains confining the total notable of region Property value.
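The two sub-steps above (per-pixel saliency, then summation over the framed region) can be sketched as follows; this is an illustrative Python fragment in which the per-pixel saliency map is assumed to have already been computed by the target saliency model:

```python
def total_saliency(saliency_map, region):
    """Step 306: sum the per-pixel saliency values inside one framed region."""
    left, top, right, bottom = region
    return sum(saliency_map[y][x]
               for y in range(top, bottom)
               for x in range(left, right))

# Toy 4x2 per-pixel saliency map whose right half is clearly more salient.
saliency_map = [
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
]
assert abs(total_saliency(saliency_map, (2, 0, 4, 2)) - 3.6) < 1e-9
assert abs(total_saliency(saliency_map, (0, 0, 2, 2)) - 0.4) < 1e-9
```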
In step 307, the framed region with the largest total saliency value is selected.
In step 308, the selected framed region is cropped out.
That is, the framed region with the largest total saliency value is cropped out.
For example, when there is no face in the image, refer to Fig. 3B, which illustrates, according to an exemplary embodiment, a schematic diagram of cropping an image that contains no face. When the image is cropped according to the target saliency model, the resulting cropped region contains only the highly salient object T2.
For another example, still referring to Fig. 2E, the image in Fig. 2E contains a face T1. When the image is cropped according to the target saliency model, the resulting cropped region contains both the face T1 and the other highly salient object T2.
It should be added that, in practical applications, the framed region to be cropped may also be selected in other manners. For example, for each framed region, the saliency value of each pixel in the framed region may be calculated, the number of pixels whose saliency value is greater than a predetermined saliency threshold may be determined, and this number may be taken as the effective number of the framed region; the framed region with the largest effective number is then selected and cropped out.
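As a hedged sketch of this alternative selection rule (illustrative names, toy data, and an arbitrary example threshold — none of them fixed by the patent text):

```python
def effective_number(saliency_map, region, threshold):
    """Count the pixels in a framed region whose saliency exceeds the
    predetermined saliency threshold (the region's 'effective number')."""
    left, top, right, bottom = region
    return sum(1
               for y in range(top, bottom)
               for x in range(left, right)
               if saliency_map[y][x] > threshold)

saliency_map = [
    [0.2, 0.8, 0.8],
    [0.2, 0.8, 0.2],
]
regions = [(0, 0, 2, 2), (1, 0, 3, 2)]
# Select the framed region with the largest effective number.
best = max(regions, key=lambda r: effective_number(saliency_map, r, threshold=0.5))
assert best == (1, 0, 3, 2)
```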
In summary, the image cropping method provided in the embodiments of the present disclosure crops the image accurately by combining the color saliency model and the face saliency model. Since the influence of faces on the pixels of the image is taken into account during cropping, when the image contains a face, the important face information is retained in the cropped result together with the salient objects in the image. This solves the problem in the related art that image cropping based only on color saliency analysis considers nothing but the color information in the image and therefore easily mis-crops images containing other important features, and achieves the effect that faces in the image are also cropped effectively.
The following are device embodiments of the present disclosure, which may be used to perform the method embodiments of the present disclosure. For details not disclosed in the device embodiments of the present disclosure, refer to the method embodiments of the present disclosure.
Fig. 4 is a block diagram of an image cropping device according to an exemplary embodiment. As shown in Fig. 4, the image cropping device is applied in an electronic device and includes, but is not limited to: an establishing module 402, a superposition module 404 and a cropping module 406.
The establishing module 402 is configured to establish a face saliency model of an image, the face saliency model being used to characterize the influence of the superposition of each face in the image on the saliency value of each pixel in the image.
The superposition module 404 is configured to linearly superpose a pre-built color saliency model and the face saliency model established by the establishing module 402 to obtain a target saliency model.
The cropping module 406 is configured to crop the image using the target saliency model obtained by the superposition module 404.
In summary, the image cropping device provided in the embodiments of the present disclosure crops the image accurately by combining the color saliency model and the face saliency model. Since the influence of faces on the pixels of the image is taken into account during cropping, when the image contains a face, the important face information is retained in the cropped result together with the salient objects in the image. This solves the problem in the related art that image cropping based only on color saliency analysis considers nothing but the color information in the image and therefore easily mis-crops images containing other important features, and achieves the effect that faces in the image are also cropped effectively.
Fig. 5 is a block diagram of an image cropping device according to another exemplary embodiment. As shown in Fig. 5, the image cropping device is applied in an electronic device and may include, but is not limited to: an establishing module 502, a superposition module 504 and a cropping module 506.
The establishing module 502 is configured to establish a face saliency model of an image, the face saliency model being used to characterize the influence of the superposition of each face in the image on the saliency value of each pixel in the image.
The superposition module 504 is configured to linearly superpose a pre-built color saliency model and the face saliency model established by the establishing module 502 to obtain a target saliency model.
The cropping module 506 is configured to crop the image using the target saliency model obtained by the superposition module 504.
In a first possible implementation of the embodiment shown in Fig. 5, the face saliency model is:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[W_{F_p} \cdot H_{F_p} \cdot \exp\left(-\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2}\right)\right],$$

where $\mathrm{FaceSaliency}(x_i, y_i)$ is the saliency value of the $i$-th pixel in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face in the image, $H_{F_p}$ is the length of the minimum bounding rectangle of the $p$-th face in the image, $(x_{F_p}, y_{F_p})$ is the position of the center point of the face region corresponding to the $p$-th face in the image, and $\sigma_{x_p}$, $\sigma_{y_p}$ are the corresponding horizontal and vertical standard deviations.
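A direct, illustrative transcription of this face saliency model into Python follows. Since the text here does not fix the standard deviations, they are assumed, for this example only, to be half the bounding-rectangle width and length:

```python
import math

def face_saliency(x, y, faces):
    """Evaluate the face saliency model at pixel (x, y).
    faces: list of (W, H, cx, cy) tuples — width and length of each face's
    minimum bounding rectangle and the center point of its face region."""
    total = 0.0
    for W, H, cx, cy in faces:
        sx, sy = W / 2.0, H / 2.0   # assumption: sigma tied to face size
        total += W * H * math.exp(-(x - cx) ** 2 / (2 * sx ** 2)
                                  - (y - cy) ** 2 / (2 * sy ** 2))
    return total

faces = [(40, 40, 100, 100)]
# Saliency peaks at the face center and decays away from it.
assert face_saliency(100, 100, faces) > face_saliency(160, 100, faces)
```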
In a second possible implementation of the embodiment shown in Fig. 5, the superposition module 504 may include: a first multiplying unit 504a, a second multiplying unit 504b and an adding unit 504c.
The first multiplying unit 504a is configured to multiply the color saliency model by a first weight to obtain a first product.
The second multiplying unit 504b is configured to multiply the face saliency model by a second weight to obtain a second product.
The adding unit 504c is configured to add the first product obtained by the first multiplying unit 504a and the second product obtained by the second multiplying unit 504b to obtain the target saliency model.
The sum of the first weight and the second weight is 1.
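The weighted linear superposition performed by units 504a to 504c can be sketched as follows (illustrative Python; the weight values 0.4 and 0.6 are arbitrary examples, constrained only to sum to 1):

```python
def superpose(color_map, face_map, w_color=0.4, w_face=0.6):
    """Per-pixel weighted sum of the color and face saliency maps:
    target = w_color * color + w_face * face, with w_color + w_face == 1."""
    assert abs(w_color + w_face - 1.0) < 1e-9
    return [[w_color * c + w_face * f
             for c, f in zip(c_row, f_row)]
            for c_row, f_row in zip(color_map, face_map)]

color_map = [[1.0, 0.0]]
face_map = [[0.0, 1.0]]
target = superpose(color_map, face_map)
assert target == [[0.4, 0.6]]
```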
In a third possible implementation of the embodiment shown in Fig. 5, the cropping module 506 may include: a framing unit 506a, a calculating unit 506b, a selecting unit 506c and a cropping unit 506d.
The framing unit 506a is configured to frame the image using a predetermined crop box to obtain at least one framed region.
The calculating unit 506b is configured to calculate the total saliency value of each framed region using the target saliency model obtained by the superposition module 504.
The selecting unit 506c is configured to select the framed region with the largest total saliency value.
The cropping unit 506d is configured to crop out the selected framed region.
In a fourth possible implementation of the embodiment shown in Fig. 5, the calculating unit 506b may include: a calculating subunit 506b1 and an adding subunit 506b2.
The calculating subunit 506b1 is configured to, for each framed region, calculate the saliency value of each pixel in the framed region using the target saliency model.
The adding subunit 506b2 is configured to add up the saliency values corresponding to the pixels in the framed region to obtain the total saliency value of the framed region.
In a fifth possible implementation of the embodiment shown in Fig. 5, the image cropping device may further include: a detection module 508.
The detection module 508 is configured to detect whether there is a face in the image.
The establishing module 502 is further configured to establish the face saliency model of the image when the detection result of the detection module 508 is that there is a face in the image.
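How the detection result gates the establishment of the face saliency model (step 304 of the method) might be sketched as follows; `build_face_map` and `superpose` are illustrative placeholders for the modules described above, not names from the patent:

```python
def build_target_model(color_map, face_boxes, build_face_map, superpose):
    """With no detected face, the pre-built color saliency model alone is
    the target model; otherwise the face model is built and superposed."""
    if not face_boxes:                     # no face detected
        return color_map
    face_map = build_face_map(face_boxes)  # establish face saliency model
    return superpose(color_map, face_map)  # linear superposition

color_map = [[0.5, 0.5]]
no_face_target = build_target_model(color_map, [], None, None)
assert no_face_target is color_map  # color model reused unchanged when no face
```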
In summary, the image cropping device provided in the embodiments of the present disclosure crops the image accurately by combining the color saliency model and the face saliency model. Since the influence of faces on the pixels of the image is taken into account during cropping, when the image contains a face, the important face information is retained in the cropped result together with the salient objects in the image. This solves the problem in the related art that image cropping based only on color saliency analysis considers nothing but the color information in the image and therefore easily mis-crops images containing other important features, and achieves the effect that faces in the image are also cropped effectively.
With regard to the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related methods, and will not be elaborated here.
Fig. 6 is a block diagram of an image cropping device 600 according to another exemplary embodiment. For example, the device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 6, the device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 typically controls the overall operations of the device 600, such as operations associated with display, telephone calls, data communications, camera operations and recording operations. The processing component 602 may include one or more processors 618 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 602 may include one or more modules to facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support the operation of the device 600. Examples of such data include instructions for any application or method operated on the device 600, contact data, phonebook data, messages, pictures, video, and so on. The memory 604 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power component 606 provides power for the various components of the device 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 600.
The multimedia component 608 includes a screen providing an output interface between the device 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the device 600 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC), which is configured to receive external audio signals when the device 600 is in an operation mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signal may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 also includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button and a lock button.
The sensor component 614 includes one or more sensors to provide status assessments of various aspects of the device 600. For example, the sensor component 614 may detect the open/closed status of the device 600 and the relative positioning of components, for example the display and the keypad of the device 600; the sensor component 614 may also detect a change in position of the device 600 or a component of the device 600, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a change in temperature of the device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the device 600 and other devices. The device 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the device 600 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components for performing the above method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, which may be executed by the processor 618 of the device 600 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device and the like.
Other embodiments of the invention will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the invention that follow the general principles of the invention and include common knowledge or conventional techniques in the art not disclosed in this disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.
It should be understood that the invention is not limited to the precise structure described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.

Claims (11)

1. An image cropping method, characterized by comprising:
establishing a face saliency model of an image, the face saliency model being used to characterize the influence of the superposition of each face in the image on the saliency value of each pixel in the image;
linearly superposing a pre-built color saliency model and the face saliency model to obtain a target saliency model; and
cropping the image using the target saliency model;
wherein the linearly superposing the pre-built color saliency model and the face saliency model to obtain the target saliency model comprises:
multiplying the color saliency model by a first weight to obtain a first product;
multiplying the face saliency model by a second weight to obtain a second product; and
adding the first product and the second product to obtain the target saliency model;
wherein the sum of the first weight and the second weight is 1.
2. The method according to claim 1, characterized in that the face saliency model is:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[W_{F_p} \cdot H_{F_p} \cdot \exp\left(-\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2}\right)\right],$$

wherein $\mathrm{FaceSaliency}(x_i, y_i)$ is the saliency value of the $i$-th pixel in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face in the image, $H_{F_p}$ is the length of the minimum bounding rectangle of the $p$-th face in the image, $(x_{F_p}, y_{F_p})$ is the position of the center point of the face region corresponding to the $p$-th face in the image, and $\sigma_{x_p}$, $\sigma_{y_p}$ are the corresponding horizontal and vertical standard deviations.
3. The method according to claim 1, characterized in that the cropping the image using the target saliency model comprises:
framing the image using a predetermined crop box to obtain at least one framed region;
calculating the total saliency value of each framed region using the target saliency model;
selecting the framed region with the largest total saliency value; and
cropping out the selected framed region.
4. The method according to claim 3, characterized in that the calculating the total saliency value of each framed region using the target saliency model comprises:
for each framed region, calculating the saliency value of each pixel in the framed region using the target saliency model; and
adding up the saliency values corresponding to the pixels in the framed region to obtain the total saliency value of the framed region.
5. The method according to any one of claims 1 to 4, characterized by further comprising:
detecting whether there is a face in the image; and
if the detection result is that there is a face in the image, performing the step of establishing the face saliency model of the image.
6. An image cropping device, characterized by comprising:
an establishing module, configured to establish a face saliency model of an image, the face saliency model being used to characterize the influence of the superposition of each face in the image on the saliency value of each pixel in the image;
a superposition module, configured to linearly superpose a pre-built color saliency model and the face saliency model established by the establishing module to obtain a target saliency model; and
a cropping module, configured to crop the image using the target saliency model obtained by the superposition module;
wherein the superposition module comprises:
a first multiplying unit, configured to multiply the color saliency model by a first weight to obtain a first product;
a second multiplying unit, configured to multiply the face saliency model by a second weight to obtain a second product; and
an adding unit, configured to add the first product obtained by the first multiplying unit and the second product obtained by the second multiplying unit to obtain the target saliency model;
wherein the sum of the first weight and the second weight is 1.
7. The device according to claim 6, characterized in that the face saliency model is:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[W_{F_p} \cdot H_{F_p} \cdot \exp\left(-\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2}\right)\right],$$

wherein $\mathrm{FaceSaliency}(x_i, y_i)$ is the saliency value of the $i$-th pixel in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face in the image, $H_{F_p}$ is the length of the minimum bounding rectangle of the $p$-th face in the image, $(x_{F_p}, y_{F_p})$ is the position of the center point of the face region corresponding to the $p$-th face in the image, and $\sigma_{x_p}$, $\sigma_{y_p}$ are the corresponding horizontal and vertical standard deviations.
8. The device according to claim 6, characterized in that the cropping module comprises:
a framing unit, configured to frame the image using a predetermined crop box to obtain at least one framed region;
a calculating unit, configured to calculate the total saliency value of each framed region using the target saliency model obtained by the superposition module;
a selecting unit, configured to select the framed region with the largest total saliency value; and
a cropping unit, configured to crop out the selected framed region.
9. The device according to claim 8, characterized in that the calculating unit comprises:
a calculating subunit, configured to, for each framed region, calculate the saliency value of each pixel in the framed region using the target saliency model; and
an adding subunit, configured to add up the saliency values corresponding to the pixels in the framed region to obtain the total saliency value of the framed region.
10. The device according to any one of claims 6 to 9, characterized by further comprising:
a detection module, configured to detect whether there is a face in the image;
the establishing module being further configured to establish the face saliency model of the image when the detection result of the detection module is that there is a face in the image.
11. An image cropping device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
establish a face saliency model of an image, the face saliency model being used to characterize the influence of the superposition of each face in the image on the saliency value of each pixel in the image;
linearly superpose a pre-built color saliency model and the face saliency model to obtain a target saliency model; and
crop the image using the target saliency model;
wherein the linearly superposing the pre-built color saliency model and the face saliency model to obtain the target saliency model comprises:
multiplying the color saliency model by a first weight to obtain a first product;
multiplying the face saliency model by a second weight to obtain a second product; and
adding the first product and the second product to obtain the target saliency model;
wherein the sum of the first weight and the second weight is 1.
CN201410178276.9A 2014-04-29 2014-04-29 Image cropping method and device CN103996186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410178276.9A CN103996186B (en) 2014-04-29 2014-04-29 Image cropping method and device


Publications (2)

Publication Number Publication Date
CN103996186A CN103996186A (en) 2014-08-20
CN103996186B true CN103996186B (en) 2017-03-15

Family

ID=51310342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410178276.9A CN103996186B (en) 2014-04-29 2014-04-29 Image cropping method and device

Country Status (1)

Country Link
CN (1) CN103996186B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105991920A (en) * 2015-02-09 2016-10-05 钱仰德 Method of using image cutting to make mobile phone capturing frame automatically track object
CN105989572B (en) * 2015-02-10 2020-04-24 腾讯科技(深圳)有限公司 Picture processing method and device
CN105528786B (en) * 2015-12-04 2019-10-01 小米科技有限责任公司 Image processing method and device
CN105761205B (en) * 2016-03-17 2018-12-11 网易有道信息技术(北京)有限公司 A kind of picture put-on method and device
CN107146198A (en) * 2017-04-19 2017-09-08 中国电子科技集团公司电子科学研究院 A kind of intelligent method of cutting out of photo and device
CN107463914A (en) * 2017-08-11 2017-12-12 环球智达科技(北京)有限公司 Image cutting method
CN108062755B (en) * 2017-11-02 2020-10-02 广东数相智能科技有限公司 Intelligent picture clipping method and device
WO2019109268A1 (en) * 2017-12-06 2019-06-13 中国科学院自动化研究所 Method and device for automatically cropping picture based on reinforcement learning
CN108563982B (en) * 2018-01-05 2020-01-17 百度在线网络技术(北京)有限公司 Method and apparatus for detecting image
CN108776970B (en) * 2018-06-12 2021-01-12 北京字节跳动网络技术有限公司 Image processing method and device
CN110136142A (en) * 2019-04-26 2019-08-16 微梦创科网络科技(中国)有限公司 A kind of image cropping method, apparatus, electronic equipment
CN110223306A (en) * 2019-06-14 2019-09-10 北京奇艺世纪科技有限公司 A kind of method of cutting out and device of image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426704A (en) * 2011-10-28 2012-04-25 清华大学深圳研究生院 Quick detection method for salient object
US8363984B1 (en) * 2010-07-13 2013-01-29 Google Inc. Method and system for automatically cropping images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7565030B2 (en) * 2003-06-26 2009-07-21 Fotonation Vision Limited Detecting orientation of digital images using face detection information
US8693780B2 (en) * 2009-06-22 2014-04-08 Technion Research & Development Foundation Limited Automated collage formation from photographic images


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Moran Cerf et al., "Predicting human gaze using low-level saliency combined with face detection", Advances in Neural Information Processing Systems, 2007, p. 4. *
Xiaohui Li et al., "Saliency Detection via Dense and Sparse Reconstruction", ICCV 2013, pp. 2976-2983. *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant