CN103996186A - Image cutting method and image cutting device - Google Patents

Image cutting method and image cutting device

Info

Publication number
CN103996186A
CN103996186A (application CN201410178276.9A; granted publication CN103996186B)
Authority
CN
China
Prior art keywords
image
face
saliency
saliency model
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410178276.9A
Other languages
Chinese (zh)
Other versions
CN103996186B (en)
Inventor
王琳
秦秋平
陈志军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201410178276.9A priority Critical patent/CN103996186B/en
Publication of CN103996186A publication Critical patent/CN103996186A/en
Application granted granted Critical
Publication of CN103996186B publication Critical patent/CN103996186B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an image cropping method and an image cropping device, which belong to the field of image processing. The image cropping method comprises the following steps: a face saliency model of an image is established, wherein the face saliency model characterizes the combined influence of the faces in the image on the saliency values of the pixels in the image; a pre-established color saliency model and the face saliency model are linearly superposed to obtain a target saliency model; and the image is cropped by using the target saliency model. The image is cropped accurately by combining the color saliency model and the face saliency model. This solves the problem in the related art that image cropping based on color saliency analysis considers only the color information of an image and easily mis-crops an image containing other important features. Faces in an image are thus cropped effectively when the image is cropped.

Description

Image cropping method and device
Technical field
The present disclosure relates to the field of image processing, and in particular to an image cropping method and device.
Background
An image conventionally contains some redundant information, which occupies part of the image's storage; to reduce the space taken up by this redundant information, the image usually needs to be cropped.
In a related image cropping process, a color saliency model of the original image is first established, the color saliency value of each pixel in the original image is determined according to this model, and a color saliency map of the original image is obtained from these values. A specified rectangular frame is then slid over the color saliency map, and the framed region containing the largest saliency is selected. Finally, the original image is cropped so as to cut out this framed region.
In the course of the present disclosure, the inventors found that the related art has at least the following defect: image cropping based on color saliency analysis considers only the color information of the image, and thus easily mis-crops an image that contains other important features, such as faces or specified objects.
Summary of the invention
To solve the problem in the related art that image cropping based on color saliency analysis considers only the color information of the image and easily mis-crops an image containing other important features, the present disclosure provides an image cropping method and device. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, an image cropping method is provided, comprising:
establishing a face saliency model of an image, the face saliency model characterizing the combined influence of the faces in the image on the saliency value of each pixel in the image;
linearly superposing a pre-established color saliency model and the face saliency model to obtain a target saliency model;
cropping the image using the target saliency model.
Optionally, the face saliency model is:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[ W_{F_p} \cdot H_{F_p} \cdot \exp\left( -\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2} \right) \right],$$

where $\mathrm{FaceSaliency}(x_i, y_i)$ is the face saliency value of the $i$-th pixel in the image, $n$ is the number of faces in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face, $H_{F_p}$ is the height of the minimum bounding rectangle of the $p$-th face, $(x_{F_p}, y_{F_p})$ is the position of the center of the region of the $p$-th face, $\sigma_{x_p} = W_{F_p}/4$, $\sigma_{y_p} = H_{F_p}/4$, and $(x_i, y_i)$ are the coordinates of the $i$-th pixel.
Optionally, linearly superposing the pre-established color saliency model and the face saliency model to obtain the target saliency model comprises:
multiplying the color saliency model by a first weight to obtain a first product;
multiplying the face saliency model by a second weight to obtain a second product;
adding the first product and the second product to obtain the target saliency model;
wherein the first weight and the second weight sum to 1.
Optionally, cropping the image using the target saliency model comprises:
framing the image with a predetermined crop box to obtain at least one framed region;
calculating the total saliency value of each framed region using the target saliency model;
selecting the framed region with the largest total saliency value;
cropping out the selected framed region.
Optionally, calculating the total saliency value of each framed region using the target saliency model comprises:
for each framed region, calculating the saliency value of each pixel in the framed region using the target saliency model;
summing the saliency values of the pixels in the framed region to obtain the total saliency value of the framed region.
Optionally, the method further comprises:
detecting whether a face is present in the image;
if the detection result is that a face is present in the image, performing the step of establishing the face saliency model of the image.
According to a second aspect of the embodiments of the present disclosure, an image cropping device is provided, comprising:
an establishing module, configured to establish a face saliency model of an image, the face saliency model characterizing the combined influence of the faces in the image on the saliency value of each pixel in the image;
a superposing module, configured to linearly superpose a pre-established color saliency model and the face saliency model established by the establishing module to obtain a target saliency model;
a cropping module, configured to crop the image using the target saliency model obtained by the superposing module.
Optionally, the face saliency model is:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[ W_{F_p} \cdot H_{F_p} \cdot \exp\left( -\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2} \right) \right],$$

where $\mathrm{FaceSaliency}(x_i, y_i)$ is the face saliency value of the $i$-th pixel in the image, $n$ is the number of faces in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face, $H_{F_p}$ is the height of the minimum bounding rectangle of the $p$-th face, $(x_{F_p}, y_{F_p})$ is the position of the center of the region of the $p$-th face, $\sigma_{x_p} = W_{F_p}/4$, $\sigma_{y_p} = H_{F_p}/4$, and $(x_i, y_i)$ are the coordinates of the $i$-th pixel.
Optionally, the superposing module comprises:
a first multiplying unit, configured to multiply the color saliency model by a first weight to obtain a first product;
a second multiplying unit, configured to multiply the face saliency model by a second weight to obtain a second product;
an adding unit, configured to add the first product obtained by the first multiplying unit and the second product obtained by the second multiplying unit to obtain the target saliency model;
wherein the first weight and the second weight sum to 1.
Optionally, the cropping module comprises:
a framing unit, configured to frame the image with a predetermined crop box to obtain at least one framed region;
a calculating unit, configured to calculate the total saliency value of each framed region using the target saliency model obtained by the superposing module;
a selecting unit, configured to select the framed region with the largest total saliency value;
a cropping unit, configured to crop out the selected framed region.
Optionally, the calculating unit comprises:
a calculating subunit, configured to calculate, for each framed region, the saliency value of each pixel in the framed region using the target saliency model;
an adding subunit, configured to sum the saliency values of the pixels in the framed region to obtain the total saliency value of the framed region.
Optionally, the device further comprises:
a detection module, configured to detect whether a face is present in the image;
the establishing module being further configured to establish the face saliency model of the image when the detection result of the detection module is that a face is present in the image.
According to a third aspect of the embodiments of the present disclosure, an image cropping device is provided, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
establish a face saliency model of an image, the face saliency model characterizing the combined influence of the faces in the image on the saliency value of each pixel in the image;
linearly superpose a pre-established color saliency model and the face saliency model to obtain a target saliency model;
crop the image using the target saliency model.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
The image is cropped accurately by combining the color saliency model and the face saliency model. Because the influence of faces on the pixels of the image is taken into account during cropping, the result of cropping an image that contains faces includes not only the salient objects of the image but also the important face information. This solves the problem in the related art that image cropping based on color saliency analysis considers only the color information of the image and easily mis-crops images containing other important features, and achieves the effect that the faces in an image are also cropped effectively.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the specification, serve to explain the principles of the invention.
Fig. 1 is a flowchart of an image cropping method according to an exemplary embodiment;
Fig. 2A is a flowchart of an image cropping method according to another exemplary embodiment;
Fig. 2B is a schematic diagram of an image containing a face, according to an exemplary embodiment;
Fig. 2C is a schematic diagram of framing an image with a rectangular frame, according to an exemplary embodiment;
Fig. 2D is a schematic diagram of repeatedly framing an image with a rectangular frame, according to an exemplary embodiment;
Fig. 2E is a schematic diagram of cropping an image containing a face, according to an exemplary embodiment;
Fig. 3A is a flowchart of an image cropping method according to yet another exemplary embodiment;
Fig. 3B is a schematic diagram of cropping an image containing no face, according to an exemplary embodiment;
Fig. 4 is a block diagram of an image cropping device according to an exemplary embodiment;
Fig. 5 is a block diagram of an image cropping device according to another exemplary embodiment;
Fig. 6 is a block diagram of an image cropping device according to yet another exemplary embodiment.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless indicated otherwise. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of devices and methods consistent with some aspects of the invention as detailed in the appended claims.
The "electronic equipment" mentioned herein can be a smartphone, a tablet computer, a smart TV, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, and the like.
Fig. 1 is a flowchart of an image cropping method according to an exemplary embodiment. As shown in Fig. 1, the image cropping method is applied in electronic equipment and comprises the following steps.
In step 101, a face saliency model of an image is established; the face saliency model characterizes the combined influence of the faces in the image on the saliency value of each pixel in the image.
In step 102, a pre-established color saliency model and the face saliency model are linearly superposed to obtain a target saliency model.
In step 103, the image is cropped using the target saliency model.
In summary, in the image cropping method provided by this embodiment of the present disclosure, the image is cropped accurately by combining the color saliency model and the face saliency model. Because the influence of faces on the pixels of the image is taken into account during cropping, the result of cropping an image that contains faces includes not only the salient objects of the image but also the important face information. This solves the problem in the related art that image cropping based on color saliency analysis considers only the color information of the image and easily mis-crops images containing other important features, and achieves the effect that the faces in an image are also cropped effectively.
Fig. 2A is a flowchart of an image cropping method according to another exemplary embodiment. As shown in Fig. 2A, the image cropping method is applied in electronic equipment and comprises the following steps.
In step 201, it is detected whether a face is present in the image.
In some application scenarios, when the information the user needs is obtained from an image, if a face is present in the image, the face information in the picture usually also needs to be known; therefore, before the image is cropped, it is first detected whether the image contains a face.
In practice, whether a face is present in the image can be detected by face recognition technology; such detection is well within the ability of those skilled in the art and is not described further here.
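By way of illustration only, the detection step might look like the following sketch, which assumes OpenCV's bundled Haar cascade as the face detector; the specific detector is an assumption, since the method only requires a list of face bounding rectangles:

```python
# Sketch only: any face detector that returns minimum bounding rectangles works.
import cv2

def detect_faces(image_bgr):
    """Return a list of (x, y, w, h) face rectangles; empty list if no face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(f) for f in faces]
```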
In step 202, if the detection result is that a face is present in the image, a face saliency model of the image is established.
Generally, the face saliency model characterizes the combined influence of the faces in the image on the saliency value of each pixel. For example, when one face is present in the image, it usually influences the saliency value of every pixel in the image; likewise, when several faces are present, they influence the saliency values of the pixels in the image simultaneously. By way of example, the saliency values of the pixels inside a face are usually high, pixels closer to a face are influenced more strongly by it, and pixels farther from a face are influenced less.
In one possible implementation, the established face saliency model can be:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[ W_{F_p} \cdot H_{F_p} \cdot \exp\left( -\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2} \right) \right],$$

where $\mathrm{FaceSaliency}(x_i, y_i)$ is the face saliency value of the $i$-th pixel in the image, $n$ is the number of faces in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face, $H_{F_p}$ is the height of the minimum bounding rectangle of the $p$-th face, $(x_{F_p}, y_{F_p})$ is the position of the center of the region of the $p$-th face, $\sigma_{x_p} = W_{F_p}/4$, $\sigma_{y_p} = H_{F_p}/4$, and $(x_i, y_i)$ are the coordinates of the $i$-th pixel.
In practice, the width $W_F$ of a face is the width of the face's minimum bounding rectangle, and the height $H_F$ of a face is the height of that rectangle. Refer to Fig. 2B, a schematic diagram of an image containing a face according to an exemplary embodiment: the image contains a face T1 whose minimum bounding rectangle b has height h1 and width w1; h1 can then be taken as the height of the face T1, and w1 as its width.
The face saliency value of any pixel in the image can be calculated from the above face saliency model.
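For concreteness, a minimal NumPy sketch of this model follows; the function name and the (x, y, w, h) rectangle convention are illustrative assumptions, while the formula itself is the one given above:

```python
import numpy as np

def face_saliency_map(height, width, faces):
    """FaceSaliency(x_i, y_i) evaluated at every pixel of a height x width image.

    faces: list of (x, y, w, h) minimum bounding rectangles of the detected
    faces, (x, y) being the top-left corner of each rectangle.
    """
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    saliency = np.zeros((height, width), dtype=np.float64)
    for x, y, w, h in faces:
        cx, cy = x + w / 2.0, y + h / 2.0   # face-region center (x_Fp, y_Fp)
        sx, sy = w / 4.0, h / 4.0           # sigma_xp = W_Fp / 4, sigma_yp = H_Fp / 4
        saliency += w * h * np.exp(-(xs - cx) ** 2 / (2 * sx ** 2)
                                   - (ys - cy) ** 2 / (2 * sy ** 2))
    return saliency
```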
In step 203, the color saliency model is multiplied by a first weight to obtain a first product.
The color saliency model characterizes the influence of the colors in the image on the saliency value of each pixel.
In practice, the color saliency model can be established in various ways that are well within the ability of those skilled in the art, and it is not described further here.
In step 204, the face saliency model is multiplied by a second weight to obtain a second product.
In step 205, the first product and the second product are added to obtain the target saliency model.
That is, the target saliency model can be:

$$\mathrm{Saliency}(x_i, y_i) = w_1 \cdot \mathrm{ColorSaliency}(x_i, y_i) + w_2 \cdot \mathrm{FaceSaliency}(x_i, y_i),$$

where $\mathrm{Saliency}(x_i, y_i)$ is the saliency value of the $i$-th pixel in the image, $\mathrm{ColorSaliency}(x_i, y_i)$ is the color saliency model, $\mathrm{FaceSaliency}(x_i, y_i)$ is the face saliency model, $w_1$ is the first weight, $w_2$ is the second weight, and $w_1 + w_2 = 1$.
In practice, the ratio of the first weight $w_1$ to the second weight $w_2$ can be set according to the actual situation. If the influence of color and the influence of faces on pixel saliency are to be weighed equally, both $w_1$ and $w_2$ can be set to 0.5; if the influence of faces on the pixels of the image is to be weighed more heavily than that of color, $w_2$ can be set larger than $w_1$; otherwise, $w_1$ can be set larger than $w_2$.
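As a short sketch of steps 203 to 205 (assuming the two saliency maps have already been computed on comparable scales; the method does not specify any normalization):

```python
def target_saliency_map(color_saliency, face_saliency, w1=0.5, w2=0.5):
    """Saliency = w1 * ColorSaliency + w2 * FaceSaliency, with w1 + w2 = 1."""
    assert abs(w1 + w2 - 1.0) < 1e-9, "the two weights must sum to 1"
    return w1 * color_saliency + w2 * face_saliency
```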
In step 206, the image is framed with a predetermined crop box, obtaining at least one framed region.
The crop box here can be of any predetermined shape and size; for example, it can be a rectangular frame, a circular frame, an elliptical frame, a polygonal frame, and so on.
Refer to Fig. 2C, a schematic diagram of framing an image with a rectangular frame according to an exemplary embodiment: in Fig. 2C, the image is framed with a predetermined rectangular frame c1, and the framed region enclosed by c1 is c2.
In practice, the crop box needs to be used to frame, in turn, every possible framed region in the image. For example, the center of the crop box can be made to traverse every pixel of the image; with the center at each pixel, the region enclosed by the crop box is one framed region. Alternatively, in order to obtain only the complete framed regions fully determined by the crop box, the center of the crop box can traverse only the qualifying pixels, namely those whose distance to the left edge of the image is greater than or equal to the distance from the center of the crop box to its left edge, whose distance to the right edge of the image is greater than or equal to the distance from the center of the crop box to its right edge, whose distance to the top edge of the image is greater than or equal to the distance from the center of the crop box to its top edge, and whose distance to the bottom edge of the image is greater than or equal to the distance from the center of the crop box to its bottom edge.
By way of example, taking a rectangular frame, refer to Fig. 2D, a schematic diagram of repeatedly framing an image with a rectangular frame according to an exemplary embodiment. The rectangular frame R has height h2 and width w2. For the first framing, the frame R starts from the upper-left corner of the image, yielding a framed region m1; that is, the center D of the frame R is first placed at the pixel p1 whose coordinates are (h2/2, w2/2). For the second framing, the frame R is moved one column to the right, i.e. its center D is moved from the pixel p1 to the adjacent pixel p2 in the same row, and the region framed by R is then m2. Within a row, each new framing moves the center D one pixel to the right, until the right edge of the frame R coincides with the rightmost edge of the image; the center D is then moved to the pixel in the next row adjacent to p1, framing continues along that row, and so on.
Obviously, when only the complete framed regions determined by the crop box are wanted, if the length of the first-class sides of the crop box (one pair of opposite sides being the first-class sides and the other pair the second-class sides) equals the length of the corresponding class of sides of the image, the frame only needs to be moved along the direction in which the image's second-class sides extend; that is, the center of the crop box only needs to be moved backward, step by step, along that direction.
Obviously, in practice the image can also be framed in other manners or orders, which are not described one by one here. A traversal sketch follows.
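For a rectangular crop box, enumerating top-left corners is equivalent to the center-point traversal of complete regions described above; a minimal sketch, with illustrative names:

```python
def framed_regions(img_h, img_w, box_h, box_w):
    """Yield (top, left) for every position where the crop box lies fully
    inside the image; each position defines one framed region."""
    for top in range(img_h - box_h + 1):
        for left in range(img_w - box_w + 1):
            yield top, left
```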
In step 207, the total saliency value of each framed region is calculated using the target saliency model.
In one possible implementation, calculating the total saliency value of each framed region using the target saliency model can comprise:
first, for each framed region, calculating the saliency value of each pixel in the framed region using the target saliency model;
second, summing the saliency values of the pixels in the framed region to obtain the total saliency value of the framed region.
In step 208, the framed region with the largest total saliency value is selected.
In step 209, the selected framed region is cropped out.
That is, the framed region with the largest total saliency value is cropped out.
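Steps 207 to 209 might be sketched as follows; the summed-area table is merely an implementation convenience for computing each region's total saliency in constant time, not part of the described method:

```python
import numpy as np

def crop_max_saliency(image, saliency, box_h, box_w):
    """Crop out the framed region whose total saliency value is largest."""
    # Summed-area table: integral[i, j] = sum of saliency[:i, :j].
    integral = np.pad(saliency, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    best_total, best_pos = -np.inf, (0, 0)
    for top, left in framed_regions(*saliency.shape, box_h, box_w):
        total = (integral[top + box_h, left + box_w] - integral[top, left + box_w]
                 - integral[top + box_h, left] + integral[top, left])
        if total > best_total:            # step 208: keep the maximum region
            best_total, best_pos = total, (top, left)
    top, left = best_pos                  # step 209: crop it out
    return image[top:top + box_h, left:left + box_w]
```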
By way of example, refer to Fig. 2E, a schematic diagram of cropping an image containing a face according to an exemplary embodiment. Fig. 2E contains a face T1; when the image is cropped according to the target saliency model, the resulting cropped region (the region marked by the shaded portion) contains both the face T1 and another highly salient object T2.
It should be added that, in practice, the framed region to be cropped can also be selected in other manners. By way of example, for each framed region, the saliency value of each pixel in the region can be calculated, and the number of pixels whose saliency value exceeds a predetermined saliency threshold can be determined and taken as the effective count of the region; the framed region with the largest effective count is then selected and cropped out.
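This threshold-count variant can reuse the same region-sum machinery, since counting pixels above a threshold is just summing a 0/1 mask (again a sketch; the threshold value is application-dependent):

```python
def crop_max_effective_count(image, saliency, box_h, box_w, threshold):
    """Variant: crop the framed region with the most pixels whose saliency
    exceeds the predetermined threshold (its "effective count")."""
    mask = (saliency > threshold).astype(np.float64)
    return crop_max_saliency(image, mask, box_h, box_w)
```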
In summary, in the image cropping method provided by this embodiment of the present disclosure, the image is cropped accurately by combining the color saliency model and the face saliency model. Because the influence of faces on the pixels of the image is taken into account during cropping, the result of cropping an image that contains faces includes not only the salient objects of the image but also the important face information. This solves the problem in the related art that image cropping based on color saliency analysis considers only the color information of the image and easily mis-crops images containing other important features, and achieves the effect that the faces in an image are also cropped effectively.
In practice, when a large number of images need to be cropped with faces taken into account, some of the images may contain faces while others do not. Before cropping each image, it is therefore necessary to determine whether the image contains a face: for an image without a face, cropping can be realized directly with the color saliency model, while for an image containing a face, the color saliency model and the face saliency model can be combined to avoid mis-cropping the face. The cropping process for each image is described below with reference to Fig. 3A.
Fig. 3A is a flowchart of an image cropping method according to yet another exemplary embodiment. As shown in Fig. 3A, the image cropping method is applied in electronic equipment and comprises the following steps.
In step 301, it is detected whether a face is present in the image.
In step 302, if the detection result is that a face is present in the image, a face saliency model of the image is established.
Generally, the face saliency model characterizes the combined influence of the faces in the image on the saliency value of each pixel. For example, when one face is present in the image, it usually influences the saliency value of every pixel in the image; likewise, when several faces are present, they influence the saliency values of the pixels in the image simultaneously. By way of example, the saliency values of the pixels inside a face are usually high, pixels closer to a face are influenced more strongly by it, and pixels farther from a face are influenced less.
In one possible implementation, the established face saliency model is:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[ W_{F_p} \cdot H_{F_p} \cdot \exp\left( -\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2} \right) \right],$$

where $\mathrm{FaceSaliency}(x_i, y_i)$ is the face saliency value of the $i$-th pixel in the image, $n$ is the number of faces in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face, $H_{F_p}$ is the height of the minimum bounding rectangle of the $p$-th face, $(x_{F_p}, y_{F_p})$ is the position of the center of the region of the $p$-th face, $\sigma_{x_p} = W_{F_p}/4$, $\sigma_{y_p} = H_{F_p}/4$, and $(x_i, y_i)$ are the coordinates of the $i$-th pixel.
In practice, the width $W_F$ of a face is the width of the face's minimum bounding rectangle, and the height $H_F$ of a face is the height of that rectangle.
In step 303, the pre-established color saliency model and the face saliency model are linearly superposed to obtain the target saliency model.
In one possible implementation, linearly superposing the pre-established color saliency model and the face saliency model to obtain the target saliency model can comprise: multiplying the color saliency model by a first weight to obtain a first product; multiplying the face saliency model by a second weight to obtain a second product; and adding the first product and the second product to obtain the target saliency model, wherein the first weight and the second weight sum to 1.
The color saliency model characterizes the influence of the colors in the image on the saliency value of each pixel. In practice, the color saliency model can be established in various ways that are well within the ability of those skilled in the art, and it is not described further here.
That is, the target saliency model can be:

$$\mathrm{Saliency}(x_i, y_i) = w_1 \cdot \mathrm{ColorSaliency}(x_i, y_i) + w_2 \cdot \mathrm{FaceSaliency}(x_i, y_i),$$

where $\mathrm{Saliency}(x_i, y_i)$ is the saliency value of the $i$-th pixel in the image, $\mathrm{ColorSaliency}(x_i, y_i)$ is the color saliency model, $\mathrm{FaceSaliency}(x_i, y_i)$ is the face saliency model, $w_1$ is the first weight, $w_2$ is the second weight, and $w_1 + w_2 = 1$.
In practice, the ratio of the first weight $w_1$ to the second weight $w_2$ can be set according to the actual situation. If the influence of color and the influence of faces on pixel saliency are to be weighed equally, both $w_1$ and $w_2$ can be set to 0.5; if the influence of faces on the pixels of the image is to be weighed more heavily than that of color, $w_2$ can be set larger than $w_1$; otherwise, $w_1$ can be set larger than $w_2$.
In step 304, if the detection result is that no face is present in the image, the pre-established color saliency model is determined to be the target saliency model.
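Putting the branch together, the flow of Fig. 3A might be sketched as below; `color_saliency_map` stands in for the pre-established color saliency model, whose construction the disclosure deliberately leaves to the skilled person, and all helper names are illustrative:

```python
def crop_image(image, box_h, box_w, w1=0.5, w2=0.5):
    """End-to-end sketch of the Fig. 3A flow (helper names are assumptions)."""
    color = color_saliency_map(image)       # assumed, pre-established model
    faces = detect_faces(image)             # step 301
    if faces:                               # steps 302-303
        h, w = image.shape[:2]
        target = target_saliency_map(color, face_saliency_map(h, w, faces), w1, w2)
    else:                                   # step 304
        target = color
    return crop_max_saliency(image, target, box_h, box_w)   # steps 305-308
```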
In step 305, the image is framed with a predetermined crop box, obtaining at least one framed region.
The crop box here can be of any predetermined shape and size; for example, it can be a rectangular frame, a circular frame, an elliptical frame, a polygonal frame, and so on.
In practice, the crop box needs to be used to frame, in turn, every possible framed region in the image. For example, the center of the crop box can be made to traverse every pixel of the image, the region enclosed at each position being one framed region; alternatively, in order to obtain only the complete framed regions determined by the crop box, the center can traverse only the qualifying pixels, namely those whose distances to the left, right, top and bottom edges of the image are greater than or equal to the distances from the center of the crop box to its left, right, top and bottom edges, respectively.
Obviously, in practice the image can also be framed in other manners or orders, which are not described one by one here.
In step 306, the total saliency value of each framed region is calculated using the target saliency model.
In one possible implementation, calculating the total saliency value of each framed region using the target saliency model can comprise:
first, for each framed region, calculating the saliency value of each pixel in the framed region using the target saliency model;
second, summing the saliency values of the pixels in the framed region to obtain the total saliency value of the framed region.
In step 307, the framed region with the largest total saliency value is selected.
In step 308, the selected framed region is cropped out.
That is, the framed region with the largest total saliency value is cropped out.
By way of example, when the image contains no face, refer to Fig. 3B, a schematic diagram of cropping an image containing no face according to an exemplary embodiment: when the image is cropped according to the target saliency model, the resulting cropped region contains only the highly salient object T2.
As another example, referring again to Fig. 2E, the image contains a face T1; when it is cropped according to the target saliency model, the resulting cropped region contains both the face T1 and another highly salient object T2.
It should be added that, in practice, the framed region to be cropped can also be selected in other manners. By way of example, for each framed region, the saliency value of each pixel in the region can be calculated, the number of pixels whose saliency value exceeds a predetermined saliency threshold can be determined and taken as the effective count of the region, and the framed region with the largest effective count is then selected and cropped out.
In summary, in the image cropping method provided by this embodiment of the present disclosure, the image is cropped accurately by combining the color saliency model and the face saliency model. Because the influence of faces on the pixels of the image is taken into account during cropping, the result of cropping an image that contains faces includes not only the salient objects of the image but also the important face information. This solves the problem in the related art that image cropping based on color saliency analysis considers only the color information of the image and easily mis-crops images containing other important features, and achieves the effect that the faces in an image are also cropped effectively.
The following are device embodiments of the present disclosure, which can be used to perform the method embodiments of the present disclosure. For details not disclosed in the device embodiments, please refer to the method embodiments of the present disclosure.
Fig. 4 is a block diagram of an image cropping device according to an exemplary embodiment. As shown in Fig. 4, the image cropping device is applied in electronic equipment and includes, but is not limited to, an establishing module 402, a superposing module 404 and a cropping module 406.
The establishing module 402 is configured to establish a face saliency model of an image, the face saliency model characterizing the combined influence of the faces in the image on the saliency value of each pixel in the image.
The superposing module 404 is configured to linearly superpose a pre-established color saliency model and the face saliency model established by the establishing module 402 to obtain a target saliency model.
The cropping module 406 is configured to crop the image using the target saliency model obtained by the superposing module 404.
In summary, in the image cropping device provided by this embodiment of the present disclosure, the image is cropped accurately by combining the color saliency model and the face saliency model. Because the influence of faces on the pixels of the image is taken into account during cropping, the result of cropping an image that contains faces includes not only the salient objects of the image but also the important face information. This solves the problem in the related art that image cropping based on color saliency analysis considers only the color information of the image and easily mis-crops images containing other important features, and achieves the effect that the faces in an image are also cropped effectively.
Fig. 5 is a block diagram of an image cropping device according to another exemplary embodiment. As shown in Fig. 5, the image cropping device is applied in electronic equipment and can include, but is not limited to, an establishing module 502, a superposing module 504 and a cropping module 506.
The establishing module 502 is configured to establish a face saliency model of an image, the face saliency model characterizing the combined influence of the faces in the image on the saliency value of each pixel in the image.
The superposing module 504 is configured to linearly superpose a pre-established color saliency model and the face saliency model established by the establishing module 502 to obtain a target saliency model.
The cropping module 506 is configured to crop the image using the target saliency model obtained by the superposing module 504.
In a first possible implementation of the embodiment shown in Fig. 5, the face saliency model is:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[ W_{F_p} \cdot H_{F_p} \cdot \exp\left( -\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2} \right) \right],$$

where $\mathrm{FaceSaliency}(x_i, y_i)$ is the face saliency value of the $i$-th pixel in the image, $n$ is the number of faces in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face, $H_{F_p}$ is the height of the minimum bounding rectangle of the $p$-th face, $(x_{F_p}, y_{F_p})$ is the position of the center of the region of the $p$-th face, $\sigma_{x_p} = W_{F_p}/4$, and $\sigma_{y_p} = H_{F_p}/4$.
In a second possible implementation of the embodiment shown in Fig. 5, the superposing module 504 can comprise a first multiplying unit 504a, a second multiplying unit 504b and an adding unit 504c.
The first multiplying unit 504a is configured to multiply the color saliency model by a first weight to obtain a first product.
The second multiplying unit 504b is configured to multiply the face saliency model by a second weight to obtain a second product.
The adding unit 504c is configured to add the first product obtained by the first multiplying unit 504a and the second product obtained by the second multiplying unit 504b to obtain the target saliency model.
The first weight and the second weight sum to 1.
In a third possible implementation of the embodiment shown in Fig. 5, the cropping module 506 can comprise a framing unit 506a, a calculating unit 506b, a selecting unit 506c and a cropping unit 506d.
The framing unit 506a is configured to frame the image with a predetermined crop box, obtaining at least one framed region.
The calculating unit 506b is configured to calculate the total saliency value of each framed region using the target saliency model obtained by the superposing module 504.
The selecting unit 506c is configured to select the framed region with the largest total saliency value.
The cropping unit 506d is configured to crop out the selected framed region.
In a fourth possible implementation of the embodiment shown in Fig. 5, the calculating unit 506b can comprise a calculating subunit 506b1 and an adding subunit 506b2.
The calculating subunit 506b1 is configured to calculate, for each framed region, the saliency value of each pixel in the framed region using the target saliency model.
The adding subunit 506b2 is configured to sum the saliency values of the pixels in the framed region to obtain the total saliency value of the framed region.
In a fifth possible implementation of the embodiment shown in Fig. 5, the image cropping device can further comprise a detection module 508.
The detection module 508 is configured to detect whether a face is present in the image.
The establishing module 502 is further configured to establish the face saliency model of the image when the detection result of the detection module 508 is that a face is present in the image.
In summary, in the image cropping device provided by this embodiment of the present disclosure, the image is cropped accurately by combining the color saliency model and the face saliency model. Because the influence of faces on the pixels of the image is taken into account during cropping, the result of cropping an image that contains faces includes not only the salient objects of the image but also the important face information. This solves the problem in the related art that image cropping based on color saliency analysis considers only the color information of the image and easily mis-crops images containing other important features, and achieves the effect that the faces in an image are also cropped effectively.
With regard to the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
Fig. 6 is a block diagram of an image cropping device 600 according to yet another exemplary embodiment. For example, the device 600 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, and the like.
Referring to Fig. 6, the device 600 can comprise one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614 and a communication component 616.
The processing component 602 typically controls the overall operations of the device 600, such as operations associated with display, telephone calls, data communications, camera operations and recording operations. The processing component 602 can comprise one or more processors 618 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 602 can comprise one or more modules that facilitate interaction between the processing component 602 and other components; for example, the processing component 602 can comprise a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation on the device 600. Examples of such data include instructions for any application or method operated on the device 600, contact data, phonebook data, messages, pictures, video, and so on. The memory 604 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power component 606 provides power for the various components of the device 600. The power component 606 can comprise a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 600.
The multimedia component 608 comprises a screen that provides an output interface between the device 600 and the user. In some embodiments, the screen can comprise a liquid crystal display (LCD) and a touch panel (TP). If the screen comprises a touch panel, it can be implemented as a touch screen to receive input signals from the user. The touch panel comprises one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors can sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 608 comprises a front camera and/or a rear camera. When the device 600 is in an operation mode such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focus and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 comprises a microphone (MIC) configured to receive external audio signals when the device 600 is in an operation mode such as a call mode, a recording mode or a speech recognition mode. The received audio signal can be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 also comprises a loudspeaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules such as a keyboard, a click wheel or buttons. These buttons can include, but are not limited to, a home button, volume buttons, a start button and a lock button.
The sensor component 614 comprises one or more sensors for providing status assessments of various aspects of the device 600. For example, the sensor component 614 can detect the open/closed state of the device 600 and the relative positioning of components, such as the display and keypad of the device 600; the sensor component 614 can also detect a change in the position of the device 600 or of one of its components, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a change in the temperature of the device 600. The sensor component 614 can comprise a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 can also comprise a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 can also comprise an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the device 600 and other equipment. The device 600 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 also comprises a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the device 600 can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 604 comprising instructions, which can be executed by the processor 618 of the device 600 to perform the above method. For example, the non-transitory computer-readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Those skilled in the art will readily conceive of other embodiments of the invention after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the invention that follow the general principles of the invention and include common knowledge or customary technical means in the art not disclosed in the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.
It should be understood that the invention is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the invention is limited only by the appended claims.

Claims (13)

1. An image cropping method, characterized in that it comprises:
establishing a face saliency model of an image, the face saliency model characterizing the combined influence of the faces in the image on the saliency value of each pixel in the image;
linearly superposing a pre-established color saliency model and the face saliency model to obtain a target saliency model; and
cropping the image using the target saliency model.
2. The method according to claim 1, characterized in that the face saliency model is:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[ W_{F_p} \cdot H_{F_p} \cdot \exp\left( -\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2} \right) \right],$$

where $\mathrm{FaceSaliency}(x_i, y_i)$ is the face saliency value of the $i$-th pixel in the image, $n$ is the number of faces in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face, $H_{F_p}$ is the height of the minimum bounding rectangle of the $p$-th face, $(x_{F_p}, y_{F_p})$ is the position of the center of the region of the $p$-th face, $\sigma_{x_p} = W_{F_p}/4$, and $\sigma_{y_p} = H_{F_p}/4$.
3. The method according to claim 1, characterized in that linearly superposing the pre-established color saliency model and the face saliency model to obtain the target saliency model comprises:
multiplying the color saliency model by a first weight to obtain a first product;
multiplying the face saliency model by a second weight to obtain a second product; and
adding the first product and the second product to obtain the target saliency model;
wherein the first weight and the second weight sum to 1.
4. The method according to claim 1, characterized in that cropping the image using the target saliency model comprises:
framing the image with a predetermined crop box to obtain at least one framed region;
calculating the total saliency value of each framed region using the target saliency model;
selecting the framed region with the largest total saliency value; and
cropping out the selected framed region.
5. The method according to claim 4, characterized in that calculating the total saliency value of each framed region using the target saliency model comprises:
for each framed region, calculating the saliency value of each pixel in the framed region using the target saliency model; and
summing the saliency values of the pixels in the framed region to obtain the total saliency value of the framed region.
6. The method according to any one of claims 1 to 5, characterized by further comprising:
detecting whether a face is present in the image; and
if the detection result is that a face is present in the image, performing the step of establishing the face saliency model of the image.
7. An image cropping device, characterized in that it comprises:
an establishing module, configured to establish a face saliency model of an image, the face saliency model characterizing the combined influence of the faces in the image on the saliency value of each pixel in the image;
a superposing module, configured to linearly superpose a pre-established color saliency model and the face saliency model established by the establishing module to obtain a target saliency model; and
a cropping module, configured to crop the image using the target saliency model obtained by the superposing module.
8. The device according to claim 7, characterized in that the face saliency model is:

$$\mathrm{FaceSaliency}(x_i, y_i) = \sum_{p=1}^{n}\left[ W_{F_p} \cdot H_{F_p} \cdot \exp\left( -\frac{(x_i - x_{F_p})^2}{2\sigma_{x_p}^2} - \frac{(y_i - y_{F_p})^2}{2\sigma_{y_p}^2} \right) \right],$$

where $\mathrm{FaceSaliency}(x_i, y_i)$ is the face saliency value of the $i$-th pixel in the image, $n$ is the number of faces in the image, $W_{F_p}$ is the width of the minimum bounding rectangle of the $p$-th face, $H_{F_p}$ is the height of the minimum bounding rectangle of the $p$-th face, $(x_{F_p}, y_{F_p})$ is the position of the center of the region of the $p$-th face, $\sigma_{x_p} = W_{F_p}/4$, and $\sigma_{y_p} = H_{F_p}/4$.
9. The device according to claim 7, characterized in that the superposing module comprises:
a first multiplying unit, configured to multiply the color saliency model by a first weight to obtain a first product;
a second multiplying unit, configured to multiply the face saliency model by a second weight to obtain a second product; and
an adding unit, configured to add the first product obtained by the first multiplying unit and the second product obtained by the second multiplying unit to obtain the target saliency model;
wherein the first weight and the second weight sum to 1.
10. The device according to claim 7, characterized in that the cropping module comprises:
a framing unit, configured to frame the image with a predetermined crop box to obtain at least one framed region;
a calculating unit, configured to calculate the total saliency value of each framed region using the target saliency model obtained by the superposing module;
a selecting unit, configured to select the framed region with the largest total saliency value; and
a cropping unit, configured to crop out the selected framed region.
11. The device according to claim 10, characterized in that the computing unit comprises:
a computation subunit, configured to calculate, for each framed region, the saliency value of each pixel in the framed region using the target saliency model; and
an addition subunit, configured to add up the saliency values of the pixels in the framed region to obtain the overall saliency value of the framed region.
12. The device according to any one of claims 7 to 11, characterized in that the device further comprises:
a detection module, configured to detect whether the image contains a face;
wherein the establishing module is further configured to establish the face saliency model of the image when the detection result of the detection module is that the image contains a face.
13. An image cropping device, characterized in that the device comprises:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
establish a face saliency model of an image, the face saliency model characterizing the combined influence of the superposed faces in the image on the saliency value of each pixel in the image;
linearly superpose a pre-established color saliency model and the face saliency model to obtain a target saliency model; and
crop the image using the target saliency model.
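Taken together, the claims describe one pipeline: detect faces, build the face saliency model only when faces are found, superpose it with the color model, and crop at the highest-scoring box. The sketch below composes the functions from the earlier sketches; the color saliency map is passed in as an argument because the claims treat the color model as pre-established, and the crop-box size is an assumption.

```python
def crop_with_face_saliency(image_bgr, color_saliency, box_h=240, box_w=320):
    """Pipeline sketch composing the earlier functions: gate on face
    detection (claim 6), build and superpose the saliency models
    (claims 8-9), then pick and cut the best crop box (claims 10-11)."""
    faces = detect_faces(image_bgr)
    sal = color_saliency
    if faces:
        face_sal = face_saliency_map(image_bgr.shape[:2], faces)
        sal = target_saliency(color_saliency, face_sal)
    (top, left), _ = best_crop(sal, box_h, box_w)
    return image_bgr[top:top + box_h, left:left + box_w]
```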
CN201410178276.9A 2014-04-29 2014-04-29 Image cropping method and device Active CN103996186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410178276.9A CN103996186B (en) 2014-04-29 2014-04-29 Image cropping method and device

Publications (2)

Publication Number Publication Date
CN103996186A true CN103996186A (en) 2014-08-20
CN103996186B CN103996186B (en) 2017-03-15

Family

ID=51310342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410178276.9A Active CN103996186B (en) 2014-04-29 2014-04-29 Image cropping method and device

Country Status (1)

Country Link
CN (1) CN103996186B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100165150A1 (en) * 2003-06-26 2010-07-01 Fotonation Vision Limited Detecting orientation of digital images using face detection information
US20100322521A1 (en) * 2009-06-22 2010-12-23 Technion Research & Development Foundation Ltd. Automated collage formation from photographic images
US8363984B1 (en) * 2010-07-13 2013-01-29 Google Inc. Method and system for automatically cropping images
CN102426704A (en) * 2011-10-28 2012-04-25 清华大学深圳研究生院 Quick detection method for salient object

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MORAN CERF ET AL.: "Predicting human gaze using low-level saliency combined with face detection", Advances in Neural Information Processing Systems *
XIAOHUI LI ET AL.: "Saliency Detection via Dense and Sparse Reconstruction", ICCV 2013 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105991920A (en) * 2015-02-09 2016-10-05 钱仰德 Method of using image cutting to make mobile phone capturing frame automatically track object
CN105989572A (en) * 2015-02-10 2016-10-05 腾讯科技(深圳)有限公司 Picture processing method and apparatus thereof
CN105989572B (en) * 2015-02-10 2020-04-24 腾讯科技(深圳)有限公司 Picture processing method and device
EP3176731A1 (en) * 2015-12-04 2017-06-07 Xiaomi Inc. Image processing method and device
WO2017092289A1 (en) * 2015-12-04 2017-06-08 小米科技有限责任公司 Image processing method and device
CN105528786A (en) * 2015-12-04 2016-04-27 小米科技有限责任公司 Image processing method and device
US10534972B2 (en) 2015-12-04 2020-01-14 Xiaomi Inc. Image processing method, device and medium
CN105528786B (en) * 2015-12-04 2019-10-01 小米科技有限责任公司 Image processing method and device
CN105761205B (en) * 2016-03-17 2018-12-11 网易有道信息技术(北京)有限公司 A kind of picture put-on method and device
CN105761205A (en) * 2016-03-17 2016-07-13 网易有道信息技术(北京)有限公司 Picture delivery method and device
CN107146198A (en) * 2017-04-19 2017-09-08 中国电子科技集团公司电子科学研究院 A kind of intelligent method of cutting out of photo and device
CN107146198B (en) * 2017-04-19 2022-08-16 中国电子科技集团公司电子科学研究院 Intelligent photo cutting method and device
CN107463914A (en) * 2017-08-11 2017-12-12 环球智达科技(北京)有限公司 Image cutting method
CN108062755A (en) * 2017-11-02 2018-05-22 广东数相智能科技有限公司 A kind of picture intelligence method of cutting out and device
CN108062755B (en) * 2017-11-02 2020-10-02 广东数相智能科技有限公司 Intelligent picture clipping method and device
WO2019109268A1 (en) * 2017-12-06 2019-06-13 中国科学院自动化研究所 Method and device for automatically cropping picture based on reinforcement learning
CN108563982A (en) * 2018-01-05 2018-09-21 百度在线网络技术(北京)有限公司 Method and apparatus for detection image
CN108776970A (en) * 2018-06-12 2018-11-09 北京字节跳动网络技术有限公司 Image processing method and device
CN108776970B (en) * 2018-06-12 2021-01-12 北京字节跳动网络技术有限公司 Image processing method and device
CN109325494A (en) * 2018-08-27 2019-02-12 腾讯科技(深圳)有限公司 Image processing method, task data treating method and apparatus
CN109448001A (en) * 2018-10-26 2019-03-08 山东世纪开元电子商务集团有限公司 A kind of picture automatic cutting method
CN109448001B (en) * 2018-10-26 2021-08-27 世纪开元智印互联科技集团股份有限公司 Automatic picture clipping method
CN110136142A (en) * 2019-04-26 2019-08-16 微梦创科网络科技(中国)有限公司 A kind of image cropping method, apparatus, electronic equipment
CN110223306A (en) * 2019-06-14 2019-09-10 北京奇艺世纪科技有限公司 A kind of method of cutting out and device of image
CN112927231A (en) * 2021-05-12 2021-06-08 深圳市安软科技股份有限公司 Training method of vehicle body dirt detection model, vehicle body dirt detection method and device
CN112927231B (en) * 2021-05-12 2021-07-23 深圳市安软科技股份有限公司 Training method of vehicle body dirt detection model, vehicle body dirt detection method and device

Also Published As

Publication number Publication date
CN103996186B (en) 2017-03-15

Similar Documents

Publication Publication Date Title
CN103996186A (en) Image cutting method and image cutting device
CN103885588B (en) Automatic switching method and device
CN103955481B (en) image display method and device
CN106951884A (en) Gather method, device and the electronic equipment of fingerprint
CN105512605A (en) Face image processing method and device
CN104918107A (en) Video file identification processing method and device
CN103996211A (en) Image relocation method and device
CN103995666A (en) Method and device for setting work mode
CN106502560A (en) Display control method and device
CN107888984A (en) Short video broadcasting method and device
CN104238890A (en) Text display method and device
CN104123720A (en) Image repositioning method, device and terminal
CN106598429A (en) Method and device for adjusting window of mobile terminal
CN105353901A (en) Method and apparatus for determining validity of touch operation
CN105488145A (en) Webpage content display method and apparatus and terminal
CN105426878A (en) Method and device for face clustering
CN105511777A (en) Session display method and device of touch display screen
CN105094539B (en) Reference information display methods and device
CN105224174A (en) The display packing of Paste and device
CN104820549A (en) Method, device and terminal for transmitting social networking application message
CN104199609A (en) Cursor positioning method and device
CN104216969A (en) Reading marking method and device
CN104599236B (en) A kind of method and apparatus of image rectification
CN105808096A (en) Sliding block display method and apparatus
CN106060707A (en) Reverberation processing method and device

Legal Events

Code Title
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant