CN111080667A - Automatic composition cutting method and system for rapid portrait photo - Google Patents


Publication number
CN111080667A
Authority
CN
China
Prior art keywords: portrait, point, portrait photo, composition, cut
Legal status: Granted
Application number: CN201911305183.7A
Other languages: Chinese (zh)
Other versions: CN111080667B (en)
Inventor
胡耀武 (Hu Yaowu)
李云夕 (Li Yunxi)
陈希玥 (Chen Xiyue)
Current Assignee: Hangzhou Quwei Science & Technology Co ltd
Original Assignee
Hangzhou Quwei Science & Technology Co ltd
Application filed by Hangzhou Quwei Science & Technology Co ltd filed Critical Hangzhou Quwei Science & Technology Co ltd
Priority to CN201911305183.7A
Publication of CN111080667A
Application granted
Publication of CN111080667B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/174 - Segmentation; Edge detection involving the use of two or more images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06T 3/02
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention discloses a fast automatic composition cutting method and system for portrait photos, wherein the method comprises the following steps: S1, creating a portrait photo composition template which comprises a width, a height and a first region-of-interest key point array; S2, extracting the corresponding key points in the portrait photo to be cut and generating a second region-of-interest key point array; S3, calculating the reference points of the portrait photo composition template and of the portrait photo to be cut; S4, calculating a composition offset based on the portrait photo composition template and the portrait photo to be cut; S5, calculating an affine transformation matrix between the portrait photo composition template and the portrait photo to be cut; S6, performing affine mapping on the vertex coordinates of the portrait photo composition template based on the affine transformation matrix, generating a cutting frame, and cutting the portrait photo to be cut. The invention realizes automatic cutting of portrait photos, reduces the workload and improves the processing efficiency of cutting.

Description

Automatic composition cutting method and system for rapid portrait photo
Technical Field
The invention relates to the field of image processing, in particular to a method and a system for quickly and automatically composing and cutting a portrait photo.
Background
With the continuous improvement of the photographing performance of intelligent terminal devices, people are keen to record beautiful moments in life at will and share them on social networks. However, the photos people take are generally not suitable for being uploaded directly to the Internet and need some post-processing. For example, when taking a picture, a finger may carelessly block the edge of the lens, or a passer-by may accidentally enter the frame; sometimes only part of a photo needs to be cut out to highlight the subject; or a nice rectangular selfie exists while a WeChat avatar requires a square. At present, in the image processing software market and in post-processing fields such as camera shooting and wedding photography, automatic composition cutting of portrait photos is significant for producing high-quality portrait photos. Currently, portrait photos casually taken by users are cut according to portrait composition mainly through manual Photoshop editing or manual cutting by the user; the photo cutting function provided by mainstream photo/image editing applications is manual screenshot editing with a free or fixed aspect ratio. However, since ordinary users usually lack professional photographic knowledge such as composition, the aesthetic quality of manually cut photos cannot be guaranteed; in addition, the aspect ratio of a manually cut photo may not meet the application requirements, and the processing efficiency is low.
The invention patent application with publication number CN110147833A discloses a portrait processing method, device, system and readable storage medium. The method includes: acquiring a portrait to be processed and generating a plurality of candidate cutting frames for it; inputting the portrait to be processed into a skeleton detection network model for skeleton detection to obtain the skeleton node positions of the portrait; calculating a first-class aesthetic quantization value of each candidate cutting frame according to the candidate cutting frames and the skeleton node positions; cutting the portrait to be processed according to the candidate cutting frames to obtain candidate cut images; inputting each candidate cut image into an aesthetic network model to obtain a second-class aesthetic quantization value of each candidate cutting frame; and selecting at least one candidate cutting frame as the target cutting frame of the portrait to be processed according to the first-class and/or second-class aesthetic quantization values of each candidate cutting frame.
Although such deep-learning-based automatic composition cutting methods can realize automatic cutting and achieve a good aesthetic effect, their processing speed is slow and their requirements on the software running environment are high, so the overall result is unsatisfactory in practice. Therefore, how to quickly and automatically cut a portrait photo that meets aesthetic requirements, reduce workload, improve work efficiency, and improve the user experience of image software applications is a problem to be solved in the field.
Disclosure of Invention
The invention aims to provide a fast automatic composition cutting method and system for portrait photos, addressing the defects of the prior art. The invention creates a portrait photo composition template and realizes fast cutting of a portrait photo through an affine mapping between the template and the corresponding key points in the portrait photo to be cut. The invention can quickly and automatically cut a portrait photo that conforms to aesthetic principles, reduce workload and improve working efficiency, while also improving the user experience of image software applications.
In order to achieve the purpose, the invention adopts the following technical scheme:
An automatic composition cutting method for fast portrait photos comprises the following steps:
S1, collecting a plurality of portrait photos with the same proportion and size, and creating a corresponding portrait photo composition template, wherein the portrait photo composition template comprises a width, a height and a first region-of-interest key point array;
s2, extracting key points corresponding to the key points of the first region of interest in the portrait photos to be cut, and generating a second region of interest key point array;
S3, calculating the reference points of the portrait photo composition template and of the portrait photo to be cut based on the first region-of-interest key point array and the second region-of-interest key point array respectively;
s4, calculating composition offset based on the width, height and reference point of the portrait photo composition template and the width, height and reference point of the portrait photo to be cut;
s5, calculating an affine transformation matrix between the portrait photo composition template and the portrait photo to be cut based on the composition offset, the first interested area key point array and the second interested area key point array;
s6, performing affine matching on the vertex coordinates in the portrait photo composition template based on the affine transformation matrix, generating a clipping frame, and clipping the portrait photo to be clipped.
Further, the generating the first region of interest keypoint array specifically includes:
S11, detecting a third region-of-interest key point array F = {P0, P1, P2} for each portrait photo, the third region-of-interest key point array including a cheek leftmost point P0, a chin lowest point P1 and a cheek rightmost point P2;
S12, calculating the mean values of the cheek leftmost point, the chin lowest point and the cheek rightmost point of all the portrait photos, which jointly form a first region-of-interest key point array MF = {MP0, MP1, MP2};
MP0 = (1/N) × Σ(i=1..N) P0i
MP1 = (1/N) × Σ(i=1..N) P1i
MP2 = (1/N) × Σ(i=1..N) P2i
wherein MP0, MP1 and MP2 are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek of the portrait photo composition template respectively; N is the number of collected photos; P0i, P1i and P2i are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek of the i-th portrait photo respectively; and (x, y) are the x and y coordinate values of the corresponding key point (the mean is taken on the x and y coordinates separately).
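The averaging in S11-S12 can be sketched in a few lines of Python (an illustrative sketch, not from the patent; the function name and data layout are our own assumptions):

```python
import numpy as np

def build_template_keypoints(per_photo_keypoints):
    """Average the (P0, P1, P2) key points of N photos into (MP0, MP1, MP2).

    per_photo_keypoints: N items, each an array-like of shape (3, 2) holding
    the cheek-left, chin-bottom and cheek-right points as (x, y) rows.
    """
    stacked = np.asarray(per_photo_keypoints, dtype=float)  # shape (N, 3, 2)
    return stacked.mean(axis=0)                             # shape (3, 2)

# Two toy "photos": the template key points are the element-wise means.
f1 = [(100, 200), (150, 300), (200, 200)]
f2 = [(110, 210), (160, 310), (210, 210)]
mf = build_template_keypoints([f1, f2])
```

In practice N would be large (the patent suggests N > 1000), but the mean is the same element-wise operation.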
Further, the reference point O of the portrait photo composition template M is:
O = (MP0 + MP1 + MP2) / 3
wherein ox and oy are the x and y coordinate values of the reference point O respectively;
the reference point OS of the portrait photo S to be cut is:
OS = (SP0 + SP1 + SP2) / 3
wherein osx and osy are the x and y coordinate values of the reference point OS respectively; SP0, SP1 and SP2 are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek in the portrait photo to be cut.
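A minimal sketch of the reference point formulas above (illustrative Python; the same helper serves both O and OS since each is the centroid of its three key points):

```python
def reference_point(keypoints):
    """Centroid of the three ROI key points (cheek-left, chin-bottom,
    cheek-right), per the reference point formulas above."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    return sum(xs) / len(xs), sum(ys) / len(ys)

# e.g. the template reference point O computed from MF = {MP0, MP1, MP2}:
o = reference_point([(90, 180), (150, 330), (210, 180)])  # (150.0, 230.0)
```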
Further, the step S4 is specifically:
S41, calculating mapping position coordinates E(ex, ey) of the reference point of the portrait photo to be cut in the portrait photo composition template based on the width and height of the portrait photo composition template and the width, height and reference point of the portrait photo to be cut;
ex = osx × WIDTH / width
ey = osy × HEIGHT / height
wherein width and height are respectively the width and height of the portrait photo to be cut, and WIDTH and HEIGHT are respectively the width and height of the portrait photo composition template;
S42, setting an influence factor, and calculating the composition offset B(Bx, By) based on the mapping position coordinates;
Bx = osx × (1 - factor) + ex × factor - ox
By = osy × (1 - factor) + ey × factor - oy
wherein the influence factor ∈ (0, 1).
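Steps S41 and S42 combined, as a hedged Python sketch (the scaling direction in S41 follows our reading of the variable definitions above; all names are illustrative):

```python
def composition_offset(os_pt, o_pt, photo_wh, template_wh, factor=0.5):
    """Composition offset B of step S4, with factor in (0, 1).

    os_pt: reference point (osx, osy) of the photo to be cut
    o_pt:  reference point (ox, oy) of the composition template
    photo_wh:    (width, height) of the photo to be cut
    template_wh: (WIDTH, HEIGHT) of the composition template
    """
    osx, osy = os_pt
    ox, oy = o_pt
    w, h = photo_wh
    tw, th = template_wh
    # S41: map the photo's reference point into template coordinates
    ex = osx * tw / w
    ey = osy * th / h
    # S42: blend raw and mapped positions, then subtract the template reference
    bx = osx * (1 - factor) + ex * factor - ox
    by = osy * (1 - factor) + ey * factor - oy
    return bx, by

b = composition_offset((200, 300), (150, 230), (800, 600), (400, 300))
```

With factor = 0.5 the offset weights the raw and template-mapped reference positions equally, matching the preferred value given later in the embodiment.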
Further, the step S5 is specifically:
S51, calculating a target region-of-interest key point array DF = {DP0, DP1, DP2} based on the composition offset and the first region-of-interest key point array;
DP0=MP0+B
DP1=MP1+B
DP2=MP2+B
s52, calculating an affine transformation matrix H based on the target region-of-interest key point array and the second region-of-interest key point array;
the affine transformation matrix H satisfies:
SF = DF · H
wherein each key point in DF is augmented to a row vector (x, y, 1), and H is a 3×2 matrix whose first two rows carry the linear part of the transform and whose last row carries the translation.
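With three non-collinear point pairs, H can be recovered by solving a small linear system (a sketch assuming the row-vector convention SF = DF · H with points augmented to (x, y, 1); an equivalent route in practice is OpenCV's getAffineTransform):

```python
import numpy as np

def solve_affine(df, sf):
    """Solve the 3x2 affine matrix H with SF = DF . H (step S52).

    df, sf: arrays of shape (3, 2): the target-template key points DF and
    the to-be-cut photo key points SF. Each point is augmented to (x, y, 1)
    so the last row of H carries the translation. Requires the three points
    to be non-collinear.
    """
    a = np.hstack([np.asarray(df, float), np.ones((3, 1))])  # shape (3, 3)
    return np.linalg.solve(a, np.asarray(sf, float))         # shape (3, 2)

def apply_affine(points, h):
    """Map (x, y) row-vector points through the 3x2 affine matrix h."""
    pts = np.asarray(points, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ h

# A known transform (scale by 2, translate by (5, -3)) is recovered exactly:
df = [(0, 0), (1, 0), (0, 1)]
sf = [(5, -3), (7, -3), (5, -1)]
h = solve_affine(df, sf)
```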
further, the step S6 includes:
S61, performing affine transformation on the four vertex coordinates M0(mx0, my0), M1(mx1, my1), M2(mx2, my2), M3(mx3, my3) of the portrait photo composition template M according to the affine transformation matrix H to obtain four new vertex coordinates D0(dx0, dy0), D1(dx1, dy1), D2(dx2, dy2), D3(dx3, dy3);
S62, calculating a coordinate array K of the upper left corner and the lower right corner of the cutting box based on the new four vertex coordinates:
minx = MIN(dx0, dx1, dx2, dx3)
miny = MIN(dy0, dy1, dy2, dy3)
maxx = MAX(dx0, dx1, dx2, dx3)
maxy = MAX(dy0, dy1, dy2, dy3)
K = {minx, miny, maxx, maxy}
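Step S62 as a sketch (illustrative names; the input is the four affine-mapped vertices D0..D3):

```python
def crop_box(vertices):
    """Axis-aligned cutting box K = (minx, miny, maxx, maxy) enclosing the
    four affine-mapped template vertices (step S62)."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return min(xs), min(ys), max(xs), max(ys)

k = crop_box([(10, 5), (110, 8), (12, 205), (115, 210)])  # (10, 5, 115, 210)
```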
the invention also provides an automatic composition cutting system for rapid portrait photos, which comprises:
the template creating module is used for collecting a plurality of portrait photos with the same proportion and size and creating a corresponding portrait photo composition template, wherein the portrait photo composition template comprises a width, a height and a first region-of-interest key point array;
the generating module is used for extracting key points corresponding to the key points of the first region of interest in the portrait photos to be cut and generating a second region of interest key point array;
the reference point calculating module is used for calculating the reference points of the portrait photo composition template and of the portrait photo to be cut based on the first region-of-interest key point array and the second region-of-interest key point array respectively;
the offset calculation module is used for calculating composition offsets based on the width, the height and the reference points of the portrait photo composition template and the width, the height and the reference points of the portrait photos to be cut;
the affine transformation matrix calculation module is used for calculating an affine transformation matrix between the portrait photo composition template and the portrait photo to be cut based on the composition offset, the first interested area key point array and the second interested area key point array;
and the cutting module is used for carrying out affine matching on vertex coordinates in the portrait photo composition template based on the affine transformation matrix, generating a cutting frame and cutting the portrait photo to be cut.
Further, the template creation module includes:
a detection module, configured to detect a third region-of-interest key point array F = {P0, P1, P2} for each portrait photo, where the third region-of-interest key point array includes a cheek leftmost point P0, a chin lowest point P1 and a cheek rightmost point P2;
a first calculation module, configured to calculate the mean values of the cheek leftmost point, the chin lowest point and the cheek rightmost point of all the portrait photos, which jointly form a first region-of-interest key point array MF = {MP0, MP1, MP2};
MP0 = (1/N) × Σ(i=1..N) P0i
MP1 = (1/N) × Σ(i=1..N) P1i
MP2 = (1/N) × Σ(i=1..N) P2i
wherein MP0, MP1 and MP2 are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek of the portrait photo composition template respectively; N is the number of collected photos; P0i, P1i and P2i are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek of the i-th portrait photo respectively; and (x, y) are the x and y coordinate values of the corresponding key point.
Further, the reference point O of the portrait photo composition template M is:
O = (MP0 + MP1 + MP2) / 3
wherein ox and oy are the x and y coordinate values of the reference point O respectively;
the reference point OS of the portrait photo S to be cut is:
OS = (SP0 + SP1 + SP2) / 3
wherein osx and osy are the x and y coordinate values of the reference point OS respectively; SP0, SP1 and SP2 are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek in the portrait photo to be cut.
Further, the offset calculation module includes:
a second calculation module, configured to calculate mapping position coordinates E(ex, ey) of the reference point of the portrait photo to be cut in the portrait photo composition template based on the width and height of the portrait photo composition template and the width, height and reference point of the portrait photo to be cut;
ex = osx × WIDTH / width
ey = osy × HEIGHT / height
wherein width and height are respectively the width and height of the portrait photo to be cut, and WIDTH and HEIGHT are respectively the width and height of the portrait photo composition template;
a third calculation module, configured to set an influence factor and calculate the composition offset B(Bx, By) based on the mapping position coordinates;
Bx = osx × (1 - factor) + ex × factor - ox
By = osy × (1 - factor) + ey × factor - oy
wherein the influence factor ∈ (0, 1);
the affine transformation matrix calculating module includes:
a fourth calculation module, configured to calculate a target region-of-interest key point array DF = {DP0, DP1, DP2} based on the composition offset and the first region-of-interest key point array;
DP0=MP0+B
DP1=MP1+B
DP2=MP2+B
the fifth calculation module is used for calculating an affine transformation matrix H based on the target region-of-interest key point array and the second region-of-interest key point array;
the affine transformation matrix H satisfies:
SF = DF · H
wherein each key point in DF is augmented to a row vector (x, y, 1), and H is a 3×2 matrix whose first two rows carry the linear part of the transform and whose last row carries the translation.
The invention provides a fast automatic composition cutting method and system for portrait photos. Through automatic cutting, the produced portrait photos better conform to aesthetic principles while the workload of manual processing is reduced. In addition, the invention performs fast cutting through affine transformation, which solves the low processing efficiency of existing deep-learning-based automatic composition cutting methods, has low hardware requirements, can run quickly on mobile terminals such as mobile phones, and achieves high working efficiency. The invention creates and maps the portrait photo composition template based only on the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek, thereby minimizing the amount of data processed and further improving the cutting efficiency of portrait photos.
Drawings
FIG. 1 is a flow chart of an automatic composition clipping method for fast portrait photos according to an embodiment;
FIG. 2 is a block diagram of an automatic composition clipping system for fast portrait photos according to a second embodiment.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
Example one
As shown in fig. 1, the present embodiment provides an automatic composition clipping method for fast portrait photos, which includes:
S1, collecting a plurality of portrait photos with the same proportion and size, and creating a corresponding portrait photo composition template, wherein the portrait photo composition template comprises a width, a height and a first region-of-interest key point array;
The invention automatically composes and cuts the portrait photo based on the portrait photo composition template. In the embodiment of the invention, the portrait photo composition template is created from a plurality of portrait photos with the same proportion and size. The proportion of the portrait photos can be 4:3, 16:9, 1:1, etc., and is not limited herein. The size information of a portrait photo includes a width WIDTH and a height HEIGHT. In order to make the created portrait photo composition template more representative, the number of collected portrait photos should be large, for example N > 1000.
The portrait photo composition template comprises information such as a width, a height and a first interested area key point array, and the width and the height of the portrait photo composition template are the same as those of the collected portrait photos because the proportion and the size of the collected portrait photos are the same.
In technical fields such as machine vision and image processing, the region to be processed is delineated from the image in the form of a box, circle, ellipse, irregular polygon, etc.; this region, called the region of interest (ROI), is usually the focus of image analysis and is delineated for further processing. The invention cuts portrait photos, in which the human face is the core area of concern; therefore, the region of interest of the invention is the face region, and the first region-of-interest key point array stores face key point data. Face key points mark the key positions of a face, usually including the eyebrows, eyes, nose, mouth, face contour and so on; using all of them involves a huge amount of data and low processing efficiency, so the portrait photo composition template is characterized by only a small number of face key points. Preferably, the invention adopts the lowest point of the chin, the leftmost point of the cheek and the rightmost point of the cheek as the face key points, and the specific process of generating the first region-of-interest key point array is as follows:
s11, detecting a third interested region key point array of each portrait photo, wherein the third interested region key point array comprises a leftmost point of the cheek, a lowest point of the chin and a rightmost point of the cheek;
the detection of the key points of the human face comprises the detection and the positioning of the key points of the human face or the alignment of the human face, which means that given human face images, the key area positions of the human face, including eyebrows, eyes, a nose, a mouth, a face contour and the like, are positioned. Preferably, the present invention extracts only the leftmost cheek point P0, the lowest chin point P1, and the rightmost cheek point P2 in the human face. The P0, P1, P2 information for each portrait photograph collectively make up the ROI keypoint F array for that photograph, i.e., F ═ P0, P1, P2. The detection of the face key points can adopt any third-party face key point SDK.
And S12, calculating the average values of the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek of all the portrait photos, and jointly forming a first interesting area key point array.
The third region-of-interest key point array is F = {P0, P1, P2}; for the N collected portrait photos, the third region-of-interest key point array corresponding to the i-th portrait photo is Fi = {P0i, P1i, P2i}, where P0i, P1i and P2i are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek of the i-th portrait photo respectively. Thus, the means of the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek are:
MP0 = (1/N) × Σ(i=1..N) P0i
MP1 = (1/N) × Σ(i=1..N) P1i
MP2 = (1/N) × Σ(i=1..N) P2i
wherein MP0, MP1 and MP2 are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek of the portrait photo composition template respectively, and (x, y) are the x and y coordinate values of the corresponding key point.
Therefore, the first region-of-interest key point array is MF = {MP0, MP1, MP2}. The portrait photo composition template M is thus completely created, comprising the width WIDTH, the height HEIGHT and the ROI key point array MF.
S2, extracting key points corresponding to the key points of the first region of interest in the portrait photos to be cut, and generating a second region of interest key point array;
for the portrait photo S to be cut, the method is consistent with the portrait photo composition template, only the leftmost point SP0 of the cheek, the lowest point SP1 of the chin and the rightmost point SP2 of the cheek in the portrait photo S to be cut are detected, and any third-party face key point SDK can be adopted for detecting the face key point. Specifically, taking 101 face key as an example, face key point calculation is performed on the input picture S to obtain a face key point P: p ═ x0, y0, x1, y1... x100, y100}, the leftmost point SP0 of the cheek, the lowest point SP1 of the chin, and the rightmost point SP2 of the cheek are extracted from the P key points, and stored in an SF array, and recorded as ROI key point array SF of the image S. Thus, the second region of interest keypoint array is SF ═ { SP0, SP1, SP2 }.
S3, calculating the reference points of the portrait photo composition template and of the portrait photo to be cut based on the first region-of-interest key point array and the second region-of-interest key point array respectively;
The reference points of the portrait photo composition template and of the portrait photo to be cut are the mean values of the key points in the corresponding first and second region-of-interest key point arrays. Therefore, the reference point O of the portrait photo composition template M is:
O = (MP0 + MP1 + MP2) / 3
wherein ox and oy are the x and y coordinate values of the reference point O.
The reference point OS of the portrait photo S to be cut is:
OS = (SP0 + SP1 + SP2) / 3
wherein osx and osy are the x and y coordinate values of the reference point OS.
S4, calculating composition offset based on the width, height and reference point of the portrait photo composition template and the width, height and reference point of the portrait photo to be cut;
in order to realize the automatic cutting of the portrait photo, the invention calculates the position relation between the portrait photo to be cut and the portrait photo composition template, so as to position the cutting area in the portrait photo to be cut according to the cutting area of the portrait photo composition template, thereby realizing the automatic cutting of the portrait photo. As an implementation manner of the present invention, a specific implementation flow of S4 is as follows:
s41, calculating mapping position coordinates of the reference points of the portrait photos to be cut in the portrait photo composition template based on the width and height of the portrait photo composition template and the width, height and reference points of the portrait photos to be cut;
Assuming that the width and height of the portrait photo S to be cut are width and height respectively, the mapping position coordinates E(ex, ey) are calculated as follows:
ex = osx × WIDTH / width
ey = osy × HEIGHT / height
s42, setting an influence factor, and calculating composition offset based on the mapping position coordinates;
The influence factor balances the mapping position coordinates against the reference point coordinates of the portrait photo to be cut. Specifically, the composition offset B(Bx, By) is calculated as follows:
Bx=osx×(1-factor)+ex×factor-ox
By=osy×(1-factor)+ey×factor-oy
wherein factor ∈ (0, 1); preferably, factor can be 0.5.
S5, calculating an affine transformation matrix between the portrait photo composition template and the portrait photo to be cut based on the composition offset, the first interested area key point array and the second interested area key point array;
the automatic cutting of the portrait photos needs to realize the automatic mapping of the portrait photos to be cut according to a portrait photo composition template, and the specific calculation steps of the affine transformation matrix between the portrait photo composition template and the portrait photos to be cut are as follows:
s51, calculating a target region-of-interest key point array based on the composition offset and the first region-of-interest key point array;
Given the composition offset B(Bx, By) and the first region-of-interest key point array MF = {MP0, MP1, MP2}, the target region-of-interest key point array DF = {DP0, DP1, DP2} is:
DP0=MP0+B
DP1=MP1+B
DP2=MP2+B
and S52, calculating an affine transformation matrix based on the target region-of-interest key point array and the second region-of-interest key point array.
The affine transformation matrix H satisfies:
SF = DF · H
wherein each key point in DF is augmented to a row vector (x, y, 1), and H is a 3×2 matrix whose first two rows carry the linear part of the transform and whose last row carries the translation.
s6, performing affine matching on the vertex coordinates in the portrait photo composition template based on the affine transformation matrix, generating a clipping frame, and clipping the portrait photo to be clipped.
The cutting frame should cover all the position points of the region of interest, so the invention applies the affine transformation to the vertex coordinates of the portrait photo composition template to obtain the corresponding vertex positions in the portrait photo to be cut. Thus, the four vertex coordinates M0(mx0, my0), M1(mx1, my1), M2(mx2, my2), M3(mx3, my3) of the portrait photo composition template M are transformed according to the affine transformation matrix H to obtain four new vertex coordinates D0(dx0, dy0), D1(dx1, dy1), D2(dx2, dy2), D3(dx3, dy3).
The portrait photo is cut with a rectangular cutting frame whose edges are parallel to the coordinate axes of the portrait photo. Therefore, the position of the rectangular cutting frame is determined by the coordinates of its upper left corner and lower right corner, specifically:
minx=MIN(dx0,dx1,dx2,dx3)
miny=MIN(dy0,dy1,dy2,dy3)
maxx=MAX(dx0,dx1,dx2,dx3)
maxy=MAX(dy0,dy1,dy2,dy3)
K={minx,miny,maxx,maxy}
That is, the K array stores the coordinates of the upper left corner and the lower right corner of the cutting rectangle. A unique cutting frame is determined by these two corner points; the portrait photo to be cut is cut according to this frame, generating the final cut portrait photo D.
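The final cut can be sketched with NumPy slicing (an illustrative sketch; the rounding and clamping policy is our own choice, not specified by the patent):

```python
import numpy as np

def cut_with_box(image, k):
    """Cut the final photo D out of image S using K = (minx, miny, maxx,
    maxy), rounding to integer pixels and clamping to the image bounds.
    image: H x W (optionally x C) numpy array."""
    minx, miny, maxx, maxy = (int(round(v)) for v in k)
    h, w = image.shape[:2]
    minx, maxx = max(0, minx), min(w, maxx)
    miny, maxy = max(0, miny), min(h, maxy)
    return image[miny:maxy, minx:maxx]

img = np.arange(100).reshape(10, 10)
d = cut_with_box(img, (2.2, 1.0, 5.0, 4.0))  # rows 1..3, cols 2..4
```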
Example two
As shown in fig. 2, the present embodiment provides an automatic composition cropping system for fast portrait photos, which includes:
the template creating module is used for collecting a plurality of portrait photos with the same proportion and size and creating a corresponding portrait photo composition template, wherein the portrait photo composition template comprises a width, a height and a first region-of-interest key point array;
The invention automatically composes and crops the portrait photo based on the portrait photo composition template. In this embodiment, the portrait photo composition template is created from a plurality of portrait photos with the same aspect ratio and size. The aspect ratio of the portrait photos may be 4:3, 16:9, 1:1, etc., and is not limited herein. The size information of a portrait photo includes its WIDTH and HEIGHT. To make the created portrait photo composition template more representative, a large number of portrait photos should be collected, for example N > 1000.
The portrait photo composition template comprises a width, a height and a first region-of-interest key point array; since the collected portrait photos all share the same aspect ratio and size, the width and height of the portrait photo composition template are the same as those of the collected portrait photos.
In machine vision, image processing and related fields, a region of interest (ROI) is a region to be processed that is delineated in the image by a box, circle, ellipse, irregular polygon, etc.; it is usually the focus of image analysis and is fixed for further processing. Since the invention crops portrait photos and the human face is the core region of concern, the region of interest here is the face region, and the first region-of-interest key point array stores face key point data. Face key points mark the key positions of a face, usually including the eyebrows, eyes, nose, mouth, face contour, etc.; processing all of them would involve a huge amount of data and low efficiency, so the portrait photo composition template is characterized by only a small number of face key points. Preferably, the invention adopts the lowest point of the chin, the leftmost point of the cheek and the rightmost point of the cheek as the face key points. Accordingly, the template creating module includes:
the detection module is used for detecting a third interested area key point array of each portrait photo, and the third interested area key point array comprises a leftmost point of the cheek, a lowest point of the chin and a rightmost point of the cheek;
the detection of the key points of the human face comprises the detection and the positioning of the key points of the human face or the alignment of the human face, which means that given human face images, the key area positions of the human face, including eyebrows, eyes, a nose, a mouth, a face contour and the like, are positioned. Preferably, the present invention extracts only the leftmost cheek point P0, the lowest chin point P1, and the rightmost cheek point P2 in the human face. The P0, P1, P2 information for each portrait photograph collectively make up the ROI keypoint F array for that photograph, i.e., F ═ P0, P1, P2. The detection of the face key points can adopt any third-party face key point SDK.
The first calculating module is used for calculating the means of the leftmost cheek point, the lowest chin point and the rightmost cheek point over all the portrait photos, which together form the first region-of-interest key point array.
The third region-of-interest key point array is F = {P0, P1, P2}. For the N collected portrait photos, the third region-of-interest key point array corresponding to the i-th portrait photo is Fi = {P0i, P1i, P2i}, where P0i, P1i and P2i are the leftmost cheek point, the lowest chin point and the rightmost cheek point of the i-th portrait photo, respectively. Thus, the means of the leftmost cheek point, the lowest chin point and the rightmost cheek point are:
MP0(x, y) = (1/N) · Σ_{i=1}^{N} P0i(x, y)

MP1(x, y) = (1/N) · Σ_{i=1}^{N} P1i(x, y)

MP2(x, y) = (1/N) · Σ_{i=1}^{N} P2i(x, y)
wherein MP0, MP1 and MP2 are the leftmost cheek point, the lowest chin point and the rightmost cheek point of the portrait photo composition template, respectively, and (x, y) are the x and y coordinate values of the corresponding key point.
Therefore, the first region-of-interest key point array is MF = {MP0, MP1, MP2}. The portrait photo composition template M is thus completely created and comprises the width WIDTH, the height HEIGHT and the ROI key point array MF.
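The averaging performed by the first calculating module can be sketched as follows (assuming numpy; the function name and toy data are mine): stack the N per-photo arrays Fi and average component-wise.

```python
import numpy as np

def build_template_keypoints(per_photo_keypoints):
    """Average the per-photo ROI key point arrays Fi = {P0i, P1i, P2i}
    into the template array MF = {MP0, MP1, MP2}.
    per_photo_keypoints: shape (N, 3, 2) -- N photos, 3 key points,
    (x, y) per point."""
    F = np.asarray(per_photo_keypoints, dtype=float)
    return F.mean(axis=0)                      # shape (3, 2): MP0, MP1, MP2

# Two toy "photos": MF is the component-wise mean of their key points.
MF = build_template_keypoints([[[0, 4], [2, 6], [4, 4]],
                               [[2, 6], [4, 8], [6, 6]]])
```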
The generating module is used for extracting key points corresponding to the key points of the first region of interest in the portrait photos to be cut and generating a second region of interest key point array;
For the portrait photo S to be cropped, consistent with the portrait photo composition template, only the leftmost cheek point SP0, the lowest chin point SP1 and the rightmost cheek point SP2 are detected; any third-party face key point SDK can be used. Specifically, taking a 101-point face key point model as an example, face key point detection is performed on the input photo S to obtain the key point set P = {x0, y0, x1, y1, ..., x100, y100}. The leftmost cheek point SP0, the lowest chin point SP1 and the rightmost cheek point SP2 are extracted from these key points and stored in an array, recorded as the ROI key point array SF of the photo S. Thus, the second region-of-interest key point array is SF = {SP0, SP1, SP2}.
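The extraction step can be sketched as below. Note the three landmark indices used here are hypothetical defaults of my own; real face key point SDKs number their 101 landmarks differently, so the indices must be looked up in the SDK's documentation.

```python
def extract_roi_keypoints(landmarks, left_idx=0, chin_idx=50, right_idx=100):
    """Pick SP0 (leftmost cheek), SP1 (lowest chin) and SP2 (rightmost
    cheek) out of a flat 101-point result P = {x0, y0, ..., x100, y100}.
    left_idx/chin_idx/right_idx are hypothetical -- consult the SDK."""
    def point(i):
        return (landmarks[2 * i], landmarks[2 * i + 1])
    return [point(left_idx), point(chin_idx), point(right_idx)]
```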
The reference point calculating module is used for calculating a portrait photo composition template and reference points of a portrait photo to be cut on the basis of the first interested region key point array and the second interested region key point array respectively;
The reference points of the portrait photo composition template and of the portrait photo to be cropped are the means of the key points in the corresponding first and second region-of-interest key point arrays. The reference point O of the portrait photo composition template M is therefore:
ox = (MP0x + MP1x + MP2x) / 3
oy = (MP0y + MP1y + MP2y) / 3
wherein ox and oy are the x and y coordinate values of the reference point O.
The reference point OS of the portrait photo S to be cut is:
osx = (SP0x + SP1x + SP2x) / 3
osy = (SP0y + SP1y + SP2y) / 3
wherein osx and osy are the x and y coordinate values of the reference point OS.
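Both reference points are simply the centroid of three points, so one helper covers the template and the photo alike (a sketch; the function name is mine):

```python
def reference_point(roi_keypoints):
    """Reference point = centroid of the three ROI key points:
    O = mean of MF for the template, OS = mean of SF for the photo."""
    xs = [p[0] for p in roi_keypoints]
    ys = [p[1] for p in roi_keypoints]
    return (sum(xs) / 3.0, sum(ys) / 3.0)
```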
The offset calculation module is used for calculating composition offsets based on the width, the height and the reference points of the portrait photo composition template and the width, the height and the reference points of the portrait photos to be cut;
To crop a portrait photo automatically, the invention calculates the positional relationship between the portrait photo to be cropped and the portrait photo composition template, so that the cropping region in the photo can be located from the cropping region of the template. As an implementation of the invention, the offset calculation module includes:
the second calculation module is used for calculating mapping position coordinates of the reference points of the portrait photos to be cut in the portrait photo composition template based on the width and the height of the portrait photo composition template and the width, the height and the reference points of the portrait photos to be cut;
Assuming the width and height of the portrait photo S to be cropped are width and height, respectively, the mapping position coordinate E(ex, ey) is calculated as follows:
ex = osx × WIDTH / width

ey = osy × HEIGHT / height
the third calculation module is used for setting an influence factor and calculating composition offset based on the mapping position coordinate;
The influence factor is used to adjust between the mapping position coordinate and the reference point coordinate of the portrait photo to be cropped. Specifically, the composition offset B(Bx, By) is calculated as follows:
Bx=osx×(1-factor)+ex×factor-ox
By=osy×(1-factor)+ey×factor-oy
wherein factor ∈ (0, 1); preferably, factor may be taken as 0.5.
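The mapping and offset computed by the second and third calculation modules can be sketched together (the function name is mine; the formulas follow the definitions above):

```python
def composition_offset(os_pt, o_pt, photo_size, template_size, factor=0.5):
    """Map the photo's reference point OS into the template to get E,
    blend OS and E with the influence factor, and subtract the template
    reference point O, giving B = (Bx, By).
    photo_size = (width, height) of S; template_size = (WIDTH, HEIGHT) of M."""
    osx, osy = os_pt
    ox, oy = o_pt
    width, height = photo_size
    WIDTH, HEIGHT = template_size
    ex = osx * WIDTH / width            # mapping position coordinate E
    ey = osy * HEIGHT / height
    bx = osx * (1 - factor) + ex * factor - ox
    by = osy * (1 - factor) + ey * factor - oy
    return bx, by
```

When the photo and template have the same size, E coincides with OS and B reduces to the plain difference OS − O, which is consistent with the role of the factor as a blend weight.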
The affine transformation matrix calculation module is used for calculating an affine transformation matrix between the portrait photo composition template and the portrait photo to be cut based on the composition offset, the first interested area key point array and the second interested area key point array;
Automatic cropping requires mapping the portrait photo to be cropped according to the portrait photo composition template through an affine transformation matrix between the two. The affine transformation matrix calculation module specifically includes:
the fourth calculation module is used for calculating a target region-of-interest key point array based on the composition offset and the first region-of-interest key point array;
For the composition offset B(Bx, By) and the first region-of-interest key point array MF = {MP0, MP1, MP2}, the target region-of-interest key point array DF = {DP0, DP1, DP2} is:
DP0=MP0+B
DP1=MP1+B
DP2=MP2+B
and the fifth calculation module is used for calculating an affine transformation matrix based on the target region-of-interest key point array and the second region-of-interest key point array.
The affine transformation matrix H satisfies:
Writing each key point as a homogeneous row vector (x, y, 1), H is a 3×3 matrix of the form

H = | h11 h12 0 |
    | h21 h22 0 |
    | h31 h32 1 |

where the last row carries the translation, so that:
SF=DF·H
and the cutting module is used for carrying out affine matching on vertex coordinates in the portrait photo composition template based on the affine transformation matrix, generating a cutting frame and cutting the portrait photo to be cut.
The cropping frame must cover all position points of the region of interest, so the invention applies the affine transformation to the vertex coordinates of the portrait photo composition template to obtain the corresponding vertex coordinates in the portrait photo to be cropped. Thus, the four vertex coordinates M0(mx0, my0), M1(mx1, my1), M2(mx2, my2), M3(mx3, my3) of the portrait photo composition template M are affine transformed according to the matrix H to obtain four new vertex coordinates D0(dx0, dy0), D1(dx1, dy1), D2(dx2, dy2), D3(dx3, dy3).
The portrait photo is cropped with an axis-aligned rectangular frame whose vertical edges are perpendicular to the horizontal axis of the photo. The position of the rectangular cropping frame is therefore fully determined by the coordinates of its upper-left and lower-right corners, specifically:
minx=MIN(dx0,dx1,dx2,dx3)
miny=MIN(dy0,dy1,dy2,dy3)
maxx=MAX(dx0,dx1,dx2,dx3)
maxy=MAX(dy0,dy1,dy2,dy3)
K={minx,miny,maxx,maxy}
That is, the K array stores the upper-left corner (minx, miny) and lower-right corner (maxx, maxy) of the cropping rectangle. These two corner coordinates determine a unique cropping frame; the portrait photo to be cropped is cropped with this frame to generate the final cropped portrait photo D.
In summary, the automatic composition cropping method and system for rapid portrait photos provided by the invention create a portrait photo composition template and achieve rapid cropping of a portrait photo through the affine relationship between corresponding key points in the template and in the photo to be cropped. Automatic cropping makes the produced portrait photos more aesthetically pleasing while reducing the workload of manual processing. Moreover, because cropping is performed rapidly via an affine transformation, the invention avoids the low processing efficiency of existing deep-learning-based automatic composition cropping methods, has low hardware requirements, can run quickly on mobile terminals such as mobile phones, and offers high working efficiency. Since the template is created and mapped using only the leftmost cheek point, the lowest chin point and the rightmost cheek point, the amount of data processed is minimized, further improving the cropping efficiency.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An automatic composition cutting method for fast portrait photos is characterized by comprising the following steps:
S1, collecting a plurality of portrait photos with the same proportion and size, and creating a corresponding portrait photo composition template, wherein the portrait photo composition template comprises a width, a height and a first region-of-interest key point array;
S2, extracting key points corresponding to the first region-of-interest key points in the portrait photo to be cropped, and generating a second region-of-interest key point array;
S3, calculating reference points of the portrait photo composition template and of the portrait photo to be cropped based on the first region-of-interest key point array and the second region-of-interest key point array, respectively;
S4, calculating a composition offset based on the width, height and reference point of the portrait photo composition template and the width, height and reference point of the portrait photo to be cropped;
S5, calculating an affine transformation matrix between the portrait photo composition template and the portrait photo to be cropped based on the composition offset, the first region-of-interest key point array and the second region-of-interest key point array;
S6, performing affine matching on the vertex coordinates in the portrait photo composition template based on the affine transformation matrix, generating a cropping frame, and cropping the portrait photo to be cropped.
2. The automatic composition clipping method according to claim 1, wherein generating the first region of interest keypoint array specifically comprises:
S11, detecting a third region-of-interest key point array F = {P0, P1, P2} for each portrait photo, the third region-of-interest key point array including a cheek leftmost point P0, a chin lowest point P1 and a cheek rightmost point P2;
S12, calculating the means of the leftmost cheek point, the lowest chin point and the rightmost cheek point over all the portrait photos to form a first region-of-interest key point array MF = {MP0, MP1, MP2};
MP0(x, y) = (1/N) · Σ_{i=1}^{N} P0i(x, y)

MP1(x, y) = (1/N) · Σ_{i=1}^{N} P1i(x, y)

MP2(x, y) = (1/N) · Σ_{i=1}^{N} P2i(x, y)
wherein MP0, MP1 and MP2 are the leftmost cheek point, the lowest chin point and the rightmost cheek point of the portrait photo composition template, respectively; N is the number of collected photos; P0i, P1i and P2i are the leftmost cheek point, the lowest chin point and the rightmost cheek point of the i-th portrait photo, respectively; and (x, y) are the x and y coordinate values of the corresponding key point.
3. The automatic composition cutting method as claimed in claim 2, wherein the reference points O of the portrait photo composition template M are:
ox = (MP0x + MP1x + MP2x) / 3
oy = (MP0y + MP1y + MP2y) / 3
wherein ox and oy are the x and y coordinate values of the reference point O, respectively;
the reference point OS of the portrait photo S to be cut is:
osx = (SP0x + SP1x + SP2x) / 3
osy = (SP0y + SP1y + SP2y) / 3
wherein osx and osy are the x and y coordinate values of the reference point OS, respectively; SP0, SP1 and SP2 are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek in the portrait photo to be cropped.
4. The automatic composition clipping method according to claim 3, wherein the step S4 is specifically:
S41, calculating the mapping position coordinate E(ex, ey) of the reference point of the portrait photo to be cropped in the portrait photo composition template, based on the width and height of the portrait photo composition template and the width, height and reference point of the portrait photo to be cropped;
ex = osx × WIDTH / width

ey = osy × HEIGHT / height
wherein width and height are the width and height of the portrait photo to be cropped, respectively, and WIDTH and HEIGHT are the width and height of the portrait photo composition template, respectively;
S42, setting an influence factor, and calculating the composition offset B(Bx, By) based on the mapping position coordinate;
Bx=osx×(1-factor)+ex×factor-ox
By=osy×(1-factor)+ey×factor-oy
Wherein the influence factor belongs to (0, 1).
5. The automatic composition clipping method according to claim 4, wherein the step S5 is specifically:
S51, calculating a target region-of-interest key point array DF = {DP0, DP1, DP2} based on the composition offset and the first region-of-interest key point array;
DP0=MP0+B
DP1=MP1+B
DP2=MP2+B
S52, calculating an affine transformation matrix H based on the target region-of-interest key point array and the second region-of-interest key point array;
the affine transformation matrix H satisfies:
SF = DF · H

where each key point is written as a homogeneous row vector (x, y, 1) and H is a 3×3 matrix of the form

H = | h11 h12 0 |
    | h21 h22 0 |
    | h31 h32 1 |
6. the automatic composition clipping method according to claim 5, wherein the step S6 includes:
S61, affine transforming the four vertex coordinates M0(mx0, my0), M1(mx1, my1), M2(mx2, my2), M3(mx3, my3) of the portrait photo composition template M according to the affine transformation matrix H to obtain four new vertex coordinates D0(dx0, dy0), D1(dx1, dy1), D2(dx2, dy2), D3(dx3, dy3);
S62, calculating a coordinate array K of the upper left corner and the lower right corner of the cutting box based on the new four vertex coordinates:
minx=MIN(dx0,dx1,dx2,dx3)
miny=MIN(dy0,dy1,dy2,dy3)
maxx=MAX(dx0,dx1,dx2,dx3)
maxy=MAX(dy0,dy1,dy2,dy3)
K={minx,miny,maxx,maxy}
7. an automatic composition cropping system for rapid portrait photos, comprising:
the template creating module is used for collecting a plurality of portrait photos with the same proportion and size and creating a corresponding portrait photo composition template, wherein the portrait photo composition template comprises a width, a height and a first region-of-interest key point array; the generating module is used for extracting key points corresponding to the first region-of-interest key points in the portrait photo to be cropped and generating a second region-of-interest key point array;
the reference point calculating module is used for calculating a portrait photo composition template and reference points of a portrait photo to be cut on the basis of the first interested region key point array and the second interested region key point array respectively;
the offset calculation module is used for calculating composition offsets based on the width, the height and the reference points of the portrait photo composition template and the width, the height and the reference points of the portrait photos to be cut;
the affine transformation matrix calculation module is used for calculating an affine transformation matrix between the portrait photo composition template and the portrait photo to be cut based on the composition offset, the first interested area key point array and the second interested area key point array;
and the cutting module is used for carrying out affine matching on vertex coordinates in the portrait photo composition template based on the affine transformation matrix, generating a cutting frame and cutting the portrait photo to be cut.
8. The automatic composition clipping system of claim 7, wherein the template creation module comprises:
a detection module, configured to detect a third region-of-interest key point array F = {P0, P1, P2} for each portrait photo, where the third region-of-interest key point array includes a cheek leftmost point P0, a chin lowest point P1 and a cheek rightmost point P2;
a first calculation module, configured to calculate the means of the leftmost cheek point, the lowest chin point and the rightmost cheek point over all the portrait photos, which form a first region-of-interest key point array MF = {MP0, MP1, MP2};
MP0(x, y) = (1/N) · Σ_{i=1}^{N} P0i(x, y)

MP1(x, y) = (1/N) · Σ_{i=1}^{N} P1i(x, y)

MP2(x, y) = (1/N) · Σ_{i=1}^{N} P2i(x, y)
wherein MP0, MP1 and MP2 are the leftmost cheek point, the lowest chin point and the rightmost cheek point of the portrait photo composition template, respectively; N is the number of collected photos; P0i, P1i and P2i are the leftmost cheek point, the lowest chin point and the rightmost cheek point of the i-th portrait photo, respectively; and (x, y) are the x and y coordinate values of the corresponding key point.
9. The automatic composition cutting system as claimed in claim 8, wherein the reference points O of the portrait photo composition template M are:
ox = (MP0x + MP1x + MP2x) / 3
oy = (MP0y + MP1y + MP2y) / 3
wherein ox and oy are the x and y coordinate values of the reference point O, respectively;
the reference point OS of the portrait photo S to be cut is:
osx = (SP0x + SP1x + SP2x) / 3
osy = (SP0y + SP1y + SP2y) / 3
wherein osx and osy are the x and y coordinate values of the reference point OS, respectively; SP0, SP1 and SP2 are the leftmost point of the cheek, the lowest point of the chin and the rightmost point of the cheek in the portrait photo to be cropped.
10. The automatic composition clipping system of claim 9, wherein the offset calculation module comprises:
a second calculation module, configured to calculate the mapping position coordinate E(ex, ey) of the reference point of the portrait photo to be cropped in the portrait photo composition template, based on the width and height of the portrait photo composition template and the width, height and reference point of the portrait photo to be cropped;
ex = osx × WIDTH / width

ey = osy × HEIGHT / height
wherein width and height are the width and height of the portrait photo to be cropped, respectively, and WIDTH and HEIGHT are the width and height of the portrait photo composition template, respectively;
a third calculation module, configured to set an influence factor and calculate the composition offset B(Bx, By) based on the mapping position coordinate;
Bx=osx×(1-factor)+ex×factor-ox
By=osy×(1-factor)+ey×factor-oy
Wherein, the influence factor belongs to (0, 1);
the affine transformation matrix calculating module includes:
a fourth calculation module, configured to calculate a target region-of-interest key point array DF = {DP0, DP1, DP2} based on the composition offset and the first region-of-interest key point array;
DP0=MP0+B
DP1=MP1+B
DP2=MP2+B
the fifth calculation module is used for calculating an affine transformation matrix H based on the target region-of-interest key point array and the second region-of-interest key point array;
the affine transformation matrix H satisfies:
SF = DF · H

where each key point is written as a homogeneous row vector (x, y, 1) and H is a 3×3 matrix of the form

H = | h11 h12 0 |
    | h21 h22 0 |
    | h31 h32 1 |
CN201911305183.7A 2019-12-17 2019-12-17 Automatic composition cutting method and system for rapid portrait photo Active CN111080667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911305183.7A CN111080667B (en) 2019-12-17 2019-12-17 Automatic composition cutting method and system for rapid portrait photo

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911305183.7A CN111080667B (en) 2019-12-17 2019-12-17 Automatic composition cutting method and system for rapid portrait photo

Publications (2)

Publication Number Publication Date
CN111080667A true CN111080667A (en) 2020-04-28
CN111080667B CN111080667B (en) 2023-04-25

Family

ID=70315375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911305183.7A Active CN111080667B (en) 2019-12-17 2019-12-17 Automatic composition cutting method and system for rapid portrait photo

Country Status (1)

Country Link
CN (1) CN111080667B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036319A (en) * 2020-08-31 2020-12-04 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104361329A (en) * 2014-11-25 2015-02-18 成都品果科技有限公司 Photo cropping method and system based on face recognition
WO2018120662A1 (en) * 2016-12-27 2018-07-05 华为技术有限公司 Photographing method, photographing apparatus and terminal
CN109344693A (en) * 2018-08-13 2019-02-15 华南理工大学 A kind of face multizone fusion expression recognition method based on deep learning
CN109685740A (en) * 2018-12-25 2019-04-26 努比亚技术有限公司 Method and device, mobile terminal and the computer readable storage medium of face normalization
CN109993137A (en) * 2019-04-09 2019-07-09 安徽大学 A kind of fast face antidote based on convolutional neural networks
CN110147833A (en) * 2019-05-09 2019-08-20 北京迈格威科技有限公司 Facial image processing method, apparatus, system and readable storage medium storing program for executing
CN110189252A (en) * 2019-06-10 2019-08-30 北京字节跳动网络技术有限公司 The method and apparatus for generating average face image


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036319A (en) * 2020-08-31 2020-12-04 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and storage medium
CN112036319B (en) * 2020-08-31 2023-04-18 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111080667B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN107146198B (en) Intelligent photo cutting method and device
US11321385B2 (en) Visualization of image themes based on image content
US9547908B1 (en) Feature mask determination for images
EP3105921B1 (en) Photo composition and position guidance in an imaging device
KR101605983B1 (en) Image recomposition using face detection
Tang et al. Content-based photo quality assessment
US8938100B2 (en) Image recomposition from face detection and facial features
WO2020151750A1 (en) Image processing method and device
CN109274891B (en) Image processing method, device and storage medium thereof
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN111242074B (en) Certificate photo background replacement method based on image processing
CN107622497B (en) Image cropping method and device, computer readable storage medium and computer equipment
Islam et al. A survey of aesthetics-driven image recomposition
CN112016469A (en) Image processing method and device, terminal and readable storage medium
CN107368817B (en) Face recognition method and device
CN108416800A (en) Method for tracking target and device, terminal, computer readable storage medium
CN114845158A (en) Video cover generation method, video publishing method and related equipment
CN111080667B (en) Automatic composition cutting method and system for rapid portrait photo
CN111652795A (en) Face shape adjusting method, face shape adjusting device, live broadcast method, live broadcast device, electronic equipment and storage medium
WO2022121843A1 (en) Text image correction method and apparatus, and device and medium
Greco et al. Saliency based aesthetic cut of digital images
CN112839167A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN113610864A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
Lai et al. Correcting face distortion in wide-angle videos
Zhang et al. Pose-based composition improvement for portrait photographs

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 22nd floor, block a, Huaxing Times Square, 478 Wensan Road, Xihu District, Hangzhou, Zhejiang 310000

Applicant after: Hangzhou Xiaoying Innovation Technology Co.,Ltd.

Address before: 16 / F, HANGGANG Metallurgical Science and technology building, 294 Tianmushan Road, Xihu District, Hangzhou City, Zhejiang Province, 310012

Applicant before: HANGZHOU QUWEI SCIENCE & TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant