CN110288520B - Image beautifying method and device and electronic equipment - Google Patents


Info

Publication number
CN110288520B
Application number
CN201910580719.XA
Authority
CN (China)
Prior art keywords
image, waist, target object, point, shoulder
Legal status
Active (granted)
Inventor
黄佳斌
Assignee
Beijing ByteDance Network Technology Co Ltd
Application filed by Beijing ByteDance Network Technology Co Ltd
Other versions
CN110288520A (Chinese-language application publication)

Classifications

    • G06T3/18

Abstract

The embodiments of the disclosure provide an image beautification method, an image beautification apparatus, and an electronic device, belonging to the technical field of image processing. The method includes the following steps: performing key point detection on the hip region and the shoulder region of a target object in a target image to obtain hip key points and shoulder key points; determining the waist center point coordinates and the upper-body length of the target object based on the hip key points and the shoulder key points; determining a waist representation area of the target object from the waist center point coordinates and the upper-body length; and performing a horizontal deformation operation on the image of the waist representation area, centered on the waist center point coordinates and using an acquired deformation parameter for the waist region, to form a beautified image containing the deformed waist. Through the disclosed processing scheme, the attractiveness of the image is improved.

Description

Image beautifying method and device and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image beautifying method and apparatus, and an electronic device.
Background
Image processing, also called picture processing, is the technique of using a computer to transform an image toward a desired result. Digital image processing has been in common use since the 20th century. The main content of image processing technology falls into three parts: image compression, enhancement and restoration, and matching, description and recognition. Common operations include image digitization, image coding, image enhancement, image restoration, image segmentation, and image analysis. Image processing uses a computer to process image information so as to satisfy human visual perception or the needs of an application. It is widely applied, mainly in mapping, atmospheric science, astronomy, artistic rendering, and image improvement and recognition.
With the popularization of smartphones, more and more photos are taken with mobile phones. In the process of photographing with a mobile phone, for various reasons such as the shooting angle, the body proportions of a person in the resulting photo often fail to meet the user's expectations.
Disclosure of Invention
In view of the above, embodiments of the present disclosure provide an image beautification method, an image beautification device, and an electronic device, which at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides an image beautification method, including:
respectively executing key point detection on a hip area and a shoulder area of a target object on a target image to obtain a hip key point and a shoulder key point;
determining the waist center point coordinates and the upper body length of the target object based on the hip key points and the shoulder key points;
determining a waist representation area of the target object by using the waist central point coordinate and the upper body length;
and performing a horizontal deformation operation on the image of the waist representation area, centered on the waist center point coordinates and using an acquired deformation parameter for the waist region, to form a beautified image containing a deformed waist image.
According to a specific implementation manner of the embodiment of the present disclosure, before the performing the keypoint detection on the hip region and the shoulder region of the target object on the target image respectively, the method further includes:
transforming the target image into a grayscale image;
and carrying out edge detection on the gray level image to obtain an edge contour of the target object.
According to a specific implementation manner of the embodiment of the present disclosure, the performing edge detection on the grayscale image to obtain an edge contour of the target object includes:
selecting a plurality of structural elements with different orientations;
carrying out detail matching on the gray level image by utilizing each structural element in a plurality of structural elements to obtain a filtering image;
determining the gray-scale edge of the filtered image to obtain the number of pixels present in each of a plurality of gray levels in the filtered image;
weighting the number of pixels in each gray level, and taking the weighted gray average value as a threshold value;
carrying out binarization processing on the filtered image based on the threshold value;
and taking the image after the binarization processing as an edge image of the target object.
According to a specific implementation manner of the embodiment of the present disclosure, the determining the coordinates of the waist center point and the upper body length of the target object based on the hip key points and the shoulder key points includes:
acquiring a first distance D between the hip key points and the shoulder key points;
and taking the product of the first distance and a preset coefficient as the upper half body length.
According to a specific implementation manner of the embodiment of the present disclosure, the determining the coordinates of the waist center point and the upper body length of the target object based on the hip key points and the shoulder key points includes:
respectively acquiring a first average coordinate M1 of a hip key point and a second average coordinate M2 of a shoulder key point;
and taking a1 × M1 + a2 × M2 as the waist center point coordinates, wherein a1 and a2 are waist center point correction coefficients, a1 and a2 are both greater than 0, and a1 + a2 = 1.
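This weighted combination of the hip and shoulder averages can be sketched as follows. The function names and the example values a1 = 0.6, a2 = 0.4 are assumptions of this illustration; the text only requires a1, a2 > 0 and a1 + a2 = 1.

```python
def mean_point(points):
    """Average coordinate M of a list of (x, y) key points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def waist_center(hip_points, shoulder_points, a1=0.6, a2=0.4):
    """Waist center as a1 * M1 + a2 * M2, where M1 and M2 are the
    mean hip and shoulder coordinates (a1, a2 are illustrative)."""
    m1 = mean_point(hip_points)
    m2 = mean_point(shoulder_points)
    return (a1 * m1[0] + a2 * m2[0], a1 * m1[1] + a2 * m2[1])
```

For example, with hip points averaging to M1 = (1, 0) and shoulder points to M2 = (1, 10), the sketch places the waist center at (1.0, 4.0), i.e. 40% of the way up from the hips toward the shoulders.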
According to a specific implementation manner of the embodiment of the present disclosure, the determining the waist representation area of the target object by using the waist center point coordinates and the upper body length includes:
respectively obtaining the horizontal lengths N1 and N2 spanned by the hip key points and the waist key points;
taking b1 × N1 + b2 × N2 as the width of the waist representation area, wherein b1 and b2 are width correction coefficients, b1 and b2 are both greater than 0, and b1 + b2 < 1;
taking b3 × D as the height of the waist representation area, wherein b3 is a height correction coefficient and b3 > 1;
taking the waist central point as the central point of the waist representation area.
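The width and height rules above can be sketched in a few lines. The coefficient values used here are illustrative assumptions within the stated constraints (b1, b2 > 0, b1 + b2 < 1, b3 > 1), and the `(cx, cy, width, height)` return convention is a choice of this sketch:

```python
def waist_region(center, n1, n2, d, b1=0.4, b2=0.5, b3=1.2):
    """Waist representation area centered on the waist center point.

    n1, n2: horizontal lengths spanned by the hip and waist key points;
    d: first distance between hip and shoulder key points.
    """
    width = b1 * n1 + b2 * n2    # b1*N1 + b2*N2, with b1 + b2 < 1
    height = b3 * d              # b3*D, with b3 > 1
    return (center[0], center[1], width, height)
```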
According to a specific implementation manner of the embodiment of the present disclosure, the performing of the keypoint detection on the hip region and the shoulder region of the target object on the target image respectively includes:
performing edge detection on the target object to obtain contour points of the target object;
selecting any contour point as the starting scan point, and establishing a row pointer and a column pointer pointing to the starting scan point, together with a total pointer that advances the row pointer to the right and the column pointer downward;
when scanning other scanning points except the initial scanning point based on the contour point, correspondingly establishing a row pointer, a column pointer and a total pointer of other scanning points;
based on the row pointer, the column pointer and the total pointer, key points of a hip region and a shoulder region of the target object are determined.
According to a specific implementation manner of the embodiment of the present disclosure, the determining key points of the hip region and the shoulder region of the target object based on the row pointer, the column pointer, and the total pointer includes:
acquiring the row pointer whose row coordinate is Y = Y0 + R0 on the human body contour points, wherein Y0 is the average row coordinate of the hip-region key points, and R0 is the head radius determined from the hip-region key points;
traversing all key points Pi(Xi, Yi) of the shoulder region: if X0 - R0 < Xi < X0, Pi is determined as a left key point of the target object, and if X0 < Xi < X0 + R0, Pi is determined as a right key point of the target object;
from the finally determined left and right keypoints, keypoints of the hip region and the shoulder region of the target object are determined.
In a second aspect, an embodiment of the present disclosure provides an image beautification device, including:
the detection module is used for respectively executing key point detection on a hip region and a shoulder region of a target object on a target image to obtain a hip key point and a shoulder key point;
the first determining module is used for determining the coordinates of the waist center point and the upper body length of the target object based on the hip key points and the shoulder key points;
the second determining module is used for determining a waist representing area of the target object by utilizing the waist central point coordinate and the upper body length;
and the beautifying module is used for executing horizontal deformation operation on the image of the waist representation area by using the acquired deformation parameters aiming at the waist area and taking the central point coordinate as a center to form a beautifying image containing a waist deformation image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image beautification method of the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image beautification method of the first aspect or any implementation manner of the first aspect.
In a fifth aspect, the present disclosure also provides a computer program product including a computer program stored on a non-transitory computer readable storage medium, the computer program including program instructions that, when executed by a computer, cause the computer to perform the image beautification method in the foregoing first aspect or any implementation manner of the first aspect.
The image beautification scheme in the embodiments of the disclosure includes: performing key point detection on the hip region and the shoulder region of a target object in a target image to obtain hip key points and shoulder key points; determining the waist center point coordinates and the upper-body length of the target object based on the hip key points and the shoulder key points; determining a waist representation area of the target object from the waist center point coordinates and the upper-body length; and performing a horizontal deformation operation on the image of the waist representation area, centered on the waist center point coordinates and using an acquired deformation parameter for the waist region, to form a beautified image containing the deformed waist. Through the disclosed scheme, the attractiveness of the image is improved.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a process for beautifying a picture according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of key points based on a human body according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another process for beautifying a picture according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another process for beautifying a picture according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a picture beautification device according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of an electronic device provided in an embodiment of the disclosure.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It should be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides an image beautifying method. The image beautification method provided by the embodiment can be executed by a computing device, the computing device can be implemented as software, or implemented as a combination of software and hardware, and the computing device can be integrally arranged in a server, a terminal device and the like.
Referring to fig. 1, an image beautification method provided by an embodiment of the present disclosure includes:
s101, respectively executing key point detection on the hip region and the shoulder region of the target object on the target image to obtain a hip key point and a shoulder key point.
The target image is an image which needs to be beautified, and the target image can be a picture obtained by shooting through equipment such as a smart phone and the like or a picture obtained through other modes.
The target object is an object existing in a target image, and generally speaking, the target object may be a person, for example, a user takes an image including a person through a mobile phone or the like, and the person in the image constitutes the target object. The target object may be a human, an animal, or other types of objects.
On the target image, the target object presents a hip region and a shoulder region. After the target image is formed, key point detection can be performed on the hip region and the shoulder region to obtain a plurality of hip key points and a plurality of shoulder key points. The key point detection in the hip region and the shoulder region can be performed by methods such as CPM (Convolutional Pose Machines), PAF (Part Affinity Fields), and the like; the key point detection method is not limited herein.
And S102, determining the waist center point coordinate and the upper body length of the target object based on the hip key point and the shoulder key point.
Referring to fig. 2, the hip keypoints and the shoulder keypoints may contain a keypoint set of multiple keypoints, for example, the hip keypoints may include keypoints P12, P3, and P16, and the shoulder keypoints may include keypoints P4, P2, and P8.
The coordinates of the center point of the waist region are determined from the detected hip key points and shoulder key points. Specifically, the coordinate positions of the hip key points can be obtained, and the center point of the hips calculated from this coordinate information, for example by averaging the hip key point coordinates or by taking a weighted average of them. Other similar ways of calculating the center point coordinates of the hip region are also possible.
Meanwhile, the coordinate positions of the shoulder key points can be acquired, and the center points of the shoulders can be calculated by using the coordinate information of the key points, for example, the coordinate of the key point of the shoulder is calculated in an average manner, or the coordinate of the key point is weighted and averaged. Other similar ways of calculating the center point coordinates of the shoulder region may also be taken.
By combining the center point coordinates T1(x1, y1) of the hip region and the center point coordinates T2(x2, y2) of the shoulder region, the coordinates T3(x3, y3) of the waist center point can be calculated. For example, x3 = 0.6 × x1 + 0.4 × x2 and y3 = 0.6 × y1 + 0.4 × y2. In this way, or in a similar way, the waist center point coordinates T3 can be calculated. After the waist center point coordinates are obtained, the corresponding image processing can be performed on the waist region based on them.
By calculating the distance between the shoulder center point coordinate T2 and the hip center point, the upper body length of the target object can be obtained.
And S103, determining a waist representation area of the target object by using the waist central point coordinate and the upper body length.
In addition to calculating the coordinates of the center point of the waist region, the key point coordinates can be used to estimate the size of the waist. For example, a minimal rectangle can be obtained from the hip and shoulder key point coordinates such that all hip and shoulder key points lie inside it; the rectangle is then compressed in the vertical direction about the waist center point (for example, to 30% of its original height) to obtain a compressed rectangle, and the waist representation area is derived from this compressed rectangle. The waist representation area describes the general extent of the waist. Of course, other shapes such as circles or ellipses may be used instead of a rectangle according to actual needs.
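The bounding-rectangle-plus-vertical-compression idea can be sketched as follows. The function name, the axis-aligned rectangle simplification, and the `keep` parameter are assumptions of this illustration; `keep=0.3` mirrors the 30% example in the text:

```python
def waist_rect(hip_points, shoulder_points, waist_center, keep=0.3):
    """Minimal axis-aligned rectangle over the hip and shoulder key
    points, compressed vertically about the waist center to `keep`
    of its original vertical extent. Returns (x_min, y_min, x_max, y_max)."""
    pts = list(hip_points) + list(shoulder_points)
    x_min = min(p[0] for p in pts)
    x_max = max(p[0] for p in pts)
    y_min = min(p[1] for p in pts)
    y_max = max(p[1] for p in pts)
    cy = waist_center[1]
    # Pull both vertical bounds toward the waist center line.
    new_y_min = cy + keep * (y_min - cy)
    new_y_max = cy + keep * (y_max - cy)
    return (x_min, new_y_min, x_max, new_y_max)
```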
And S104, performing horizontal deformation operation on the image of the waist representation area by using the acquired deformation parameters aiming at the waist area and taking the central point coordinate as a center to form a beautified image comprising a waist deformation image.
When the target image is formed, the waist region of a photographed person often appears slightly larger than other body parts due to the shooting angle or similar factors, or the user may simply want the waist to appear slimmer than it actually is. In either case, narrowing the waist region improves the perceived beauty of the target object.
Before the deformation is performed, a deformation parameter of the waist region may be further obtained, where the deformation parameter indicates a scaling ratio of the waist region, for example, the deformation parameter may be 0.9, which indicates that a scaling process of 0.9 times is performed on the waist region.
The deformation parameter may be obtained in various ways, and as one way, the deformation parameter may be determined according to a value input by a user on a corresponding interactive interface. Alternatively, the deformation parameters may be automatically calculated by automatically calculating the proportions of the target object to the respective body parts on the target graph.
After the deformation parameter is obtained, the waist region can be deformed. For example, the waist region can be compressed in the horizontal direction, or it can be stretched in the vertical direction so that the waist appears narrower.
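One way to realize the horizontal deformation of S104 is inverse-mapped, nearest-neighbour resampling of the columns inside the waist band. The sketch below is a simplified assumption: the function name, the nearest-neighbour sampling, and the omission of boundary blending are all choices of this illustration, not details fixed by the text.

```python
def squeeze_rows(image, x_lo, x_hi, cx, scale):
    """Horizontally squeeze columns in [x_lo, x_hi) toward column cx.

    Each destination column x samples the source column
    cx + (x - cx) / scale, so scale < 1 (e.g. the deformation
    parameter 0.9) narrows the band about its center. `image` is a
    list of rows of pixel values; source indices are clamped.
    """
    out = [row[:] for row in image]
    for row_out, row_src in zip(out, image):
        for x in range(x_lo, x_hi):
            src = round(cx + (x - cx) / scale)
            src = max(0, min(len(row_src) - 1, src))
            row_out[x] = row_src[src]
    return out
```

With `scale=0.5` on a single row `[0, 1, 2, 3, 4]` centered at column 2, the band content is compressed toward the center and the outermost values are repeated at the edges.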
Through the scheme in the application, effective deformation of the waist region of the target object can be guaranteed, and therefore the overall attractiveness of the target object on the target image is improved.
Referring to fig. 3, according to a specific implementation manner of the embodiment of the present disclosure, before performing keypoint detection on the hip region and the shoulder region of the target object on the target image, respectively, the method further includes:
s301, the target image is converted into a grayscale image.
To reduce the computational cost of processing the target image, the target image can first be converted into a grayscale image, and the key point detection performed on the grayscale image.
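The text does not fix a particular grayscale conversion; a common choice is the ITU-R BT.601 luminance weighting, sketched here on a nested-list image (the function name and data layout are assumptions):

```python
def to_gray(rgb_image):
    """Convert a list-of-rows RGB image to grayscale using the
    BT.601 luma weights 0.299 R + 0.587 G + 0.114 B."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row]
            for row in rgb_image]
```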
S302, performing edge detection on the gray level image to obtain an edge contour of the target object.
The edge detection may be performed on the grayscale image by using a variety of methods, referring to fig. 4, in the process of implementing step S302, the edge detection is performed on the grayscale image to obtain the edge contour of the target object, and the method may include the following steps:
s401, selecting a plurality of structural elements with different orientations.
The target object can be detected through the edge detection operator, and if the edge detection operator only adopts one structural element, the output image only contains one kind of geometric information, which is not beneficial to the maintenance of image details. In order to ensure the accuracy of image detection, an edge detection operator containing various structural elements is selected.
S402, carrying out detail matching on the gray level image by using each structural element in the plurality of structural elements to obtain a filtering image.
By using multiple structural elements in different orientations, each structural element being used as a scale to match image details, various details of the image can be adequately preserved while filtering to different types and sizes of noise.
S403, determining the gray-scale edge of the filtered image to obtain the number of pixels present in each of a plurality of gray levels in the filtered image.
After filtering the image, in order to further reduce the amount of calculation, the filtered image may be converted into a gray scale image, and by setting a plurality of gray scales to the gray scale image, the number of pixels present in each gray scale image may be calculated.
S404 weights the number of pixels in each gray scale level, and sets the weighted average value of the gray scales as a threshold value.
For example, gray levels containing many pixels are given large weights and gray levels containing few pixels are given small weights; the average of the weighted gray values is then computed to obtain a weighted average gray value, which is used as the threshold for the subsequent binarization of the grayscale image.
S405, performing binarization processing on the filtering image based on the threshold value.
Based on the threshold, the filtered image can be binarized, for example by setting pixels larger than the threshold to 1 and all other pixels to 0.
And S406, taking the image after the binarization processing as an edge image of the target object.
The edge image of the target object is obtained by assigning colors to the binarized data, for example rendering pixels with a binarization value of 1 as black and pixels with a value of 0 as white.
Through the steps from S401 to S406, the accuracy of target object detection is improved on the premise of reducing system resource consumption.
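Steps S403 to S405 can be illustrated with a short sketch. The exact weighting scheme is not fixed by the text; here each gray level g is weighted by its own pixel count n_g, so the threshold reduces to sum(g * n_g^2) / sum(n_g^2), which is one plausible reading of "weighting the number of pixels in each gray level":

```python
def weighted_gray_threshold(pixels, levels=256):
    """Threshold per S403-S404: weight each gray level by its pixel
    count and take the weighted mean gray value."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    num = sum(g * n * n for g, n in enumerate(hist))
    den = sum(n * n for n in hist)
    return num / den if den else 0

def binarize(pixels, threshold):
    """S405: pixels above the threshold become 1, the rest 0."""
    return [1 if p > threshold else 0 for p in pixels]
```

For instance, three pixels at gray level 10 and one at 200 give a threshold of (10·9 + 200·1) / (9 + 1) = 29, which binarizes the sequence to 0, 0, 0, 1.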
As another implementation manner of the embodiment of the present disclosure, when determining the waist center point coordinates and the upper-body length of the target object from the hip key points and the shoulder key points, a first distance D between the hip key points (e.g., the hip center point coordinates) and the shoulder key points (e.g., the shoulder center point coordinates) may be obtained, and the product of the first distance and a preset coefficient taken as the upper-body length.
As another implementation manner of the embodiment of the present disclosure, determining the waist center point coordinates and the upper-body length of the target object based on the hip key points and the shoulder key points includes: respectively acquiring a first average coordinate M1 of the hip key points and a second average coordinate M2 of the shoulder key points, and taking a1 × M1 + a2 × M2 as the waist center point coordinates, wherein a1 and a2 are waist center point correction coefficients, a1 and a2 are both greater than 0, and a1 + a2 = 1. The specific values of a1 and a2 can be configured flexibly according to actual needs; the way they are set is not limited herein.
As another implementation manner of the embodiment of the present disclosure, determining the waist representation area of the target object by using the waist center point coordinates and the upper-body length may include: respectively obtaining the horizontal lengths N1 and N2 spanned by the hip key points and the waist key points; taking b1 × N1 + b2 × N2 as the width of the waist representation area, wherein b1 and b2 are width correction coefficients, b1 and b2 are both greater than 0, and b1 + b2 < 1; taking b3 × D as the height of the waist representation area, wherein b3 is a height correction coefficient and b3 > 1; and taking the waist center point as the center point of the waist representation area. The specific values of b1, b2 and b3 can be configured flexibly according to actual needs; the way they are set is not limited herein.
As another implementation manner of the embodiment of the present disclosure, performing key point detection on the hip region and the shoulder region of the target object in the target image includes: performing edge detection on the target object to obtain contour points of the target object; selecting any contour point as the starting scan point, and establishing a row pointer and a column pointer pointing to the starting scan point, together with a total pointer that advances the row pointer to the right and the column pointer downward; when scanning the other scan points besides the starting scan point along the contour points, correspondingly establishing row, column and total pointers for those scan points; and determining key points of the hip region and the shoulder region of the target object based on the row pointers, the column pointers and the total pointers.
As another implementation manner of the embodiment of the present disclosure, determining the key points of the hip region and the shoulder region of the target object based on the row pointer, the column pointer and the total pointer includes: acquiring the row pointer whose row coordinate is Y = Y0 + R0 on the human body contour points, wherein Y0 is the average row coordinate of the hip-region key points, and R0 is the head radius determined from the hip-region key points; traversing all key points Pi(Xi, Yi) of the shoulder region and the hip region: if X0 - R0 < Xi < X0, Pi is determined as a left key point of the target object, and if X0 < Xi < X0 + R0, Pi is determined as a right key point of the target object; and determining the key points of the hip region and the shoulder region of the target object from the finally determined left and right key points.
Corresponding to the above method embodiment, referring to fig. 5, the present disclosure also provides an image beautification apparatus 50, comprising:
the detection module 501 is configured to perform keypoint detection on a hip region and a shoulder region of a target object in a target image, respectively, to obtain hip keypoints and shoulder keypoints;
a first determining module 502, configured to determine the waist center point coordinates and the upper body length of the target object based on the hip keypoints and the shoulder keypoints;
a second determining module 503, configured to determine a waist representation region of the target object by using the waist center point coordinates and the upper body length;
and the beautification module 504 is configured to perform a horizontal deformation operation on the image of the waist representation region, using the acquired deformation parameter for the waist region and taking the center point coordinates as the center, to form a beautified image including a waist deformation image.
The apparatus shown in fig. 5 may correspondingly execute the contents of the foregoing method embodiment; for details not described in this embodiment, refer to the foregoing method embodiment, which are not repeated herein.
Referring to fig. 6, an embodiment of the present disclosure also provides an electronic device 60, which includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image beautification method of the method embodiments described above.
Embodiments of the present disclosure also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image beautification method in the foregoing method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the image beautification method in the aforementioned method embodiments.
Referring now to FIG. 6, a schematic diagram of an electronic device 60 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 60 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 60 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 60 to communicate with other devices wirelessly or by wire to exchange data. While the figures illustrate an electronic device 60 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or installed from the storage means 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. An image beautification method, comprising:
performing keypoint detection on a hip region and a shoulder region of a target object in a target image, respectively, to obtain hip keypoints and shoulder keypoints;
determining the waist center point coordinates and the upper body length of the target object based on the hip keypoints and the shoulder keypoints;
determining a waist representation region of the target object by using the waist center point coordinates and the upper body length;
performing a horizontal deformation operation on the image of the waist representation region, using the acquired deformation parameter for the waist region and taking the center point coordinates as the center, to form a beautified image containing a waist deformation image; wherein
the step of determining the waist representation region of the target object by using the waist center point coordinates and the upper body length comprises:
obtaining the horizontal spans N1 and N2 of the hip keypoints and the shoulder keypoints, respectively;
taking b1 × N1 + b2 × N2 as the width of the waist representation region, wherein b1 and b2 are width correction coefficients, b1 and b2 are both greater than 0, and b1 + b2 < 1;
taking b3 × D as the height of the waist representation region, wherein b3 is a height correction coefficient and b3 is less than 1, and D is a first distance between the hip keypoints and the shoulder keypoints;
and taking the waist center point as the center point of the waist representation region.
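The width/height arithmetic of claim 1 can be sketched as follows. The coefficient values b1, b2, b3 below are illustrative placeholders only (the claim merely constrains b1 + b2 < 1 and b3 < 1 with all positive), and the function name and keypoint representation are assumptions:

```python
# Illustrative sketch of the waist-representation-region geometry of claim 1.
# b1, b2, b3 are example values satisfying the claim's constraints, not
# values taken from the patent.

def waist_region_size(hip_pts, shoulder_pts, b1=0.4, b2=0.3, b3=0.25):
    """Return (width, height) of the waist representation region.

    hip_pts, shoulder_pts: lists of (x, y) keypoints.
    """
    # Horizontal spans N1 (hip) and N2 (shoulder).
    n1 = max(x for x, _ in hip_pts) - min(x for x, _ in hip_pts)
    n2 = max(x for x, _ in shoulder_pts) - min(x for x, _ in shoulder_pts)
    # First distance D, taken here as the vertical distance between the
    # averaged hip and shoulder rows (one possible reading of "first distance").
    hip_y = sum(y for _, y in hip_pts) / len(hip_pts)
    sh_y = sum(y for _, y in shoulder_pts) / len(shoulder_pts)
    d = abs(hip_y - sh_y)
    width = b1 * n1 + b2 * n2   # claim: b1*N1 + b2*N2, b1 + b2 < 1
    height = b3 * d             # claim: b3*D, b3 < 1
    return width, height
```

The waist center point (computed per claim 5) then serves as the center of this rectangle.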
2. The method of claim 1, wherein before performing keypoint detection on the hip region and the shoulder region, respectively, of the target object on the target image, the method further comprises:
transforming the target image into a grayscale image;
and performing edge detection on the grayscale image to obtain an edge contour of the target object.
3. The method of claim 2, wherein the performing edge detection on the grayscale image to obtain an edge contour of the target object comprises:
selecting a plurality of structural elements with different orientations;
performing detail matching on the grayscale image using each of the plurality of structural elements to obtain a filtered image;
determining the grayscale edges of the filtered image to obtain the number of pixels present at each of a plurality of gray levels in the filtered image;
weighting the number of pixels at each gray level and taking the weighted gray average value as a threshold;
performing binarization processing on the filtered image based on the threshold;
and taking the binarized image as the edge image of the target object.
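The thresholding and binarization steps of claim 3 can be sketched with a gray-level histogram. This sketch omits the multi-orientation structural-element filtering (which in practice might be done with morphological operations, e.g. OpenCV's `cv2.morphologyEx` with oriented structuring elements); it takes an already-filtered grayscale image as input, and the interpretation of "weighted gray average" as a gray-level-weighted histogram mean is an assumption:

```python
# Hypothetical sketch of the weighted-mean threshold + binarization of
# claim 3, applied to an already-filtered grayscale image.

def weighted_threshold_binarize(gray):
    """Binarize a grayscale image using a weighted-mean threshold.

    gray: 2-D list of ints in 0..255.
    Returns a 2-D list of 0/255 ints.
    """
    # Count pixels at each gray level (the histogram).
    hist = [0] * 256
    total = 0
    for row in gray:
        for v in row:
            hist[v] += 1
            total += 1
    # Weight each level's pixel count by the level itself and take the
    # weighted average over all pixels as the threshold.
    thresh = sum(level * count for level, count in enumerate(hist)) / total
    # Binarize: pixels above the threshold become foreground (255).
    return [[255 if v > thresh else 0 for v in row] for v_row, row in enumerate(gray) for _ in [0]] if False else \
           [[255 if v > thresh else 0 for v in row] for row in gray]
```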
4. The method of claim 1, wherein determining the target object waist center point coordinates and upper body length based on the hip keypoints and the shoulder keypoints comprises:
acquiring a first distance D between the hip keypoints and the shoulder keypoints;
and taking the product of the first distance and a preset coefficient as the upper body length.
5. The method of claim 4, wherein determining the target object waist center point coordinates and upper body length based on the hip keypoints and the shoulder keypoints comprises:
acquiring a first average coordinate M1 of the hip keypoints and a second average coordinate M2 of the shoulder keypoints, respectively;
and taking a1 × M1 + a2 × M2 as the waist center point coordinates, wherein a1 and a2 are waist center point correction coefficients, a1 and a2 are both greater than 0, and a1 + a2 = 1.
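Claims 4 and 5 together define the waist center point and the upper body length. A minimal sketch, assuming illustrative values a1 = 0.6, a2 = 0.4 and a preset coefficient k = 0.5 (the patent fixes only the constraint a1 + a2 = 1 with both positive, and leaves k open):

```python
# Illustrative sketch of claims 4-5: waist centre = a1*M1 + a2*M2,
# upper-body length = k*D. Coefficient values are placeholders.

def waist_center_and_length(hip_pts, shoulder_pts, a1=0.6, a2=0.4, k=0.5):
    """Return ((cx, cy), upper_body_length).

    hip_pts, shoulder_pts: lists of (x, y) keypoints.
    """
    # M1, M2: average coordinates of the hip and shoulder keypoints.
    m1 = (sum(x for x, _ in hip_pts) / len(hip_pts),
          sum(y for _, y in hip_pts) / len(hip_pts))
    m2 = (sum(x for x, _ in shoulder_pts) / len(shoulder_pts),
          sum(y for _, y in shoulder_pts) / len(shoulder_pts))
    # Waist centre as the convex combination a1*M1 + a2*M2.
    center = (a1 * m1[0] + a2 * m2[0], a1 * m1[1] + a2 * m2[1])
    # First distance D between the averaged hip and shoulder keypoints.
    d = ((m1[0] - m2[0]) ** 2 + (m1[1] - m2[1]) ** 2) ** 0.5
    return center, k * d
```

Since a1 + a2 = 1, the center always lies on the segment between M1 and M2, biased toward the hips when a1 > a2.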
6. The method of claim 1, wherein performing keypoint detection for the hip region and the shoulder region, respectively, of the target object on the target image comprises:
performing edge detection on the target object to obtain contour points of the target object;
selecting any contour point as the starting scanning point and establishing a row pointer and a column pointer pointing to the starting scanning point, together with a total pointer, the row pointer oriented rightwards and the column pointer oriented downwards;
when scanning the other scanning points among the contour points besides the starting scanning point, correspondingly establishing a row pointer, a column pointer and a total pointer for each of the other scanning points;
and determining keypoints of the hip region and the shoulder region of the target object based on the row pointers, the column pointers and the total pointers.
7. The method of claim 6, wherein determining key points of hip and shoulder regions of the target object based on the row, column and global pointers comprises:
acquiring the row pointer whose row coordinate on the human-body contour points is Y = Y0 + R0, wherein Y0 is the average row coordinate of the hip-region keypoints and R0 is a head radius determined from the hip-region keypoints;
traversing all keypoints Pi (Xi, Yi) of the shoulder region, determining Pi as a left keypoint of the target object if X0 - R0 < Xi < X0, and determining Pi as a right keypoint of the target object if X0 < Xi < X0 + R0;
and determining the keypoints of the hip region and the shoulder region of the target object from the finally determined left and right keypoints.
8. An image beautification device, characterized by comprising:
the detection module is used for performing keypoint detection on a hip region and a shoulder region of a target object in a target image, respectively, to obtain hip keypoints and shoulder keypoints;
the first determining module is used for determining the waist center point coordinates and the upper body length of the target object based on the hip keypoints and the shoulder keypoints;
the second determining module is used for determining a waist representation region of the target object by using the waist center point coordinates and the upper body length;
the beautification module is used for performing a horizontal deformation operation on the image of the waist representation region, using the acquired deformation parameters for the waist region and taking the center point coordinates as the center, to form a beautified image containing a waist deformation image; wherein
the second determining module is further configured to:
obtaining the horizontal spans N1 and N2 of the hip keypoints and the shoulder keypoints, respectively;
taking b1 × N1 + b2 × N2 as the width of the waist representation region, wherein b1 and b2 are width correction coefficients, b1 and b2 are both greater than 0, and b1 + b2 < 1;
taking b3 × D as the height of the waist representation region, wherein b3 is a height correction coefficient and b3 is less than 1, and D is a first distance between the hip keypoints and the shoulder keypoints;
and taking the waist center point as the center point of the waist representation region.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image beautification method of any of claims 1-7.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image beautification method of any of claims 1-7.
CN201910580719.XA 2019-06-29 2019-06-29 Image beautifying method and device and electronic equipment Active CN110288520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910580719.XA CN110288520B (en) 2019-06-29 2019-06-29 Image beautifying method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910580719.XA CN110288520B (en) 2019-06-29 2019-06-29 Image beautifying method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110288520A CN110288520A (en) 2019-09-27
CN110288520B true CN110288520B (en) 2023-03-31

Family

ID=68020158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910580719.XA Active CN110288520B (en) 2019-06-29 2019-06-29 Image beautifying method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110288520B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105348A (en) * 2019-12-25 2020-05-05 北京市商汤科技开发有限公司 Image processing method and apparatus, image processing device, and storage medium
CN111415382B (en) * 2020-03-02 2022-04-05 北京字节跳动网络技术有限公司 Method and device for processing human body arm body beautification in picture and electronic equipment
CN111415306A (en) * 2020-03-11 2020-07-14 北京字节跳动网络技术有限公司 Method and device for processing human chest body beautification in picture and electronic equipment
CN111402116A (en) * 2020-03-11 2020-07-10 北京字节跳动网络技术有限公司 Method and device for processing human waist body beautification in picture and electronic equipment
CN114913549B (en) * 2022-05-25 2023-07-07 北京百度网讯科技有限公司 Image processing method, device, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103047938A (en) * 2013-01-05 2013-04-17 山西省电力公司大同供电分公司 Method and device for detecting icing thickness of power transmission line
WO2017084204A1 (en) * 2015-11-19 2017-05-26 广州新节奏智能科技有限公司 Method and system for tracking human body skeleton point in two-dimensional video stream
CN108830784A (en) * 2018-05-31 2018-11-16 北京市商汤科技开发有限公司 A kind of image processing method, device and computer storage medium
CN109191414A (en) * 2018-08-21 2019-01-11 北京旷视科技有限公司 A kind of image processing method, device, electronic equipment and storage medium
CN109344693A (en) * 2018-08-13 2019-02-15 华南理工大学 A kind of face multizone fusion expression recognition method based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8340432B2 (en) * 2009-05-01 2012-12-25 Microsoft Corporation Systems and methods for detecting a tilt angle from a depth image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103047938A (en) * 2013-01-05 2013-04-17 山西省电力公司大同供电分公司 Method and device for detecting icing thickness of power transmission line
WO2017084204A1 (en) * 2015-11-19 2017-05-26 广州新节奏智能科技有限公司 Method and system for tracking human body skeleton point in two-dimensional video stream
CN108830784A (en) * 2018-05-31 2018-11-16 北京市商汤科技开发有限公司 A kind of image processing method, device and computer storage medium
CN109344693A (en) * 2018-08-13 2019-02-15 华南理工大学 A kind of face multizone fusion expression recognition method based on deep learning
CN109191414A (en) * 2018-08-21 2019-01-11 北京旷视科技有限公司 A kind of image processing method, device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fast human body component segmentation based on face detection and keypoint recognition; Ma Xuan et al.; Computer Applications and Software; 2013-01-15 (Issue 01); abstract, section 3.1, pp. 274-275 *

Also Published As

Publication number Publication date
CN110288520A (en) 2019-09-27

Similar Documents

Publication Publication Date Title
CN110288520B (en) Image beautifying method and device and electronic equipment
CN110288551B (en) Video beautifying method and device and electronic equipment
CN110189246B (en) Image stylization generation method and device and electronic equipment
CN110415276B (en) Motion information calculation method and device and electronic equipment
CN110070551B (en) Video image rendering method and device and electronic equipment
CN110287891B (en) Gesture control method and device based on human body key points and electronic equipment
CN110298785A (en) Image beautification method, device and electronic equipment
CN110070495B (en) Image processing method and device and electronic equipment
CN110288553A (en) Image beautification method, device and electronic equipment
CN110288521A (en) Image beautification method, device and electronic equipment
CN110264430B (en) Video beautifying method and device and electronic equipment
CN114004905B (en) Method, device, equipment and storage medium for generating character style pictogram
CN110555861B (en) Optical flow calculation method and device and electronic equipment
CN110197459B (en) Image stylization generation method and device and electronic equipment
CN110378936B (en) Optical flow calculation method and device and electronic equipment
CN116596748A (en) Image stylization processing method, apparatus, device, storage medium, and program product
CN110288552A (en) Video beautification method, device and electronic equipment
CN110264431A (en) Video beautification method, device and electronic equipment
CN110619597A (en) Semitransparent watermark removing method and device, electronic equipment and storage medium
CN110288554B (en) Video beautifying method and device and electronic equipment
CN111200705B (en) Image processing method and device
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN113808020A (en) Image processing method and apparatus
CN111223105B (en) Image processing method and device
CN111586261B (en) Target video processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant