CN112102159A - Human body beautifying method, device, electronic equipment and storage medium

Info

Publication number: CN112102159A
Application number: CN202010989845.3A
Authority: CN (China)
Prior art keywords: image, beautifying, region, filled, area
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 华路延
Current Assignee: Guangzhou Huya Technology Co Ltd
Original Assignee: Guangzhou Huya Technology Co Ltd
Priority date: 2020-09-18
Filing date: 2020-09-18
Publication date: 2020-12-18
Application filed by Guangzhou Huya Technology Co Ltd

Classifications

    • G06T3/04

Abstract

The application provides a human body beautifying method and device, an electronic device, and a storage medium. The electronic device uses key points determined in the person image region to locate a local region corresponding to a body part in the person image, and adjusts the range that this local region occupies in the image to be processed according to a preset beautification mode and beautification parameters, thereby beautifying a specific body part of the target person in the image.

Description

Human body beautifying method, device, electronic equipment and storage medium
Technical Field
The application relates to the field of computers, and in particular to a human body beautifying method and device, an electronic device, and a storage medium.
Background
In order to look good on camera, users sometimes need to make aesthetic adjustments to a captured image or video. At present, such beautification is mainly performed by stretching or zooming the whole picture, without considering targeted beautification of individual body parts.
Disclosure of Invention
In order to overcome at least one of the deficiencies in the prior art, an object of the present application is to provide a human body beautifying method applied to an electronic device, the method comprising:
acquiring an image to be processed;
determining a person image region of a target person in the image to be processed;
extracting key points in the person image region, wherein the key points represent joint points of the target person;
determining, according to the key points, a local region at a preset position from the person image region, wherein the local region represents a body part of the target person;
and adjusting the range occupied by the local region in the image to be processed according to a preset beautification mode and preset beautification parameters of the local region.
Optionally, the beautification mode is a zoom mode, and the method further includes:
determining a region to be filled that the zoom mode generates in the image to be processed;
and filling the region to be filled according to image information in the image to be processed outside the region to be filled.
Optionally, the step of filling the region to be filled according to image information in the image to be processed outside the region to be filled includes:
determining a reference region within a preset range from the edge of the region to be filled, along the direction opposite to the zoom direction;
calculating, according to the image information of the reference region, filling image information for filling the region to be filled;
and filling the region to be filled according to the filling image information.
Optionally, a machine learning model is preset in the electronic device, and the step of filling the region to be filled according to image information in the image to be processed outside the region to be filled includes:
filling, by the machine learning model, the region to be filled according to the image information outside the region to be filled.
Optionally, before extracting the key points in the person image region, the key points representing the joint points of the target person, the method further includes:
cropping the person image region from the image to be processed.
Optionally, before adjusting the range occupied by the local region in the image to be processed according to the preset beautification mode and preset beautification parameters of the local region, the method further includes:
providing a display interface, wherein the display interface provides, for the local region, options of candidate beautification modes and parameter input controls for the candidate beautification modes;
and determining the preset beautification mode and preset beautification parameters of the local region in response to a configuration operation in the display interface.
It is another object of the embodiments of the present application to provide a human body beautifying device applied to an electronic device, the human body beautifying device comprising:
an input module, configured to acquire an image to be processed;
a processing module, configured to determine a person image region of a target person in the image to be processed; extract key points in the person image region, wherein the key points represent joint points of the target person; and determine, according to the key points, a local region at a preset position from the person image region, wherein the local region represents a body part of the target person;
and an output module, configured to adjust the range occupied by the local region in the image to be processed according to a preset beautification mode and preset beautification parameters of the local region.
Optionally, the processing module is further configured to:
determine the region to be filled that the zoom mode generates in the image to be processed;
and the output module is further configured to fill the region to be filled according to image information in the image to be processed outside the region to be filled.
It is a further object of the embodiments of the present application to provide an electronic device, which includes a processor and a memory, where the memory stores computer-executable instructions that, when executed by the processor, implement the human body beautifying method.
It is yet another object of the embodiments of the present application to provide a storage medium storing a computer program which, when executed by a processor, implements the human body beautifying method.
Compared with the prior art, the method has the following beneficial effects:
the application provides a human body beautifying method, a human body beautifying device, electronic equipment and a storage medium. The electronic equipment determines a local area corresponding to a body part in the figure image through the determined key point in the figure image area; and adjusting the occupied range in the image to be processed based on the preset beautifying mode and the beautifying parameters, thereby achieving the purpose of beautifying the image of the specific body part of the target person.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
FIG. 1 is a schematic diagram illustrating an overall beautification of a picture according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a method for beautifying a human body according to an embodiment of the present application;
FIG. 4 is a key point diagram provided in an embodiment of the present application;
fig. 5 is a schematic diagram of a region to be filled according to an embodiment of the present application;
fig. 6 is a second flowchart illustrating steps of a human body beautifying method according to an embodiment of the present application;
FIG. 7 is a schematic illustration of a filling method provided in an embodiment of the present application;
fig. 8 is a third schematic flowchart illustrating a process of a human body beautifying method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a human body beautification device according to an embodiment of the present application.
Reference numerals: 100 - electronic device; 110 - human body beautification device; 120 - memory; 130 - processor; 140 - communication unit; 200 - key point; 300 - region to be filled; 410 - first sub-region; 420 - grid point; 400 - second sub-region; 1101 - input module; 1102 - processing module; 1103 - output module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it is noted that the terms "first", "second", "third", and the like are used merely for distinguishing between descriptions and are not intended to indicate or imply relative importance.
In order to look good on camera, users sometimes need to make aesthetic adjustments to a captured image or video. At present, such beautification is mainly performed by stretching or zooming the whole picture, without considering targeted beautification of individual body parts.
It should be understood that aesthetic adjustment of an image or video can be applied in many scenarios, such as live streaming, video conferencing, video editing, and photography. The beautification method is illustrated below with reference to fig. 1, taking a live streaming scenario as an example.
In a live streaming scenario, some streamers may not be satisfied with their own figure and want the live streaming equipment to beautify it to some extent, so that they appear taller and slimmer and achieve a better on-camera effect.
As shown in fig. 1, to make the figure appear taller and slimmer, the live streaming device zooms the streamer's image horizontally and stretches it vertically, so that the streamer's height is increased to a certain extent while the streamer also looks thinner.
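Expressed in code, this whole-picture adjustment amounts to a single non-uniform resize. A minimal sketch with OpenCV (the file name and scale factors are assumptions for illustration, not values from the disclosure):

```python
import cv2

# Illustrative only: shrink the whole frame to 90% width, stretch to 110% height.
frame = cv2.imread("streamer_frame.png")          # assumed input frame
h, w = frame.shape[:2]
taller_slimmer = cv2.resize(frame, (int(w * 0.9), int(h * 1.1)),
                            interpolation=cv2.INTER_LINEAR)
cv2.imwrite("streamer_frame_stretched.png", taller_slimmer)
```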
However, when the whole picture is stretched or zoomed, every part of the body changes to some extent, whereas some streamers only want a slimmer waist, slimmer thighs, and/or slimmer arms.
In view of this, an embodiment of the present application provides a human body beautifying method applied to an electronic device. The electronic device identifies a local region corresponding to a preset body part of a target person in an image to be processed and performs the corresponding beautification based on a preset beautification mode and beautification parameters, thereby achieving targeted beautification of individual body parts.
The electronic device may be, but is not limited to, a server, a smartphone, a personal computer (PC), a tablet computer, a personal digital assistant (PDA), a mobile Internet device (MID), an image capture device, and the like.
Please refer to fig. 2, which illustrates a schematic structure of the electronic device 100. The electronic device 100 includes a body beautification apparatus 110, a memory 120, a processor 130, and a communication unit 140.
The memory 120, the processor 130, and the communication unit 140 are electrically connected to each other, directly or indirectly, to enable data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The human body beautification device 110 includes at least one software functional module which may be stored in the memory 120 in the form of software or firmware, or built into the operating system (OS) of the electronic device 100. The processor 130 is configured to execute the executable modules stored in the memory 120, such as the software functional modules and computer programs included in the human body beautification device 110. When executed by the processor, the computer-executable instructions in the human body beautification device 110 implement the human body beautifying method.
The memory 120 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 120 is used for storing a program, and the processor 130 executes the program after receiving an execution instruction. The communication unit 140 is used for transmitting and receiving data through a network.
The processor 130 may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Please refer to fig. 3, which is a flowchart illustrating a process of the human body beautifying method according to an example of the present application. The following describes each step of the method for beautifying the human body in detail with reference to fig. 3.
In step S90, an image to be processed is acquired.
In step S100, a person image area of a target person in the image to be processed is determined.
There are various ways to determine the person image region of the target person. Conventional image recognition methods can be used, for example recognizing the person image region by the color difference between the person and the background. Alternatively, the person image region may be identified by a machine learning model.
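As a rough sketch of the color-difference idea, assuming a fixed camera and an available clean background frame (both assumptions made here for illustration; a segmentation model could be substituted for the same purpose):

```python
import cv2
import numpy as np

def person_mask(frame: np.ndarray, background: np.ndarray,
                threshold: int = 30) -> np.ndarray:
    """Rough foreground mask from the per-pixel color difference (illustrative only)."""
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    # Remove small speckles so the mask roughly follows the person's silhouette.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```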
Step S120, extracting key points in the person image region, wherein the key points represent the joint points of the target person.
That is, in the embodiment of the present application, the key points corresponding to the joint points of the human skeleton are determined first, and the person image region is then divided by means of these key points.
Step S130, determining, according to the key points, a local region at a preset position from the person image region, wherein the local region represents a body part of the target person.
Step S140, adjusting the range occupied by the local region in the image to be processed according to the preset beautification mode and preset beautification parameters of the local region.
That is, in the embodiment of the present application, the electronic device 100 beautifies the determined local region according to the corresponding beautification mode and beautification parameters, thereby achieving targeted adjustment of individual body parts.
The above steps are described below by way of example with reference to fig. 4. It should be understood that the following example merely illustrates one possible implementation of the above steps and does not mean that the present application is limited to this example.
The person image shown in fig. 4 includes 14 key points 200 determined for the head, shoulders, elbows, wrists, waist, knees, and ankles of the target person. That is, by means of the 14 key points 200, the electronic device 100 can divide the person image into at least a torso region, a thigh region, a lower-leg region, a head region, and an arm region.
Based on the local regions divided by the key points 200, if the thighs need to be slimmed, the electronic device 100 determines the thigh region corresponding to the thighs of the target person in the person image and zooms it, so that the range the thigh region occupies in the image is reduced and the thighs appear slimmer. Since only the range occupied by the thigh region is adjusted, the other regions of the person image are not affected.
Therefore, with the human body beautifying method, the electronic device 100 uses the key points 200 determined in the person image region to locate a local region corresponding to a body part in the person image, and adjusts the range that this local region occupies in the image to be processed based on the preset beautification mode and beautification parameters, thereby beautifying a specific body part of the target person in the image.
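To make the thigh example concrete, the sketch below derives a rectangular thigh region from two joint key points and shrinks it horizontally; the key-point names, margin, and scale factor are assumptions for illustration. The uncovered strip left behind is exactly the region to be filled discussed below:

```python
import cv2
import numpy as np

def slim_region(image: np.ndarray, p_top, p_bottom,
                margin: int = 40, scale: float = 0.85) -> np.ndarray:
    """Shrink (horizontally) the box spanned by two joint key points, e.g. hip and knee."""
    x0 = max(min(p_top[0], p_bottom[0]) - margin, 0)
    x1 = min(max(p_top[0], p_bottom[0]) + margin, image.shape[1])
    y0, y1 = sorted((p_top[1], p_bottom[1]))
    patch = image[y0:y1, x0:x1]
    new_w = int(patch.shape[1] * scale)
    slimmed = cv2.resize(patch, (new_w, patch.shape[0]))
    out = image.copy()
    off = (patch.shape[1] - new_w) // 2
    # Re-centre the slimmed patch; the strips it no longer covers are the regions to be filled.
    out[y0:y1, x0 + off:x0 + off + new_w] = slimmed
    return out

# Hypothetical usage with detected joint key points of the right leg:
# result = slim_region(frame, keypoints["right_hip"], keypoints["right_knee"])
```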
In addition, the adjustment of the range occupied by the local region in the image to be processed mainly falls into two categories: stretching and zooming. Zooming reduces the occupied range, while stretching enlarges it.
Referring to fig. 5 and taking the image area corresponding to the thigh as an example, since zooming reduces the occupied range, a blank region to be filled 300 is left where the original thigh was, and the electronic device 100 needs to fill and repair it so that no obvious beautification trace remains in the image to be processed. Of course, zooming other body parts will likewise generate a region to be filled 300 that needs repair, and the embodiment of the present application is not limited in this respect.
In view of this, referring to fig. 6, the method for beautifying a human body further includes:
s150, judging whether the preset beautifying mode is a zooming mode or not;
in step S160, if yes, the region 300 to be filled generated in the image to be processed by the zooming method is determined.
Step S170, filling the region 300 to be filled according to the image information outside the region 300 to be filled in the image to be processed.
That is, when the preset beautification mode is the zoom mode, the electronic device 100 determines the region to be filled 300 that the zoom mode generates in the image to be processed, and then fills it using image information in the image to be processed outside the region to be filled 300.
It should be understood that, because the region to be filled 300 is filled using image information from outside it in the image to be processed, the filled-in content blends better with the surrounding image and fewer beautification traces remain.
There are various ways to fill the region to be filled 300 using image information outside it in the image to be processed. As one possible implementation, step S170 includes:
step S170-1A, a reference region within a preset range from the edge position of the region to be filled 300 is determined in a direction opposite to the scaling manner.
Step S170-2A, calculating filling image information for filling the region to be filled 300 according to the image information of the reference region.
Step S170-3A, filling the area to be filled 300 according to the filling image information.
That is, through the above steps, the electronic device 100 determines a piece of reference region around the region 300 to be filled, and calculates filling image information for filling the region 300 to be filled through image information of the reference region.
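A small sketch of how such a reference region could be selected, assuming the local region was shrunk horizontally so that the region to be filled 300 is a vertical strip and the direction opposite to the zoom points outward, away from the body; the band width equal to the gap width is an assumed choice:

```python
import numpy as np

def reference_band(image: np.ndarray, fill_x0: int, fill_x1: int,
                   y0: int, y1: int) -> np.ndarray:
    """Return the band of pixels just outside the strip to be filled."""
    band = fill_x1 - fill_x0                       # assumed: same width as the gap
    ref_x1 = min(fill_x1 + band, image.shape[1])   # pixels on the side away from the body
    return image[y0:y1, fill_x1:ref_x1]
```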
Taking the thigh of the target person as an example, as shown in fig. 7, after the image area corresponding to the thigh of the target person is zoomed, a region to be filled 300 is generated. The electronic device 100 determines a reference region within a preset range from the edge of the region to be filled 300, along the direction opposite to the zoom direction.
Further, for the zoomed thigh region, the electronic device 100 determines a plurality of grid points 420 along the edge of the zoomed thigh region and, based on these grid points 420 and along the direction opposite to the zoom direction, divides the region to be filled 300 into a plurality of first sub-regions 410 and the reference region into a plurality of second sub-regions 400.
As a possible filling manner, for each first sub-region 410, the electronic device 100 determines a second sub-region 400 corresponding to the first sub-region 410, calculates an average pixel value of all pixel values in the second sub-region 400, and fills the average pixel value into the first sub-region 410.
As another possible filling manner, for each first sub-region 410 and the corresponding second sub-region 400, the electronic device 100 divides the second sub-region 400 into a plurality of pixel blocks and assigns each block a weight according to its distance from the first sub-region 410, with farther blocks receiving smaller weights; a weighted pixel value is then computed from the pixel blocks and their weights and filled into the first sub-region 410.
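A compact sketch covering both filling manners above (plain average and distance-weighted average). The strip layout, sub-region height, and column-wise inverse-distance weighting are all assumptions for illustration, and a 3-channel image is assumed:

```python
import numpy as np

def fill_gap(image: np.ndarray, gap: tuple, ref: tuple,
             rows_per_subregion: int = 8, weighted: bool = False) -> None:
    """Fill the gap strip from the matching reference strip, sub-region by sub-region.

    gap and ref are (x0, x1, y0, y1) strips of equal height, with the reference
    strip starting right at the edge of the gap. Modifies image in place.
    """
    gx0, gx1, gy0, gy1 = gap
    rx0, rx1, _, _ = ref
    for y in range(gy0, gy1, rows_per_subregion):
        y_end = min(y + rows_per_subregion, gy1)
        ref_block = image[y:y_end, rx0:rx1].astype(np.float64)
        if not weighted:
            value = ref_block.mean(axis=(0, 1))       # average pixel value of the sub-region
        else:
            # Weight reference columns by closeness to the gap: nearer columns count more.
            dist = np.arange(1, ref_block.shape[1] + 1, dtype=np.float64)
            weights = 1.0 / dist
            weights /= weights.sum()
            value = (ref_block * weights[None, :, None]).sum(axis=(0, 1)) / ref_block.shape[0]
        image[y:y_end, gx0:gx1] = value.astype(image.dtype)
```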
As another possible implementation manner, the electronic device 100 is preset with a machine learning model, and based on the preset machine learning model, the step S170 includes:
step S170-1B, filling the region 300 to be filled according to the image information outside the region 300 to be filled through a machine learning model.
That is, in this embodiment, the electronic device 100 may fill the region to be filled 300 according to the image information outside the region to be filled 300 by means of a pre-trained machine learning model.
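The disclosure does not specify the model, so as a placeholder the sketch below uses OpenCV's classical inpainting, which offers the same interface a learned inpainting model would (mask in, filled image out); this substitution is an assumption, not the disclosed method:

```python
import cv2
import numpy as np

def fill_by_model(image: np.ndarray, fill_mask: np.ndarray) -> np.ndarray:
    """Fill the masked region using surrounding image information.

    fill_mask is uint8, 255 inside the region to be filled 300 and 0 elsewhere.
    cv2.inpaint stands in here for the pre-trained machine learning model.
    """
    return cv2.inpaint(image, fill_mask, 3, cv2.INPAINT_TELEA)
```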
Since there are many ways to fill the region to be filled 300, the electronic device 100 may provide an interface with the corresponding filling options for the user to select from.
Taking a live streaming scenario as an example, when the background is simple, filling with the average pixel value or the weighted pixel value can be selected. It should be understood that, since the average pixel value, the weighted pixel value and the like are simple to compute, they save a certain amount of computation.
When the background is complex, the average pixel value or the weighted pixel value has certain limitations. In that case, the machine learning model may be selected to fill the region to be filled 300.
In addition, when a preset region inside the person image region is beautified, the area outside the person image region may also be affected, leaving beautification traces in the beautified image.
In view of this, referring to fig. 8, before step S120, the method for beautifying a human body further includes:
step S110, a person image region is cut out from the image to be processed.
Through the above step, the electronic device 100 cuts the person image region out of the image to be processed and processes it on its own. What remains after the person image region is cut out is the background image region. The electronic device 100 then pastes the beautified person image region back into the cut-out position in the background image region to obtain the beautified image.
Because the cut-out person image region is processed independently, beautifying the person image region cannot affect the background image region, which reduces beautification traces in the beautified image.
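A sketch of this crop-then-paste flow, assuming the person mask and a part-specific beautification function are already available from the earlier steps (both names are illustrative):

```python
import numpy as np

def beautify_in_isolation(image: np.ndarray, person_mask: np.ndarray,
                          beautify_fn) -> np.ndarray:
    """Crop the person image region, beautify it on its own, then paste it back.

    beautify_fn must return a patch of the same size as its input.
    """
    ys, xs = np.where(person_mask > 0)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = image[y0:y1, x0:x1].copy()
    beautified = beautify_fn(patch)                 # processed independently of the background
    out = image.copy()
    inside = person_mask[y0:y1, x0:x1] > 0
    out[y0:y1, x0:x1][inside] = beautified[inside]  # the background image region stays untouched
    return out
```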
In addition, considering that aesthetics are subjective, referring again to fig. 8, before step S140 the human body beautifying method further includes:
in step S70, a display interface is provided.
The display interface provides options of candidate beautifying modes and parameter input controls of the candidate beautifying modes aiming at local areas;
step S80 determines the preset beautification mode and preset beautification parameters of the local area in response to the configuration operation in the display interface.
That is, the electronic device 100 provides a display interface on which the candidate beautification modes and the parameter input controls are displayed, so that the user can customize the beautification effect. To make the customization visual, the display interface also provides a preview area: according to the beautification mode and beautification parameters selected by the user, the electronic device 100 renders the beautified person image into the preview area for the user to inspect.
Considering that the key points 200 in the person image region may delimit more than one local region, the electronic device 100 provides a corresponding display interface so that the user can customize the beautification effect separately for the local regions corresponding to different body parts of the target person.
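One plausible way to hold the per-region choices collected from such an interface is a simple mapping from body part to beautification mode and parameter; every name and value below is an assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class BeautifySetting:
    mode: str      # "zoom" shrinks the occupied range, "stretch" enlarges it
    factor: float  # e.g. 0.9 shrinks to 90%, 1.1 stretches to 110%

# Hypothetical configuration gathered from the display interface controls.
settings = {
    "waist": BeautifySetting(mode="zoom", factor=0.92),
    "thigh": BeautifySetting(mode="zoom", factor=0.88),
    "lower_leg": BeautifySetting(mode="stretch", factor=1.05),
}
```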
Based on the same inventive concept, an embodiment of the present application further provides a human body beautification device 110, which is applied to the electronic device 100. The human body beautification device 110 includes at least one functional module that may be stored in the memory 120 in the form of software. Referring to fig. 9, in terms of functional division, the human body beautification device 110 includes:
an input module 1101, configured to acquire an image to be processed.
In the embodiment of the present application, the input module 1101 is configured to execute step S90 in fig. 3, and as to the detailed description of the input module 1101, reference may be made to the detailed description of step S90.
The processing module 1102 is configured to determine a person image region of a target person in the image to be processed; extract key points 200 in the person image region, wherein the key points 200 represent the joint points of the target person; and determine, according to the key points 200, a local region at a preset position from the person image region, wherein the local region represents a body part of the target person.
In the embodiment of the present application, the processing module 1102 is configured to execute the steps S100, S120, and S130 in fig. 3, and as to the detailed description of the processing module 1102, reference may be made to the detailed description of the steps S100, S120, and S130.
The output module 1103 is configured to adjust a range occupied by the local area in the image to be processed according to the preset beautifying manner and the preset beautifying parameters of the local area.
In the embodiment of the present application, the output module 1103 is configured to perform step S140 in fig. 3, and as to the detailed description of the output module 1103, reference may be made to the detailed description of step S140.
Optionally, the processing module 1102 is further configured to:
determining a region to be filled 300 generated in the image to be processed by a zooming mode;
the output module 1103 is further configured to fill the region to be filled 300 according to image information outside the region to be filled 300 in the image to be processed.
Optionally, as a possible implementation manner, the manner of filling the region to be filled 300 by the processing module 1102 specifically includes:
determining a reference region within a preset range from the edge position of the region 300 to be filled in the direction opposite to the scaling mode;
calculating filling image information for filling the region 300 to be filled according to the image information of the reference region;
the region to be filled 300 is filled according to the filling image information.
Optionally, as another possible implementation manner, the manner of filling the region to be filled 300 by the processing module 1102 specifically includes:
and filling the region 300 to be filled according to the image information outside the region 300 to be filled by the machine learning model.
Optionally, the processing module 1102 is further configured to crop the person image region from the image to be processed.
Optionally, the input module 1101 is further configured to provide a display interface, where the display interface provides, for a local area, options of candidate beautifying manners and parameter input controls of the candidate beautifying manners;
and responding to the configuration operation in the display interface, and determining the preset beautifying mode and preset beautifying parameters of the local area.
The embodiment of the present application further provides an electronic device 100, where the electronic device 100 includes a processor 130 and a memory 120, and the memory 120 stores computer-executable instructions, and when the computer-executable instructions are executed by the processor 130, the method for beautifying the human body is implemented.
The embodiment of the present application further provides a storage medium, in which a computer program is stored, and when the computer program is executed by the processor 130, the method for beautifying the human body is implemented.
In summary, the embodiments of the present application provide a human body beautifying method and device, an electronic device, and a storage medium. The electronic device uses the key points determined in the person image region to locate a local region corresponding to a body part in the person image, and adjusts the range that this local region occupies in the image to be processed according to a preset beautification mode and beautification parameters, thereby beautifying a specific body part of the target person in the image.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and all such changes or substitutions are included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for beautifying a human body, which is applied to an electronic device, the method comprising:
acquiring an image to be processed;
determining a person image region of a target person in the image to be processed;
extracting key points in the person image region, wherein the key points represent joint points of the target person;
determining, according to the key points, a local region at a preset position from the person image region, wherein the local region represents a body part of the target person;
and adjusting the range occupied by the local region in the image to be processed according to a preset beautification mode and preset beautification parameters of the local region.
2. The human body beautifying method according to claim 1, wherein the beautification mode is a zoom mode, the method further comprising:
determining a region to be filled that the zoom mode generates in the image to be processed;
and filling the region to be filled according to image information in the image to be processed outside the region to be filled.
3. The human body beautifying method according to claim 2, wherein the step of filling the region to be filled according to image information in the image to be processed outside the region to be filled comprises:
determining a reference region within a preset range from the edge of the region to be filled, along the direction opposite to the zoom direction;
calculating, according to the image information of the reference region, filling image information for filling the region to be filled;
and filling the region to be filled according to the filling image information.
4. The human body beautifying method according to claim 2, wherein a machine learning model is preset in the electronic device, and the step of filling the region to be filled according to image information in the image to be processed outside the region to be filled comprises:
filling, by the machine learning model, the region to be filled according to the image information outside the region to be filled.
5. The human body beautifying method according to claim 1, wherein before extracting the key points in the person image region, the key points representing the joint points of the target person, the method further comprises:
cropping the person image region from the image to be processed.
6. The human body beautifying method according to claim 1, wherein before adjusting the range occupied by the local region in the image to be processed according to the preset beautification mode and preset beautification parameters of the local region, the method further comprises:
providing a display interface, wherein the display interface provides, for the local region, options of candidate beautification modes and parameter input controls for the candidate beautification modes;
and determining the preset beautification mode and preset beautification parameters of the local region in response to a configuration operation in the display interface.
7. A human body beautifying device, applied to an electronic device, the human body beautifying device comprising:
an input module, configured to acquire an image to be processed;
a processing module, configured to determine a person image region of a target person in the image to be processed; extract key points in the person image region, wherein the key points represent joint points of the target person; and determine, according to the key points, a local region at a preset position from the person image region, wherein the local region represents a body part of the target person;
and an output module, configured to adjust the range occupied by the local region in the image to be processed according to a preset beautification mode and preset beautification parameters of the local region.
8. The human body beautifying device according to claim 7, wherein the beautification mode is a zoom mode, and the processing module is further configured to:
determine the region to be filled that the zoom mode generates in the image to be processed;
and the output module is further configured to fill the region to be filled according to image information in the image to be processed outside the region to be filled.
9. An electronic device, comprising a processor and a memory, wherein the memory stores computer-executable instructions that, when executed by the processor, implement the method of beautifying a human body according to any one of claims 1 to 6.
10. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of beautifying a human body according to any one of claims 1 to 6.
CN202010989845.3A 2020-09-18 2020-09-18 Human body beautifying method, device, electronic equipment and storage medium Pending CN112102159A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010989845.3A CN112102159A (en) 2020-09-18 2020-09-18 Human body beautifying method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010989845.3A CN112102159A (en) 2020-09-18 2020-09-18 Human body beautifying method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112102159A true CN112102159A (en) 2020-12-18

Family

ID=73760041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010989845.3A Pending CN112102159A (en) 2020-09-18 2020-09-18 Human body beautifying method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112102159A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023045941A1 (en) * 2021-09-27 2023-03-30 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination