CN108830200A - Image processing method, device and computer storage medium - Google Patents
Image processing method, device and computer storage medium
- Publication number
- CN108830200A (application CN201810556482.7A / CN201810556482A)
- Authority
- CN
- China
- Prior art keywords
- information
- image
- point information
- contour
- limb
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the invention discloses an image processing method, an image processing apparatus, and a computer storage medium. The method includes: acquiring a first image, identifying a target object in the first image, and obtaining limb detection information of the target object; obtaining, from the limb detection information, first detection information corresponding to a to-be-processed area of the target object; and performing image processing on the to-be-processed area corresponding to the first detection information to generate a second image.
Description
Technical Field
The present invention relates to image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a computer storage medium.
Background
With the rapid development of Internet technology, various image processing tools have appeared that can process a person in an image, for example applying "breast enhancement" or "muscle enhancement" to a target person to make the figure more attractive. However, such image processing requires manual operation by the user, and multiple adjustment operations are usually needed before a satisfactory result is achieved.
Disclosure of Invention
In order to solve the existing technical problems, embodiments of the present invention provide an image processing method, an image processing apparatus, and a computer storage medium.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
the embodiment of the invention provides an image processing method, which comprises the following steps:
acquiring a first image, identifying a target object in the first image, and acquiring limb detection information of the target object;
obtaining first detection information corresponding to a to-be-processed area of the target object in the limb detection information;
and performing image processing on the area to be processed corresponding to the first detection information to generate a second image.
In the above scheme, the limb detection information includes limb contour point information and/or limb key point information;
the limb contour point information comprises coordinate information of the limb contour point;
the limb key point information comprises coordinate information of the limb key points.
In the above scheme, the limb contour point information includes chest contour point information; the limb key point information comprises chest key point information; the area to be processed is the chest region;
the image processing of the to-be-processed area corresponding to the first detection information includes:
carrying out image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information; wherein the image deformation processing comprises stretching and/or compression processing.
In the foregoing solution, the performing image deformation processing on the chest region corresponding to the chest contour point information and/or the chest key point information includes:
determining a center point of the chest region based on the chest contour point information;
stretching the chest region corresponding to the chest contour point information and/or the chest key point information in a direction from the center point toward the outside of the chest region; or compressing the chest region in a direction from the outside of the chest region toward the center point.
In the above scheme, the performing enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information includes:
adding object information to the chest region corresponding to the chest contour point information and/or the chest key point information; or,
identifying the object information of the chest area, and adjusting the display attribute parameters corresponding to the object information.
In the foregoing solution, the performing image deformation processing on the chest region corresponding to the chest contour point information and/or the chest key point information includes:
carrying out image deformation processing on the chest region based on the first type deformation parameters corresponding to each point in the chest region;
wherein the first type deformation parameter varies with a variation in a distance between the corresponding point and a contour edge of the target object.
In the foregoing solution, the performing image processing on the to-be-processed area corresponding to the first detection information to generate a second image includes:
performing image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information to obtain a first processing result;
performing image processing on at least part of background areas except the area where the target object is located in the first image to obtain a second processing result;
generating a second image based on the first processing result and the second processing result.
In the foregoing solution, the performing image processing on at least part of the background area of the first image except for the area where the target object is located includes:
performing image deformation processing on the at least part of the background region based on the second type deformation parameters corresponding to each point in the at least part of the background region;
wherein the second type of deformation parameter varies exponentially with a variation in the distance between the corresponding point and the contour edge of the target object.
In the foregoing solution, the performing image processing on the to-be-processed area corresponding to the first detection information includes:
identifying the type of a to-be-processed area corresponding to the limb contour point information, and adding object information in the to-be-processed area based on the type of the to-be-processed area; or,
identifying object information in the to-be-processed area corresponding to the limb contour point information, and adjusting display attribute parameters corresponding to the object information.
In the foregoing solution, the obtaining first detection information corresponding to the to-be-processed area of the target object in the limb detection information includes:
acquiring contour point information related to a to-be-processed area corresponding to the target object in the limb contour point information; and acquiring key point information related to the to-be-processed area corresponding to the target object in the limb key point information.
The embodiment of the invention also provides an image processing device, which comprises an acquisition unit and an image processing unit; wherein,
the acquisition unit is used for acquiring a first image, identifying a target object in the first image and acquiring limb detection information of the target object; obtaining first detection information corresponding to a to-be-processed area of the target object in the limb detection information;
the image processing unit is configured to perform image processing on the to-be-processed area corresponding to the first detection information obtained by the obtaining unit, and generate a second image.
In the above scheme, the limb detection information includes limb contour point information and/or limb key point information;
the limb contour point information comprises coordinate information of the limb contour point;
the limb key point information comprises coordinate information of the limb key points.
In the above scheme, the limb contour point information includes chest contour point information; the limb key point information comprises chest key point information; the area to be processed is the chest region;
the image processing unit is used for carrying out image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information; wherein the image deformation processing comprises stretch and/or compression deformation processing.
In the above solution, the image processing unit is configured to determine a center point of the chest region based on the chest contour point information; and to stretch the chest region corresponding to the chest contour point information and/or the chest key point information in a direction from the center point toward the outside of the chest region, or to compress the chest region in a direction from the outside of the chest region toward the center point.
In the above scheme, the image processing unit is configured to add object information to the chest region corresponding to the chest contour point information and/or the chest key point information; or identifying the object information of the chest region and adjusting the display attribute parameters corresponding to the object information.
In the above scheme, the image processing unit is configured to perform image deformation processing on the chest region based on a first type of deformation parameter corresponding to each point in the chest region; wherein the first type deformation parameter varies with a variation in a distance between the corresponding point and a contour edge of the target object.
In the above scheme, the image processing unit is configured to perform image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information to obtain a first processing result; performing image processing on at least part of background areas except the area where the target object is located in the first image to obtain a second processing result; generating a second image based on the first processing result and the second processing result.
In the foregoing solution, the image processing unit is configured to perform image deformation processing on the at least part of the background region based on a second type deformation parameter corresponding to each point in the at least part of the background region; wherein the second type of deformation parameter varies exponentially with a variation in the distance between the corresponding point and the contour edge of the target object.
In the above scheme, the image processing unit is configured to identify a type of a to-be-processed area corresponding to the limb contour point information, and add object information to the to-be-processed area based on the type of the to-be-processed area; or identifying object information in the to-be-processed area corresponding to the limb contour point information, and adjusting display attribute parameters corresponding to the object information.
In the above scheme, the obtaining unit is configured to obtain contour point information related to a region to be processed corresponding to the target object in the limb contour point information; and acquiring key point information related to the to-be-processed area corresponding to the target object in the limb key point information.
Embodiments of the present invention also provide a computer-readable storage medium, on which computer instructions are stored, and when the instructions are executed by a processor, the steps of the image processing method according to the embodiments of the present invention are implemented.
The embodiment of the invention also provides an image processing device, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to realize the steps of the image processing method of the embodiment of the invention.
The embodiment of the invention also provides a computer program product, which comprises computer executable instructions, and after the computer executable instructions are executed, the steps of the image processing method can be realized.
The embodiment of the invention provides an image processing method, an image processing apparatus, and a computer storage medium, wherein the method includes: acquiring a first image, identifying a target object in the first image, and acquiring limb detection information of the target object; obtaining first detection information corresponding to a to-be-processed area of the target object in the limb detection information; and performing image processing on the area to be processed corresponding to the first detection information to generate a second image. With the technical solution of the embodiments of the present invention, the limb detection information of the target object in the image is acquired and the to-be-processed area (in particular including the chest region) is processed based on that information, so that the to-be-processed area of the target object is adjusted automatically without repeated manual operations by the user, which greatly improves the user's operation experience.
Drawings
FIG. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a configuration of an image processing apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a hardware configuration of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention provides an image processing method. FIG. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention; as shown in fig. 1, the method includes:
step 101: the method comprises the steps of obtaining a first image, identifying a target object in the first image, and obtaining limb detection information of the target object.
Step 102: and acquiring first detection information corresponding to the to-be-processed area of the target object in the limb detection information.
Step 103: and performing image processing on the area to be processed corresponding to the first detection information to generate a second image.
In this embodiment, the image processing method is applied to an image processing device, and the image processing device may be located in a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, or a terminal such as a desktop computer or an all-in-one computer.
The image processing method of this embodiment performs image processing on a first image, and first identifies a target object in the first image. The target object, as the object to be processed, may be a real person, that is, a real person appearing in the image; in other embodiments, the target object may also be a virtual character.
In this embodiment, the limb detection information includes limb contour point information and/or limb key point information; the limb contour point information comprises coordinate information of the limb contour points; the limb key point information comprises coordinate information of the limb key points. The limb contour points represent the limb contour of the target object, that is, the limb contour edge of the target object can be formed from the coordinate information of the limb contour points. The limb key points represent skeletal key points of the target object, that is, the main skeleton of the target object can be formed by connecting the limb key points according to their coordinate information.
Wherein the limb contour points comprise at least one of: arm contour points, hand contour points, shoulder contour points, leg contour points, foot contour points and waist contour points; the limb keypoints comprise at least one of: arm key points, hand key points, shoulder key points, leg key points, foot key points, and waist key points. The arms can comprise upper arms and lower arms, and the arm key points can comprise upper arm key points and lower arm key points; the arm contour points may include upper arm contour points and lower arm contour points; the legs may include thighs and calves; the leg keypoints may include thigh keypoints and shank keypoints; the leg contour points may include thigh contour points and shank contour points.
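To make the structure of the limb detection information concrete, the following is a minimal sketch of how the contour point information and key point information could be represented; the container class and the part names such as "left_thigh" are illustrative assumptions rather than structures defined by the patent. Python is used here and in the later sketches.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # (x, y) pixel coordinate

@dataclass
class LimbDetectionInfo:
    # Contour points grouped per body part; each list traces the part's edge.
    contour_points: Dict[str, List[Point]] = field(default_factory=dict)
    # Skeletal key points, one coordinate per joint.
    key_points: Dict[str, Point] = field(default_factory=dict)

info = LimbDetectionInfo(
    contour_points={"left_thigh": [(120.0, 340.0), (118.0, 380.0), (121.0, 420.0)]},
    key_points={"left_knee": (132.0, 430.0), "left_ankle": (128.0, 560.0)},
)
```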
In this embodiment, the obtaining first detection information corresponding to the to-be-processed area of the target object in the limb detection information includes: acquiring contour point information related to a to-be-processed area corresponding to the target object in the limb contour point information; and acquiring key point information related to the to-be-processed area corresponding to the target object in the limb key point information.
In this embodiment, the chest or muscle related region of the target object is mainly processed, and the contour point information related to the region to be processed of the target object includes at least one of the following: chest contour point information, abdomen contour point information, arm contour point information, leg contour point information, back contour point information.
For the processing manner of the chest region, in an embodiment, the limb contour point information includes chest contour point information; the limb key point information comprises chest key point information; the area to be processed is the chest region; the image processing of the to-be-processed area corresponding to the first detection information includes: carrying out image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information; wherein the image deformation processing comprises stretch and/or compression deformation processing.
Wherein the image deformation processing performed on the chest region corresponding to the chest contour point information and/or the chest key point information includes: determining a center point of the chest region based on the chest contour point information; and stretching the chest region corresponding to the chest contour point information and/or the chest key point information in a direction from the center point toward the outside of the chest region, or compressing the chest region in a direction from the outside of the chest region toward the center point.
Specifically, contour points on both sides of the chest are identified based on the chest contour point information, and the center point of the chest is determined from these contour points; a circular region is selected with the chest center point as the center and the distance from the center point to the contour points on either side as the radius; an image deformation algorithm is then used to stretch the circular region outward from the center point, or to compress it inward toward the center point. The image processing method of this embodiment is particularly suitable for the case where the target object is a female person, where a "breast enhancement" effect can be achieved on the chest. Of course, when the chest region needs to be reduced, a "chest reduction" effect can likewise be achieved.
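The circular stretch/compress step can be sketched as an inverse pixel remapping around the chest center point. The function name, the quadratic falloff, and the use of OpenCV's remap below are illustrative assumptions, not the patented algorithm.

```python
import cv2
import numpy as np

def radial_warp(image, center, radius, strength):
    """Bulge (strength > 0, "stretch") or pinch (strength < 0, "compress")
    a circular region around `center` with the given `radius`."""
    h, w = image.shape[:2]
    cx, cy = center
    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - cx, ys - cy
    dist = np.sqrt(dx * dx + dy * dy)
    d = np.clip(dist / radius, 0.0, 1.0)       # 0 at the center, 1 at the rim
    # Inverse mapping: for a bulge, each destination pixel samples a source
    # pixel closer to the center, so content appears pushed outward.
    scale = 1.0 - strength * (1.0 - d) ** 2
    inside = dist < radius
    map_x = np.where(inside, cx + dx * scale, xs).astype(np.float32)
    map_y = np.where(inside, cy + dy * scale, ys).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Because the scaling converges to 1 at the rim, pixels on the circle boundary stay fixed and the warp blends into the untouched surroundings.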
The method for enhancing the chest region corresponding to the chest contour point information and/or the chest key point information comprises the following steps: adding object information to the chest region corresponding to the chest contour point information and/or the chest key point information; or identifying the object information of the chest region and adjusting the display attribute parameters corresponding to the object information.
In particular, the processing of the chest region also includes enhancement processing, specifically feature enhancement. The feature enhancement specifically includes: adding object information to the chest region; or identifying object information of the chest region and adjusting the display attribute parameters corresponding to that object information. Specifically, the object information is shadow data, that is, corresponding shadow data is added to the chest region; alternatively, a display attribute parameter corresponding to the object information is adjusted, where the display attribute parameter may specifically be a contrast parameter, i.e., the contrast of the object information is adjusted. In particular, increasing the contrast of the object information increases the stereoscopic appearance of the chest region, that is, achieves a "breast enhancement" effect.
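A sketch of the "adjust display attribute parameters" branch: contrast inside a masked region is raised about the regional mean, which increases the perceived relief of that region. The gain value and the masking approach are assumptions.

```python
import cv2
import numpy as np

def enhance_region_contrast(image, mask, gain=1.2):
    """Raise (gain > 1) or soften (gain < 1) contrast only where mask > 0."""
    img = image.astype(np.float32)
    mean = cv2.mean(image, mask=mask)[:3]            # per-channel mean inside the region
    boosted = np.clip((img - mean) * gain + mean, 0, 255).astype(np.uint8)
    region = cv2.merge([mask, mask, mask]) > 0       # broadcast mask to 3 channels
    return np.where(region, boosted, image)          # untouched outside the region
```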
In an embodiment, the performing image deformation processing on the chest region corresponding to the chest contour point information and/or the chest key point information includes: carrying out image deformation processing on the chest region based on the first type deformation parameters corresponding to each point in the chest region; wherein the first type deformation parameter varies with a variation in a distance between the corresponding point and a contour edge of the target object.
In an embodiment, the performing image processing on the to-be-processed area corresponding to the first detection information to generate a second image includes: performing image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information to obtain a first processing result; performing image processing on at least part of the background area of the first image other than the area where the target object is located to obtain a second processing result; and generating the second image based on the first processing result and the second processing result.
Wherein, the image processing of at least part of the background area of the first image other than the area where the target object is located includes: performing image deformation processing on the at least part of the background region based on the second type deformation parameters corresponding to each point in the at least part of the background region; wherein the second type of deformation parameter varies exponentially with a variation in the distance between the corresponding point and the contour edge of the target object.
Specifically, in the image deformation algorithm in this embodiment, a contour edge formed by limb contour point information of a target object is used as a reference, and deformation processing is performed according to deformation parameters corresponding to distances from each point to the contour edge; for points in the target object, the points can be understood as points on the human body, and the corresponding deformation parameters are first-class deformation parameters; and for points outside the target object, namely points in the background area, the corresponding deformation parameters are the second type of deformation parameters.
For the first type of deformation parameter, the value varies with the distance between the corresponding point and the contour edge; for the second type of deformation parameter, the value varies exponentially with that distance. It can be understood that, for the same change in distance to the contour edge, the second type of deformation parameter changes by a larger amount than the first type, which reduces the influence on the background area, makes the image processing effect more natural, and in particular makes the processing near the contour edge smoother and more natural.
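The two deformation-parameter classes can be sketched as a per-pixel weight field derived from the distance to the contour edge: inside the subject the weight falls off gradually, while in the background it decays exponentially so that distant background pixels are barely displaced. The falloff shapes and constants below are assumptions for illustration.

```python
import cv2
import numpy as np

def deformation_weights(body_mask, sigma_fg=40.0, sigma_bg=10.0):
    """Per-pixel deformation weight relative to the contour edge of the subject."""
    body = (body_mask > 0).astype(np.uint8)
    # Distance from every pixel to the contour edge, measured on each side.
    dist_in = cv2.distanceTransform(body, cv2.DIST_L2, 5)
    dist_out = cv2.distanceTransform(1 - body, cv2.DIST_L2, 5)
    # First-type parameters: vary (here, linearly) with the distance inside the body.
    w_fg = np.clip(1.0 - dist_in / sigma_fg, 0.0, 1.0)
    # Second-type parameters: exponential decay with the distance in the background.
    w_bg = np.exp(-dist_out / sigma_bg)
    return np.where(body > 0, w_fg, w_bg).astype(np.float32)
```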
The image deformation algorithm of the embodiment of the invention is also configured with standard parameters. As one implementation, a standard parameter indicates a condition that the processed to-be-processed region should satisfy; that is, once the to-be-processed region, processed by the image deformation algorithm, satisfies the standard parameter, the processing of the region is terminated. As another implementation, a standard parameter indicates an adjustment ratio for the to-be-processed region of the target object; that is, after the to-be-processed region is processed by the image deformation algorithm, the adjustment amount of the region satisfies the adjustment ratio.
As to the processing method of the muscle-related area, in an embodiment, the performing image processing on the to-be-processed area corresponding to the first detection information includes: identifying the type of a to-be-processed area corresponding to the limb contour point information, and adding object information in the to-be-processed area based on the type of the to-be-processed area; or identifying object information in the to-be-processed area corresponding to the limb contour point information, and adjusting display attribute parameters corresponding to the object information.
Specifically, the treatment of muscle-related areas in the embodiment of the invention may include two modes, "adding muscle" and "enhancing muscle": "adding muscle" refers to adding muscle to a region that shows no muscle, while "enhancing muscle" refers to making the existing muscle in a region appear more pronounced.
Based on the above, as one implementation, the type of the to-be-processed area corresponding to the contour point information is identified, and object information is added to the to-be-processed area based on that type. The type of the to-be-processed area indicates the body part of the target object to which the corresponding contour point information belongs; it can be understood that the chest, abdomen, arms, legs, back, etc. correspond to different types. Further, if a part has at least two sub-parts, those sub-parts correspond to different types; for example, a leg includes two sub-parts, the thigh and the calf, which correspond to different types.
Further, object information corresponding to the type of the area to be processed is added to that area; the object information is shadow data representing a muscle region, that is, corresponding shadow data is added to the area to be processed. For example, when the region to be processed is the abdominal region, the object information is shadow data corresponding to the abdominal muscles, and the shadow data is added at the corresponding position of the abdominal region.
As another implementation manner, object information in the region to be processed corresponding to the contour point information is identified, and a display attribute parameter corresponding to the object information is adjusted.
Similar to the foregoing implementation, in this implementation the object information represents shadow data of a muscle region, that is, a muscle region in the to-be-processed area corresponding to the contour point information is identified; for example, when the area to be processed is the abdominal region, a muscle region in the abdominal region is identified. Further, a display attribute parameter corresponding to the object information is adjusted; the display attribute parameter may specifically be a contrast parameter, i.e., the contrast of the object information is adjusted. In particular, increasing the contrast of the object information increases the stereoscopic appearance of the muscle region, that is, enhances the muscle. Of course, in this implementation, the display attribute parameter may also be adjusted by reducing the contrast of the object information, which reduces the stereoscopic appearance of the muscle region, that is, softens the muscle.
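For the "add object information" branch, one way to realize it is to blend a shading template into the identified region of a color image; the template, the bounding box, and the blend factor below are assumptions, and the contrast-adjustment branch could reuse a regional contrast function like the one sketched earlier for the chest region.

```python
import cv2
import numpy as np

def add_muscle_shading(image, region_bbox, shading, alpha=0.35):
    """Blend a grayscale shading template into the (x, y, w, h) region of a
    color image, darkening the pixels under the template to suggest muscle relief."""
    x, y, w, h = region_bbox
    roi = image[y:y + h, x:x + w].astype(np.float32)
    tpl = cv2.resize(shading, (w, h)).astype(np.float32) / 255.0
    shaded = roi * (1.0 - alpha * tpl[..., None])    # darken proportionally to the template
    out = image.copy()
    out[y:y + h, x:x + w] = np.clip(shaded, 0, 255).astype(np.uint8)
    return out
```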
The following describes an image processing method according to an embodiment of the present invention with reference to a specific embodiment.
If the user desires to adjust the chest region of the person in the first image, the adjustment can be performed with a single operation on the terminal, for example an input operation on a specific function key. The specific adjustment process may include: obtaining contour point information and/or key point information related to the chest region of the person, and performing image deformation processing on the first image using an image deformation algorithm; specifically, the chest region of the target object in the first image may be stretched or compressed based on the standard parameters configured in the image deformation algorithm, and/or the object information in the chest region may be adjusted, specifically its display attribute parameter (such as contrast), so that the processed chest region satisfies the standard parameters. For example, if the chest region of the target object is smaller than the standard parameter, the chest region may be stretched and the contrast of the object information in the chest region increased, giving a "breast enhancement" effect; if the chest region of the target object is larger than the standard parameter, the chest region may be compressed and the contrast of the object information in the chest region reduced, or at least part of the object information removed, giving a "chest reduction" effect.
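Putting the pieces together, the single-operation flow above could look roughly like the following sketch, reusing the radial_warp and enhance_region_contrast functions from the earlier examples; the detection outputs (mask, center, radius, measured size) and the standard-parameter comparison are assumed inputs, not interfaces specified by the patent.

```python
def one_tap_chest_adjust(image, chest_mask, center, radius,
                         measured_size, standard_size):
    """Stretch and enhance when the region is below the standard parameter,
    compress and soften when it is above; the outputs of the limb-detection
    step are passed in as arguments."""
    if measured_size < standard_size:
        out = radial_warp(image, center, radius, strength=0.2)     # "breast enhancement"
        out = enhance_region_contrast(out, chest_mask, gain=1.15)
    else:
        out = radial_warp(image, center, radius, strength=-0.2)    # "chest reduction"
        out = enhance_region_contrast(out, chest_mask, gain=0.9)
    return out
```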
With the technical solution of the embodiments of the present invention, the limb detection information of the target object in the image is acquired and the to-be-processed area (in particular including the chest region) is processed based on that information, so that the to-be-processed area of the target object is adjusted automatically without repeated manual operations by the user, which greatly improves the user's operation experience.
Based on the foregoing embodiments, in an embodiment, the method further includes: and acquiring first detection information related to the leg region of the target object in the limb detection information, and performing image deformation processing on the leg region corresponding to the first detection information. Wherein the obtaining first detection information related to the leg region of the target object in the limb detection information includes: and obtaining leg contour point information of the leg region corresponding to the target object in the limb contour point information. That is, the first detection information includes leg contour point information corresponding to a leg region of the target object.
In this embodiment, the performing image deformation processing on the leg region corresponding to the first detection information includes: carrying out deformation processing on the leg region corresponding to the leg contour point information; wherein the deformation process comprises a stretching process and/or a compression process.
Here, the leg contour point information includes first leg contour point information and second leg contour point information; the first leg contour point information corresponds to a leg outer contour; the second leg contour point information corresponds to an inner leg contour;
the image deformation processing of the leg region corresponding to the first detection information includes: for a leg region corresponding to the leg contour point information and/or leg key point information, compressing the leg outer contour in a direction from the leg outer contour to the leg inner contour, and compressing the leg inner contour in a direction from the leg inner contour to the leg outer contour; or, the leg outer contour is stretched in a direction toward the leg outer contour according to the leg inner contour, and the leg inner contour is stretched in a direction toward the leg inner contour according to the leg outer contour.
Specifically, the present embodiment compresses the leg region, specifically, compresses the width of the leg region, thereby achieving the effect of leg slimming. In practical application, the leg contour comprises a leg outer contour and a leg inner contour; the leg outer contour corresponds to the leg outer side and the leg inner contour corresponds to the leg inner side. In the compression process of the leg region, the leg outer contour is compressed in a direction toward the leg inner contour, and the leg inner contour is compressed in a direction toward the leg outer contour, so that the width of the leg region is reduced, that is, the distance between two side edges of the leg region is reduced, wherein the distance is the shortest distance from any point on the leg outer contour to the leg inner contour. The stretching process is opposite to the compressing process for the leg region, and the description thereof is omitted.
In another embodiment, the deforming of the leg region corresponding to the leg contour point information may further include: determining a centerline of the leg region based on the first leg contour point information and the second leg contour point information; and compressing the leg region corresponding to the leg contour point information from the leg outer contour and the leg inner contour respectively toward the centerline, thereby reducing the width of the leg region, that is, reducing the distance between the two side edges of the leg region, where the distance is the shortest distance from any point on the leg outer contour to the leg inner contour.
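A sketch of the row-wise compression toward the centerline: for each image row covered by the leg, pixels between the inner and outer contour are pulled toward their midpoint. The per-row contour coordinates are assumed to come from the leg contour points, and a full implementation would also warp a transition band of background as described above for the second-type deformation parameters.

```python
import cv2
import numpy as np

def slim_leg_rows(image, rows, left_x, right_x, ratio=0.9):
    """Narrow (ratio < 1) or widen (ratio > 1) the leg between its two
    contours on each listed row."""
    h, w = image.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    map_x = xs.copy()
    cols = np.arange(w, dtype=np.float32)
    for y, xl, xr in zip(rows, left_x, right_x):
        mid = 0.5 * (xl + xr)
        inside = (cols >= xl) & (cols <= xr)
        # Inverse mapping: destination pixels sample farther from the midline,
        # so the limb content is compressed toward it.
        map_x[y, inside] = mid + (cols[inside] - mid) / ratio
    return cv2.remap(image, map_x, ys, interpolation=cv2.INTER_LINEAR)
```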
In this embodiment, the "lengthening" of the leg region is achieved, on the one hand, by lengthening the calf region and, on the other hand, by raising the "waistline" to increase the apparent proportion of the legs. The "waistline" is the dividing line of the figure's proportions, i.e. the reference line used to compute the ratio between the upper and lower body; in practice, the line at the shortest distance between the contour points on the two sides of the waist may be taken as the "waistline".
In an embodiment, the leg contour point information includes third leg contour point information corresponding to a contour of the lower leg; the leg key point information comprises first leg key point information corresponding to a shank region; the image deformation processing of the leg region corresponding to the first detection information includes: and stretching the shank region corresponding to the third leg contour point information and/or the first leg key point information in a first direction, or compressing the shank region in a second direction opposite to the first direction.
Specifically, the calf region is the region from the knee to the ankle, and the third leg contour point information corresponds to the outer contour and the inner contour of the calf region. In this embodiment, an image deformation algorithm is used to stretch the calf region enclosed by these outer and inner contour points. Here, the leg direction includes a first direction and a second direction: the first direction is the direction from the knee toward the foot, and the second direction is the direction from the foot toward the knee. The calf region is stretched in the first direction, or compressed in the second direction, using the image deformation algorithm, thereby lengthening or shortening the calf.
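A simplified sketch of the calf "lengthening" step, assuming an upright figure so the knee-to-foot direction coincides with the image's downward y direction: the band of rows between knee and ankle is resized vertically and everything below it is shifted down. The row indices are assumed to come from the calf contour and key points.

```python
import cv2
import numpy as np

def lengthen_calf(image, top_y, bottom_y, scale=1.15):
    """Stretch (scale > 1) or shorten (scale < 1) the rows between the
    knee (top_y) and the ankle (bottom_y); the output image grows or
    shrinks vertically by the difference."""
    h, w = image.shape[:2]
    band_h = bottom_y - top_y
    new_h = int(round(band_h * scale))
    band = cv2.resize(image[top_y:bottom_y], (w, new_h), interpolation=cv2.INTER_LINEAR)
    out = np.empty((h - band_h + new_h, w) + image.shape[2:], dtype=image.dtype)
    out[:top_y] = image[:top_y]                  # keep everything above the knee
    out[top_y:top_y + new_h] = band              # resized calf band
    out[top_y + new_h:] = image[bottom_y:]       # shift the rows below the ankle
    return out
```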
In another embodiment, the obtaining first detection information related to the leg region of the target object in the limb detection information includes: acquiring waist contour point information in the limb contour point information, and acquiring first waist contour point sub-information related to the leg region of the target object from the waist contour point information; the image deformation processing of the leg region corresponding to the first detection information includes: compressing the part of the waist region corresponding to the first waist contour point sub-information in a third direction so as to raise the part of the waist contour corresponding to the first waist contour point sub-information; or, stretching the part of the waist region corresponding to the first waist contour point sub-information in a fourth direction opposite to the third direction so as to lower the part of the waist contour corresponding to the first waist contour point sub-information.
Specifically, regarding the first waist contour point sub-information related to the leg region: in one embodiment, the waist region may be divided into an upper part and a lower part using the "waistline" (the reference line corresponding to the minimum width of the waist) as the boundary, and the waist contour point information corresponding to the lower part is the first waist contour point sub-information. In this embodiment, in order to increase the proportion of the legs, the "waistline" is raised so that the proportion of the upper body is reduced and that of the lower body is increased, visually lengthening the legs.
In practical application, compressing the part of the waist region corresponding to the first waist contour point sub-information specifically means lifting that part of the waist region in the third direction using an image deformation algorithm; the third direction may be the direction toward the head, or a direction at a specific acute angle to the head direction, and it can be understood that, when the head direction is upward, the third direction may be obliquely upward, thereby raising the "waistline". Correspondingly, stretching the part of the waist region corresponding to the first waist contour point sub-information specifically means stretching that part of the waist region in the fourth direction using an image deformation algorithm; the fourth direction is opposite to the third direction, i.e. directed away from the head or at a specific acute angle to that direction, and it can be understood that, when the head direction is upward, the fourth direction may be obliquely downward, thereby lowering the "waistline".
Based on the foregoing embodiments, in an embodiment, the method further includes: and acquiring first detection information related to the arm area of the target object in the limb detection information, and performing image deformation processing on the arm area corresponding to the first detection information.
In one embodiment, the limb contour point information comprises arm contour point information; the arm contour point information comprises first arm contour point information and second arm contour point information; the first arm contour point information corresponds to the arm outer contour; the second arm contour point information corresponds to the arm inner contour. The image deformation processing of the arm region corresponding to the first detection information includes: acquiring arm contour point information of the arm region corresponding to the target object in the limb contour point information; and, for the arm region corresponding to the arm contour point information, compressing the arm outer contour in the direction from the arm outer contour toward the arm inner contour and compressing the arm inner contour in the direction from the arm inner contour toward the arm outer contour; or, stretching the arm outer contour in the direction from the arm inner contour toward the arm outer contour and stretching the arm inner contour in the direction from the arm outer contour toward the arm inner contour.
Based on the foregoing embodiments, in an embodiment, the method further includes: and acquiring first detection information related to the waist area of the target object in the limb detection information, and performing image deformation processing on the waist area corresponding to the first detection information.
In one embodiment, the limb contour point information comprises waist contour point information; the image deformation processing of the waist region corresponding to the first detection information includes: determining a midline of the waist region based on the waist contour point information, and compressing the waist region corresponding to the waist contour point information from the waist contours on both sides toward the midline; alternatively, stretching the waist region from the midline toward the waist contours on both sides.
Wherein the determining a midline of the waist region based on the waist contour point information comprises: determining the midline of the waist region based on the two side edges of the waist region represented by the waist contour point information. The compression is performed in the direction from the two side edges toward the midline, thereby reducing the width of the waist region, that is, reducing the distance between the two side edges of the waist region, where the distance is the shortest distance from any point on the contour of one side edge of the waist to the contour of the other side edge.
In one embodiment, the image deformation algorithm of the embodiment of the invention adopts different deformation parameters in the process of compressing different parts of the limb area; for example, the deformation process for the leg region corresponds to a first deformation parameter, the deformation process for the arm region corresponds to a second deformation parameter, and the deformation process for the waist region corresponds to a third deformation parameter. The first deformation parameter, the second deformation parameter and the third deformation parameter can be the same or different. As in one embodiment, the third deformation parameter is greater than the first deformation parameter.
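As a concrete illustration of part-specific parameters, a configuration table could be kept per body part; the numbers below are assumptions that only preserve the relation stated above (the waist parameter greater than the leg parameter).

```python
# Illustrative per-part deformation strengths (fraction of width adjusted per pass).
DEFORM_PARAMS = {
    "leg":   0.08,   # first deformation parameter
    "arm":   0.08,   # second deformation parameter
    "waist": 0.12,   # third deformation parameter, greater than the leg parameter
}
```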
The embodiment of the invention also provides an image processing device. FIG. 2 is a schematic diagram of a configuration of an image processing apparatus according to an embodiment of the present invention; as shown in fig. 2, the apparatus includes an acquisition unit 21 and an image processing unit 22; wherein,
the acquiring unit 21 is configured to acquire a first image, identify a target object in the first image, and acquire limb detection information of the target object; obtaining first detection information corresponding to a to-be-processed area of the target object in the limb detection information;
the image processing unit 22 is configured to perform image processing on the to-be-processed area corresponding to the first detection information obtained by the obtaining unit 21, and generate a second image.
In this embodiment, the limb detection information includes limb contour point information and/or limb key point information; the limb contour point information comprises coordinate information of the limb contour point; the limb key point information comprises coordinate information of the limb key points.
In an embodiment, the obtaining unit 21 is configured to obtain contour point information related to a to-be-processed area corresponding to the target object in the limb contour point information; and acquiring key point information related to the to-be-processed area corresponding to the target object in the limb key point information.
In an embodiment, the limb contour point information comprises chest contour point information; the limb key point information comprises chest key point information; the region to be treated is a chest region;
the image processing unit 22 is configured to perform image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information; wherein the image deformation processing comprises stretch and/or compression deformation processing.
Specifically, the image processing unit 22 is configured to determine a center point of the chest region based on the chest contour point information; and to stretch the chest region corresponding to the chest contour point information and/or the chest key point information in a direction from the center point toward the outside of the chest region, or to compress the chest region in a direction from the outside of the chest region toward the center point.
The image processing unit 22 is configured to add object information to the chest region corresponding to the chest contour point information and/or the chest key point information; or identifying the object information of the chest region and adjusting the display attribute parameters corresponding to the object information.
In an embodiment, the image processing unit 22 is configured to perform image deformation processing on the chest region based on the first type deformation parameter corresponding to each point in the chest region; wherein the first type deformation parameter varies with a variation in a distance between the corresponding point and a contour edge of the target object.
In an embodiment, the image processing unit 22 is configured to perform image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information to obtain a first processing result; performing image processing on at least part of background areas except the area where the target object is located in the first image to obtain a second processing result; generating a second image based on the first processing result and the second processing result.
In an embodiment, the image processing unit 22 is configured to perform image deformation processing on the at least part of the background region based on the second type deformation parameter corresponding to each point in the at least part of the background region; wherein the second type of deformation parameter varies exponentially with a variation in the distance between the corresponding point and the contour edge of the target object.
In an embodiment, the image processing unit 22 is configured to identify a type of a to-be-processed area corresponding to the limb contour point information, and add object information to the to-be-processed area based on the type of the to-be-processed area; or identifying object information in the to-be-processed area corresponding to the limb contour point information, and adjusting display attribute parameters corresponding to the object information.
In an embodiment, the obtaining unit 21 is further configured to obtain leg contour point information corresponding to a leg region of the target object in the limb contour point information;
the image processing unit 22 is configured to perform image deformation processing on the leg region corresponding to the leg contour point information.
In an embodiment, the leg contour point information comprises first leg contour point information and second leg contour point information; the first leg contour point information corresponds to a leg outer contour; the second leg contour point information corresponds to an inner leg contour;
the image processing unit 22 is configured to, for the leg region corresponding to the leg contour point information and/or the leg key point information, compress the leg outer contour in the direction from the leg outer contour toward the leg inner contour and compress the leg inner contour in the direction from the leg inner contour toward the leg outer contour; or, stretch the leg outer contour in the direction from the leg inner contour toward the leg outer contour and stretch the leg inner contour in the direction from the leg outer contour toward the leg inner contour.
In an embodiment, the leg contour point information includes third leg contour point information corresponding to a contour of the lower leg; the leg key point information comprises first leg key point information corresponding to a shank region;
the image processing unit 22 is configured to perform stretching processing on a lower leg region corresponding to the third leg contour point information and/or the first leg key point information according to a first direction, or perform compression processing according to a second direction opposite to the first direction.
In an embodiment, the obtaining unit 21 is configured to obtain waist contour point information in the limb contour point information, and obtain first waist contour point sub-information related to leg regions of the target object from the waist contour point information;
the image processing unit 22 is configured to compress the part of the waist region corresponding to the first waist contour point sub-information in a third direction, so as to raise the part of the waist contour corresponding to the first waist contour point sub-information; or to stretch the part of the waist region corresponding to the first waist contour point sub-information in a fourth direction opposite to the third direction, so as to lower the part of the waist contour corresponding to the first waist contour point sub-information.
In one embodiment, the limb contour point information comprises arm contour point information; the arm contour point information comprises first arm contour point information and second arm contour point information; the first arm contour point information corresponds to an arm outer contour; the second arm contour point information corresponds to an inner arm contour;
the obtaining unit 21 is further configured to obtain arm contour point information of an arm region corresponding to the target object in the limb contour point information, and obtain arm key point information of an arm region corresponding to the target object in the limb key point information;
the image processing unit 22 is configured to, for an arm region corresponding to the arm contour point information and/or the arm key point information, perform compression processing on the arm outer contour in a direction in which the arm outer contour faces the arm inner contour, and perform compression processing on the arm inner contour in a direction in which the arm inner contour faces the arm outer contour; or, the arm outer contour is stretched in a direction in which the arm inner contour faces the arm outer contour, and the arm inner contour is stretched in a direction in which the arm outer contour faces the arm inner contour.
In one embodiment, the limb contour point information comprises waist contour point information; the image processing unit 22 is configured to determine a center line of the waist region based on the waist contour point information, and perform compression processing on the waist region corresponding to the waist contour point information according to a direction from the waist contours on both sides toward the center line; alternatively, the stretching process is performed in a direction in which the center line is oriented toward both side waist contours.
In the embodiment of the present invention, the obtaining unit 21 and the image processing unit 22 in the image processing apparatus may, in practical application, be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Microcontroller Unit (MCU), or a Field-Programmable Gate Array (FPGA) in the terminal.
Fig. 3 is a schematic diagram of a hardware structure of the image processing apparatus according to an embodiment of the present invention. As shown in Fig. 3, the image processing apparatus includes a memory 32, a processor 31, and a computer program stored in the memory 32 and executable on the processor 31; when the processor 31 executes the computer program, the image processing method of any one of the foregoing embodiments of the present invention is implemented.
It will be appreciated that the various components in the image processing apparatus are coupled together by a bus system 33, which is used to enable connection and communication among these components. In addition to the data bus, the bus system 33 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are all labeled as the bus system 33 in Fig. 3.
It will be appreciated that the memory 32 may be a volatile memory, a non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 32 described in the embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The methods disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 31. The processor 31 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above methods may be performed by integrated logic circuits of hardware in the processor 31 or by instructions in the form of software. The processor 31 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 31 may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or any conventional processor, or the like. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium is located in the memory 32, and the processor 31 reads the information in the memory 32 and performs the steps of the foregoing methods in combination with its hardware.
It should be noted that, when the image processing apparatus provided in the above embodiment performs image processing, the division into the above program modules is merely used as an example for description; in practical applications, the processing may be allocated to different program modules as needed, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the image processing apparatus and the image processing method provided by the above embodiments belong to the same concept; the specific implementation process is described in detail in the method embodiments and is not repeated here.
In an exemplary embodiment, an embodiment of the present invention further provides a computer-readable storage medium, for example the memory 32 storing a computer program, which may be executed by the processor 31 of the image processing apparatus to complete the steps of the foregoing methods. The computer-readable storage medium may be a memory such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a Flash Memory, a magnetic surface memory, an optical disc, or a CD-ROM; or it may be a device including one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
The embodiment of the invention also provides a computer readable storage medium, on which computer instructions are stored, and the instructions are executed by a processor to implement the image processing method of any one of the preceding embodiments of the invention.
The embodiment of the present invention further provides a computer program product, where the computer program product includes computer-executable instructions, and after the computer-executable instructions are executed, the steps of the image processing method according to any one of the foregoing embodiments of the present invention can be implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and when executed, performs the steps of the above method embodiments; and the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc, or various other media that can store program code.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (10)
1. An image processing method, characterized in that the method comprises:
acquiring a first image, identifying a target object in the first image, and acquiring limb detection information of the target object;
obtaining first detection information corresponding to a to-be-processed area of the target object in the limb detection information;
and performing image processing on the area to be processed corresponding to the first detection information to generate a second image.
2. The method according to claim 1, wherein the limb detection information comprises limb contour point information and/or limb keypoint information;
the limb contour point information comprises coordinate information of the limb contour point;
the limb key point information comprises coordinate information of the limb key points.
3. The method of claim 2, wherein the limb contour point information comprises chest contour point information; the limb key point information comprises chest key point information; and the region to be processed is a chest region;
the image processing of the to-be-processed area corresponding to the first detection information includes:
carrying out image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information; wherein the image deformation processing comprises stretching and/or compression processing.
4. The method according to any one of claims 1 to 3, wherein the performing image processing on the to-be-processed region corresponding to the first detection information to generate a second image comprises:
performing image deformation processing and/or enhancement processing on the chest region corresponding to the chest contour point information and/or the chest key point information to obtain a first processing result;
performing image processing on at least part of background areas except the area where the target object is located in the first image to obtain a second processing result;
generating a second image based on the first processing result and the second processing result.
5. The method according to claim 4, wherein the image processing of at least a part of the background area of the first image except for the area where the target object is located comprises:
performing image deformation processing on the at least part of the background region based on the second type deformation parameters corresponding to each point in the at least part of the background region;
wherein the second type of deformation parameter varies exponentially with the distance from the corresponding point to the contour edge of the target object.
6. The method according to claim 2, wherein the image processing the to-be-processed region corresponding to the first detection information includes:
identifying the type of a to-be-processed area corresponding to the limb contour point information, and adding object information in the to-be-processed area based on the type of the to-be-processed area; or,
identifying object information in the to-be-processed area corresponding to the limb contour point information, and adjusting display attribute parameters corresponding to the object information.
7. An image processing apparatus characterized by comprising an acquisition unit and an image processing unit; wherein,
the acquisition unit is used for acquiring a first image, identifying a target object in the first image and acquiring limb detection information of the target object; obtaining first detection information corresponding to a to-be-processed area of the target object in the limb detection information;
the image processing unit is configured to perform image processing on the to-be-processed area corresponding to the first detection information obtained by the obtaining unit, and generate a second image.
8. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, carry out the steps of the image processing method according to any one of claims 1 to 6.
9. An image processing apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the image processing method according to any one of claims 1 to 6 are implemented when the processor executes the program.
10. A computer program product, characterized in that the computer program product comprises computer-executable instructions which, when executed, are capable of implementing the steps of the image processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810556482.7A CN108830200A (en) | 2018-05-31 | 2018-05-31 | A kind of image processing method, device and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810556482.7A CN108830200A (en) | 2018-05-31 | 2018-05-31 | A kind of image processing method, device and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108830200A true CN108830200A (en) | 2018-11-16 |
Family
ID=64147131
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810556482.7A Pending CN108830200A (en) | 2018-05-31 | 2018-05-31 | A kind of image processing method, device and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108830200A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109712085A (en) * | 2018-12-11 | 2019-05-03 | 维沃移动通信有限公司 | A kind of image processing method and terminal device |
CN109903217A (en) * | 2019-01-25 | 2019-06-18 | 北京百度网讯科技有限公司 | Image distortion method and device |
CN110111240A (en) * | 2019-04-30 | 2019-08-09 | 北京市商汤科技开发有限公司 | A kind of image processing method based on strong structure, device and storage medium |
CN110264430A (en) * | 2019-06-29 | 2019-09-20 | 北京字节跳动网络技术有限公司 | Video beautification method, device and electronic equipment |
CN111105348A (en) * | 2019-12-25 | 2020-05-05 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image processing device, and storage medium |
CN111415382A (en) * | 2020-03-02 | 2020-07-14 | 北京字节跳动网络技术有限公司 | Method and device for processing human body arm body beautification in picture and electronic equipment |
CN111460871A (en) * | 2019-01-18 | 2020-07-28 | 北京市商汤科技开发有限公司 | Image processing method and device, and storage medium |
WO2020220679A1 (en) * | 2019-04-30 | 2020-11-05 | 北京市商汤科技开发有限公司 | Method and device for image processing, and computer storage medium |
CN113040984A (en) * | 2020-11-21 | 2021-06-29 | 泰州国安医疗用品有限公司 | Intelligent leg part regional construction system and method |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101341507A (en) * | 2005-12-01 | 2009-01-07 | 株式会社资生堂 | Face classification method, face classifier, classification map, face classification program and recording medium having recorded program |
CN103578004A (en) * | 2013-11-15 | 2014-02-12 | 西安工程大学 | Method for displaying virtual fitting effect |
CN105956997A (en) * | 2016-04-27 | 2016-09-21 | 腾讯科技(深圳)有限公司 | Image deformation treatment method and device |
CN107273846A (en) * | 2017-06-12 | 2017-10-20 | 江西服装学院 | A kind of human somatotype parameter determination method and device |
CN107358658A (en) * | 2017-07-20 | 2017-11-17 | 深圳市大象文化科技产业有限公司 | A kind of Mammaplasty AR Forecasting Methodologies, device and system |
CN107467760A (en) * | 2017-09-30 | 2017-12-15 | 深圳市颐通科技有限公司 | A kind of human body contour outline characteristic point labeling method based on ergonomics |
CN107730445A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN107833178A (en) * | 2017-11-24 | 2018-03-23 | 维沃移动通信有限公司 | A kind of image processing method, device and mobile terminal |
Non-Patent Citations (3)
Title |
---|
DICYT: "Alibaba Cloud is the first to launch a free face recognition SDK, letting every app easily add short-video AR effects", 《CSDN》 *
WEIXIN_33769125: "Alibaba face recognition: modules can be freely combined for different application scenarios", 《CSDN》 *
一闪一闪亮晶晶1313: "Revealed | Live-streaming beautification does not rely on the face, but on Alibaba Cloud programmers?", 《CSDN》 *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109712085A (en) * | 2018-12-11 | 2019-05-03 | 维沃移动通信有限公司 | A kind of image processing method and terminal device |
CN111460871B (en) * | 2019-01-18 | 2023-12-22 | 北京市商汤科技开发有限公司 | Image processing method and device and storage medium |
CN111460871A (en) * | 2019-01-18 | 2020-07-28 | 北京市商汤科技开发有限公司 | Image processing method and device, and storage medium |
CN111460870A (en) * | 2019-01-18 | 2020-07-28 | 北京市商汤科技开发有限公司 | Target orientation determination method and device, electronic equipment and storage medium |
WO2020181900A1 (en) * | 2019-01-18 | 2020-09-17 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image device and storage medium |
US11538207B2 (en) | 2019-01-18 | 2022-12-27 | Beijing Sensetime Technology Development Co., Ltd. | Image processing method and apparatus, image device, and storage medium |
CN109903217A (en) * | 2019-01-25 | 2019-06-18 | 北京百度网讯科技有限公司 | Image distortion method and device |
CN110111240A (en) * | 2019-04-30 | 2019-08-09 | 北京市商汤科技开发有限公司 | A kind of image processing method based on strong structure, device and storage medium |
WO2020220679A1 (en) * | 2019-04-30 | 2020-11-05 | 北京市商汤科技开发有限公司 | Method and device for image processing, and computer storage medium |
US11501407B2 (en) | 2019-04-30 | 2022-11-15 | Beijing Sensetime Technology Development Co., Ltd. | Method and apparatus for image processing, and computer storage medium |
CN110264430B (en) * | 2019-06-29 | 2022-04-15 | 北京字节跳动网络技术有限公司 | Video beautifying method and device and electronic equipment |
CN110264430A (en) * | 2019-06-29 | 2019-09-20 | 北京字节跳动网络技术有限公司 | Video beautification method, device and electronic equipment |
CN111105348A (en) * | 2019-12-25 | 2020-05-05 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image processing device, and storage medium |
US11734829B2 (en) | 2019-12-25 | 2023-08-22 | Beijing Sensetime Technology Development Co., Ltd. | Method and device for processing image, and storage medium |
CN111415382B (en) * | 2020-03-02 | 2022-04-05 | 北京字节跳动网络技术有限公司 | Method and device for processing human body arm body beautification in picture and electronic equipment |
CN111415382A (en) * | 2020-03-02 | 2020-07-14 | 北京字节跳动网络技术有限公司 | Method and device for processing human body arm body beautification in picture and electronic equipment |
CN113040984B (en) * | 2020-11-21 | 2022-01-14 | 陕西立博源科技有限公司 | Intelligent leg part regional construction system and method |
CN113040984A (en) * | 2020-11-21 | 2021-06-29 | 泰州国安医疗用品有限公司 | Intelligent leg part regional construction system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108830783B (en) | Image processing method and device and computer storage medium | |
CN108830200A (en) | A kind of image processing method, device and computer storage medium | |
CN108830784A (en) | A kind of image processing method, device and computer storage medium | |
CN108765274A (en) | A kind of image processing method, device and computer storage media | |
KR20200127971A (en) | Image processing method, apparatus, computer device and computer storage medium | |
WO2020177394A1 (en) | Image processing method and apparatus | |
CN110349081B (en) | Image generation method and device, storage medium and electronic equipment | |
WO2020019915A1 (en) | Image processing method and apparatus, and computer storage medium | |
CN109325907B (en) | Image beautifying processing method, device and system | |
JP7090169B2 (en) | Image processing methods, equipment and computer storage media | |
Vezzetti et al. | Geometry-based 3D face morphology analysis: soft-tissue landmark formalization | |
CN110910512B (en) | Virtual object self-adaptive adjustment method, device, computer equipment and storage medium | |
CN116310000B (en) | Skin data generation method and device, electronic equipment and storage medium | |
CN107945102A (en) | A kind of picture synthetic method and device | |
CN111105348A (en) | Image processing method and apparatus, image processing device, and storage medium | |
US20220318892A1 (en) | Method and system for clothing virtual try-on service based on deep learning | |
CN110852933A (en) | Image processing method and apparatus, image processing device, and storage medium | |
Zhong et al. | Morphological analysis of tumor regression and its impact on deformable image registration for adaptive radiotherapy of lung cancer patients | |
CN110111240A (en) | A kind of image processing method based on strong structure, device and storage medium | |
CN115578513B (en) | Three-dimensional human body reconstruction method, three-dimensional human body reconstruction device, electronic equipment and storage medium | |
CN111968050A (en) | Human body image processing method and related product | |
WO2024148607A1 (en) | Neural network training method, medical image processing method, electronic device and storage medium | |
Gao et al. | Research on human pose estimation algorithm for occlusion scene | |
WO2024129539A1 (en) | Clinical data analysis | |
CN110942421A (en) | Image processing method and apparatus, image processing device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181116 |