CN109584168B - Image processing method and apparatus, electronic device, and computer storage medium - Google Patents


Info

Publication number
CN109584168B
CN109584168B (application CN201811250835.7A)
Authority
CN
China
Prior art keywords
contour
processed
point
target
fitting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811250835.7A
Other languages
Chinese (zh)
Other versions
CN109584168A (en)
Inventor
黄明杨 (Huang Mingyang)
付万增 (Fu Wanzeng)
石建萍 (Shi Jianping)
曲艺 (Qu Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201811250835.7A priority Critical patent/CN109584168B/en
Publication of CN109584168A publication Critical patent/CN109584168A/en
Application granted granted Critical
Publication of CN109584168B publication Critical patent/CN109584168B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application disclose an image processing method and apparatus, an electronic device, and a computer storage medium. The method includes: performing deformation processing on the contour of an object to be processed in an image to obtain a target contour; dividing the object to be processed into a plurality of plane units based on the target contour; and performing deformation processing on each of the plurality of plane units according to the target contour to obtain a target object. The method and apparatus can improve the efficiency of deformation processing of the object to be processed.

Description

Image processing method and apparatus, electronic device, and computer storage medium
Technical Field
The present application relates to computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer storage medium.
Background
At present, the beauty function has become standard on almost all camera-equipped devices such as cameras and mobile phones. By adjusting and beautifying the skin color, facial features, and the like of the people in a photo, it presents them in a desired state and preserves good memories. With the development of image processing technology, the beauty function is no longer applied only to photographs taken by a camera, but also to live streaming, the simulation of cosmetic-surgery effects, and many other aspects of daily life. However, meeting the real-time performance requirement remains a technical problem to be solved for the beauty function.
Disclosure of Invention
The embodiment of the application provides an image processing technical scheme.
According to an aspect of an embodiment of the present disclosure, there is provided an image processing method including:
carrying out deformation processing on the contour of an object to be processed in the image to obtain a target contour;
dividing the object to be processed into a plurality of plane units based on the target contour;
and according to the target contour, performing deformation processing on each plane unit of the plurality of plane units to obtain a target object.
Optionally, in the foregoing method embodiment of the present application, the performing deformation processing on the contour of the object to be processed in the image to obtain the target contour includes:
acquiring the outline and attitude angle information of the object to be processed in the image;
determining the direction and the scale of the deformation processing according to the contour and the attitude angle information of the object to be processed;
and carrying out deformation processing on the contour of the object to be processed according to the deformation processing direction and scale to obtain the target contour.
Optionally, in any one of the method embodiments of the present application, the performing deformation processing on the contour of the object to be processed according to the direction and the scale of the deformation processing to obtain the target contour includes:
determining the radiation center of the deformation processing according to the direction of the deformation processing;
connecting the radiation center with a key point of the contour of the object to be processed, and determining, according to the scale of the deformation processing, a target key point corresponding to the key point on the connecting line between the radiation center and the key point or on an extension line of the connecting line;
and taking the target key point as a first fitting point, and performing contour fitting according to the first fitting point to obtain the target contour.
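As an illustration of the step above, moving each contour key point along the ray from the radiation center can be sketched as follows. This is a minimal sketch: the function name, the NumPy usage, and the single uniform scale factor are assumptions of this example, not specified by the patent (a scale above 1 places the target key point on the extension line of the connecting line; a scale below 1 places it on the connecting line itself).

```python
import numpy as np

def deform_keypoints(center, keypoints, scale):
    """Move each contour key point along the ray from the radiation
    center through that key point, by the given scale factor."""
    center = np.asarray(center, dtype=float)
    keypoints = np.asarray(keypoints, dtype=float)
    # Each target key point lies on the line (or its extension)
    # through the radiation center and the original key point.
    return center + scale * (keypoints - center)
```

For example, with the radiation center at the origin and scale 0.5, a key point at (2, 0) maps to the target key point (1, 0) on the connecting line.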
Optionally, in any one of the method embodiments described above in the present application, the acquiring the contour and attitude angle information of the object to be processed in the image includes:
processing the image to obtain key points of the object to be processed in the image;
estimating the attitude angle information of the object to be processed according to the key points of the object to be processed, and acquiring the key points of the contour of the object to be processed;
determining the direction and the scale of the deformation processing according to the contour and the attitude angle information of the object to be processed, wherein the determining comprises the following steps:
and determining the direction and the scale of the deformation processing according to the key points of the object to be processed and the attitude angle information.
Optionally, in any one of the method embodiments described above in the present application, the taking the target key point as a first fitting point and performing contour fitting according to the first fitting point to obtain the target contour includes:
according to the target key points, obtaining, through Catmull-Rom interpolation, points of the target contour other than the target key points, to serve as first fitting points;
and taking the target key point as the first fitting point, and sequentially connecting the first fitting points to perform contour fitting to obtain a first broken line segment as the target contour.
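The Catmull-Rom step above densifies the fitting points between consecutive target key points before they are connected into the first broken line segment. A generic uniform Catmull-Rom segment can be sketched as below; the patent does not give a specific parameterization, so the uniform formulation and function name here are assumptions of this example.

```python
def catmull_rom_point(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom spline segment between control
    points p1 and p2 (p0 and p3 shape the tangents), at t in [0, 1].
    Points are coordinate tuples of equal dimension."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

The segment interpolates its middle control points: at t = 0 it returns p1 and at t = 1 it returns p2, so sampling several t values between each pair of key points yields the additional fitting points.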
Optionally, in any one of the method embodiments described above in the present application, the taking the target key point as a first fitting point and performing contour fitting according to the first fitting point to obtain the target contour further includes:
and taking the key points of the contour of the object to be processed as second fitting points, and fitting the contour according to the second fitting points to obtain the contour of the object to be processed.
Optionally, in any one of the method embodiments described above in the present application, the taking the key points of the contour of the object to be processed as second fitting points and performing contour fitting according to the second fitting points to obtain the contour of the object to be processed includes:
according to the key points of the contour of the object to be processed, obtaining, through Catmull-Rom interpolation, points of the contour of the object to be processed other than the key points, to serve as second fitting points;
and taking the key points of the contour of the object to be processed as the second fitting points, and sequentially connecting the second fitting points for contour fitting to obtain a second broken line segment as the contour of the object to be processed.
Optionally, in any one of the method embodiments described above in the present application, the dividing the object to be processed into a plurality of plane units based on the target contour includes:
and connecting the radiation center and the first fitting point, and dividing the object to be processed into a plurality of plane units.
Optionally, in any one of the method embodiments described above in the present application, the dividing the object to be processed into a plurality of plane units based on the target contour includes:
in response to the target key point being on an extension line of the connecting line between the radiation center and the key point, connecting the radiation center and the first fitting point, wherein the connecting line between the radiation center and the first fitting point intersects the contour of the object to be processed, and dividing the object to be processed into a plurality of plane units; or,
in response to the target key point being on the connecting line between the radiation center and the key point, connecting the radiation center and the first fitting point, extending the connecting line between the radiation center and the first fitting point to intersect the contour of the object to be processed, and dividing the object to be processed into a plurality of plane units.
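One plausible realization of the division above is a triangle fan: connecting the radiation center to consecutive fitting points decomposes the region into triangular plane units. The triangle-fan construction and function name are assumptions for illustration; the patent does not restrict the shape of the plane units.

```python
def fan_triangles(center, fitting_points):
    """Divide the region bounded by a fitted contour into planar
    (triangular) units, one per pair of consecutive fitting points,
    all sharing the radiation center as a vertex."""
    return [
        (center, fitting_points[i], fitting_points[i + 1])
        for i in range(len(fitting_points) - 1)
    ]
```

For a closed contour, the list of fitting points would repeat its first point at the end so the fan closes; for a non-closed contour (such as a nose contour) it is used as-is.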
Optionally, in any one of the method embodiments of the present application, the performing deformation processing on each plane unit of the plurality of plane units according to the target contour to obtain a target object includes:
respectively establishing a mapping from the pixel points in each plane unit to the pixel points in a target area bounded by the connecting lines between the corresponding first fitting points and the radiation center and by the target contour;
and determining the pixel value of the pixel point in the corresponding target area according to the pixel value of the pixel point in each plane unit based on the mapping to obtain the target object.
Optionally, in any one of the method embodiments of the present application, the respectively establishing a mapping from the pixel points in each plane unit to the pixel points in the target area bounded by the connecting lines between the corresponding first fitting points and the radiation center and by the target contour includes:
constructing a linear piecewise function according to the intersection points of the straight line determined by the radiation center and a first pixel point in each plane unit with the corresponding contour of the object to be processed and with the target contour;
the determining, based on the mapping and according to the pixel value of the pixel point in each plane unit, the pixel value of the pixel point in the corresponding target region to obtain the target object includes:
determining a second pixel point corresponding to the first pixel point in the corresponding target area based on the linear piecewise function;
and determining the pixel value of the second pixel point according to the pixel value of the first pixel point to obtain the target object.
Optionally, in any one of the method embodiments of the present application, the constructing a linear piecewise function according to the intersection points of the straight line determined by the radiation center and the first pixel point in each plane unit with the corresponding contour of the object to be processed and with the target contour includes:
extending the connecting line between the radiation center and each first fitting point to a third point, and sequentially connecting the third points to obtain a third broken line segment, wherein the target contour and the contour of the object to be processed are located between the radiation center and the third broken line segment;
and constructing the linear piecewise function according to the intersection points of the straight line determined by the radiation center and the first pixel point in the plane unit with the target contour, the contour of the object to be processed, and the third broken line segment.
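A minimal sketch of one such linear piecewise function along a single ray from the radiation center: distances up to the original contour are rescaled onto [0, target-contour distance], and distances between the original contour and the third broken line segment are rescaled so that the outer boundary stays fixed. The one-dimensional radial form and the variable names are assumptions of this example.

```python
def piecewise_radial_map(r, r_contour, r_target, r_outer):
    """Map a distance r from the radiation center along one ray.
    r_contour: distance to the contour of the object to be processed;
    r_target:  distance to the target contour on the same ray;
    r_outer:   distance to the third broken line segment (fixed)."""
    if r <= r_contour:
        # Inner piece: [0, r_contour] stretched linearly to [0, r_target].
        return r * r_target / r_contour
    # Outer piece: [r_contour, r_outer] mapped to [r_target, r_outer],
    # so pixels on the third broken line segment do not move.
    return r_target + (r - r_contour) * (r_outer - r_target) / (r_outer - r_contour)
```

Continuity holds at both breakpoints: the contour maps exactly onto the target contour, and the outer boundary maps onto itself, which keeps the deformation confined between the radiation center and the third broken line segment.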
Optionally, in any one of the method embodiments of the present application, the determining a pixel value of the second pixel according to a pixel value of the first pixel to obtain the target object includes:
determining the data type of the coordinate of the first pixel point;
in response to the data type of the coordinate of the first pixel point being a floating-point number, determining the pixel value of the first pixel point through bilinear interpolation based on the coordinate of the first pixel point, and taking the pixel value of the first pixel point as the pixel value of the second pixel point to obtain the target object;
and in response to the data type of the coordinate of the first pixel point being an integer, taking the pixel value of the first pixel point directly as the pixel value of the second pixel point to obtain the target object.
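The bilinear interpolation used for floating-point coordinates can be sketched as follows. This is a generic formulation for a grayscale image stored as a 2-D array; the function name is an assumption, and coordinates are assumed to lie far enough inside the image that all four neighboring pixels exist.

```python
import math

def bilinear_sample(image, x, y):
    """Sample an image (2-D list of gray values, indexed [row][col])
    at a possibly non-integer coordinate (x, y) via bilinear
    interpolation over the four surrounding integer pixels."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    # Interpolate horizontally on the two neighboring rows,
    # then vertically between the two results.
    top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
    bottom = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

When the mapped coordinate happens to be an integer pair, the weights collapse so that the value of that single pixel is returned, matching the integer branch above.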
According to another aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
the contour deformation module is used for carrying out deformation processing on the contour of the object to be processed in the image to obtain a target contour;
the unit dividing module is used for dividing the object to be processed into a plurality of plane units based on the target contour;
and the unit deformation module is used for carrying out deformation processing on each plane unit of the plurality of plane units according to the target contour to obtain a target object.
Optionally, in the foregoing apparatus embodiment of the present application, the contour deformation module includes:
the information acquisition submodule is used for acquiring the outline and the attitude angle information of the object to be processed in the image;
the parameter determining submodule is used for determining the direction and the scale of the deformation processing according to the contour and the attitude angle information of the object to be processed;
and the deformation processing submodule is used for carrying out deformation processing on the contour of the object to be processed according to the deformation processing direction and scale to obtain the target contour.
Optionally, in any one of the apparatus embodiments described above in the present application, the deformation processing sub-module includes:
the center determining unit is used for determining the radiation center of the deformation processing according to the direction of the deformation processing;
a key point determining unit, configured to connect the radiation center with a key point of the contour of the object to be processed, and determine, according to the scale of the deformation processing, a target key point corresponding to the key point on the connecting line between the radiation center and the key point or on an extension line of the connecting line;
and the contour fitting unit is used for taking the target key points as first fitting points and performing contour fitting according to the first fitting points to obtain the target contour.
Optionally, in any apparatus embodiment of the present application, the information obtaining sub-module includes:
the key point acquisition unit is used for processing the image and acquiring the key points of the object to be processed in the image;
the information acquisition unit is used for estimating the attitude angle information of the object to be processed according to the key points of the object to be processed and acquiring the key points of the outline of the object to be processed;
and the parameter determining submodule is used for determining the direction and the scale of the deformation processing according to the key points of the object to be processed and the attitude angle information.
Optionally, in an embodiment of any one of the apparatuses described above in the present application, the contour fitting unit is configured to obtain, according to the target key points, points of the target contour other than the target key points through Catmull-Rom interpolation, as first fitting points; and take the target key points as the first fitting points, and sequentially connect the first fitting points to perform contour fitting to obtain a first broken line segment as the target contour.
Optionally, in an embodiment of any one of the apparatuses described above in the present application, the contour fitting unit is further configured to use a key point of the contour of the object to be processed as a second fitting point, and perform contour fitting according to the second fitting point to obtain the contour of the object to be processed.
Optionally, in an embodiment of any one of the apparatuses described above in the present application, the contour fitting unit is configured to obtain, according to the key points of the contour of the object to be processed, points of that contour other than the key points through Catmull-Rom interpolation, as second fitting points; and take the key points of the contour of the object to be processed as the second fitting points, and sequentially connect the second fitting points for contour fitting to obtain a second broken line segment as the contour of the object to be processed.
Optionally, in an embodiment of any one of the above apparatuses of the present application, the unit dividing module is configured to connect the radiation center and the first fitting point, and divide the object to be processed into a plurality of planar units.
Optionally, in an embodiment of any one of the apparatuses described above, the unit dividing module is configured to, in response to the target key point being on an extension line of the connecting line between the radiation center and the key point, connect the radiation center and the first fitting point, where the connecting line between the radiation center and the first fitting point intersects the contour of the object to be processed, and divide the object to be processed into a plurality of plane units; or, in response to the target key point being on the connecting line between the radiation center and the key point, connect the radiation center and the first fitting point, extend the connecting line between the radiation center and the first fitting point to intersect the contour of the object to be processed, and divide the object to be processed into a plurality of plane units.
Optionally, in any one of the apparatus embodiments described above in the present application, the unit deformation module includes:
the mapping establishing submodule is used for respectively establishing a mapping from the pixel points in each plane unit to the pixel points in a target area bounded by the connecting lines between the corresponding first fitting points and the radiation center and by the target contour;
and the pixel mapping submodule is used for determining, based on the mapping, the pixel values of the pixel points in the corresponding target area according to the pixel values of the pixel points in each plane unit to obtain the target object.
Optionally, in an embodiment of any one of the above apparatuses of the present application, the mapping establishing sub-module is configured to construct a linear piecewise function according to the intersection points of the straight line determined by the radiation center and the first pixel point in each plane unit with the corresponding contour of the object to be processed and with the target contour;
the pixel mapping submodule is used for determining a second pixel point corresponding to the first pixel point in the corresponding target area based on the linear piecewise function; and determining the pixel value of the second pixel point according to the pixel value of the first pixel point to obtain the target object.
Optionally, in an embodiment of any one of the apparatuses described in the present application, the mapping establishing sub-module is configured to extend the connecting line between the radiation center and each first fitting point to a third point, and sequentially connect the third points to obtain a third broken line segment, wherein the target contour and the contour of the object to be processed are located between the radiation center and the third broken line segment; and to construct the linear piecewise function according to the intersection points of the straight line determined by the radiation center and the first pixel point in the plane unit with the target contour, the contour of the object to be processed, and the third broken line segment.
Optionally, in any apparatus embodiment of the present application, the pixel mapping sub-module is configured to:
determining the data type of the coordinate of the first pixel point;
in response to the data type of the coordinate of the first pixel point being a floating-point number, determine the pixel value of the first pixel point through bilinear interpolation based on the coordinate of the first pixel point, and take the pixel value of the first pixel point as the pixel value of the second pixel point to obtain the target object;
and in response to the data type of the coordinate of the first pixel point being an integer, take the pixel value of the first pixel point directly as the pixel value of the second pixel point to obtain the target object.
According to another aspect of the embodiments of the present application, there is provided an electronic device including the apparatus according to any of the embodiments.
According to still another aspect of an embodiment of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a processor in communication with the memory for executing the executable instructions to perform the method of any of the above embodiments.
According to a further aspect of the embodiments of the present application, there is provided a computer program comprising computer-readable code which, when run on a device, causes the device to execute instructions for implementing the method according to any of the above embodiments.
According to yet another aspect of embodiments of the present application, there is provided a computer program product for storing computer-readable instructions that, when executed, cause a computer to implement the method of any of the above embodiments.
In an alternative embodiment the computer program product is embodied as a computer storage medium, and in another alternative embodiment the computer program product is embodied as a software product, such as an SDK or the like.
Based on the image processing method and apparatus, the electronic device, and the computer storage medium provided by the embodiments of the present application, a target contour is obtained by performing deformation processing on the contour of an object to be processed in an image; the object to be processed is divided into a plurality of plane units based on the target contour; and each of the plurality of plane units is subjected to deformation processing according to the target contour to obtain a target object. The deformation processing of the object to be processed is thereby converted into deformation processing of individual plane units, which is simple to implement and lends itself to fast processing; this simplifies the deformation processing of the object to be processed and improves its efficiency.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a flow diagram of an image processing method according to some embodiments of the present application;
fig. 2 is a flowchart illustrating deformation processing of a contour of an object to be processed by obtaining key points of the object to be processed according to some embodiments of the present disclosure;
FIG. 3 is a flowchart of determining target keypoints corresponding to keypoints of a contour of an object to be processed according to a direction and a scale of deformation processing according to some embodiments of the present application;
FIG. 4 is a flowchart illustrating a deformation process performed on each of a plurality of planar elements according to a target contour to obtain a target object according to some embodiments of the present disclosure;
FIGS. 5A to 5F are schematic diagrams illustrating an embodiment of an application of the image processing method of the present application;
FIG. 6 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 7 is a schematic structural diagram of a contour deformation module according to some embodiments of the present application;
FIG. 8 is a block diagram of an information acquisition sub-module according to some embodiments of the present application;
FIG. 9 is a schematic structural diagram of a deformation processing submodule according to some embodiments of the present application;
FIG. 10 is a schematic structural diagram of a cell deformation module according to some embodiments of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the present application may be implemented in electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices, such as terminal devices, computer systems, servers, and the like, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Fig. 1 is a flow chart of an image processing method according to some embodiments of the present application. As shown in fig. 1, the method includes:
and 102, carrying out deformation processing on the contour of the object to be processed in the image to obtain a target contour.
In the embodiments of the present application, the object to be processed may be a human body or a part of a human body, or another object or a part thereof; for example, it may be a human body, a human head, the limbs, or facial features. The type of the object to be processed is not limited in the embodiments of the present application. The image containing the object to be processed may be a photo, a picture, or a video frame, for example, a photo acquired by an image acquisition device, a picture stored on a storage device, or a video frame played by a video playing device. The contour of the object to be processed refers to the edge that distinguishes the object from its surroundings; it may form either a closed or a non-closed line in the image. For example, if the object to be processed is the nose of a human body, the contour of the nose is a non-closed line formed by the protruding part of the face in the image; the form of the contour is not limited in the embodiments of the present application. The deformation processing performed on the contour may be one or any combination of stretching, compressing, bending, twisting, and the like; the type of the deformation processing is not limited in the embodiments of the present application.
Optionally, the target contour may be obtained by performing deformation processing on the contour of the object to be processed in a plurality of ways. In an optional example, the contour and the pose angle information of the object to be processed in the image are acquired, the direction and the scale of the deformation processing are determined according to the contour and the pose angle information, and the contour of the object to be processed is then deformed according to that direction and scale to obtain the deformed target contour. For example, when the object to be processed is the nose of a human body and the deformation processing is stretching processing, the contour and the pose angle information of the nose in the image may be acquired, the direction and the scale of the stretching processing may be determined from them, and the contour of the nose may be stretched accordingly to obtain the target contour of the stretched nose.
104, dividing the object to be processed into a plurality of plane units based on the target contour.
Optionally, with the target contour as a reference, the object to be processed in the image is divided into a plurality of plane units; the plane units may cover the entire object to be processed, or only a part of it. In the case where the plane units divided based on the target contour cover the entire object to be processed, the deformation of the entire object can be obtained by deforming each plane unit separately. In the case where the plane units only cover part of the object to be processed, the deformation of the covered part is obtained by deforming each plane unit, and the deformation of the uncovered part can then be estimated from the known deformation of the covered part, so that the deformation of the entire object to be processed is still obtained. For example, when the deformation processing is stretching processing, the object to be processed may be divided, based on the target contour, into a plurality of plane units along the stretching direction such that the plane units cover the entire object to be processed. The embodiment of the present application does not limit the form of the plane unit.
106, performing deformation processing on each plane unit of the plurality of plane units according to the target contour to obtain a target object.
Optionally, the target contour is the contour of the object to be processed after the deformation processing. A mapping from each plane unit of the object to be processed to the corresponding region enclosed by the target contour may be established, and each plane unit may be mapped into its corresponding region, so that the target object obtained by deforming the object to be processed is produced and the deformation processing of the object to be processed in the image is realized. For example, the shape characteristics of the plane units may be used to create the mapping from each plane unit to the corresponding region formed by the target contour. As another example, the mapping may be expressed in the form of a function. The embodiment of the present application does not limit the implementation manner of obtaining the deformed target object from the plurality of plane units according to the target contour.
According to the image processing method provided by the embodiment of the application, the target contour is obtained by performing deformation processing on the contour of the object to be processed in the image, the object to be processed is divided into a plurality of plane units based on the target contour, and each plane unit is deformed according to the target contour to obtain the target object. By dividing the object to be processed into a plurality of plane units and deforming each plane unit, the deformation of the whole object is converted into the deformation of each plane unit, breaking the whole into parts. This improves the applicability of the deformation processing and expands its application range, so that it can be flexibly applied to objects with different angles, sizes, and shapes. At the same time, the deformation of each plane unit is easy to realize and is conducive to fast processing, which simplifies the deformation of the object to be processed and improves its efficiency. In addition, because each plane unit deforms a region of the object as a whole, the fault tolerance of the deformation processing is improved, the influence of errors is reduced, and the overall effect of the target object obtained by the deformation processing is more stable.
In some embodiments, the operation of performing deformation processing on the contour of the object to be processed in the image to obtain the target contour may be implemented by acquiring key points of the object to be processed in the image. The following describes in detail a flow of performing deformation processing on the contour of the object to be processed by acquiring the key points of the object to be processed, with reference to fig. 2.
As shown in fig. 2, the method includes:
202, processing the image to obtain key points of the object to be processed in the image.
Optionally, the image may be processed by a neural network or other machine learning methods to obtain the key points of the object to be processed in the image, for example, the neural network may adopt a convolutional neural network, and the method for obtaining the key points of the object to be processed is not limited in the embodiment of the present application. For example, the object to be processed is a nose of a human body, and the image can be processed through a neural network to acquire key points of the nose.
204, estimating the pose angle information of the object to be processed according to the key points of the object to be processed, and acquiring the key points of the contour of the object to be processed.
Optionally, the pose angle information of the object to be processed may be estimated by a neural network or other machine learning methods according to the key points of the object to be processed, for example, the neural network may adopt a convolutional neural network, and the method for estimating the pose angle information of the object to be processed is not limited in the embodiment of the present application. For example, the object to be processed is a nose of a human body, and posture angle information of the nose can be estimated by a neural network according to key points of the nose.
In the embodiment of the application, the key points of the object to be processed at least comprise the key points of the contour of the object to be processed. In an optional example, the key points of the object to be processed are exactly the key points of the contour of the object to be processed, and the key points of the contour are obtained directly from the key points of the object to be processed. In another optional example, the key points of the object to be processed include, in addition to the key points of the contour, other key points that are not located on the contour; the key points of the contour may then be obtained from the key points of the object to be processed through a neural network or other machine learning methods, and for example, the neural network may adopt a convolutional neural network. The method for acquiring the key points of the contour of the object to be processed is not limited in the embodiment of the application. For example, when the object to be processed is the nose of a human body, the key points of the nose contour can be acquired through a neural network according to the key points of the nose.
In some embodiments, operations 202 and 204 may also be performed simultaneously, that is, the image is processed, the pose angle information of the object to be processed is estimated, and the key points of the object to be processed and/or the key points of the contour of the object to be processed in the image are obtained. Optionally, the image may be processed by a neural network or other machine learning methods, the pose angle information of the object to be processed is estimated, and the key points of the object to be processed and/or the key points of the contour of the object to be processed in the image are obtained. For example, the object to be processed is a nose of a human body, the image can be processed through a neural network, key points of a face, key points of the nose and/or key points of a nose contour are obtained, and then the pose angle information of the face is estimated through the neural network according to the key points of the face and serves as the pose angle information of the nose.
206, determining the direction and the scale of deformation processing according to the key points and the attitude angle information of the object to be processed.
Optionally, the deformation direction of each key point of the object to be processed may be obtained according to the key point and the posture angle information of the object to be processed, and the deformation processing direction of the object to be processed may be determined according to the deformation direction of each key point of the object to be processed. For example, the object to be processed is a nose of a human body, the deformation processing is stretching processing, and a stretching direction of each key point of the nose can be obtained according to the key point and the posture angle information of the nose, and the stretching direction of each key point of the nose is taken as a stretching processing direction of the nose.
Optionally, the scale of deformation of each key point of the object to be processed may be obtained according to the key point and the attitude angle information of the object to be processed, and the scale of deformation processing of the object to be processed may be determined according to the scale of deformation of each key point of the object to be processed. For example, the object to be processed is a nose of a human body, the deformation processing is stretching processing, a stretching scale of each key point of the nose can be obtained according to the key point and the posture angle information of the nose, and the stretching scale of each key point of the nose is used as a scale for stretching the nose.
208, determining target key points corresponding to the key points of the contour of the object to be processed according to the direction and the scale of the deformation processing, to obtain the target contour.
Optionally, after the direction and the scale of the deformation processing of the object to be processed are obtained, the target key points corresponding to the key points of the contour of the object to be processed may be determined in multiple ways according to that direction and scale, so as to obtain the target contour after the deformation processing of the contour of the object to be processed; this is not limited in the embodiment of the present application. In an optional example, for each key point of the contour of the object to be processed, the corresponding target key point may be determined by moving the key point along its deformation direction by its deformation scale, and the target contour after the deformation processing may be obtained from the target key points so determined. The following describes in detail a process of determining the target key points corresponding to the key points of the contour of the object to be processed according to the direction and scale of the deformation processing, with reference to fig. 3.
As shown in fig. 3, the method includes:
302, determining the radiation center of the deformation processing according to the direction of the deformation processing.
Optionally, the radiation center of the deformation processing of the object to be processed may be determined according to the deformation processing direction of the object to be processed, such that the deformation direction of each key point of the object to be processed either points toward the radiation center or points toward the key point itself, along the connecting line between the radiation center and that key point. For example, when the object to be processed is the nose of a human body and the deformation processing is stretching processing, the radiation center of the stretching processing may be obtained as the intersection point of a straight line passing, along the stretching direction, through the key point of the nose between the two eyes and a straight line passing, along the stretching direction, through the key points of the nose tip and the nostrils; the stretching direction of every other key point on the nose contour is then along the connecting line between the radiation center and that key point, pointing toward the key point.
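The radiation-center construction in the example above, the intersection of two straight lines drawn through key points along the stretching direction, can be sketched as follows. This is an illustrative sketch, not code from the patent; the key-point coordinates and direction vectors are invented for demonstration.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersect two 2-D lines, each given as a point plus a direction vector.

    Solves p1 + t*d1 == p2 + s*d2 for t and returns the intersection point.
    Raises numpy.linalg.LinAlgError if the directions are parallel.
    """
    # Rearranged as the 2x2 linear system [d1 | -d2] @ [t, s]^T = p2 - p1
    a = np.array([[d1[0], -d2[0]],
                  [d1[1], -d2[1]]], dtype=float)
    b = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    t, _ = np.linalg.solve(a, b)
    return np.asarray(p1, dtype=float) + t * np.asarray(d1, dtype=float)

# Hypothetical key points: the point between the eyes and the nose tip,
# each carried along an assumed direction (image y axis points down).
between_eyes = (0.0, 0.0)
nose_tip = (1.0, -4.0)
stretch_dir = (0.0, -1.0)          # assumed stretching direction
tip_to_nostril_dir = (1.0, 0.0)    # illustrative second direction

center = line_intersection(between_eyes, stretch_dir, nose_tip, tip_to_nostril_dir)
```

In a real image the two directions are derived from the pose angle information, and a near-parallel pair of lines would need the degenerate case handled explicitly.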
304, connecting the radiation center with the key points of the contour of the object to be processed, and determining, according to the scale of the deformation processing, a target key point corresponding to each key point on the connecting line between the radiation center and that key point, or on an extension line of that connecting line.
In this embodiment of the present application, a deformation that increases a dimension of the object to be processed along the direction of the deformation may be referred to as a positive deformation, and a deformation that decreases a dimension of the object to be processed along the direction of the deformation may be referred to as a negative deformation. The deformation processing performed on the object to be processed, that is, the deformation processing performed on its contour, may include only positive deformation or only negative deformation, or may include both. For example, stretching processing includes only positive deformation, compressing processing includes only negative deformation, and bending processing and twisting processing may include both positive and negative deformation; this is not limited in this embodiment of the present application.
In response to the deformation processing of the object to be processed including positive deformation, the radiation center is connected with the key points of the contour of the object to be processed, and the corresponding target key point may be determined on the extension line of the connecting line between the radiation center and each key point, according to the scale of the deformation of that key point. In response to the deformation processing of the object to be processed including negative deformation, the radiation center is connected with the key points of the contour of the object to be processed, and the corresponding target key point is determined on the connecting line between the radiation center and each key point, according to the scale of the deformation of that key point.
Optionally, the scale of the deformation of each key point of the contour of the object to be processed may be obtained, according to the pose angle information of the object to be processed, from a preset relationship among the pose angle information of the object to be processed, the key points of the object to be processed, and the scale of the deformation. In an optional example, the preset relationship may be a relational expression satisfied by the pose angle information of the object to be processed, the key points of the object to be processed, and the scale of the deformation. In another optional example, the preset relationship may be a comparison table between the key points of the object to be processed and the scales of the deformation under different pose angle information of the object to be processed. The embodiments of the present application do not limit this. For example, when the object to be processed is the nose of a human body and the deformation processing is stretching processing, a comparison table between the key points of the nose and the stretching scales under different pose angle information may be preset, and the stretching scale of each key point of the nose contour may then be obtained by querying the table according to the pose angle information of the nose and the key points of the nose.
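Step 304 and the scale lookup can be combined into one small sketch: each target key point lies on the line through the radiation center and a contour key point, on the extension line of that connecting line for positive deformation, and between the center and the key point for negative deformation. The fractional `scale` parameterization below is an assumption made for illustration; the patent leaves the exact scale representation to the preset relationship.

```python
import numpy as np

def target_keypoint(center, keypoint, scale, positive=True):
    """Place the target key point on the line through the radiation center
    and a contour key point.

    Positive deformation moves the key point outward along the extension
    line (the dimension increases); negative deformation moves it back
    toward the radiation center (the dimension decreases). `scale` is the
    displacement as a fraction of the center-to-keypoint distance, an
    assumed parameterization.
    """
    center = np.asarray(center, dtype=float)
    keypoint = np.asarray(keypoint, dtype=float)
    direction = keypoint - center
    factor = 1.0 + scale if positive else 1.0 - scale
    return center + factor * direction

# Stretching a nose contour key point 20% farther from the radiation center,
# and compressing the same key point 20% closer (invented coordinates):
p = target_keypoint((0.0, 0.0), (0.0, 10.0), 0.2, positive=True)
q = target_keypoint((0.0, 0.0), (0.0, 10.0), 0.2, positive=False)
```

Applying this to every key point of the contour yields the target key points from which the target contour is fitted.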
306, taking the target key points as first fitting points, and performing contour fitting according to the first fitting points to obtain the target contour.
Optionally, after the target key points are obtained, they may be used as first fitting points, and contour fitting may be performed according to the first fitting points in a plurality of ways to obtain the target contour; this is not limited in the embodiment of the present application. In an optional example, the target key points may be used as the first fitting points and connected in sequence for contour fitting to obtain a first broken line segment as the target contour; for example, this method may be used to fit the target contour when the number of acquired key points of the contour of the object to be processed meets the accuracy requirement. In another optional example, other points of the target contour besides the target key points may be obtained by Catmull-Rom interpolation from the target key points and used as additional first fitting points; the target key points are then also taken as first fitting points, and the first fitting points are connected in sequence for contour fitting to obtain a first broken line segment as the target contour. For example, this method may be used to fit the target contour when the number of acquired key points of the contour of the object to be processed does not meet the accuracy requirement.
As an example, when the object to be processed is the nose of a human body and the deformation processing is stretching processing, other points of the target contour of the nose besides the target key points may be obtained by Catmull-Rom interpolation from the target key points of the nose and used as first fitting points; the target key points of the nose are then also taken as first fitting points, and the first fitting points are connected in sequence for contour fitting to obtain a first broken line segment as the target contour of the nose, realizing the fitting of the target contour of the nose.
By using Catmull-Rom interpolation, the target contour of the object to be processed can be fitted to the required accuracy even when the number of acquired key points of the contour is small, so that the time and processing cost of acquiring the key points of the contour of the object to be processed can be reduced and the efficiency of the deformation processing of the object to be processed improved.
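A uniform Catmull-Rom spline passes through its control points, which is why it suits densifying a sparse set of contour key points into fitting points. The following sketch is illustrative only; the endpoint duplication and the number of samples per segment are assumed choices, not specified by the patent.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom spline segment between p1 and p2."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def densify(points, samples_per_segment=8):
    """Insert interpolated fitting points between consecutive key points.

    The endpoints are duplicated so the curve passes through the first and
    last key point (a common boundary-handling choice, assumed here).
    """
    pts = [np.asarray(p, dtype=float) for p in points]
    padded = [pts[0]] + pts + [pts[-1]]
    out = []
    for i in range(1, len(padded) - 2):
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            out.append(catmull_rom(padded[i - 1], padded[i],
                                   padded[i + 1], padded[i + 2], t))
    out.append(pts[-1])
    return np.array(out)

# Three sparse contour key points densified into a polyline; the spline
# still passes through every original key point.
curve = densify([(0, 0), (1, 2), (2, 0)], samples_per_segment=4)
```

Connecting the rows of `curve` in sequence gives the broken line segment used as the fitted contour.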
Optionally, the key points of the contour of the object to be processed may also be used as second fitting points, and the contour of the object to be processed may be obtained by performing contour fitting according to the second fitting points. In an optional example, the key points of the contour of the object to be processed may be used as the second fitting points and connected in sequence for contour fitting to obtain a second broken line segment as the contour of the object to be processed. In another optional example, other points of the contour besides the key points may be obtained by Catmull-Rom interpolation from the key points of the contour and used as additional second fitting points; the key points of the contour are then also taken as second fitting points, and the second fitting points are connected in sequence for contour fitting to obtain a second broken line segment as the contour of the object to be processed. For example, this method may be used to fit the contour when the number of acquired key points of the contour of the object to be processed does not meet the accuracy requirement.
As an example, when the object to be processed is the nose of a human body and the deformation processing is stretching processing, other points of the nose contour besides the key points may be obtained by Catmull-Rom interpolation from the key points of the nose contour and used as second fitting points; the key points of the nose contour are then also taken as second fitting points, and the second fitting points are connected in sequence for contour fitting to obtain a second broken line segment as the contour of the nose.
Fig. 5A is a schematic diagram of an application example of the image processing method of the present application, in which the object to be processed is the nose of a human body and the deformation processing is stretching processing. The figure shows a first broken line segment 501 fitting the target contour of the nose after the stretching processing, and a second broken line segment 502 fitting the contour of the nose before the stretching processing. When the first broken line segment 501 is obtained by connecting the target key points in sequence for contour fitting, the second broken line segment 502 is correspondingly obtained by connecting the key points of the nose contour in sequence for contour fitting; when the first broken line segment 501 is obtained by generating more first fitting points through Catmull-Rom interpolation and connecting them in sequence for contour fitting, the second broken line segment 502 is correspondingly obtained by generating more second fitting points through Catmull-Rom interpolation and connecting them in sequence for contour fitting.
In the embodiment, the key points of the object to be processed in the image are obtained, and the contour of the object to be processed is subjected to deformation processing to obtain the deformed target contour, so that the deformation processing of the contour of the object to be processed can be simplified, and a basis is provided for dividing the object to be processed into a plurality of plane units based on the target contour.
In some embodiments, after the target contour is obtained by performing contour fitting through the first fitting points as described above, the object to be processed may be divided into a plurality of plane units based on the target contour by connecting the radiation center with the first fitting points.
Optionally, when the object to be processed is divided into a plurality of plane units by connecting the radiation center with the first fitting points, different implementation manners may be adopted according to the type of the deformation processing; this is not limited in the embodiment of the present application. In an optional example, in the case where the deformation processing includes positive deformation, in response to the target key points being on the extension lines of the connecting lines between the radiation center and the key points, the radiation center may be connected with the first fitting points, and the connecting lines between the radiation center and the first fitting points intersect the contour of the object to be processed, thereby dividing the object to be processed into a plurality of plane units. For example, when the object to be processed is the nose of a human body and the deformation processing is stretching processing, the radiation center may be connected with the first fitting points of the first broken line segment fitting the target contour of the nose, and the connecting lines between the radiation center and the first fitting points intersect, at the second fitting points, the second broken line segment fitting the contour of the nose, so that the nose is divided into a plurality of triangular plane units. Fig. 5B is a schematic diagram of an application embodiment of the image processing method of the present application, in which the object to be processed is the nose of a human body and the deformation processing is stretching processing; in the figure, the radiation center O is connected with the first fitting points of the first broken line segment 501 fitting the target contour of the nose, so that the nose is divided into a plurality of triangular plane units and the target area after the stretching processing of each triangular plane unit is obtained.
In another optional example, in the case where the deformation processing includes negative deformation, in response to the target key points being on the connecting lines between the radiation center and the key points, the radiation center may be connected with the first fitting points, and the connecting lines between the radiation center and the first fitting points may be extended to intersect the contour of the object to be processed, so that the object to be processed is divided into a plurality of plane units. For example, when the object to be processed is the nose of a human body and the deformation processing is compressing processing, the radiation center may be connected with the first fitting points of the first broken line segment fitting the target contour of the nose, and the connecting lines between the radiation center and the first fitting points may be extended to intersect, at the second fitting points, the second broken line segment fitting the contour of the nose, so that the nose is divided into a plurality of triangular plane units.
According to the embodiment, the object to be processed is divided into the plurality of triangular plane units, and the plurality of triangular plane units are used for fitting the object to be processed, so that the object to be processed in any shape can be fitted, and the requirements for fitting the objects to be processed with different angles, sizes and shapes are met.
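For an open contour such as the nose contour, connecting the radiation center to consecutive fitting points yields a fan of triangular plane units, one per pair of adjacent fitting points. The following is a minimal sketch of that division; the point coordinates are invented, and the exact unit shape is one possibility rather than a requirement of the patent.

```python
def triangle_fan(center, fitting_points):
    """Split the region bounded by an open contour into triangular plane
    units by connecting the radiation center to each pair of consecutive
    fitting points on the contour."""
    return [(center, fitting_points[i], fitting_points[i + 1])
            for i in range(len(fitting_points) - 1)]

# Four fitting points on an open contour yield three triangular units.
units = triangle_fan((0, 0), [(2, -1), (2, 0), (2, 1), (1, 2)])
```

Each triangle shares the radiation center as a vertex, so deforming the fitting points radially deforms every unit consistently.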
In some embodiments, after the target contour is obtained by performing contour fitting through the first fitting points as described above, each plane unit of the plurality of plane units may be deformed according to the target contour in the following manner to obtain the target object. A flow of performing deformation processing on each of the plurality of plane units according to the target contour to obtain the target object is described in detail below with reference to fig. 4.
As shown in fig. 4, the method includes:
402, establishing, for each plane unit, a mapping from the pixel points in the plane unit to the pixel points in the target area formed by the target contour and the connecting lines between the corresponding first fitting points and the radiation center.
Optionally, a plurality of plane units fitting the object to be processed may be formed from the contour of the object to be processed and the connecting lines between the radiation center and the second fitting points of that contour; correspondingly, for each plane unit, the target area formed after its deformation processing may be formed from the corresponding target contour and the connecting lines between the radiation center and the first fitting points of that target contour. Optionally, the deformation processing of each plane unit may be implemented by establishing a mapping between the pixel points of the plane unit and those of the corresponding target area, so as to obtain the target object after the deformation processing of the object to be processed.
Optionally, for each plane unit, a linear piecewise function may be constructed from the straight line determined by the radiation center and a first pixel point in the plane unit, together with the intersection points of that straight line with the contour of the object to be processed and with the target contour, and the linear piecewise function may be used as the mapping between the pixel points. The embodiment of the application uses a linear piecewise function to realize the deformation processing; since the linear piecewise function has linear complexity, the efficiency of the deformation processing of the object to be processed can be improved.
Alternatively, the contour of the object to be processed may be a second broken line segment obtained by contour fitting, and the target contour may be a first broken line segment obtained by contour fitting. As an example, the object to be processed is a nose of a human body, the deformation processing is stretching processing, and mappings between pixel points in each triangular plane unit of the nose and pixel points in a triangular target region formed by a connecting line between a corresponding first fitting point and a radiation center and a corresponding first broken line segment can be respectively established.
Optionally, the connecting line between the radiation center and each first fitting point may be extended to a third point, and the third points may be connected in sequence to obtain a third broken line segment, such that the contour of the object to be processed and the target contour are both located between the radiation center and the third broken line segment; the linear piecewise function is then constructed from the intersection points of the straight line determined by the radiation center and the first pixel point in the plane unit with the target contour, the contour of the object to be processed, and the third broken line segment. In the embodiment of the application, forming the third broken line segment by extending the connecting lines between the radiation center and the first fitting points creates a buffer area outside the contour of the object to be processed and the target contour, which facilitates the construction of the linear piecewise function.
Alternatively, the contour of the object to be processed may be a second broken line segment obtained by contour fitting, and the target contour may be a first broken line segment obtained by contour fitting. As an example, the object to be processed is a nose of a human body and the deformation processing is stretching processing; the line connecting the radiation center and each first fitting point can be extended to a third point, and the third points connected in sequence to obtain a third broken line segment, where the second broken line segment and the first broken line segment are located between the radiation center and the third broken line segment; a linear piecewise function is then constructed according to the intersection points of the straight line determined by the radiation center and the first pixel point in each triangular plane unit of the nose with the first broken line segment, the second broken line segment, and the third broken line segment.
In step 404, the pixel values of the pixel points in the corresponding target area are determined according to the pixel values of the pixel points in each plane unit based on the mapping, so as to obtain the target object.
Optionally, a second pixel point corresponding to the first pixel point may be determined in the corresponding target region based on a linear piecewise function, and then a pixel value of the second pixel point is determined according to a pixel value of the first pixel point, so as to obtain the target object. For example, the object to be processed is a nose of a human body, the deformation processing is stretching processing, a second pixel point corresponding to the first pixel point in a target region after the corresponding nose deformation processing can be determined based on a linear piecewise function, and then a pixel value of the second pixel point is determined according to a pixel value of the first pixel point, so that the stretched nose is obtained according to the target region.
Optionally, determining the pixel value of the second pixel point according to the pixel value of the first pixel point to obtain the target object may include: determining the data type of the coordinate of the first pixel point; in response to the data type of the coordinate of the first pixel point being a floating point number, determining the pixel value of the first pixel point through bilinear interpolation based on the coordinate of the first pixel point, and taking the pixel value of the first pixel point as the pixel value of the second pixel point to obtain the target object; and in response to the data type of the coordinate of the first pixel point being an integer, directly taking the pixel value of the first pixel point as the pixel value of the second pixel point to obtain the target object.
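The bilinear interpolation used for floating-point coordinates can be sketched as follows (a minimal illustration; the function name and the row-major list-of-rows image representation are assumptions, not from the embodiment):

```python
def bilinear_sample(image, x, y):
    """Sample a 2-D grid (list of rows) at floating-point (x, y).

    Assumes 0 <= x < width and 0 <= y < height.
    """
    h, w = len(image), len(image[0])
    x0, y0 = int(x), int(y)                      # floor for non-negative coords
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Weighted average of the four neighbouring pixels.
    top = (1 - fx) * image[y0][x0] + fx * image[y0][x1]
    bottom = (1 - fx) * image[y1][x0] + fx * image[y1][x1]
    return (1 - fy) * top + fy * bottom
```

When the mapped coordinate is already an integer, the sample degenerates to the pixel value itself, matching the integer branch above.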
Fig. 5C and 5D are schematic diagrams illustrating an application embodiment of the image processing method according to the present application. The object to be processed is a nose of a human body, and the deformation processing is stretching processing. A triangular plane unit ADE fitting the nose is shown in fig. 5C, where A is the radiation center, D and E are second fitting points on the nose contour, the triangular region AFG is the triangular target region after the nose deformation processing, F and G are first fitting points on the target contour after the nose deformation processing, and the quadrilateral region BCGF is a buffer region; the scaling transformation of the triangular plane unit can be controlled through the seven points A, B, C, D, E, F, and G. For an arbitrary point P in the triangular target region AFG, the scaling transformation may be completed by constructing the mapping relationship P' = f(P), so that the pixel value of the point P located in the triangular target region AFG can be determined according to the pixel value of the point P' located in the triangular plane unit ADE.
Therefore, for each point P in the triangular target region AFG, connect A and P and extend the line so that it intersects DE, FG, and BC at I, J, and K, respectively, and compute the lengths DIS_AP, DIS_AI, DIS_AJ, and DIS_AK of AP, AI, AJ, and AK. Construct a piecewise function y = g(x) passing through the points (0,0), (1,1), and (DIS_AJ/DIS_AK, DIS_AI/DIS_AK), as shown in fig. 5D. Substituting x = DIS_AP/DIS_AK into y = g(x) yields y = DIS_AP'/DIS_AK, which gives a point P' on the line AK such that the length of AP' is DIS_AP'. Finally, the pixel value of the point P is obtained according to the pixel value of the point P', completing the scaling transformation. If the coordinates of the point P' are floating point numbers, the corresponding pixel value can be obtained by bilinear interpolation.
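The construction of y = g(x) and the mapping from P back to P' can be sketched as follows (a minimal illustration; function names and the tuple point representation are assumptions):

```python
import math

def build_g(dis_ai, dis_aj, dis_ak):
    """Piecewise linear y = g(x) through (0, 0), (xj, yi), (1, 1),
    where xj = DIS_AJ / DIS_AK and yi = DIS_AI / DIS_AK."""
    xj, yi = dis_aj / dis_ak, dis_ai / dis_ak

    def g(x):
        if x <= xj:
            return x * yi / xj                      # segment (0,0) -> (xj, yi)
        return yi + (x - xj) * (1 - yi) / (1 - xj)  # segment (xj, yi) -> (1,1)

    return g

def map_point(a, p, g, dis_ak):
    """Find P' on the ray A->P with |AP'| = g(|AP| / DIS_AK) * DIS_AK."""
    dis_ap = math.hypot(p[0] - a[0], p[1] - a[1])
    if dis_ap == 0:
        return a
    dis_app = g(dis_ap / dis_ak) * dis_ak           # DIS_AP'
    t = dis_app / dis_ap
    return (a[0] + t * (p[0] - a[0]), a[1] + t * (p[1] - a[1]))
```

With DIS_AI = 1, DIS_AJ = 2, DIS_AK = 4, a point J at distance 2 from A maps to a point at distance 1 (on DE), while A and K stay fixed, as the construction requires.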
Fig. 5E and 5F are schematic diagrams of application embodiments of the image processing method of the present application. The effect of the nose before the stretching processing is shown in fig. 5E, and the effect after the stretching processing is shown in fig. 5F. Comparing fig. 5E and fig. 5F shows that the image processing method of the present application can obtain a more stereoscopic and fuller nose shape by stretching the nose, thereby achieving the effect of enlarging the nose and beautifying the face.
The method provided by the above embodiments of the present application can be integrated into a camera or a mobile phone with a camera function to add a beautifying function for taking pictures. It can also be applied, through micro-shaping simulation software, to a device with a micro-shaping simulation function, to simulate the effect after micro-shaping and provide a reference for a plastic surgeon and a client; through picture or video processing software, to a device with a picture or video processing function, to provide multipoint controllable deformation processing on the whole or part of a static picture or of a dynamic picture in a video; through live broadcast software, to a device with a live broadcast function, to provide a real-time beautifying function; or through a game engine, to a device running the game engine, to provide a deformation basis and a deformation formula for texture deformation in the game engine.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to some embodiments of the present application. As shown in fig. 6, the apparatus includes: a contour deformation module 610, a unit dividing module 620, and a unit deformation module 630. Wherein:
The contour deformation module 610 is configured to perform deformation processing on the contour of the object to be processed in the image to obtain a target contour.
In this embodiment of the present application, the object to be processed may be a human body or a certain part of a human body, or another object other than a human body or a certain part of another object other than a human body, and the type of the object to be processed is not limited in this embodiment of the present application. The image of the object to be processed may be a photo, a picture, or a video frame, for example, the image of the object to be processed may be a photo acquired by an image acquisition device, a picture stored by a storage device, or a video frame played by a video playing device. The contour of the object to be processed refers to an edge of the object to be processed for distinguishing from the surrounding, and it may form a closed line or a non-closed line in the image. The deformation processing performed on the contour of the object to be processed may be one or any combination of stretching processing, compressing processing, bending processing, twisting processing, and the like, and the type of the deformation processing is not limited in the embodiment of the present application.
Optionally, the contour deformation module 610 may perform deformation processing on the contour of the object to be processed by a plurality of methods to obtain the target contour. In an alternative example, as shown in fig. 7, the contour deformation module 610 may include: the information obtaining sub-module 710, the parameter determining sub-module 720 and the deformation processing sub-module 730 are arranged, wherein the information obtaining sub-module 710 is used for obtaining the outline and the attitude angle information of the object to be processed in the image, the parameter determining sub-module 720 is used for determining the direction and the scale of deformation processing according to the outline and the attitude angle information of the object to be processed, and the deformation processing sub-module 730 is used for performing deformation processing on the outline of the object to be processed according to the direction and the scale of deformation processing to obtain the deformed target outline.
The unit dividing module 620 is configured to divide the object to be processed into a plurality of plane units based on the target contour.
Optionally, the unit dividing module 620 divides the object to be processed in the image into a plurality of plane units with the target contour as a reference, and the plane units may cover the entire object to be processed, or only cover a part of the object to be processed. In the case where a plurality of plane units divided based on the target contour cover the entire object to be processed, the deformation processing of the entire object to be processed can be obtained by performing the deformation processing on each plane unit, respectively. Under the condition that a plurality of plane units divided based on the target contour only cover part of the object to be processed, deformation processing of corresponding parts in the object to be processed can be obtained by respectively carrying out deformation processing on each plane unit, and then deformation processing of parts which are not subjected to deformation processing in the object to be processed can be estimated according to deformation processing of parts which are subjected to known deformation processing in the object to be processed, so that deformation processing of the whole object to be processed can be obtained. For example, the deformation process is a stretching process, and based on the target contour, the object to be processed may be divided into a plurality of planar units along the stretching direction, and the planar units may be made to cover the entire object to be processed. The embodiment of the present application does not limit the form of the planar unit.
The unit deformation module 630 is configured to perform deformation processing on each plane unit of the plurality of plane units according to the target contour, so as to obtain the target object.
Optionally, the target contour is a contour after deformation processing is performed on the object to be processed, and the unit deformation module 630 may respectively establish mapping from the object to be processed to a region formed by the target contour based on each plane unit in the object to be processed, and map each plane unit in the object to be processed to a corresponding region formed by the target contour, so as to obtain the target object after deformation processing of the object to be processed, and implement deformation processing on the object to be processed in the image. For example, the shape characteristics of the planar elements may be used to create a map of each planar element to the area formed by the corresponding target contour. As another example, the mapping of each planar element to the region formed by the corresponding target contour may be expressed in the form of a function. The embodiment of the present application does not limit the implementation manner of obtaining the deformed target object through the plurality of plane units according to the target contour.
According to the image processing apparatus provided by the above embodiments of the present application, a target contour is obtained by performing deformation processing on the contour of an object to be processed in an image, the object to be processed is divided into a plurality of plane units based on the target contour, and each of the plurality of plane units is subjected to deformation processing according to the target contour, so as to obtain the target object. By dividing the object to be processed into a plurality of plane units and performing deformation processing on each plane unit, the deformation processing of the object to be processed is converted into the deformation processing of each plane unit, dividing the whole into parts; this improves the applicability of the deformation processing and expands its application range, so that it can be flexibly applied to objects to be processed with different angles, sizes, and shapes. At the same time, the deformation processing of each plane unit is easy to implement and favors fast processing, so the deformation processing of the object to be processed can be simplified and its efficiency improved. In addition, the overall deformation of a region of the object to be processed within a certain range can be realized through the individual plane units, so the fault tolerance of the deformation processing can be improved, the influence of errors reduced, and the overall effect of the target object obtained by the deformation processing made more stable.
Fig. 8 is a schematic structural diagram of an information acquisition submodule according to some embodiments of the present application. As shown in fig. 8, the information acquisition sub-module 710 includes: a key point acquisition unit 810 and an information acquisition unit 820. Wherein:
the key point obtaining unit 810 is configured to process the image and obtain a key point of the object to be processed in the image.
Optionally, the keypoint obtaining unit 810 may process the image by using a neural network or other machine learning methods to obtain keypoints of the object to be processed in the image, for example, the neural network may adopt a convolutional neural network, and the method for obtaining keypoints of the object to be processed is not limited in the embodiment of the present application.
An information obtaining unit 820, configured to estimate pose angle information of the object to be processed according to the key points of the object to be processed, and obtain key points of the contour of the object to be processed.
Alternatively, the information obtaining unit 820 may estimate the pose angle information of the object to be processed according to the key points of the object to be processed through a neural network or other machine learning methods, for example, the neural network may adopt a convolutional neural network, and the method for estimating the pose angle information of the object to be processed is not limited in the embodiment of the present application.
In the embodiment of the application, the key points of the object to be processed at least comprise the key points of the outline of the object to be processed. In an optional example, the key points of the object to be processed are the key points of the contour of the object to be processed, and the key points of the contour of the object to be processed are obtained directly according to the key points of the object to be processed. In another optional example, the key points of the object to be processed include other key points that are not located in the contour of the object to be processed, in addition to the key points of the contour of the object to be processed, and then the key points of the contour of the object to be processed may be obtained through a neural network or other machine learning methods according to the key points of the object to be processed, for example, the neural network may adopt a convolutional neural network. The method for acquiring the key points of the contour of the object to be processed is not limited in the embodiment of the application.
In some embodiments, the keypoint acquisition unit 810 and the information acquisition unit 820 may also be implemented by using one processing unit, that is, the processing unit is configured to process the image, estimate pose angle information of the object to be processed, and acquire keypoints of the object to be processed in the image and/or keypoints of the contour of the object to be processed. Optionally, the processing unit may process the image by a neural network or other machine learning methods, estimate pose angle information of the object to be processed, and acquire key points of the object to be processed and/or key points of the contour of the object to be processed in the image, for example, the neural network may adopt a convolutional neural network. For example, the object to be processed is a nose of a human body, the image can be processed through a neural network, key points of a face, key points of the nose and/or key points of a nose contour are obtained, and then the pose angle information of the face is estimated through the neural network according to the key points of the face and serves as the pose angle information of the nose.
And the parameter determining submodule 720 is used for determining the direction and the scale of deformation processing according to the key point and the attitude angle information of the object to be processed.
Optionally, the parameter determining submodule 720 may obtain a deformation direction of each key point of the object to be processed according to the key point and the posture angle information of the object to be processed, and determine the deformation processing direction of the object to be processed according to the deformation direction of each key point of the object to be processed.
Optionally, the parameter determining submodule 720 may obtain a scale of deformation of each key point of the object to be processed according to the key point and the posture angle information of the object to be processed, and determine a scale of deformation processing of the object to be processed according to the scale of deformation of each key point of the object to be processed.
And the deformation processing submodule 730 is used for determining a target key point corresponding to the key point of the contour of the object to be processed according to the direction and the scale of the deformation processing to obtain the target contour.
Optionally, after obtaining the direction and the scale of the deformation processing of the object to be processed, the deformation processing submodule 730 may determine, according to the direction and the scale of the deformation processing of the object to be processed, the target key point corresponding to the key point of the contour of the object to be processed in multiple ways, so as to obtain the target contour after the deformation processing of the contour of the object to be processed, which is not limited in the embodiment of the present application. In an optional example, the deformation processing sub-module 730 may determine, along the direction of deformation of each key point of the contour of the object to be processed, a target key point corresponding to each key point of the contour of the object to be processed according to the dimension of deformation of each key point of the contour of the object to be processed, and obtain, according to the target key point corresponding to each key point of the contour of the object to be processed, the target contour after the deformation processing of the contour of the object to be processed.
Fig. 9 is a block diagram of a deformation processing submodule according to some embodiments of the present application. As shown in fig. 9, the deformation processing submodule 730 includes: a center determining unit 910, a keypoint determining unit 920, and a contour fitting unit 930. Wherein:
a center determining unit 910, configured to determine a radiation center of the deformation process according to the deformation process direction.
Optionally, the center determining unit 910 may determine a radiation center of the object to be processed through deformation processing according to the deformation processing direction of the object to be processed, so that the deformation direction of the key point of the object to be processed is directed to the radiation center or directed to the corresponding key point along a connection line between the radiation center and the corresponding key point. For example, the object to be processed is a nose of a human body, the deformation processing is stretching processing, a radiation center of the stretching processing can be obtained according to an intersection point of a straight line passing through a key point of the nose between two eyes along the stretching direction and a straight line passing through a key point of a nose tip and a nostril along the stretching direction, and the stretching directions of other key points on the nose contour except the key point are all along a connecting line between the radiation center and the key point and point to the key point.
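The intersection used to locate the radiation center in the nose example can be sketched generically as follows (a hedged illustration; representing each line by a point and a direction vector, and the function name, are assumptions, not from the embodiment):

```python
def line_intersection(p1, d1, p2, d2):
    """Intersection of the 2-D lines p1 + t*d1 and p2 + s*d2,
    or None if the lines are parallel."""
    denom = d1[0] * d2[1] - d1[1] * d2[0]            # 2-D cross product d1 x d2
    if abs(denom) < 1e-9:
        return None                                  # no unique radiation center
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

For instance, a line through the key point between the eyes and a line through the nose tip and nostril key points would each be passed as a (point, direction) pair.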
And a key point determining unit 920, configured to connect the radiation center and the key point of the contour of the object to be processed, and determine a target key point corresponding to the key point according to the scale of the deformation process on a connection line between the radiation center and the key point of the contour of the object to be processed or an extension line of the connection line.
In this embodiment of the present application, a deformation that increases the dimension of the object to be processed along the direction of the deformation may be referred to as a positive deformation, and a deformation that decreases the dimension of the object to be processed along the direction of the deformation may be referred to as a negative deformation. The deformation processing performed on the object to be processed, that is, on the contour of the object to be processed, may include only positive deformation, only negative deformation, or both. For example, stretching processing includes only positive deformation, compression processing includes only negative deformation, and bending processing and twisting processing may include both positive and negative deformation; the embodiment of the present application does not limit this.
The keypoint determining unit 920 is configured, in response to the deformation processing performed on the object to be processed including positive deformation, to connect the radiation center and the key point of the contour of the object to be processed, and to determine the corresponding target key point according to the scale of deformation of the corresponding key point on the extension line of the connecting line between the radiation center and the key point of the contour of the object to be processed. In response to the deformation processing including negative deformation, the keypoint determining unit 920 connects the radiation center and the key point of the contour of the object to be processed, and may determine the corresponding target key point according to the scale of deformation of the corresponding key point on the connecting line itself.
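The placement of a target key point on the connecting line (negative deformation) or its extension (positive deformation) can be sketched as follows; the linear relation between the deformation scale and the displacement is an assumption for illustration, since the embodiment leaves the exact relation to a preset mapping:

```python
def target_keypoint(center, keypoint, scale, positive=True):
    """Move a contour key point along the ray from the radiation center.

    Positive deformation places the target beyond the key point on the
    extension line; negative deformation places it between the radiation
    center and the key point. `scale` in (0, 1) is assumed.
    """
    dx, dy = keypoint[0] - center[0], keypoint[1] - center[1]
    factor = 1 + scale if positive else 1 - scale
    return (center[0] + factor * dx, center[1] + factor * dy)
```

Collecting the target key points for every contour key point yields the first fitting points from which the target contour is fitted.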
Optionally, the scale of the deformation of the key point of the contour of the object to be processed may be obtained according to the attitude angle information of the object to be processed by presetting the attitude angle information of the object to be processed, and the relationship between the key point of the object to be processed and the scale of the deformation. In an optional example, the preset relationship may be a relational expression that the attitude angle information of the object to be processed, the key point of the object to be processed, and the scale of the deformation satisfy. In another optional example, the preset relationship may be a comparison table of key points of the object to be processed and a scale of the deformation under the condition that the object to be processed has different attitude angle information. The embodiments of the present application do not limit this.
The contour fitting unit 930 is configured to use the target key point as a first fitting point, and perform contour fitting according to the first fitting point to obtain a target contour.
Optionally, after obtaining the target key points, the contour fitting unit 930 may use the target key points as first fitting points and perform contour fitting according to the first fitting points by a plurality of methods to obtain the target contour, which is not limited in this embodiment of the present application. In an optional example, the contour fitting unit 930 may use the target key points as first fitting points and sequentially connect the first fitting points to perform contour fitting, obtaining a first broken line segment as the target contour; for example, when the number of acquired key points of the contour of the object to be processed meets the accuracy requirement, this method may be used to fit the target contour. In another optional example, the contour fitting unit 930 may obtain, according to the target key points, other points of the target contour besides the target key points through Catmull-Rom interpolation as additional first fitting points, and then sequentially connect the first fitting points for contour fitting, obtaining a first broken line segment as the target contour. For example, when the number of acquired key points of the contour of the object to be processed does not meet the accuracy requirement, this method can be used to fit the target contour.
According to the method and the apparatus of the present application, by utilizing Catmull-Rom interpolation, the target contour of the object to be processed can be fitted while ensuring the accuracy requirement even when the number of acquired key points of the contour of the object to be processed is small, so that the time and processing cost for acquiring the key points of the contour of the object to be processed can be reduced, and the efficiency of the deformation processing of the object to be processed improved.
Optionally, the contour fitting unit 930 may further use the key points of the contour of the object to be processed as second fitting points, and perform contour fitting according to the second fitting points to obtain the contour of the object to be processed. In an optional example, the contour fitting unit 930 may use the key points of the contour of the object to be processed as second fitting points and sequentially connect the second fitting points to perform contour fitting, obtaining a second broken line segment as the contour of the object to be processed; for example, when the number of acquired key points of the contour of the object to be processed meets the accuracy requirement, this method may be used to fit the contour of the object to be processed. In another optional example, the contour fitting unit 930 may obtain, according to the key points of the contour of the object to be processed, other points of the contour besides the key points through Catmull-Rom interpolation as additional second fitting points, and then sequentially connect the second fitting points for contour fitting, obtaining a second broken line segment as the contour of the object to be processed. For example, when the number of acquired key points of the contour of the object to be processed does not meet the accuracy requirement, this method can be used to fit the contour of the object to be processed.
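The Catmull-Rom densification of a sparse set of contour key points can be sketched as follows (a minimal illustration with duplicated endpoints as padding; function names and the sampling density are assumptions):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Point on the Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def densify(points, samples_per_span=8):
    """Insert interpolated fitting points between consecutive key points."""
    pts = [points[0]] + list(points) + [points[-1]]  # duplicate the endpoints
    out = []
    for i in range(1, len(pts) - 2):
        for k in range(samples_per_span):
            out.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2],
                                   k / samples_per_span))
    out.append(tuple(points[-1]))
    return out
```

Connecting the densified points in sequence then yields the broken line segment used as the fitted contour.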
In the embodiment, the key points of the object to be processed in the image are obtained, and the contour of the object to be processed is subjected to deformation processing to obtain the deformed target contour, so that the deformation processing of the contour of the object to be processed can be simplified, and a basis is provided for dividing the object to be processed into a plurality of plane units based on the target contour.
In some embodiments, after performing contour fitting through the first fitting point according to the above method to obtain the target contour, the unit dividing module 620 may divide the object to be processed into a plurality of plane units by connecting the radiation center and the first fitting point.
Optionally, the unit dividing module 620 divides the object to be processed into a plurality of plane units by connecting the radiation center and the first fitting point, and different implementation manners may be adopted according to the type of the deformation processing, which is not limited in this embodiment of the present application. In an alternative example, in a case that the deformation processing includes positive deformation, the unit dividing module 620 may connect the radiation center and the first fitting point in response to the target key point being on an extension line of the connecting line of the radiation center and the key point, so that the connecting line of the radiation center and the first fitting point intersects the contour of the object to be processed, thereby dividing the object to be processed into a plurality of plane units.
In another alternative example, in a case that the deformation processing includes negative deformation, the unit dividing module 620 may connect the radiation center and the first fitting point in response to that the target key point is on a connecting line of the radiation center and the key point, and extend that the connecting line of the radiation center and the first fitting point intersects with the contour of the object to be processed, so that the object to be processed may be divided into a plurality of planar units.
According to the embodiment, the object to be processed is divided into the plurality of triangular plane units, and the plurality of triangular plane units are used for fitting the object to be processed, so that the object to be processed in any shape can be fitted, and the requirements for fitting the objects to be processed with different angles, sizes and shapes are met.
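Dividing the object into triangular plane units by connecting the radiation center to consecutive fitting points is, in effect, a triangle fan; a minimal sketch (the function name and tuple representation are illustrative assumptions):

```python
def triangle_fan(center, fitting_points):
    """One triangle per pair of consecutive fitting points on the contour."""
    return [(center, fitting_points[i], fitting_points[i + 1])
            for i in range(len(fitting_points) - 1)]
```

For a closed contour, the last fitting point would additionally be connected back to the first to close the fan.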
In some embodiments, after performing contour fitting through the first fitting points according to the above method to obtain the target contour, the unit deformation module 630 may be implemented by using the structure in fig. 10.
Fig. 10 is a schematic structural diagram of a unit deformation module according to some embodiments of the present application. As shown in fig. 10, the unit deformation module 630 includes: a mapping establishing sub-module 1010 and a pixel mapping sub-module 1020. Wherein:
The mapping establishing sub-module 1010 is configured to establish, for each plane unit, a mapping between the pixel points in the plane unit and the pixel points in the target region formed by the connecting lines between the corresponding first fitting points and the radiation center, together with the target contour.
Optionally, a plurality of plane units fitting the object to be processed may be formed from the connecting lines between the radiation center and the second fitting points of the contour of the object to be processed, together with that contour. Correspondingly, for each plane unit, a target region formed after deformation processing may be obtained from the connecting lines between the radiation center and the first fitting points of the corresponding target contour, together with that target contour. Optionally, the deformation processing of each plane unit may be implemented by establishing a mapping between the pixel points of the plane unit and those of the corresponding target region, so as to obtain the target object after the deformation processing of the object to be processed.
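A minimal sketch of applying such a per-pixel mapping by inverse lookup: every pixel of the target region fetches its value from the mapped source pixel in the plane unit. Here `inverse_map` is an assumed callable that maps a target-region coordinate back to an integer source coordinate; all names are illustrative, not the patent's notation:

```python
def warp_region(src_img, target_pixels, inverse_map):
    """For each (x, y) in the target region, copy the value of the source
    pixel that the mapping associates with it.

    src_img: 2-D list indexed as src_img[y][x];
    inverse_map: callable (x, y) -> (sx, sy) with integer results.
    """
    out = {}
    for (x, y) in target_pixels:
        sx, sy = inverse_map(x, y)
        out[(x, y)] = src_img[sy][sx]
    return out

# With the identity mapping, every target pixel keeps its source value.
img = [[1, 2], [3, 4]]
warped = warp_region(img, [(0, 0), (1, 1)], lambda x, y: (x, y))
```

Iterating over the target region rather than the source (inverse warping) guarantees every output pixel receives exactly one value, with no holes.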
Optionally, the mapping establishing sub-module 1010 may construct a linear piecewise function from the straight line determined by the radiation center and a first pixel point in each plane unit, and from the intersection points of that straight line with the corresponding contour of the object to be processed and with the target contour, and use the linear piecewise function as the mapping between the pixel points. The embodiments of the present application implement the deformation processing with a linear piecewise function; since such a function has linear complexity, the efficiency of the deformation processing of the object to be processed can be improved.
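Along a single ray from the radiation center, one such linear piecewise mapping can be sketched as below. The sketch assumes the contour of the object to be processed crosses the ray at radius `r_src`, the target contour at `r_tgt`, and an outer buffer at `r_outer`, beyond which points stay fixed; all parameter names are illustrative assumptions:

```python
def piecewise_linear_map(r, r_src, r_tgt, r_outer):
    """Map a radial distance r (measured from the radiation center) so that
    the source contour radius r_src lands on the target radius r_tgt, while
    points at or beyond the outer buffer radius r_outer are unchanged."""
    if r <= r_src:
        return r * (r_tgt / r_src)          # inside the contour: uniform stretch
    if r >= r_outer:
        return r                            # beyond the buffer: identity
    # between the contour and the buffer: blend linearly back to identity
    t = (r - r_src) / (r_outer - r_src)
    return r_tgt + t * (r_outer - r_tgt)
```

Each piece is linear in `r`, the pieces agree at `r_src` and `r_outer`, and evaluating the map is a constant-time operation per pixel, consistent with the linear complexity noted above.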
Optionally, the mapping establishing sub-module 1010 may extend the connecting lines between the radiation center and the first fitting points to third points, and connect the third points in sequence to obtain a third broken line segment, where the contour of the object to be processed and the target contour are located between the radiation center and the third broken line segment; the linear piecewise function is then constructed from the intersection points of the straight line determined by the radiation center and the first pixel point in the plane unit with the target contour, the contour of the object to be processed, and the third broken line segment. By extending the connecting lines between the radiation center and the first fitting points to form the third broken line segment, a buffer region is formed outside the contour of the object to be processed and the target contour, which facilitates the construction of the linear piecewise function.
The pixel mapping submodule 1020 is configured to determine, based on mapping, pixel values of pixel points in a corresponding target region according to pixel values of pixel points in each plane unit, so as to obtain a target object.
Optionally, the pixel mapping sub-module 1020 may determine, based on the linear piecewise function, a second pixel point corresponding to the first pixel point in the corresponding target region, and then determine, according to a pixel value of the first pixel point, a pixel value of the second pixel point, so as to obtain the target object.
Optionally, the pixel mapping sub-module 1020 may first determine the data type of the coordinate of the first pixel point; in response to the data type of the coordinate of the first pixel point being a floating-point number, it determines the pixel value of the first pixel point through bilinear interpolation based on that coordinate and takes this value as the pixel value of the second pixel point to obtain the target object; in response to the data type of the coordinate of the first pixel point being an integer, it directly takes the pixel value of the first pixel point as the pixel value of the second pixel point to obtain the target object.
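The two branches described above (direct copy for integer coordinates, bilinear interpolation for floating-point coordinates) can be sketched as follows; the function name and the row-list image layout are illustrative assumptions:

```python
import math

def sample_pixel(img, x, y):
    """Return the pixel value at (x, y): a direct lookup when both
    coordinates are integers, bilinear interpolation otherwise.
    img is a 2-D list indexed as img[y][x]."""
    if float(x).is_integer() and float(y).is_integer():
        return img[int(y)][int(x)]          # integer branch: copy directly
    # floating-point branch: blend the four surrounding pixels
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 10], [20, 30]]
```

Bilinear interpolation avoids the blocky artifacts that nearest-neighbor copying would introduce when the mapping lands between pixel centers.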
The embodiments of the present application also provide an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, or a server. Referring now to fig. 11, shown is a schematic structural diagram of an electronic device 1100 suitable for implementing a terminal device or a server of an embodiment of the present application. As shown in fig. 11, the electronic device 1100 includes one or more processors, a communication part, and the like, for example: one or more central processing units (CPUs) 1101 and/or one or more acceleration units 1113, where the acceleration unit 1113 may include, but is not limited to, a GPU, an FPGA, or another type of special-purpose processor. The processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 1102 or loaded from a storage section 1108 into a random access memory (RAM) 1103. The communication part 1112 may include, but is not limited to, a network card, which may include, but is not limited to, an InfiniBand (IB) network card. The processor may communicate with the ROM 1102 and/or the RAM 1103 to execute the executable instructions, connect to the communication part 1112 through the bus 1104, and communicate with other target devices through the communication part 1112, thereby completing operations corresponding to any method provided by the embodiments of the present application, for example: performing deformation processing on the contour of an object to be processed in an image to obtain a target contour; dividing the object to be processed into a plurality of plane units based on the target contour; and performing deformation processing on each of the plurality of plane units according to the target contour to obtain a target object.
In addition, the RAM 1103 may also store various programs and data necessary for the operation of the apparatus. The CPU 1101, the ROM 1102, and the RAM 1103 are connected to each other through the bus 1104. When the RAM 1103 is present, the ROM 1102 is an optional module: the RAM 1103 stores executable instructions, or writes executable instructions into the ROM 1102 at runtime, and the executable instructions cause the central processing unit 1101 to perform the operations corresponding to the above-described methods. An input/output (I/O) interface 1105 is also connected to the bus 1104. The communication part 1112 may be integrated, or may be provided with a plurality of sub-modules (e.g., a plurality of IB network cards) connected to the bus.
The following components are connected to the I/O interface 1105: an input section 1106 including a keyboard, a mouse, and the like; an output section 1107 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN card or a modem. The communication section 1109 performs communication processing via a network such as the Internet. A drive 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1110 as necessary, so that a computer program read therefrom is installed into the storage section 1108 as needed.
It should be noted that the architecture shown in fig. 11 is only an optional implementation manner, and in a specific practical process, the number and types of the components in fig. 11 may be selected, deleted, added, or replaced according to actual needs; in different functional component arrangements, separate arrangements or integrated arrangements may also be adopted, for example, the acceleration unit 1113 and the CPU1101 may be separately arranged, or the acceleration unit 1113 may be integrated on the CPU1101, the communication part 1112 may be separately arranged, or may be integrated on the CPU1101 or the acceleration unit 1113, and so on. These alternative embodiments are all within the scope of the present disclosure.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for executing the method illustrated in the flowchart, where the program code may include instructions corresponding to performing the method steps provided in the embodiments of the present disclosure, for example, performing a deformation process on an outline of an object to be processed in an image to obtain a target outline; dividing the object to be processed into a plurality of plane units based on the target contour; and according to the target contour, performing deformation processing on each plane unit of the plurality of plane units to obtain a target object. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 1109 and/or installed from the removable medium 1111. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 1101.
In one or more alternative implementations, the disclosed embodiments also provide a computer program product for storing computer readable instructions that, when executed, cause a computer to perform the image processing method in any one of the possible implementations described above.
The computer program product may be embodied in hardware, software, or a combination thereof. In one optional example, the computer program product is embodied as a computer storage medium; in another optional example, the computer program product is embodied as a software product, such as a software development kit (SDK).
In one or more optional implementation manners, the disclosed embodiments further provide an image processing method and a corresponding apparatus and electronic device, a computer storage medium, a computer program, and a computer program product.
In some embodiments, the image processing instruction may be embodied as a call instruction, and the first device may instruct the second device to perform the image processing by calling, and accordingly, in response to receiving the call instruction, the second device may perform the steps and/or flows of any of the above-described image processing methods.
It is to be understood that the terms "first," "second," and the like in the embodiments of the present disclosure are used for distinguishing and not limiting the embodiments of the present disclosure.
It is also understood that in the present disclosure, "plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in this disclosure is generally to be construed as one or more, unless explicitly stated otherwise or indicated to the contrary hereinafter.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
The methods, apparatuses, and devices of the present application may be implemented in many ways. For example, they may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the methods is for illustration only, and the steps of the methods of the present application are not limited to the order specifically described above unless otherwise specifically stated. Further, in some embodiments, the present application may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present application. Thus, the present application also covers a recording medium storing a program for executing a method according to the present application.
The description of the present application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the application in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the application and the practical application, and to enable others of ordinary skill in the art to understand the application for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (27)

1. An image processing method, comprising:
carrying out deformation processing on the contour of an object to be processed in the image to obtain a target contour; wherein the contour of the object to be processed is an edge that the object to be processed uses to distinguish from the surroundings;
dividing the object to be processed into a plurality of plane units based on the target contour;
according to the target contour, carrying out deformation processing on each plane unit of the plurality of plane units to obtain a target object;
wherein, according to the target contour, performing deformation processing on each plane unit of the plurality of plane units to obtain a target object includes:
respectively establishing a mapping between pixel points in each plane unit and pixel points in a target area formed by a connecting line between the corresponding first fitting point and the deformed radiation center and the target contour; wherein the first fitting point comprises a target key point corresponding to a key point of the contour of the object to be processed;
and determining the pixel value of the pixel point in the corresponding target area according to the pixel value of the pixel point in each plane unit based on the mapping to obtain the target object.
2. The method according to claim 1, wherein the deforming the contour of the object to be processed in the image to obtain the target contour comprises:
acquiring the outline and attitude angle information of the object to be processed in the image;
determining the direction and the scale of the deformation processing according to the contour and the attitude angle information of the object to be processed;
and carrying out deformation processing on the contour of the object to be processed according to the deformation processing direction and scale to obtain the target contour.
3. The method according to claim 2, wherein the deforming the contour of the object to be processed according to the direction and the scale of the deforming process to obtain the target contour comprises:
determining the radiation center of the deformation treatment according to the direction of the deformation treatment;
connecting the radiation center with key points of the contour of the object to be processed, and determining target key points corresponding to the key points on a connecting line of the radiation center with the key points of the contour of the object to be processed or an extension line of the connecting line according to the scale of deformation processing;
and taking the target key point as a first fitting point, and performing contour fitting according to the first fitting point to obtain the target contour.
4. The method according to claim 3, wherein the acquiring contour and posture angle information of the object to be processed in the image comprises:
processing the image to obtain key points of the object to be processed in the image;
estimating the attitude angle information of the object to be processed according to the key points of the object to be processed, and acquiring the key points of the contour of the object to be processed;
determining the direction and the scale of the deformation processing according to the contour and the attitude angle information of the object to be processed, wherein the determining comprises the following steps:
and determining the direction and the scale of the deformation processing according to the key points of the object to be processed and the attitude angle information.
5. The method according to claim 3, wherein the obtaining the target contour by using the target keypoint as a first fitting point and performing contour fitting according to the first fitting point comprises:
according to the target key points, obtaining other points of the target contour except the target key points through Catmull-Rom interpolation, to serve as first fitting points;
and taking the target key point as the first fitting point, and sequentially connecting the first fitting points to perform contour fitting to obtain a first broken line segment as the target contour.
6. The method according to claim 3, wherein the obtaining the target contour by using the target keypoint as a first fitting point and performing contour fitting according to the first fitting point comprises:
and taking the key points of the contour of the object to be processed as second fitting points, and fitting the contour according to the second fitting points to obtain the contour of the object to be processed.
7. The method according to claim 6, wherein the obtaining the contour of the object to be processed by taking the key points of the contour of the object to be processed as second fitting points and performing contour fitting according to the second fitting points comprises:
according to the key points of the contour of the object to be processed, obtaining other points of the contour of the object to be processed except the key points through Catmull-Rom interpolation, to serve as second fitting points;
and taking the key points of the contour of the object to be processed as the second fitting points, and sequentially connecting the second fitting points for contour fitting to obtain a second broken line segment as the contour of the object to be processed.
8. The method of claim 3, wherein the dividing the object to be processed into a plurality of planar units based on the target contour comprises:
and connecting the radiation center and the first fitting point, and dividing the object to be processed into a plurality of plane units.
9. The method of claim 8, wherein the dividing the object to be processed into a plurality of planar units based on the target contour comprises:
responding to the target key point being on an extension line of the connecting line between the radiation center and the key point, connecting the radiation center and the first fitting point, wherein the connecting line between the radiation center and the first fitting point intersects the contour of the object to be processed, and dividing the object to be processed into a plurality of plane units; or,
responding to the target key point being on the connecting line between the radiation center and the key point, connecting the radiation center and the first fitting point, extending the connecting line between the radiation center and the first fitting point to intersect the contour of the object to be processed, and dividing the object to be processed into a plurality of plane units.
10. The method according to any one of claims 1 to 9, wherein said separately mapping pixel points in each of said planar units to pixel points in a target region formed by a connection line between said corresponding first fitting point and said deformed radiation center and said target contour comprises:
constructing a linear piecewise function according to the straight line determined by the radiation center and the first pixel point in each plane unit and the intersection point of the corresponding contour of the object to be processed and the target contour;
the determining, based on the mapping and according to the pixel value of the pixel point in each plane unit, the pixel value of the pixel point in the corresponding target region to obtain the target object includes:
determining a second pixel point corresponding to the first pixel point in the corresponding target area based on the linear piecewise function;
and determining the pixel value of the second pixel point according to the pixel value of the first pixel point to obtain the target object.
11. The method according to claim 10, wherein constructing a linear piecewise function according to the intersection of the straight line determined by the radiation center and the first pixel point in each of the planar units, and the corresponding contour of the object to be processed and the target contour comprises:
extending a connecting line of the radiation center and the first fitting point to a third point, and sequentially connecting the third point to obtain a third broken line segment, wherein the target contour and the contour of the object to be processed are located between the radiation center and the third broken line segment;
and constructing the linear piecewise function according to the intersection points of the straight line determined by the radiation center and the first pixel point in the plane unit, the target contour, the contour of the object to be processed and the third broken line segment.
12. The method according to claim 10, wherein the determining the pixel value of the second pixel point according to the pixel value of the first pixel point to obtain the target object comprises:
determining the data type of the coordinate of the first pixel point;
responding to the fact that the data type of the coordinate of the first pixel point is a floating point number, determining the pixel value of the first pixel point through bilinear interpolation based on the coordinate of the first pixel point, and taking the pixel value of the first pixel point as the pixel value of the second pixel point to obtain the target object;
and in response to the fact that the data type of the coordinate of the first pixel point is an integer, taking the pixel value of the first pixel point as the pixel value of the second pixel point to obtain the target object.
13. An image processing apparatus characterized by comprising:
the contour deformation module is used for carrying out deformation processing on the contour of the object to be processed in the image to obtain a target contour; wherein the contour of the object to be processed is an edge that the object to be processed uses to distinguish from the surroundings;
the unit dividing module is used for dividing the object to be processed into a plurality of plane units based on the target contour;
the unit deformation module is used for carrying out deformation processing on each plane unit of the plurality of plane units according to the target contour to obtain a target object;
the unit deformation module includes:
the mapping establishing submodule is used for respectively establishing mapping between pixel points in each plane unit and pixel points in a target area formed by a connecting line of the corresponding first fitting point and the deformed radiation center and the target contour; wherein the first fitting point comprises a target key point corresponding to a key point of the contour of the object to be processed;
and the pixel mapping submodule is used for determining the pixel value of the pixel point in the corresponding target area according to the pixel value of the pixel point in each plane unit to obtain the target object.
14. The apparatus of claim 13, wherein the contour deformation module comprises:
the information acquisition submodule is used for acquiring the outline and the attitude angle information of the object to be processed in the image;
the parameter determining submodule is used for determining the direction and the scale of the deformation processing according to the contour and the attitude angle information of the object to be processed;
and the deformation processing submodule is used for carrying out deformation processing on the contour of the object to be processed according to the deformation processing direction and scale to obtain the target contour.
15. The apparatus of claim 14, wherein the deformation processing submodule comprises:
the center determining unit is used for determining the radiation center of the deformation treatment according to the deformation treatment direction;
a key point determining unit, configured to connect the radiation center and a key point of the contour of the object to be processed, and determine a target key point corresponding to the key point according to the scale of the deformation process on a connection line between the radiation center and the key point of the contour of the object to be processed or an extension line of the connection line;
and the contour fitting unit is used for taking the target key points as first fitting points and performing contour fitting according to the first fitting points to obtain the target contour.
16. The apparatus of claim 15, wherein the information acquisition sub-module comprises:
the key point acquisition unit is used for processing the image and acquiring the key points of the object to be processed in the image;
the information acquisition unit is used for estimating the attitude angle information of the object to be processed according to the key points of the object to be processed and acquiring the key points of the outline of the object to be processed;
and the parameter determining submodule is used for determining the direction and the scale of the deformation processing according to the key points of the object to be processed and the attitude angle information.
17. The apparatus according to claim 15, wherein the contour fitting unit is configured to obtain, according to the target key point, other points of the target contour except the target key point as first fitting points through Catmull-Rom interpolation; and take the target key point as the first fitting point, and sequentially connect the first fitting points to perform contour fitting to obtain a first broken line segment as the target contour.
18. The apparatus according to claim 15, wherein the contour fitting unit is further configured to use a key point of the contour of the object to be processed as a second fitting point, and perform contour fitting according to the second fitting point to obtain the contour of the object to be processed.
19. The apparatus according to claim 18, wherein the contour fitting unit is configured to obtain, according to the key points of the contour of the object to be processed, other points of the contour of the object to be processed except the key points as second fitting points through Catmull-Rom interpolation; and take the key points of the contour of the object to be processed as the second fitting points, and sequentially connect the second fitting points for contour fitting to obtain a second broken line segment as the contour of the object to be processed.
20. The apparatus of claim 15, wherein the unit dividing module is configured to connect the radiation center and the first fitting point to divide the object to be processed into a plurality of planar units.
21. The apparatus as claimed in claim 20, wherein the unit dividing module is configured to, in response to the target key point being on an extension line of the connecting line between the radiation center and the key point, connect the radiation center and the first fitting point, the connecting line between the radiation center and the first fitting point intersecting the contour of the object to be processed, and divide the object to be processed into a plurality of plane units; or, in response to the target key point being on the connecting line between the radiation center and the key point, connect the radiation center and the first fitting point, extend the connecting line between the radiation center and the first fitting point to intersect the contour of the object to be processed, and divide the object to be processed into a plurality of plane units.
22. The apparatus according to any one of claims 13-21, wherein the mapping sub-module is configured to construct a linear piecewise function according to an intersection of a straight line defined by the radiation center and the first pixel point in each of the planar units, and the corresponding contour of the object to be processed and the target contour;
the pixel mapping submodule is used for determining a second pixel point corresponding to the first pixel point in the corresponding target area based on the linear piecewise function; and determining the pixel value of the second pixel point according to the pixel value of the first pixel point to obtain the target object.
23. The apparatus according to claim 22, wherein the mapping sub-module is configured to extend a connecting line between the radiation center and the first fitting point to a third point, and sequentially connect the third point to obtain a third broken line segment, wherein the target contour and the contour of the object to be processed are located between the radiation center and the third broken line segment; and constructing the linear piecewise function according to the intersection points of the straight line determined by the radiation center and the first pixel point in the plane unit, the target contour, the contour of the object to be processed and the third broken line segment.
24. The apparatus of claim 22, wherein the pixel mapping sub-module is configured to:
determining the data type of the coordinate of the first pixel point;
responding to the fact that the data type of the coordinate of the first pixel point is a floating point number, determining the pixel value of the first pixel point through bilinear interpolation based on the coordinate of the first pixel point, and taking the pixel value of the first pixel point as the pixel value of the second pixel point to obtain the target object;
and in response to the fact that the data type of the coordinate of the first pixel point is an integer, taking the pixel value of the first pixel point as the pixel value of the second pixel point to obtain the target object.
25. An electronic device, characterized in that it comprises the apparatus of any of claims 13 to 24.
26. An electronic device, comprising:
a memory for storing executable instructions; and
a processor in communication with the memory to execute the executable instructions to perform the method of any of claims 1 to 12.
27. A computer storage medium storing computer readable instructions that, when executed, implement the method of any one of claims 1 to 12.
CN201811250835.7A 2018-10-25 2018-10-25 Image processing method and apparatus, electronic device, and computer storage medium Active CN109584168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811250835.7A CN109584168B (en) 2018-10-25 2018-10-25 Image processing method and apparatus, electronic device, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811250835.7A CN109584168B (en) 2018-10-25 2018-10-25 Image processing method and apparatus, electronic device, and computer storage medium

Publications (2)

Publication Number Publication Date
CN109584168A CN109584168A (en) 2019-04-05
CN109584168B true CN109584168B (en) 2021-05-04

Family

ID=65920889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811250835.7A Active CN109584168B (en) 2018-10-25 2018-10-25 Image processing method and apparatus, electronic device, and computer storage medium

Country Status (1)

Country Link
CN (1) CN109584168B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060287B (en) * 2019-04-26 2021-06-15 北京迈格威科技有限公司 Face image nose shaping method and device
CN110443256B (en) * 2019-07-03 2022-04-12 大连理工大学 Method for extracting multi-target regions of image
CN111031305A (en) * 2019-11-21 2020-04-17 北京市商汤科技开发有限公司 Image processing method and apparatus, image device, and storage medium
JP2022512262A (en) 2019-11-21 2022-02-03 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Image processing methods and equipment, image processing equipment and storage media
CN113706369A (en) * 2020-05-21 2021-11-26 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112926462B (en) * 2021-03-01 2023-02-07 创新奇智(西安)科技有限公司 Training method and device, action recognition method and device and electronic equipment
CN113420721B (en) * 2021-07-21 2022-03-29 北京百度网讯科技有限公司 Method and device for labeling key points of image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105407270A (en) * 2014-09-09 2016-03-16 卡西欧计算机株式会社 Image Correcting Apparatus, Image Correcting Method And Computer Readable Recording Medium Recording Program Thereon
CN108346130A (en) * 2018-03-20 2018-07-31 北京奇虎科技有限公司 Image processing method, device and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10438631B2 (en) * 2014-02-05 2019-10-08 Snap Inc. Method for real-time video processing involving retouching of an object in the video
US10559062B2 (en) * 2015-10-22 2020-02-11 Korea Institute Of Science And Technology Method for automatic facial impression transformation, recording medium and device for performing the method
CN106204665B (en) * 2016-06-27 2019-04-30 深圳市金立通信设备有限公司 A kind of image processing method and terminal
JP6421794B2 (en) * 2016-08-10 2018-11-14 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
CN107657590B (en) * 2017-09-01 2021-01-15 北京小米移动软件有限公司 Picture processing method and device and storage medium
CN107818305B (en) * 2017-10-31 2020-09-22 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium


Also Published As

Publication number Publication date
CN109584168A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
CN109584168B (en) Image processing method and apparatus, electronic device, and computer storage medium
WO2022012085A1 (en) Face image processing method and apparatus, storage medium, and electronic device
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN112819947A (en) Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN107452049B (en) Three-dimensional head modeling method and device
CN105096353B (en) Image processing method and device
CN113688907B (en) A model training and video processing method, which comprises the following steps, apparatus, device, and storage medium
US11631154B2 (en) Method, apparatus, device and storage medium for transforming hairstyle
CN112927362A (en) Map reconstruction method and device, computer readable medium and electronic device
CN112652057B (en) Method, device, equipment and storage medium for generating human body three-dimensional model
US20220375258A1 (en) Image processing method and apparatus, device and storage medium
WO2022165722A1 (en) Monocular depth estimation method, apparatus and device
CN115147265A (en) Virtual image generation method and device, electronic equipment and storage medium
CN113808249B (en) Image processing method, device, equipment and computer storage medium
CN116342782A (en) Method and apparatus for generating avatar rendering model
WO2022237089A1 (en) Image processing method and apparatus, and device, storage medium, program product and program
CN113223137B (en) Generation method and device of perspective projection human face point cloud image and electronic equipment
US20210279928A1 (en) Method and apparatus for image processing
CN111652795A (en) Face shape adjusting method, face shape adjusting device, live broadcast method, live broadcast device, electronic equipment and storage medium
CN113658035A (en) Face transformation method, device, equipment, storage medium and product
CN113496506A (en) Image processing method, device, equipment and storage medium
CN112528707A (en) Image processing method, device, equipment and storage medium
CN116977539A (en) Image processing method, apparatus, computer device, storage medium, and program product
CN111652807B (en) Eye adjusting and live broadcasting method and device, electronic equipment and storage medium
WO2020215854A1 (en) Method and apparatus for rendering image, electronic device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant