CN110502993B - Image processing method, image processing device, electronic equipment and storage medium

Publication number: CN110502993B
Application number: CN201910652362.1A
Authority: CN (China)
Other versions: CN110502993A
Inventors: 赵伟, 王聪
Assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd; priority to CN201910652362.1A; application granted; legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G06T 5/77
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G06T 2207/30201 - Face

Abstract

The present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium. The image processing method includes: identifying a face region in an image, and selecting a plurality of original points in the face region; determining a user manipulation point and a manipulation associated point according to an operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the user manipulation point is less than a preset threshold; determining an offset vector of the manipulation associated point according to the position change of the user manipulation point; moving the manipulation associated point along the offset vector to obtain a current position corresponding to the manipulation associated point; and coloring the current position according to the texture information of the manipulation associated point to obtain an image in which the face region has been deformed. With this technical solution, beautification effects such as face slimming can be achieved directly through the user's operations on the screen, and the beautification process is simple to operate and visible in real time.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
The photographing function of mobile terminals allows users to record and share their daily life and surrounding scenery anytime and anywhere without carrying a separate camera, and is therefore widely popular among users.
At present, many mobile terminals on the market provide beautification functions. However, to adjust an image, the user must first enter an editing page, select the beautification function, and then repeatedly adjust various beautification items such as face slimming and eye enlarging, making the interaction process cumbersome.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium, so as to at least solve the problem in the related art that the beautification procedure is cumbersome. The technical solution of the disclosure is as follows:
according to a first aspect of the present disclosure, there is provided an image processing method, the method comprising:
identifying a face region in an image, and selecting a plurality of original points in the face region;
determining a user manipulation point and a manipulation associated point according to an operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the user manipulation point is less than a preset threshold;
determining an offset vector of the manipulation associated point according to the position change of the user manipulation point;
moving the manipulation associated point along the offset vector to obtain a current position corresponding to the manipulation associated point;
and coloring the current position according to the texture information of the manipulation associated point to obtain an image in which the face region has been deformed.
In an optional implementation manner, the step of selecting a plurality of original points in the face region includes:
detecting keypoints in the face region;
and performing interpolation on the key points to obtain extension points of the face region, wherein the original points include the key points and the extension points.
In an optional implementation manner, the step of determining a user manipulation point and a manipulation associated point according to an operation instruction of a user on a screen includes:
determining a first manipulation point and a manipulation associated point according to a first operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the first manipulation point is less than a preset threshold;
the step of determining the offset vector of the manipulation associated point according to the position change of the user manipulation point includes:
determining a second manipulation point according to a second operation instruction of the user on the screen;
and determining an offset vector of the manipulation associated point according to the position change from the first manipulation point to the second manipulation point.
In an optional implementation manner, the step of determining a first manipulation point and a manipulation associated point according to a first operation instruction of a user on a screen includes:
determining a click position as the first manipulation point according to a click operation of the user on the screen;
and determining an original point whose distance from the first manipulation point is less than the preset threshold as the manipulation associated point.
In an optional implementation manner, the step of determining a second manipulation point according to a second operation instruction of the user on the screen includes:
determining the position at which dragging stops as the second manipulation point according to a drag operation of the user on the screen.
In an optional implementation manner, the step of determining an offset vector of the manipulation associated point according to the position change from the first manipulation point to the second manipulation point includes:
calculating the ratio of the distance between the manipulation associated point and the first manipulation point to the preset threshold;
and determining the offset vector of the manipulation associated point according to the distance between the first manipulation point and the second manipulation point and the ratio, wherein the direction of the offset vector is the direction from the first manipulation point to the second manipulation point.
In an optional implementation manner, the step of coloring the current position according to the texture information of the manipulation associated point to obtain an image in which the face region has been deformed includes:
inputting the coordinate information of the current position and the texture information of the manipulation associated point into a shader, coloring the current position, and outputting the texture information of the current position;
and obtaining the image in which the face region has been deformed according to the coordinate information of the current position and the texture information of the current position.
According to a second aspect of the present disclosure, there is provided an image processing apparatus, the apparatus comprising:
a first module configured to identify a face region in an image and select a plurality of original points in the face region;
a second module configured to determine a user manipulation point and a manipulation associated point according to an operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the user manipulation point is less than a preset threshold;
a third module configured to determine an offset vector of the manipulation associated point according to the position change of the user manipulation point;
a fourth module configured to move the manipulation associated point along the offset vector to obtain a current position corresponding to the manipulation associated point;
and a fifth module configured to color the current position according to the texture information of the manipulation associated point to obtain an image in which the face region has been deformed.
In an optional implementation, the first module is specifically configured to:
detecting keypoints in the face region;
and performing interpolation on the key points to obtain extension points of the face region, wherein the original points include the key points and the extension points.
In an optional implementation, the second module includes:
a first operation unit configured to determine a first manipulation point and a manipulation associated point according to a first operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the first manipulation point is less than a preset threshold;
the third module includes:
a second operation unit configured to determine a second manipulation point according to a second operation instruction of the user on the screen;
and a determination unit configured to determine an offset vector of the manipulation associated point according to the position change from the first manipulation point to the second manipulation point.
In an optional implementation manner, the first operation unit is specifically configured to:
determining a click position as the first manipulation point according to a click operation of the user on the screen;
and determining an original point whose distance from the first manipulation point is less than the preset threshold as the manipulation associated point.
In an optional implementation, the second operating unit is specifically configured to:
determining the position at which dragging stops as the second manipulation point according to a drag operation of the user on the screen.
In an optional implementation manner, the determining unit is specifically configured to:
calculating the ratio of the distance between the manipulation associated point and the first manipulation point to the preset threshold;
and determining the offset vector of the manipulation associated point according to the distance between the first manipulation point and the second manipulation point and the ratio, wherein the direction of the offset vector is the direction from the first manipulation point to the second manipulation point.
In an optional implementation manner, the fifth module is specifically configured to:
inputting the coordinate information of the current position and the texture information of the manipulation associated point into a shader, coloring the current position, and outputting the texture information of the current position;
and obtaining the image in which the face region has been deformed according to the coordinate information of the current position and the texture information of the current position.
According to a third aspect of the present disclosure, there is provided an electronic apparatus comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the image processing method according to the first aspect.
According to a fourth aspect of the present disclosure, there is provided a storage medium storing instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method according to the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product, wherein the instructions of the computer program product, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method according to the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the technical scheme, the user control point and the control associated point within the influence range of the user control point can be determined according to the operation instruction of the user on the screen; then determining an offset vector of the operation associated point according to the position change of the user operation point on the screen; the operation and control associated point moves along the offset vector of the operation and control associated point, and the current position corresponding to the operation and control associated point is obtained; and then, according to the texture information of the operation associated point, the current position is colored, and an image obtained after the face area is deformed can be obtained. According to the technical scheme, the beautifying effects such as face slimming and the like can be achieved through the operation of the user on the screen, and the beautifying process is simple to operate and visible in real time.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating another image processing method according to an exemplary embodiment.
FIG. 3 is a flow diagram illustrating another image processing method according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
FIG. 5 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flow chart illustrating an image processing method according to an exemplary embodiment, as shown in fig. 1, the method including the following steps.
In step S11, a face region in the image is recognized, and a plurality of original points are selected in the face region.
The image may be a preview image, an image already stored in the electronic device, or an image downloaded over a network. The original points may include key points of the face region (e.g., 21 or 101 key points) and may further include extension points derived from the key points.
In an optional implementation manner, the execution subject of the present embodiment may be an electronic device. After the image is acquired, the electronic device may perform face recognition on the image through a face recognition algorithm to determine the face region in the image; key points in the face region may then be detected using a key point detection technique (e.g., a face detection SDK). This step may further include: performing interpolation on the key points to obtain extension points of the face region, wherein the original points include the key points and the extension points, and all the original points form an original point set face_points.
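As a rough illustration of this step, the following Python sketch builds face_points from detected key points by midpoint interpolation. It is a minimal sketch assuming 2-D pixel coordinates; the function name, the (N, 2) array layout, and the midpoint scheme are illustrative assumptions, since the disclosure does not fix a particular interpolation method.

```python
import numpy as np

def build_original_points(keypoints: np.ndarray) -> np.ndarray:
    """Form the original point set face_points from detected key points.

    `keypoints` is an (N, 2) array of face key point coordinates, e.g. the
    21 or 101 points returned by a face-detection SDK. Extension points are
    produced here by midpoint interpolation between consecutive key points;
    this is one possible interpolation, not the one mandated by the patent.
    """
    midpoints = (keypoints[:-1] + keypoints[1:]) / 2.0  # one extension point per adjacent pair
    return np.vstack([keypoints, midpoints])            # key points + extension points

# Illustrative usage with three detected key points:
kps = np.array([[120.0, 200.0], [140.0, 210.0], [160.0, 205.0]])
face_points = build_original_points(kps)  # shape (5, 2): 3 key points + 2 extension points
```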
In step S12, a user manipulation point and a manipulation associated point are determined according to an operation instruction of the user on the screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the user manipulation point is less than a preset threshold.
A manipulation associated point is a point in the original point set that is affected by the user's operation on the screen, specifically an original point whose distance from the user manipulation point is less than the preset threshold. The size of the preset threshold may be determined according to factors such as the screen size; for example, it may be set to 20% of the screen width. The present embodiment does not limit its specific value.
The user's operation on the (touch) screen may include a click operation, a long-press operation, and the like. For example, when the operation is a click, the click position may be determined as the user manipulation point; when the operation is a long press, the long-press position may be determined as the user manipulation point.
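A minimal sketch of selecting the manipulation associated points follows, assuming the 20%-of-screen-width threshold mentioned above; the function name and array shapes are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def select_associated_points(face_points: np.ndarray,
                             manipulation_point: np.ndarray,
                             screen_width: float) -> np.ndarray:
    """Return the indices of original points within the influence radius.

    The preset threshold is taken as 20% of the screen width, as in the
    embodiment; the disclosure itself does not fix this value.
    """
    threshold = 0.2 * screen_width
    dists = np.linalg.norm(face_points - manipulation_point, axis=1)
    return np.nonzero(dists < threshold)[0]

# Illustrative usage: a click at (130, 205) on a 1080-pixel-wide screen.
face_points = np.array([[120.0, 200.0], [140.0, 210.0], [900.0, 50.0]])
assoc_idx = select_associated_points(face_points, np.array([130.0, 205.0]), 1080.0)
# assoc_idx -> array([0, 1]); the far-away third point is unaffected
```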
In step S13, an offset vector of the manipulation associated point is determined according to the position change of the user manipulation point.
This step can be implemented in various ways. For example, when the user's operations on the screen are a click followed by a drag, the user manipulation point moves from the click position to the position at which the drag stops, and the offset vector of the manipulation associated point is determined from this position change. When the user's operations are a first click followed by a second click, the user manipulation point moves from the first click position to the second click position, and the offset vector of the manipulation associated point is determined from the change between the two click positions, and so on.
In step S14, the manipulation associated point is moved along the offset vector, and the current position corresponding to the manipulation associated point is obtained.
All the manipulation associated points are moved along their respectively computed offset vectors, and the original point set face_points becomes the offset current point set face_points_moved.
That is, the point set before the offset is face_points (which contains the manipulation associated points), and the point set after the offset is face_points_moved (which contains the positions of the original points that were not offset together with the current positions corresponding to the manipulation associated points).
In step S15, the current position is colored according to the texture information of the manipulation associated point, and an image in which the face region has been deformed is obtained.
In an optional implementation manner, the coordinate information of the current position and the texture information of the manipulation associated point may be input into a shader, the current position is colored, and the texture information of the current position is output; the image in which the face region has been deformed is then obtained according to the coordinate information and the texture information of the current position.
In practical applications, the texture information (texture coordinates) of the pre-offset point set face_points and the coordinate information (as vertex data) of the offset point set face_points_moved may be passed to the shaders of OpenGL and processed by an OpenGL fragment shader to obtain the texture information (e.g., RGBA data) of each point in face_points_moved; the deformed face region can then be displayed on the screen according to the texture information and coordinate information of each point in face_points_moved.
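As a rough sketch of the data handed to the OpenGL pipeline, the following Python snippet derives texture coordinates from the pre-offset points and vertex positions from the offset points. The normalization conventions and the function name are illustrative assumptions, and the actual triangulation and draw call are omitted.

```python
import numpy as np

def to_gl_attributes(face_points: np.ndarray, face_points_moved: np.ndarray,
                     width: int, height: int):
    """Derive per-vertex shader inputs from the two point sets.

    Texture coordinates come from the pre-offset points (pixels normalized
    to [0, 1], so the source image is sampled at the original locations);
    vertex positions come from the offset points, mapped to normalized
    device coordinates in [-1, 1] with the y axis flipped because screen
    y grows downward.
    """
    size = np.array([width, height], dtype=np.float32)
    tex_coords = face_points / size
    positions = face_points_moved / size * 2.0 - 1.0
    positions[:, 1] = -positions[:, 1]
    return positions.astype(np.float32), tex_coords.astype(np.float32)
```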
With the image processing method provided by this embodiment, a user manipulation point and the manipulation associated points within its influence range can be determined according to the user's operation on the screen; an offset vector of each manipulation associated point is then determined according to the position change of the user manipulation point on the screen; each manipulation associated point is moved along its offset vector to obtain its corresponding current position; and the current positions are colored according to the texture information of the manipulation associated points, yielding an image in which the face region has been deformed. In this way, beautification effects such as face slimming can be achieved directly through the user's operations on the screen, and the beautification process is simple to operate and visible in real time.
Fig. 2 is a flowchart illustrating an image processing method according to another exemplary embodiment, which includes the following steps, as shown in fig. 2.
In step S21, a face region in the image is recognized, and a plurality of original points are selected in the face region.
Step S21 in this embodiment is the same as or similar to step S11 in the previous embodiment and is not repeated here; the description below focuses on the differences from the previous embodiment.
In step S22, a first manipulation point and a manipulation associated point are determined according to a first operation instruction of the user on the screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the first manipulation point is less than a preset threshold.
In an optional implementation manner, a click position may be determined as the first manipulation point according to a click operation of the user on the screen, and an original point whose distance from the first manipulation point is less than the preset threshold is determined as a manipulation associated point.
In practical applications, a click event of the user on the screen is captured, and the click position startpoint, i.e., the first manipulation point, is recorded. The original points in the point set face_points whose distance from the first manipulation point is less than the preset threshold are then computed and determined as the manipulation associated points, which form a point set S_startpoints. In the present embodiment the preset threshold is set to 20% of the screen width. It should be noted that the first manipulation point may lie in the face region or outside it.
In step S23, a second manipulation point is determined according to a second operation instruction of the user on the screen.
In an optional implementation manner, the position at which dragging stops may be determined as the second manipulation point according to a drag operation of the user on the screen. In practical applications, the user's finger drags across the screen, and the position currentpoint at which the drag stops, i.e., the second manipulation point, is obtained.
In step S24, an offset vector of the manipulation associated point is determined according to the position change from the first manipulation point to the second manipulation point.
In an optional implementation manner, the ratio of the distance between the manipulation associated point and the first manipulation point to the preset threshold may be calculated, and the offset vector of the manipulation associated point is determined according to this ratio and the distance between the first manipulation point and the second manipulation point, wherein the direction of the offset vector is the direction from the first manipulation point to the second manipulation point.
In practical applications, let percentage_dist denote the ratio of the distance from a manipulation associated point in S_startpoints to the first manipulation point startpoint to the preset threshold, and let currentpoint - startpoint denote the displacement from the first manipulation point to the second manipulation point; then (currentpoint - startpoint) * (1 - percentage_dist) may be used as the offset vector of that manipulation associated point.
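The following Python sketch applies this rule. The variable names mirror those in the text (startpoint, currentpoint, percentage_dist, face_points_moved), while the function name and the (N, 2) array layout are illustrative assumptions; this is a minimal sketch of the embodiment's offset rule, not a definitive implementation.

```python
import numpy as np

def offset_associated_points(face_points: np.ndarray, assoc_idx: np.ndarray,
                             startpoint: np.ndarray, currentpoint: np.ndarray,
                             threshold: float) -> np.ndarray:
    """Move each manipulation associated point by (currentpoint - startpoint) * (1 - percentage_dist).

    Points close to the first manipulation point travel almost the full drag
    displacement; points near the edge of the influence radius barely move,
    which yields a smooth, local deformation of the face region.
    """
    face_points_moved = face_points.copy()
    drag = currentpoint - startpoint  # displacement from first to second manipulation point
    for i in assoc_idx:
        percentage_dist = np.linalg.norm(face_points[i] - startpoint) / threshold
        face_points_moved[i] = face_points[i] + drag * (1.0 - percentage_dist)
    return face_points_moved
```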
In step S25, the manipulation associated point is moved along the offset vector, and the current position corresponding to the manipulation associated point is obtained.
All the manipulation associated points in S_startpoints are moved along their respectively computed offset vectors, and the original point set face_points becomes the offset current point set face_points_moved.
In step S26, the current position is colored according to the texture information of the manipulation associated point, and an image in which the face region has been deformed is obtained.
Steps S25 to S26 in this embodiment are the same as or similar to steps S14 to S15 in the previous embodiment and are not repeated here. Fig. 3 shows a flowchart of the image processing method of this exemplary embodiment; a composition of the earlier sketches is shown below.
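Putting the pieces together, a hypothetical end-to-end pass over one click-and-drag gesture might look as follows. All function names come from the earlier illustrative sketches, not from the disclosure, and this driver assumes those sketches are in scope.

```python
import numpy as np

# Hypothetical driver for one click-and-drag gesture, reusing the sketches above.
screen_w, screen_h = 1080, 1920
threshold = 0.2 * screen_w

kps = np.array([[500.0, 900.0], [540.0, 930.0], [580.0, 910.0]])
face_points = build_original_points(kps)                          # step S21

startpoint = np.array([520.0, 915.0])                             # click (step S22)
assoc_idx = select_associated_points(face_points, startpoint, screen_w)

currentpoint = np.array([560.0, 915.0])                           # drag end (step S23)
face_points_moved = offset_associated_points(face_points, assoc_idx,
                                             startpoint, currentpoint,
                                             threshold)           # steps S24-S25

positions, tex_coords = to_gl_attributes(face_points, face_points_moved,
                                         screen_w, screen_h)     # data for step S26
```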
With the image processing method provided by this embodiment, beautification effects can be achieved through the user's operations on the screen, such as click and drag operations; the whole beautification process is simple to operate, the effect is visible in real time, and the user's interaction cost is reduced.
Fig. 4 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 4, the apparatus includes:
a first module 41 configured to identify a face region in an image and select a plurality of original points in the face region;
a second module 42 configured to determine a user manipulation point and a manipulation associated point according to an operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the user manipulation point is less than a preset threshold;
a third module 43 configured to determine an offset vector of the manipulation associated point according to the position change of the user manipulation point;
a fourth module 44 configured to move the manipulation associated point along the offset vector to obtain a current position corresponding to the manipulation associated point;
and a fifth module 45 configured to color the current position according to the texture information of the manipulation associated point to obtain an image in which the face region has been deformed.
The image may be a preview image, an image already stored in the electronic device, or an image downloaded over a network. The original points may include key points of the face region (e.g., 21 or 101 key points) and may further include extension points derived from the key points.
In an optional implementation manner, after the image is acquired, the first module 41 may perform face recognition on the image through a face recognition algorithm to determine the face region in the image; key points in the face region may then be detected using a key point detection technique (e.g., a face detection SDK). The first module 41 may further perform interpolation on the key points to obtain extension points of the face region, wherein the original points include the key points and the extension points, and all the original points form an original point set face_points.
A manipulation associated point is a point in the original point set that is affected by the user's operation on the screen, specifically an original point whose distance from the user manipulation point is less than the preset threshold. The size of the preset threshold may be determined according to factors such as the screen size; for example, it may be set to 20% of the screen width. The present embodiment does not limit its specific value.
The user's operation on the (touch) screen may include a click operation, a long-press operation, and the like. For example, when the operation is a click, the second module 42 may determine the click position as the user manipulation point; when the operation is a long press, the second module 42 may determine the long-press position as the user manipulation point.
When the user's operations on the screen are a click followed by a drag, the user manipulation point moves from the click position to the position at which the drag stops, and the third module 43 may determine the offset vector of the manipulation associated point from this position change; when the user's operations are a first click followed by a second click, the user manipulation point moves from the first click position to the second click position, and the third module 43 may determine the offset vector of the manipulation associated point from the change between the two click positions, and so on.
The fourth module 44 moves all the manipulation associated points along their respectively computed offset vectors, and the original point set face_points becomes the offset current point set face_points_moved.
That is, the point set before the offset is face_points (which contains the manipulation associated points), and the point set after the offset is face_points_moved (which contains the positions of the original points that were not offset together with the current positions corresponding to the manipulation associated points).
In an optional implementation manner, the fifth module 45 may input the coordinate information of the current position and the texture information of the manipulation associated point into a shader, color the current position, and output the texture information of the current position; the image in which the face region has been deformed is then obtained according to the coordinate information and the texture information of the current position.
In practical applications, the fifth module 45 may pass the texture information (texture coordinates) of the pre-offset point set face_points and the coordinate information (as vertex data) of the offset point set face_points_moved to the shaders of OpenGL, obtain the texture information (e.g., RGBA data) of each point in face_points_moved through processing by an OpenGL fragment shader, and display the deformed face region on the screen according to the texture information and coordinate information of each point in face_points_moved.
With the image processing apparatus provided by this embodiment, a user manipulation point and the manipulation associated points within its influence range can be determined according to the user's operation on the screen; an offset vector of each manipulation associated point is then determined according to the position change of the user manipulation point on the screen; each manipulation associated point is moved along its offset vector to obtain its corresponding current position; and the current positions are colored according to the texture information of the manipulation associated points, yielding an image in which the face region has been deformed. In this way, beautification effects such as face slimming can be achieved directly through the user's operations on the screen, and the beautification process is simple to operate and visible in real time.
In an alternative implementation, the second module 42 includes:
a first operation unit configured to determine a first manipulation point and a manipulation associated point according to a first operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the first manipulation point is less than a preset threshold;
the third module 43 includes:
a second operation unit configured to determine a second manipulation point according to a second operation instruction of the user on the screen;
and a determination unit configured to determine an offset vector of the manipulation associated point according to the position change from the first manipulation point to the second manipulation point.
In an optional implementation manner, the first operation unit may determine a click position as the first manipulation point according to a click operation of the user on the screen, and determine an original point whose distance from the first manipulation point is less than the preset threshold as a manipulation associated point.
In practical applications, the first operation unit captures a click event of the user on the screen and records the click position startpoint, i.e., the first manipulation point. The original points in the point set face_points whose distance from the first manipulation point is less than the preset threshold are then computed and determined as the manipulation associated points, which form a point set S_startpoints. In the present embodiment the preset threshold is set to 20% of the screen width. It should be noted that the first manipulation point may lie in the face region or outside it.
In an optional implementation manner, the second operation unit may determine the position at which dragging stops as the second manipulation point according to a drag operation of the user on the screen. In practical applications, the user's finger drags across the screen, and the position currentpoint at which the drag stops, i.e., the second manipulation point, is obtained.
In an optional implementation manner, the determination unit may calculate the ratio of the distance between the manipulation associated point and the first manipulation point to the preset threshold, and determine the offset vector of the manipulation associated point according to this ratio and the distance between the first manipulation point and the second manipulation point, wherein the direction of the offset vector is the direction from the first manipulation point to the second manipulation point.
In practical applications, let percentage_dist denote the ratio of the distance from a manipulation associated point in S_startpoints to the first manipulation point startpoint to the preset threshold, and let currentpoint - startpoint denote the displacement from the first manipulation point to the second manipulation point; then (currentpoint - startpoint) * (1 - percentage_dist) may be used as the offset vector of that manipulation associated point. All the manipulation associated points in S_startpoints are moved along their respectively computed offset vectors, and the original point set face_points becomes the offset current point set face_points_moved.
With the image processing apparatus provided by this embodiment, beautification effects can be achieved through the user's operations on the screen, such as click and drag operations; the whole beautification process is simple to operate, the effect is visible in real time, and the user's interaction cost is reduced.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 5 is a block diagram of one type of electronic device 800 shown in the present disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the image processing method described in any of the embodiments. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the image processing methods described in any of the embodiments.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the image processing method of any of the embodiments is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, which comprises readable program code executable by the processor 820 of the device 800 to perform the image processing method according to any of the embodiments. Alternatively, the program code may be stored in a storage medium of the apparatus 800, which may be a non-transitory computer readable storage medium, for example, ROM, Random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
Fig. 6 is a block diagram of one type of electronic device 1900 shown in the present disclosure. For example, the electronic device 1900 may be provided as a server.
Referring to fig. 6, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the image processing method according to any of the embodiments.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
A1, an image processing method, the method comprising:
identifying a face region in an image, and selecting a plurality of original points in the face region;
determining a user manipulation point and a manipulation associated point according to an operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the user manipulation point is less than a preset threshold;
determining an offset vector of the manipulation associated point according to the position change of the user manipulation point;
moving the manipulation associated point along the offset vector to obtain a current position corresponding to the manipulation associated point;
and coloring the current position according to the texture information of the manipulation associated point to obtain an image in which the face region has been deformed.
A2, the image processing method according to A1, wherein the step of selecting a plurality of original points in the face region includes:
detecting keypoints in the face region;
and performing interpolation on the key points to obtain extension points of the face region, wherein the original points include the key points and the extension points.
A3, the image processing method according to A1, wherein the step of determining a user manipulation point and a manipulation associated point according to an operation instruction of a user on a screen includes:
determining a first manipulation point and a manipulation associated point according to a first operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the first manipulation point is less than a preset threshold;
the step of determining the offset vector of the manipulation associated point according to the position change of the user manipulation point includes:
determining a second manipulation point according to a second operation instruction of the user on the screen;
and determining an offset vector of the manipulation associated point according to the position change from the first manipulation point to the second manipulation point.
A4, the image processing method according to A3, wherein the step of determining a first manipulation point and a manipulation associated point according to a first operation instruction of a user on a screen includes:
determining a click position as the first manipulation point according to a click operation of the user on the screen;
and determining an original point whose distance from the first manipulation point is less than the preset threshold as the manipulation associated point.
A5, the image processing method according to A3, wherein the step of determining a second manipulation point according to a second operation instruction of the user on the screen includes:
determining the position at which dragging stops as the second manipulation point according to a drag operation of the user on the screen.
A6, the image processing method according to A3, wherein the step of determining the offset vector of the manipulation associated point according to the position change from the first manipulation point to the second manipulation point includes:
calculating the ratio of the distance between the manipulation associated point and the first manipulation point to the preset threshold;
and determining the offset vector of the manipulation associated point according to the distance between the first manipulation point and the second manipulation point and the ratio, wherein the direction of the offset vector is the direction from the first manipulation point to the second manipulation point.
A7, the image processing method according to A1, wherein the step of coloring the current position according to the texture information of the manipulation associated point to obtain an image in which the face region has been deformed includes:
inputting the coordinate information of the current position and the texture information of the manipulation associated point into a shader, coloring the current position, and outputting the texture information of the current position;
and obtaining the image in which the face region has been deformed according to the coordinate information of the current position and the texture information of the current position.
A8, an image processing apparatus, the apparatus comprising:
a first module configured to identify a face region in an image and select a plurality of original points in the face region;
a second module configured to determine a user manipulation point and a manipulation associated point according to an operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the user manipulation point is less than a preset threshold;
a third module configured to determine an offset vector of the manipulation associated point according to the position change of the user manipulation point;
a fourth module configured to move the manipulation associated point along the offset vector to obtain a current position corresponding to the manipulation associated point;
and a fifth module configured to color the current position according to the texture information of the manipulation associated point to obtain an image in which the face region has been deformed.
A9, the image processing apparatus of A8, the first module being specifically configured to:
detecting keypoints in the face region;
and performing interpolation on the key points to obtain extension points of the face region, wherein the original points include the key points and the extension points.
A10, the image processing apparatus of A8, the second module comprising:
a first operation unit configured to determine a first manipulation point and a manipulation associated point according to a first operation instruction of a user on a screen, wherein the manipulation associated point is an original point, among the plurality of original points, whose distance from the first manipulation point is less than a preset threshold;
the third module includes:
a second operation unit configured to determine a second manipulation point according to a second operation instruction of the user on the screen;
and a determination unit configured to determine an offset vector of the manipulation associated point according to the position change from the first manipulation point to the second manipulation point.
A11, the image processing apparatus according to A10, the first operation unit being specifically configured to:
determine the click position as the first manipulation point according to a click operation of the user on the screen;
and determine an original point whose distance from the first manipulation point is less than the preset threshold as the manipulation-associated point.
A12, the image processing apparatus according to A10, the second operation unit being specifically configured to:
determine, according to a dragging operation of the user on the screen, the position where the dragging stops as the second manipulation point.
A13, the image processing apparatus of A10, the determination unit being specifically configured to:
calculate a ratio of the distance between the manipulation-associated point and the first manipulation point to the preset threshold;
and determine the offset vector of the manipulation-associated point according to the distance between the first manipulation point and the second manipulation point and the ratio, wherein the direction of the offset vector is the direction from the first manipulation point to the second manipulation point.
A14, the image processing apparatus of A8, the fifth module being specifically configured to:
input the coordinate information of the current position and the texture information of the manipulation-associated point into a shader, shade the current position, and output texture information of the current position;
and obtain the image in which the face region has been deformed according to the coordinate information of the current position and the texture information of the current position.

Claims (14)

1. An image processing method, characterized in that the method comprises:
identifying a face region in an image, and selecting a plurality of original points in the face region;
determining a user manipulation point and a manipulation-associated point according to a first operation instruction of a user on a screen, wherein the manipulation-associated point is an original point, among the plurality of original points, whose distance from the user manipulation point is less than a preset threshold;
determining an offset vector of the manipulation-associated point according to a position change of the user manipulation point, wherein the position change of the user manipulation point is determined according to a second operation instruction of the user on the screen;
moving the manipulation-associated point along the offset vector to obtain a current position corresponding to the manipulation-associated point;
shading the current position according to texture information of the manipulation-associated point to obtain an image in which the face region has been deformed;
wherein determining the offset vector of the manipulation-associated point according to the position change of the user manipulation point comprises:
determining a first manipulation point according to the first operation instruction of the user on the screen, and determining a second manipulation point according to the second operation instruction of the user on the screen;
calculating a ratio of the distance between the manipulation-associated point and the first manipulation point to the preset threshold;
and determining the offset vector of the manipulation-associated point according to the distance between the first manipulation point and the second manipulation point and the ratio, wherein the direction of the offset vector is the direction from the first manipulation point to the second manipulation point, and the length of the offset vector is obtained by adjusting the distance between the first manipulation point and the second manipulation point by the ratio.
2. The image processing method according to claim 1, wherein the step of selecting a plurality of original points in the face region comprises:
detecting keypoints in the face region;
and interpolating between the keypoints to obtain extension points of the face region, wherein the original points comprise the keypoints and the extension points.
3. The image processing method according to claim 1, wherein the step of determining a user manipulation point and a manipulation-associated point according to a first operation instruction of a user on a screen comprises:
determining a first manipulation point and a manipulation-associated point according to the first operation instruction of the user on the screen, wherein the manipulation-associated point is an original point, among the plurality of original points, whose distance from the first manipulation point is less than the preset threshold.
4. The image processing method according to claim 3, wherein the step of determining a first manipulation point and a manipulation-associated point according to a first operation instruction of a user on a screen comprises:
determining the click position as the first manipulation point according to a click operation of the user on the screen;
and determining an original point whose distance from the first manipulation point is less than the preset threshold as the manipulation-associated point.
5. The image processing method according to claim 3, wherein the step of determining a second manipulation point according to a second operation instruction of the user on the screen comprises:
determining, according to a dragging operation of the user on the screen, the position where the dragging stops as the second manipulation point.
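Read together, claims 4 and 5 map a single touch gesture onto the two manipulation points. A hedged Python sketch of that mapping (the event names and the DragGesture class are invented for illustration, not taken from the patent):

    import numpy as np

    class DragGesture:
        def __init__(self, original_points, threshold):
            self.original_points = np.asarray(original_points, dtype=float)
            self.threshold = threshold
            self.first_point = None
            self.associated_idx = None

        def on_touch_down(self, x, y):
            # The click position becomes the first manipulation point (claim 4),
            # and original points within the preset threshold become the
            # manipulation-associated points.
            self.first_point = np.array([x, y], dtype=float)
            dists = np.linalg.norm(self.original_points - self.first_point, axis=1)
            self.associated_idx = np.where(dists < self.threshold)[0]

        def on_touch_up(self, x, y):
            # The position where dragging stops becomes the second manipulation
            # point (claim 5); the caller can then compute the offset vectors.
            return self.first_point, np.array([x, y], dtype=float), self.associated_idx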
6. The image processing method according to claim 1, wherein the step of shading the current position according to the texture information of the manipulation-associated point to obtain an image in which the face region has been deformed comprises:
inputting the coordinate information of the current position and the texture information of the manipulation-associated point into a shader, shading the current position, and outputting texture information of the current position;
and obtaining the image in which the face region has been deformed according to the coordinate information of the current position and the texture information of the current position.
7. An image processing apparatus, characterized in that the apparatus comprises:
a first module configured to identify a face region in an image and select a plurality of original points in the face region;
a second module configured to determine a user manipulation point and a manipulation-associated point according to a first operation instruction of a user on a screen, wherein the manipulation-associated point is an original point, among the plurality of original points, whose distance from the user manipulation point is less than a preset threshold;
a third module configured to determine an offset vector of the manipulation-associated point according to a position change of the user manipulation point, the position change of the user manipulation point being determined according to a second operation instruction of the user on the screen;
a fourth module configured to move the manipulation-associated point along the offset vector to obtain a current position corresponding to the manipulation-associated point;
and a fifth module configured to shade the current position according to the texture information of the manipulation-associated point to obtain an image in which the face region has been deformed;
wherein the third module comprises: a second operation unit configured to determine a first manipulation point according to the first operation instruction of the user on the screen and to determine a second manipulation point according to the second operation instruction of the user on the screen;
and a determination unit configured to determine the offset vector of the manipulation-associated point according to the position change of the first manipulation point and the second manipulation point, by:
calculating a ratio of the distance between the manipulation-associated point and the first manipulation point to the preset threshold;
and determining the offset vector of the manipulation-associated point according to the distance between the first manipulation point and the second manipulation point and the ratio, wherein the direction of the offset vector is the direction from the first manipulation point to the second manipulation point, and the length of the offset vector is obtained by adjusting the distance between the first manipulation point and the second manipulation point by the ratio.
8. The image processing apparatus according to claim 7, wherein the first module is specifically configured to:
detect keypoints in the face region;
and interpolate between the keypoints to obtain extension points of the face region, wherein the original points comprise the keypoints and the extension points.
9. The image processing apparatus according to claim 7, wherein the second module comprises:
a first operation unit configured to determine a first manipulation point and a manipulation-associated point according to a first operation instruction of the user on the screen, wherein the manipulation-associated point is an original point, among the plurality of original points, whose distance from the first manipulation point is less than the preset threshold.
10. The image processing apparatus according to claim 9, wherein the first operation unit is specifically configured to:
determine the click position as the first manipulation point according to a click operation of the user on the screen;
and determine an original point whose distance from the first manipulation point is less than the preset threshold as the manipulation-associated point.
11. The image processing apparatus according to claim 9, wherein the second operation unit is specifically configured to:
determine, according to a dragging operation of the user on the screen, the position where the dragging stops as the second manipulation point.
12. The image processing apparatus according to claim 7, wherein the fifth module is specifically configured to:
input the coordinate information of the current position and the texture information of the manipulation-associated point into a shader, shade the current position, and output texture information of the current position;
and obtain the image in which the face region has been deformed according to the coordinate information of the current position and the texture information of the current position.
13. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 6.
14. A storage medium having instructions stored thereon which, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any one of claims 1 to 6.
CN201910652362.1A 2019-07-18 2019-07-18 Image processing method, image processing device, electronic equipment and storage medium Active CN110502993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910652362.1A CN110502993B (en) 2019-07-18 2019-07-18 Image processing method, image processing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110502993A CN110502993A (en) 2019-11-26
CN110502993B true CN110502993B (en) 2022-03-25

Family

ID=68586649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910652362.1A Active CN110502993B (en) 2019-07-18 2019-07-18 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110502993B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113986105A (en) * 2020-07-27 2022-01-28 北京达佳互联信息技术有限公司 Face image deformation method and device, electronic equipment and storage medium
CN114296622B (en) * 2020-09-23 2023-08-08 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242765A (en) * 2018-08-31 2019-01-18 腾讯科技(深圳)有限公司 A kind of face image processing process, device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10796480B2 (en) * 2015-08-14 2020-10-06 Metail Limited Methods of generating personalized 3D head models or 3D body models
CN107341777B (en) * 2017-06-26 2020-12-04 北京小米移动软件有限公司 Picture processing method and device
CN107330868B (en) * 2017-06-26 2020-11-13 北京小米移动软件有限公司 Picture processing method and device
CN108198141B (en) * 2017-12-28 2021-04-16 北京奇虎科技有限公司 Image processing method and device for realizing face thinning special effect and computing equipment
CN108876732A (en) * 2018-05-25 2018-11-23 北京小米移动软件有限公司 Face U.S. face method and device
CN108550185A (en) * 2018-05-31 2018-09-18 Oppo广东移动通信有限公司 Beautifying faces treating method and apparatus
CN108921798B (en) * 2018-06-14 2021-06-22 北京微播视界科技有限公司 Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant