CN109410138B - Method, device and system for modifying double chin
Info

Publication number
CN109410138B
Authority
CN
China
Prior art keywords: chin, modified, target object, face, feature point
Legal status: Active (granted)
Application number: CN201811207299.2A
Other languages: Chinese (zh)
Other versions: CN109410138A
Inventor: 廖声洋 (Liao Shengyang)
Current assignee: Beijing Kuangshi Technology Co Ltd
Original assignee: Beijing Kuangshi Technology Co Ltd
Application filed by Beijing Kuangshi Technology Co Ltd
Priority application: CN201811207299.2A
Publication of application CN109410138A; application granted; publication of grant CN109410138B

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; G06T2207/30196 Human being; Person; G06T2207/30201 Face


Abstract

The invention provides a method, a device and a system for modifying a double chin, relating to the technical field of image processing. The method comprises the following steps: acquiring image data of the face of a target object; detecting face feature points of the target object from the image data; determining a contour line of the first chin of the target object and a region to be modified according to the face feature points; and detecting, according to the contour line of the first chin, whether a second chin exists in the region to be modified, and if so, modifying the region to be modified to obtain processed image data. The method can automatically identify the double chin of the target object and apply modification processing; the operation is convenient and the modification effect is good, thereby improving the user experience.

Description

Method, device and system for modifying double chin
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a device and a system for modifying double chin.
Background
With rising aesthetic standards and greater attention to personal image, users often want to reprocess photographs or videos, for example by whitening skin, thinning the face, retouching the chin, or reshaping facial features. In existing approaches, the user usually has to process the photo manually with third-party image processing software, using operations such as erasing, smearing and blurring. Retouching a double chin is particularly cumbersome: if the user has little experience with the software, the degree of modification is hard to control; too little modification fails to hide the double chin, while too much visibly deforms the face. The resulting effect rarely meets the user's expectations, and the user experience is poor.
Disclosure of Invention
In view of this, the present invention provides a method, an apparatus and a system for modifying a double chin that automatically identify and modify the double chin of a target object, making the modification operation more convenient and improving the modification effect, thereby improving the user experience.
In a first aspect, embodiments of the present invention provide a method for modifying a double chin, the method comprising: acquiring image data of the face of a target object; detecting face feature points of the target object from the image data; determining a contour line of the first chin of the target object and a region to be modified according to the face feature points; and detecting, according to the contour line of the first chin, whether a second chin exists in the region to be modified, and if so, modifying the region to be modified to obtain processed image data.
Further, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of acquiring the image data of the face of the target object includes: acquiring a preview frame image through an image acquisition device; performing face detection on the preview frame image through a preset face detection model; and if a human face is detected in the preview frame image, acquiring image data of the face of the target object through the image acquisition device.
Further, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the step of detecting the face feature points of the target object from the image data includes: detecting the face feature points of the target object from the image data through a feature point detection model obtained by pre-training. The feature point detection model is trained as follows: acquiring a training sample set, where the training sample set comprises a set number of face images and each face image carries labeling information of its face feature points, the labeling information comprising the position and the type of each feature point; dividing a training subset and a verification subset from the training sample set according to a first division ratio; building an initial neural network model and setting initial training parameters; training the neural network model through the training subset and the training parameters, and verifying the trained neural network model through the verification subset; if the verification result does not meet a preset precision threshold, adjusting the training parameters according to the verification result; and continuing to train the neural network model through the training subset and the adjusted training parameters until the verification result of the neural network model meets the precision threshold, thereby obtaining the feature point detection model.
Further, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the step of determining the contour line of the first chin of the target object and the region to be modified according to the face feature points includes: extracting, according to the feature point type of each of the face feature points, a first feature point set representing the first chin of the target object; performing curve fitting on the feature points in the first feature point set according to their positions to obtain the contour line of the first chin of the target object; and extending a preset distance toward the neck of the target object, with the contour line of the first chin as a reference, to obtain the region to be modified.
Further, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the step of detecting whether a second chin exists in the region to be modified according to the contour line of the first chin includes: performing edge extraction on the region to be modified; if an edge line is extracted from the region to be modified, calculating a first curvature of the edge line; calculating a curvature difference between the first curvature and a second curvature corresponding to the contour line of the first chin; and judging whether the curvature difference is within a preset difference range, and if so, determining that a second chin exists in the region to be modified and determining the edge line corresponding to the first curvature as the contour line of the second chin of the target object.
Further, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the step of modifying the region to be modified to obtain processed image data includes: calculating normal data at specified positions on the contour line of the first chin; performing local deformation on the region to be modified along the direction indicated by the normal data, so as to hide the contour line of the second chin and obtain a deformed region to be modified; and performing uniform blurring on the deformed region to be modified to obtain the processed image data.
Further, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where before the step of performing uniform blurring on the deformed region to be modified to obtain the processed image data, the method further includes: performing feathering on the region to be modified to obtain the feathered region to be modified.
In a second aspect, an embodiment of the present invention provides an apparatus for modifying a double chin, the apparatus including: a data acquisition module for acquiring image data of the face of a target object; a feature point detection module for detecting face feature points of the target object from the image data; a line and region determining module for determining the contour line of the first chin of the target object and a region to be modified according to the face feature points; and a modification processing module for detecting, according to the contour line of the first chin, whether a second chin exists in the region to be modified, and if so, modifying the region to be modified to obtain processed image data.
In a third aspect, an embodiment of the present invention provides a system for modifying a double chin, including an image acquisition device, a processing device and a storage device; the image acquisition device is used for acquiring preview frame images or image data; the storage device stores a computer program which, when run by the processing device, performs the above-described method of modifying a double chin.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processing device, performs the steps of the above method for modifying a double chin.
The embodiment of the invention has the following beneficial effects:
according to the method, device and system for modifying a double chin provided by the embodiments of the invention, after the image data of the face of the target object is acquired, the face feature points of the target object are detected from the image data; the contour line of the first chin of the target object and the region to be modified are determined according to the face feature points; and if a second chin is detected in the region to be modified according to the contour line of the first chin, the region to be modified is modified to obtain the processed image data. This approach can automatically identify the double chin of the target object and apply modification processing; the operation is convenient, the modification effect is good, and the user experience is thereby improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practicing the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic system according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of modifying double chin provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a face feature point according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method for modifying double chin provided by embodiments of the present invention;
FIG. 5 is a schematic diagram of normal data of an area to be modified according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus for modifying double chin according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In view of the problems that existing double-chin modification must be done with third-party software, that the operation is complex and the modification effect hard to control, and that the user experience is therefore poor, embodiments of the present invention provide a method, an apparatus and a system for modifying a double chin. The technique can be applied to various terminal devices such as cameras, mobile phones and tablet computers, and can be implemented with corresponding software and hardware. Embodiments of the present invention are described in detail below.
The first embodiment is as follows:
first, an example electronic system 100 for implementing the method, apparatus and system for modifying double chin of embodiments of the present invention is described with reference to fig. 1.
As shown in FIG. 1, an electronic system 100 includes one or more processing devices 102, one or more memory devices 104, an input device 106, an output device 108, and one or more image capture devices 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic system 100 shown in fig. 1 are exemplary only, and not limiting, and that the electronic system may have other components and structures as desired.
The processing device 102 may be a gateway or an intelligent terminal, or a device including a Central Processing Unit (CPU) or other form of processing unit having data processing capability and/or instruction execution capability, and may process data of other components in the electronic system 100 and may control other components in the electronic system 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored that may be executed by processing device 102 to implement client functionality (implemented by the processing device) and/or other desired functionality in embodiments of the present invention described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may capture preview frame images or image data and store the captured preview frame images or image data in the storage 104 for use by other components.
For example, the devices in the exemplary electronic system for implementing the method, apparatus and system for modifying double chin according to the embodiments of the present invention may be integrally disposed, or may be disposed separately, such as integrally disposing the processing device 102, the storage device 104, the input device 106 and the output device 108, and disposing the image capturing device 110 at a designated position where the target object can be captured. When the devices in the above-described electronic system are integrally provided, the electronic system may be implemented as an intelligent terminal such as a camera, a smart phone, a tablet computer, a computer, and the like.
Example two:
the present embodiment provides a method of modifying a double chin, performed by a processing device in the electronic system described above; the processing device may be any device having data processing capability, such as a host computer, a local server or a cloud server. The processing device may process the received information independently, or may connect to a server, analyze and process the information jointly with it, and upload the processing result to the cloud.
As shown in fig. 2, the method for modifying double chin comprises the following steps:
step S202, acquiring image data of the face of the target object;
the image data may be a single frame or multiple frames. Generally, image data can be captured or recorded when a shooting instruction from the user is received, for example pressing a shooting button, issuing a voice instruction or a gesture instruction; the face contained in the image data is taken as the face of the target object.
Step S204, detecting a human face characteristic point of a target object from the image data;
for example, the face feature points of the target object may be eyebrow contour points, eye contour points, nose contour points, upper lip contour points, lower lip contour points, chin contour points, etc.; of course, other feature points may be included; the face feature points of the target object detected from the image data may include the position of each feature point and the feature point type of each feature point; the positions of the feature points may be labeled in the image data in the form of identifiers, and each identifier is associated with a feature point type of the feature point.
FIG. 3 shows an example of face feature points; the detection result lies within the dotted-line box, and each feature point is marked on the image data of the target object's face as a dot at the corresponding position. For example, the eyebrow contour points lie near the eyebrows of the target object and the eye contour points near the eyes. The feature point type of each feature point is stored in association with its dot, so that when a user clicks a feature point in the detection result, its feature point type can be displayed at a specified position for reference.
If a frame of image data contains several target objects, the image data may be segmented into several partial images, each containing one target object, and face feature point detection is then performed for each target object. The detection of face feature points can be realized by a feature point detection model obtained by pre-training; the model can be implemented with a neural network or with other artificial intelligence or machine learning methods, and can be trained on a large number of image samples labeled with face feature points.
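As an illustration of this step only, the following is a minimal Python sketch that uses OpenCV's bundled Haar cascade as a stand-in for the pre-trained face detection model described here; the input path and the cascade choice are assumptions, not the patent's actual model.

```python
import cv2

# Stand-in for the patent's pre-trained face detection model (assumption):
face_model = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("preview_frame.jpg")          # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_model.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# If several target objects are present, segment the frame into partial
# images, one face per image, before running feature point detection.
partial_images = [frame[y:y + h, x:x + w] for (x, y, w, h) in faces]
```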
Step S206, determining a contour line of a first chin of the target object and an area to be modified according to the face characteristic points;
with continued reference to fig. 3, the chin contour points of the target object are labeled near the first chin and usually comprise several feature points; for example, the feature points along the edge of the face, from below the ears down to the lowest point of the face, may be taken as the chin contour points. The chin contour points form an arc-shaped curve, which is the contour line of the first chin. Since the second chin generally lies closer to the neck than the first chin, a region that may contain the second chin can be delimited along the neck direction, with the contour line of the first chin as reference; this region is the region to be modified described above.
In the process of acquiring the region to be modified, it may be necessary to identify which direction is the neck direction of the target object; specifically, the neck direction of the target object can be determined according to the relative positions of various types of feature points in the detected human face feature points; for example, taking a nose contour point and a chin contour point as an example, a feature point at the center of the nose head in the nose contour point is point a, a feature point at the bottommost end in the chin contour point is point B, and a directional line segment is formed from the point a to the point B, and the direction pointed by the directional line segment can be determined as the neck direction of the target object. In another mode, a point with the maximum curvature in the contour line of the first chin may also be obtained through calculation, assuming that the point is a point C, a center of a curvature circle corresponding to the point C is K, and a directional line segment is formed from the center of the point K to the point C, where a direction pointed by the directional line segment may be determined as a neck direction of the target object.
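For illustration, a minimal sketch of the first method above, assuming hypothetical coordinates for the nose-tip point A and the bottommost chin point B:

```python
import numpy as np

point_a = np.array([240.0, 310.0])   # nose-tip contour point A (x, y), assumed
point_b = np.array([238.0, 420.0])   # bottommost chin contour point B, assumed

# Neck direction: unit vector of the directed segment from A to B
neck_direction = point_b - point_a
neck_direction /= np.linalg.norm(neck_direction)
```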
Step S208, detecting whether a second chin exists in the region to be modified according to the contour line of the first chin, and if so, modifying the region to be modified to obtain processed image data.
In particular, the first and second chins of the same target object are generally similar in shape. On this basis, lines can first be identified in the region to be modified; the identified lines may include the contour line of the second chin, as well as the neck edge, neck wrinkles and the like. Comparing the shape of each identified line with the shape of the contour line of the first chin yields the lines similar in shape to the first-chin contour, which can be determined to be the contour line of the second chin. The shape comparison can be implemented through edge detection, neural networks, machine learning or image pattern recognition.
The modification processing may include several image processing operations, for example deforming, translating or blurring the region to be modified, so as to hide the contour line of the second chin within it. During processing, lines can be identified from the intermediate result; if a line similar in shape to the contour line of the first chin still exists, processing continues in the above manner until no such line can be detected in the region to be modified. Through this image processing, image data showing a single chin of the target object is obtained. After the second chin is hidden, the facial area of the target object appears smaller, which also achieves a face-slimming, beautifying effect.
According to the method for modifying a double chin provided by the embodiment of the invention, after the image data of the face of the target object is acquired, the face feature points of the target object are detected from the image data; the contour line of the first chin of the target object and the region to be modified are determined according to the face feature points; and if a second chin is detected in the region to be modified according to the contour line of the first chin, the region to be modified is modified to obtain the processed image data. This approach can automatically identify the double chin of the target object and apply modification processing; the operation is convenient, the modification effect is good, and the user experience is thereby improved.
Example three:
the above-described embodiment describes that the face feature points of the target object can be detected from the image data by the feature point detection model; therefore, in this embodiment, a training method of the feature point detection model is first described. Specifically, the feature point detection model may be trained in the following manner:
step 11, acquiring a training sample set; the training sample set comprises a set number of face images; the face image carries the labeling information of the face characteristic points; the marking information comprises the position of the face characteristic point and the type of the characteristic point;
the number of face images in the training sample set can be preset, for example 100,000; understandably, the more face images, the better the performance of the trained feature point detection model and the higher its detection accuracy. The face images can be obtained from a general face image library, or detected from a video stream by face detection. The face feature points can be labeled on the face images manually by an engineer, or labeled automatically by annotation software and then adjusted by an engineer. The more accurate the labeling of the face feature points, the better the detection accuracy of the resulting feature point detection model. Face feature points are also referred to as face key points.
For example, when manually labeling face feature points, feature points may be added to a face image as identifiers such as dots or stars, and the feature point type, such as eyebrow contour point, eye contour point or chin contour point, entered through an input box. Feature point types can be refined further: eyebrow contour points may be subdivided into brow-head contour points, brow-peak contour points and the like, and chin contour points into chin-bottom contour points, chin-side contour points and the like.
In the present embodiment, since the chin contour of the target object is mainly detected, only the chin contour points, and the subdivided chin bottom contour points, chin side contour points, and the like may be labeled in the image data in the training sample set.
Step 12, dividing a training subset and a verification subset from the training sample set according to a first division ratio;
the first division ratio may be a specific percentage, for example, 30%, at this time, 30% of the face images and the corresponding annotation information in the training sample set may be used as a training subset, and 30% of the face images and the corresponding annotation information in the training sample set may be used as a verification subset; the first division ratio may also be a combination of percentages, for example, 30% and 40%, at this time, 30% of the face images and the corresponding annotation information in the training sample set may be used as a training subset, and 40% of the face images and the corresponding annotation information in the training sample set may be used as a verification subset.
As can be seen from the above, the training subset and the verification subset may account for the same or different percentages of the training sample set, and the face images in the two subsets may be entirely different or partially overlapping. For example, if the training subset and the verification subset are both drawn at random from the training sample set, the same face image may appear in both; if the training subset is drawn first and the verification subset is then drawn from the remaining face images, the two subsets are entirely different.
Step 13, building an initial neural network model and setting initial training parameters;
in general, the training parameters of the neural network model include the number of network nodes, the initial weights, the minimum training rate, momentum (dynamic) parameters, the allowable error, the number of iterations, and the like.
Step 14, training the neural network model through the training subset and the training parameters, and verifying the trained neural network model through the verification subset;
in practical implementation, the face images and corresponding labeling information in the training subset and the verification subset can each be divided into several groups. A group of face images from the training subset, with their labeling information, is first fed into the neural network model for training; after training, a group of face images from the verification subset is fed into the trained model for feature point detection, and the detection result is compared with the labeling information of that group to obtain the detection accuracy of the current model, which is the verification result.
Step 15, if the verification result does not meet the preset precision threshold, adjusting the training parameters according to the verification result;
in order to improve the detection accuracy of the neural network model, the verification result can be analyzed to find the reasons for low accuracy and the training parameters that need adjusting, so as to optimize the neural network model and its training.
And step 16, continuing to train the neural network model through the training subset and the adjusted training parameters until the verification result of the neural network model meets the precision threshold value, and obtaining the feature point detection model.
The above steps show that training and verification of the neural network model alternate: each training pass uses a group of face images and labeling information from the training subset, each verification pass uses a group from the verification subset, and the two are repeated until the verification result meets the precision threshold, at which point the feature point detection model is obtained.
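For illustration only, the following condensed sketch of steps 11 to 16 assumes PyTorch and randomly generated stand-in data; the network architecture, split ratio, point count and precision threshold are all assumptions, not the patent's actual choices.

```python
import torch
import torch.nn as nn

NUM_POINTS = 68                                  # assumed feature point count
images = torch.randn(256, 1, 64, 64)             # dummy stand-in face images
labels = torch.randn(256, NUM_POINTS * 2)        # dummy (x, y) annotations

# First division ratio (assumed): 70% training subset, 30% verification subset
split = int(0.7 * len(images))
train_x, train_y = images[:split], labels[:split]
val_x, val_y = images[split:], labels[split:]

model = nn.Sequential(                           # initial neural network model
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 32 * 32, NUM_POINTS * 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial parameters
loss_fn = nn.MSELoss()

ACCURACY_THRESHOLD = 0.05                        # assumed precision threshold
for epoch in range(100):                         # train and verify alternately
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(train_x), train_y)
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_error = loss_fn(model(val_x), val_y).item()  # verification result
    if val_error < ACCURACY_THRESHOLD:
        break                                    # threshold met: model obtained
    # otherwise training parameters (e.g. the learning rate) could be
    # adjusted here according to the verification result, as in step 15
```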
If each group of face images and corresponding label information in the training subset are used up, but the verification result still cannot meet the precision threshold, at this time, each group of face images and corresponding label information in the training subset can be reused, and a new training subset can be divided from the training sample set to continue training.
In addition, a test subset may be divided from the training sample set according to a second division ratio; to keep the test result accurate, the face images in the test subset are usually entirely different from those in the training and verification subsets, that is, there is no overlap. The test subset can be used to comprehensively test the trained feature point detection model, measuring its performance and producing an evaluation report. In practice, several feature point detection models with different performance may be trained, and a model matching the actual requirements of face feature point detection, such as detection accuracy and detection speed, can then be selected.
In this embodiment, the feature point detection model trained in the above manner has high feature point detection accuracy, so the contour line of the first chin of the target object can be detected accurately from the image data and the second chin then modified, which helps improve the user experience.
Example four:
the embodiment of the invention provides another method for modifying double chin, which is realized on the basis of the embodiment; in this embodiment, the determination process of the contour line of the first chin and the region to be modified, the detection process of the second chin, and the specific modification process of the second chin are described with emphasis; as shown in fig. 4, the method for modifying double chin comprises the following steps:
step S402, when the image acquisition equipment is started, acquiring a preview frame image through the image acquisition equipment; performing face detection on the preview frame image through a preset face detection model;
the image acquisition device can be a camera, either a standalone device in communication with a remote processing device or one integrated into a mobile phone, tablet computer or similar device. Once the user starts the image acquisition device, it can capture preview frame images.
Step S404, judging whether a human face exists in the preview frame image; if yes, go to step S406; if not, executing step S402;
the face detection model can be obtained by pre-training a neural network. Specifically, the preview frame image is input into the face detection model, which identifies whether a face exists in the frame; if one exists, a target object is present in the preview frame image and the model outputs the specific position of the face, which can be marked by a face detection frame. The image data inside the face detection frame is the image data of the face and usually contains the complete face of the target object.
Step S406, when a photographing instruction triggered by a user is received, image data of the face of the target object is acquired by the image acquisition apparatus.
In practical implementation, the process of acquiring, by the processing device, the image data of the face of the target object through the image acquisition device may be triggered by a user, for example, the user presses a shooting button; of course, in some cases, after detecting that a human face exists in the preview video, the processing device may automatically acquire, through the image acquisition device, image data of the face of the target object corresponding to the human face. In another manner, the processing device may perform the step S402 after receiving a trigger instruction of the user, that is, the image capturing device captures the preview frame image, and then perform the subsequent processes.
Step S408, detecting the human face characteristic points of the target object from the image data through the characteristic point detection model obtained by pre-training;
step S410, extracting a first feature point set representing a first chin of a target object from the face feature points according to the feature point types of all the feature points in the face feature points;
for example, the type of each face feature point can be checked one by one, and any feature point whose type includes the keyword "chin", such as a chin contour point, chin-bottom contour point or chin-side contour point, is extracted; the extracted feature points constitute the first feature point set.
It should be noted that the face feature points are detected by the feature point detection model, and most samples used to train that model show a single chin. Considering that for most target objects the contour line of the first chin protrudes more prominently than that of the second chin, if the target object in the image data has a double chin, the feature points representing the chin detected by the model are usually located near the first chin rather than near the second chin; that is, the face feature points usually include feature points representing the first chin. Therefore, in most cases, the feature points whose type includes the "chin" keyword represent the first chin.
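A minimal sketch of the keyword filter of step S410, with a hypothetical detection result; the type names and coordinates are assumptions for illustration:

```python
# Hypothetical detection result: feature point type -> (x, y) position
face_points = {
    "eyebrow_contour_3": (150, 120),
    "chin_bottom_contour_1": (238, 420),
    "chin_side_contour_2": (190, 395),
}

# Feature points whose type contains the "chin" keyword form the first set
first_feature_point_set = {
    ptype: pos for ptype, pos in face_points.items() if "chin" in ptype
}
```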
Step S412, according to the position of each feature point in the first feature point set, performing curve fitting processing on each feature point in the first feature point set to obtain a contour line of a first chin of the target object;
the method can be used for obtaining a smooth and accurate first chin contour line of the target object. In another mode, with one of the feature points in the first feature point set as a reference, for example, with a chin bottom contour point as a reference, adjacent feature points are searched for to both sides, and are connected with the adjacent feature points, and then extend to both sides after the connection, and the adjacent feature points are searched for and connected with each other, so as to finally form a first chin contour line of the target object.
Step S414, using the contour line of the first chin as a reference, extending a preset distance in the neck direction of the target object, and obtaining an area to be modified.
The preset distance can be a fixed value set empirically; alternatively, a fixed parameter such as a ratio can be preset and the distance computed from the length of the first-chin contour line and that ratio. Continuing with fig. 3, the feature points of the first feature point set are connected to obtain the contour line of the first chin, and every point on the contour line is moved toward the neck by the preset distance, forming the region to be modified. This region usually includes the second chin and part of the neck of the target object, and may include background outside the target object. As for identifying the neck direction, refer to the description in the above embodiments, which is not repeated here.
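Continuing the sketches above (reusing the hypothetical first_chin_contour, neck_direction and frame), the region to be modified can be built as a mask; the preset distance is an assumption:

```python
import cv2
import numpy as np

PRESET_DISTANCE = 40         # assumed value; could also derive from a ratio
                             # of the first-chin contour length
shifted = first_chin_contour + PRESET_DISTANCE * neck_direction

# Band between the contour and its shifted copy = region to be modified
region_polygon = np.vstack([first_chin_contour,
                            shifted[::-1]]).astype(np.int32)
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [region_polygon], 255)        # region-to-modify mask
```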
Step S416, performing edge extraction processing on the region to be modified;
step S418, judging whether to extract edge lines from the area to be modified; if yes, go to step S420; if not, ending;
an edge in image data generally refers to a place where pixel gray values change sharply; to the naked eye a line shows no gradual transition in brightness or color, so gray values change sharply near it, and edges are therefore mostly extracted as image areas containing lines. Note that an edge line does not necessarily lie at the edge position of the region to be modified; any line extracted from within the region may be called an edge line. In this embodiment, the region to be modified may contain part of the second chin and the neck, and possibly background outside the target object, so edge lines can be extracted in most cases; they may include the contour line of the second chin, the contour of the neck, neck wrinkle lines, texture lines of the background and so on. In practice, edge extraction can be performed on the region to be modified using the Sobel, Laplacian or Canny operators.
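A minimal sketch of this step with OpenCV's Canny operator, reusing gray and mask from the sketches above; the thresholds are assumptions (the Sobel or Laplacian operators mentioned here would work similarly):

```python
import cv2

# Restrict edge extraction to the region to be modified
region = cv2.bitwise_and(gray, gray, mask=mask)
edges = cv2.Canny(region, threshold1=50, threshold2=150)  # assumed thresholds

# Edge lines as ordered point sequences, for the curvature test that follows
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
```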
Step S420, calculating a first curvature of the edge line, and calculating a curvature difference between the first curvature and a second curvature corresponding to the contour line of the first chin;
curvature generally refers to the degree of curvature of a curve at a certain point. Generally, the edge line is not a regular circular arc, so that the curvatures of all points have certain difference, in order to accurately evaluate the shape of the edge line, a specified number of position points can be extracted from the edge line, the curvatures are calculated for each position point one by one, and the curvatures corresponding to each position point are combined into the first curvature of the edge line; the first curvature may also include a statistic of curvatures corresponding to each position point, such as an average, variance, and the like of curvatures corresponding to all position points. The curvature calculation for each point can be described as follows: taking the position point a on the edge line as an example, on the edge line, a position point B is further determined near the position point a, a tangent is generated on each of the position point a and the position point B, the included angle between the two tangents is θ, the arc length between the position point a and the position point B is a, and the curvature K of the position point a is a/θ. The second curvature corresponding to the contour of the first chin may also be calculated in the above manner.
As described in the above embodiments, the contour lines of the first and second chins of the same target object have similar shapes, and the first and second curvatures can be used to evaluate the shapes of the corresponding lines. Therefore, if the difference between the first and second curvatures is small, the edge line is similar in shape to the contour line of the first chin and may be the contour line of the second chin; if the difference is large, the shapes differ markedly and the edge line is not the contour line of the second chin.
In practical implementation, since the first and second curvatures both comprise curvatures at several position points, the curvatures at corresponding points can be compared one by one along a common direction, and the comparison result is the curvature difference. If the curvature difference is small, the curvature trends at the position points of the two lines are similar, so the edge line corresponding to the first curvature can be judged similar in shape to the first-chin contour corresponding to the second curvature, and determined to be the contour line of the second chin of the target object. Alternatively, statistics such as the mean and variance described above can be computed for each set of curvatures; if the difference between the statistics of the first and second curvatures (i.e. the curvature difference) is small, the edge line corresponding to the first curvature can likewise be determined to be the contour line of the second chin.
Step S422, judging whether the curvature difference value is within a preset difference value range, if so, executing step S424; if not, ending;
the difference range can be preset according to how the curvature difference is computed: if the curvatures at the position points are compared one by one, the curvature difference may be the sum or the average of the point-wise differences, and if it falls within the difference range the edge line is determined to be the contour line of the second chin. If several edge lines are detected in the region to be modified, the first curvature and the curvature difference can be computed for each in turn. If several of them have curvature differences within the preset range, the edge line (or lines) with the smallest curvature difference can be determined to be the contour of the second chin. If no edge line has a curvature difference within the preset range, it can be determined that the target object has no second chin, and the process ends.
Step S424, it is determined that a second chin exists in the region to be modified, and the edge line corresponding to the first curvature is determined as the contour line of the second chin of the target object.
Step S426, calculating normal data at specified positions on the contour line of the first chin;
as shown in fig. 5, a set number of position points may be sampled at intervals on the contour line of the first chin; for each, the tangent is computed and rotated 90 degrees to give the normal direction, and the normal directions at all the points constitute the normal data. A normal can point in either of two directions, while the final normal must point toward the first chin of the target object; its direction can therefore be resolved using the previously determined neck direction, that is, of the two possible directions, the one pointing away from the neck is taken as the normal direction.
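A minimal sketch of the normal computation, reusing the hypothetical first_chin_contour and neck_direction from the sketches above:

```python
import numpy as np

pts = first_chin_contour
tangents = np.gradient(pts, axis=0)              # tangent at each point

# Rotate each tangent 90 degrees to get the normal, then normalize
normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Flip any normal pointing toward the neck so it points toward the chin
flip = (normals @ neck_direction) > 0
normals[flip] *= -1
```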
Step S428, performing local deformation on the region to be modified along the direction indicated by the normal data to hide the contour line of the second chin, obtaining a deformed region to be modified;
during actual implementation, the region to be modified can be moved and compressed along the normal direction at each position point. During the moving and compressing, the contour line of the first chin serves as reference: if the entire contour line of the second chin is far from that of the first chin, the region to be modified can first be moved until part of the second-chin contour coincides with the first-chin contour; the portion not yet coinciding is then locally deformed until the second-chin contour coincides completely with the first-chin contour, thereby hiding the contour line of the second chin.
In another mode, the part of the region to be modified around the second-chin contour can be locally deformed according to the curvature difference between the first- and second-chin contours, so that the two contours take the same shape; the region to be modified is then moved so that the second-chin contour coincides with the first-chin contour, again hiding the contour line of the second chin.
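The following is a much-simplified deformation sketch, not the patent's exact warping scheme: continuing the sketches above, pixels inside the region mask are displaced along a single average normal via cv2.remap, with a smoothed influence map; the deformation degree is an assumption.

```python
import cv2
import numpy as np

h, w = frame.shape[:2]
map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))

mean_normal = normals.mean(axis=0)               # crude single direction
mean_normal /= np.linalg.norm(mean_normal)
strength = 12.0                                  # assumed deformation degree

falloff = mask.astype(np.float32) / 255.0
falloff = cv2.GaussianBlur(falloff, (31, 31), 0)  # smooth the influence map

# Shift content inside the region toward the first chin (along the normal)
map_x -= strength * falloff * mean_normal[0]
map_y -= strength * falloff * mean_normal[1]
deformed = cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```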
Step S430, performing feathering on the region to be modified to obtain the feathered region to be modified.
Feathering blurs the junction between the region to be modified and the rest of the image data (i.e. the edge of the region), so that the region joins the rest of the image naturally. In this embodiment, the deformation and movement applied above usually leave breaks or visible distortion where the region meets the rest of the image data, and feathering softens these artifacts at the region's edge.
Step S432, performing uniform blurring on the deformed region to be modified to obtain the processed image data.
Uniform blurring can be applied to the region to be modified as a whole: during deformation the interior of the region, not only its edge, is visibly deformed, and uniform blurring masks the visually unnatural artifacts this causes.
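For illustration, feathering and uniform blurring can be sketched with Gaussian filters over the names introduced in the sketches above; the kernel sizes stand in for the feathering and blurring degrees and are assumptions:

```python
import cv2
import numpy as np

# Feathered mask: soft weights near the region's edge
feather = cv2.GaussianBlur(mask, (41, 41), 0).astype(np.float32) / 255.0
feather = feather[..., None]                     # HxWx1 for broadcasting

# Uniform blur inside the region, blended into the deformed image
blurred = cv2.GaussianBlur(deformed, (15, 15), 0).astype(np.float32)
base = deformed.astype(np.float32)
processed = (feather * blurred + (1.0 - feather) * base).astype(np.uint8)
```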
In the process of performing local deformation, feathering and uniform blurring on the region to be modified, the specific processing degree can be preset by corresponding parameters, such as the deformation degree, the feathering degree, the uniform blurring degree and the like. In addition, in the process of local deformation, whether a line similar to the shape of the contour line of the first chin still exists in the region to be modified can be detected in real time until the line similar to the shape of the contour line of the first chin cannot be detected in the region to be modified, and the local deformation processing is finished at this moment.
Step S434, judging whether the image acquisition equipment is quitted, if so, ending; if not, step S404 is performed.
According to the method for modifying a double chin provided by this embodiment of the invention, image data of the face is acquired when a human face is detected; the face feature points of the target object are detected from the image data by the pre-trained feature point detection model; the feature points representing the first chin are connected to obtain the contour line of the first chin and hence the region to be modified; and if a second chin is detected in the region to be modified according to the contour line of the first chin, local deformation, feathering, uniform blurring and other processing are applied to the region to obtain the processed image data. This approach can automatically identify the double chin of the target object and apply modification processing; the operation is convenient, the modification effect is good, and the user experience is thereby improved.
Example five:
based on the method for modifying double chin provided by the embodiment, the embodiment provides a specific application scenario, that is, the method for modifying double chin is implemented in the process of photographing through an intelligent terminal; in addition, another specific modification process of the second chin is provided in the embodiment; the method comprises the following steps:
step 21, starting a photographing mode with double chin modification functions;
step 22, loading a double-chin modification default parameter table; the default parameter table usually includes parameters such as the number of feature points detected by the feature point detection model, the types of the feature points, the local deformation degree when the second chin is modified, the feathering degree, the uniform blurring degree and the like; of course, the user may also manually adjust various parameters in the parameter table before or after taking a picture.
Step 23, starting an image acquisition device (such as a camera of a mobile phone) to acquire a preview frame image;
step 24, receiving a photographing instruction of a user;
step 26, inputting the preview frame image into a face detection model, performing face detection on the image through the model, and judging whether a face exists in the preview frame image;
step 27, if a human face exists, acquiring image data, and inputting the image data into the feature point detection model to detect human face feature points of the face of the target object in the image data; if no human face exists, after image data is collected, the current process is ended;
step 28, identifying and obtaining a contour line of a first chin of the target object and an area to be modified according to the detected face characteristic points; a second chin of the area to be modified, which typically contains the target object; the region to be modified may also be referred to as the neck region.
Step 29, performing edge feathering treatment on the area to be modified according to the parameter table to obtain an area A to be modified after feathering treatment;
step 30, carrying out local deformation including translation, scaling and the like on the area A to be modified along the direction of the normal line of the contour line of the first chin; obtaining a region B to be modified after local deformation;
and 31, carrying out local uniform fuzzy processing on the area B to be modified to obtain a final processing result, and displaying the final processing result.
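As an illustration of the default parameter table loaded in step 22 (every name and value here is an assumption, not taken from the text):

```python
# Hypothetical default parameter table for the double-chin photographing mode.
DEFAULT_PARAMS = {
    "num_feature_points": 106,          # points output by the detection model
    "feature_point_types": ["eyebrow", "eye", "nose",
                            "upper_lip", "lower_lip", "chin"],
    "deform_degree": 0.5,               # local deformation strength, 0..1
    "feather_radius_px": 15,            # edge feathering radius (step 29)
    "blur_kernel_size": 9,              # uniform blur kernel size (step 31)
}

# The user may override any entry before or after taking a picture:
DEFAULT_PARAMS["deform_degree"] = 0.7
```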
In this method for modifying the double chin, the region to be modified is first feathered, the feathered region is then locally deformed, and the deformed region is finally subjected to uniform blurring.
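A minimal sketch of this feather-deform-blur order in Python with OpenCV; the 0/1 region mask, the per-pixel normal field, and the parameter names from the illustrative table above are assumptions rather than the patented implementation:

```python
import cv2
import numpy as np

def modify_double_chin(image, region_mask, normal_field, params):
    """Feather -> deform -> uniform blur, following steps 29-31 above."""
    # Step 29: feather the region edge by blurring the 0/1 region mask.
    r = params["feather_radius_px"] | 1                 # kernel must be odd
    feathered = cv2.GaussianBlur(region_mask.astype(np.float32), (r, r), 0)

    # Step 30: shift pixels along the first-chin normals, scaled by the
    # deformation degree and attenuated by the feathered mask.
    h, w = image.shape[:2]
    gy, gx = np.mgrid[0:h, 0:w].astype(np.float32)
    d = params["deform_degree"]
    map_x = (gx + normal_field[..., 0] * d * feathered).astype(np.float32)
    map_y = (gy + normal_field[..., 1] * d * feathered).astype(np.float32)
    deformed = cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)

    # Step 31: uniform (box) blur, blended in only inside the region.
    k = params["blur_kernel_size"]
    blurred = cv2.blur(deformed, (k, k))
    m = feathered[..., None]
    return (blurred * m + deformed * (1.0 - m)).astype(image.dtype)
```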
In the above mode, the user obtains the double-chin-modified image data as soon as photographing finishes, with no manual retouching required. The operation is more convenient and the modification effect is better, which meets the shooting demands of current intelligent terminal users, broadens the range of application, improves the user experience, makes the intelligent terminal more engaging, and helps raise the economic benefits of manufacturers.
Example six:
Corresponding to the above method embodiments, fig. 6 shows a schematic structural diagram of a device for modifying a double chin; the device includes:
a data acquisition module 60 for acquiring image data of a face of a target object;
a feature point detection module 61 for detecting a face feature point of the target object from the image data;
a line region determining module 62, configured to determine the contour line of the first chin of the target object and the region to be modified according to the face feature points;
and a modification processing module 63, configured to detect whether a second chin exists in the region to be modified according to the contour line of the first chin, and if so, to modify the region to be modified to obtain the processed image data.
According to the device for modifying a double chin provided by the embodiment of the invention, after the image data of the face of the target object is acquired, the face feature points of the target object are detected from the image data; the contour line of the first chin of the target object and the region to be modified are determined according to the face feature points; and if a second chin is detected in the region to be modified according to the contour line of the first chin, the region to be modified is modified to obtain the processed image data. This approach can automatically identify and modify the double chin of the target object; the operation is convenient and the modification effect is good, thereby improving the user experience.
Further, the data acquisition module is further configured to: acquire a preview frame image through an image acquisition device; perform face detection on the preview frame image through a preset face detection model; and, if a human face is detected in the preview frame image, acquire image data of the face of the target object through the image acquisition device.
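A sketch of this preview-frame gate, with a stock OpenCV Haar cascade standing in for the unspecified preset face detection model:

```python
import cv2

# A bundled Haar cascade is used here purely as a stand-in detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frame_has_face(preview_frame_bgr):
    """Return True if at least one face is detected in the preview frame."""
    gray = cv2.cvtColor(preview_frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```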
Further, the feature point detection module is further configured to detect the face feature points of the target object from the image data through a pre-trained feature point detection model. The feature point detection model is trained as follows: acquiring a training sample set, which comprises a set number of face images, each carrying labeling information of the face feature points (the position and type of each feature point); dividing the training sample set into a training subset and a verification subset according to a first division ratio; building an initial neural network model and setting initial training parameters; training the neural network model with the training subset and the training parameters, and verifying the trained model with the verification subset; if the verification result does not meet a preset precision threshold, adjusting the training parameters according to the verification result; and continuing to train with the training subset and the adjusted parameters until the verification result meets the precision threshold, thereby obtaining the feature point detection model.
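The following sketch shows one way this train/verify/adjust loop could look, using PyTorch as an illustrative framework (the text names none); the dataset format, the `evaluate_fn` accuracy metric, and the learning-rate halving as the parameter adjustment are all assumptions:

```python
import torch
from torch.utils.data import DataLoader, random_split

def train_feature_point_model(model, dataset, evaluate_fn,
                              accuracy_threshold=0.95, split_ratio=0.8,
                              lr=1e-3, max_rounds=20):
    """Train on the training subset, verify on the verification subset,
    and adjust the training parameters until the precision threshold is met."""
    n_train = int(len(dataset) * split_ratio)          # first division ratio
    train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
    loss_fn = torch.nn.MSELoss()                       # landmark regression loss

    for _ in range(max_rounds):
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        model.train()
        for images, landmarks in DataLoader(train_set, batch_size=32, shuffle=True):
            optimizer.zero_grad()
            loss_fn(model(images), landmarks).backward()
            optimizer.step()

        model.eval()                                   # verify on the subset
        with torch.no_grad():
            accuracy = evaluate_fn(model, val_set)
        if accuracy >= accuracy_threshold:             # precision threshold met
            break
        lr *= 0.5                                      # adjust training parameters
    return model
```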
Further, the line region determining module is further configured to: extract, from the face feature points and according to the feature point type of each point, a first feature point set representing the first chin of the target object; perform curve fitting on the feature points in the first feature point set according to their positions to obtain the contour line of the first chin of the target object; and, taking the contour line of the first chin as a reference, extend a preset distance toward the neck of the target object to obtain the region to be modified.
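A sketch of the curve-fitting and region-extension steps, assuming feature points arrive as `(x, y, type)` tuples and using a quadratic fit (the point format and polynomial degree are assumptions):

```python
import numpy as np

def chin_contour_and_region(feature_points, neck_direction, extend_px=60):
    """Fit a curve through the chin feature points, then sweep it a preset
    distance along the (unit) neck direction to bound the region to modify."""
    # Keep only the chin-type feature points.
    chin = np.array([(x, y) for x, y, t in feature_points if t == "chin"])
    coeffs = np.polyfit(chin[:, 0], chin[:, 1], deg=2)        # curve fitting
    xs = np.linspace(chin[:, 0].min(), chin[:, 0].max(), 100)
    contour = np.stack([xs, np.polyval(coeffs, xs)], axis=1)

    # Extend toward the neck in small steps; the union of the swept copies
    # approximates the region to be modified.
    neck_direction = np.asarray(neck_direction, dtype=float)
    region = np.concatenate([contour + neck_direction * d
                             for d in range(0, extend_px, 5)])
    return contour, region
```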
Further, the modification processing module is further configured to: perform edge extraction on the region to be modified; if an edge line is extracted from the region, calculate a first curvature of the edge line; calculate the difference between the first curvature and a second curvature corresponding to the contour line of the first chin; and judge whether the curvature difference is within a preset difference range; if so, determine that a second chin exists in the region to be modified and take the edge line corresponding to the first curvature as the contour line of the second chin of the target object.
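A sketch of this detection step, with Canny standing in for the unspecified edge extraction; the Canny and curvature-difference thresholds are illustrative values:

```python
import cv2
import numpy as np

def mean_curvature(points):
    """Approximate mean curvature of a sampled 2-D line via finite differences."""
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    k = np.abs(dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-6)
    return float(k.mean())

def detect_second_chin(region_gray, first_chin_contour, max_diff=0.02):
    """Extract an edge line from the region and compare its curvature with
    that of the first chin contour; return the line if they are similar."""
    edges = cv2.Canny(region_gray, 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return None                          # no edge line: no second chin
    order = np.argsort(xs)                   # crude left-to-right ordering
    edge_line = np.stack([xs[order], ys[order]], axis=1).astype(np.float64)

    diff = abs(mean_curvature(edge_line) - mean_curvature(first_chin_contour))
    if diff <= max_diff:                     # within the preset difference range
        return edge_line                     # taken as the second-chin contour
    return None
```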
Further, the modification processing module is further configured to: calculate normal data at specified positions on the contour line of the first chin; perform local deformation on the region to be modified along the direction indicated by the normal data, so as to hide the contour line of the second chin and obtain the deformed region to be modified; and perform uniform blurring on the deformed region to obtain the processed image data.
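A sketch of the normal-data computation for sampled contour points; the sign convention (normals pointing toward the neck) is an assumption. The resulting normals can drive the deformation sketched after the step list in the fifth example:

```python
import numpy as np

def contour_normals(contour):
    """Unit normal at each sampled contour point, obtained by rotating the
    local tangent by 90 degrees."""
    dx = np.gradient(contour[:, 0])
    dy = np.gradient(contour[:, 1])
    tangents = np.stack([dx, dy], axis=1)
    norms = np.maximum(np.linalg.norm(tangents, axis=1, keepdims=True), 1e-6)
    tangents /= norms
    # Rotating the tangent (tx, ty) by +90 degrees gives the normal (-ty, tx).
    return np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
```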
Further, the device also comprises a feathering module for feathering the region to be modified to obtain the feathered region to be modified.
The implementation principle and technical effects of the device provided by this embodiment are the same as those of the foregoing method embodiments; for brevity, reference may be made to the corresponding content in the foregoing method embodiments for anything not mentioned in this device embodiment.
Example seven:
The embodiment of the invention provides a system for modifying a double chin, which comprises an image acquisition device, a processing device and a storage device; the image acquisition device is used for acquiring a preview frame image or image data; the storage device stores a computer program which, when run by the processing device, performs the method of modifying a double chin described in the above embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Further, the present embodiment also provides a computer-readable storage medium, on which a computer program is stored, and the computer program is executed by a processing device to perform the steps of the method for modifying double chin described in the above embodiments.
The computer program product of the method, the apparatus, and the system for modifying double chin provided by the embodiments of the present invention includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementations may refer to the method embodiments and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the invention rather than to limit them, and the protection scope of the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or substitute equivalents for some technical features within the technical scope of the present disclosure; such modifications, changes and substitutions do not depart from the spirit and scope of the embodiments of the invention and shall all be covered by it. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A method of modifying a double chin, the method comprising:
acquiring image data of a face of a target object;
detecting a face feature point of the target object from the image data; the face feature points at least comprise: eyebrow contour points, eye contour points, nose contour points, upper lip contour points, lower lip contour points, chin contour points; the face feature points further include: the position of each feature point and the feature point type of each feature point;
determining a contour line of a first chin of the target object and a region to be modified according to the face feature points;
detecting whether a second chin exists in the region to be modified according to the contour line of the first chin, and if so, modifying the region to be modified to obtain the processed image data;
wherein the step of determining the contour line of the first chin of the target object and the region to be modified according to the face feature points comprises:
extracting a first feature point set representing a first chin of the target object from the face feature points according to the feature point types of all the feature points in the face feature points;
performing curve fitting processing on each feature point in the first feature point set according to the position of each feature point in the first feature point set to obtain a contour line of a first chin of the target object;
extending a preset distance to the neck direction of the target object by taking the contour line of the first chin as a reference to obtain the region to be modified;
wherein the neck direction of the target object is judged as follows:
judging the neck direction of the target object according to the relative positions of each type of feature point among the face feature points;
wherein the step of detecting whether a second chin exists in the region to be modified according to the contour line of the first chin comprises:
performing edge extraction processing on the region to be modified;
if an edge line is extracted from the region to be modified, calculating a first curvature of the edge line;
calculating a curvature difference value between the first curvature and a second curvature corresponding to the contour line of the first chin;
and judging whether the curvature difference value is within a preset difference range; if so, determining that a second chin exists in the region to be modified, and determining the edge line corresponding to the first curvature as the contour line of the second chin of the target object.
2. The method of claim 1, wherein the step of obtaining image data of the face of the target object comprises:
acquiring a preview frame image through image acquisition equipment;
performing face detection on the preview frame image through a preset face detection model;
and if the human face is detected to exist in the preview frame image, acquiring image data of the face of the target object through the image acquisition equipment.
3. The method of claim 1, wherein the step of detecting the facial feature points of the target object from the image data comprises: detecting a face feature point of the target object from the image data through a feature point detection model obtained through pre-training;
the feature point detection model is obtained by training in the following mode:
acquiring a training sample set; the training sample set comprises a set number of face images; the face image carries the labeling information of the face characteristic points; the labeling information comprises the position of the face characteristic point and the type of the characteristic point;
dividing a training subset and a verification subset from the training sample set according to a first division ratio;
building an initial neural network model and setting initial training parameters;
training the neural network model through the training subset and the training parameters, and verifying the trained neural network model through the verification subset;
if the verification result does not meet the preset precision threshold value, adjusting the training parameters according to the verification result;
and continuing to train the neural network model through the training subset and the adjusted training parameters until the verification result of the neural network model meets the precision threshold value, so as to obtain a feature point detection model.
4. The method according to claim 1, wherein the step of performing modification processing on the region to be modified to obtain the processed image data comprises:
calculating normal data of a specified position on the contour line of the first chin;
carrying out local deformation processing on the region to be modified along the direction pointed by the normal data so as to hide the contour line of the second chin and obtain the deformed region to be modified;
and performing uniform blurring on the deformed region to be modified to obtain the processed image data.
5. The method according to claim 4, wherein before the step of performing uniform blurring on the deformed region to be modified to obtain the processed image data, the method further comprises: performing feathering on the region to be modified to obtain the feathered region to be modified.
6. A device for modifying double chin, the device comprising:
a data acquisition module for acquiring image data of a face of a target object;
a feature point detection module for detecting a face feature point of the target object from the image data; the face feature points at least comprise: eyebrow contour points, eye contour points, nose contour points, upper lip contour points, lower lip contour points, chin contour points; the face feature points further include: the position of each feature point and the feature point type of each feature point;
a line region determining module for determining a contour line of a first chin of the target object and a region to be modified according to the face feature points;
a modification processing module for detecting whether a second chin exists in the region to be modified according to the contour line of the first chin and, if so, modifying the region to be modified to obtain the processed image data;
the line region determination module is further configured to:
extracting a first feature point set representing a first chin of the target object from the face feature points according to the feature point types of all the feature points in the face feature points;
performing curve fitting processing on each feature point in the first feature point set according to the position of each feature point in the first feature point set to obtain a contour line of a first chin of the target object;
extending a preset distance to the neck direction of the target object by taking the contour line of the first chin as a reference to obtain the region to be modified;
wherein the line region determining module comprises a neck direction determining unit for judging the neck direction of the target object according to the relative positions of each type of feature point among the face feature points;
the modification processing module is further configured to:
performing edge extraction processing on the region to be modified;
if an edge line is extracted from the region to be modified, calculating a first curvature of the edge line;
calculating a curvature difference value between the first curvature and a second curvature corresponding to the contour line of the first chin;
and judging whether the curvature difference value is within a preset difference range; if so, determining that a second chin exists in the region to be modified, and determining the edge line corresponding to the first curvature as the contour line of the second chin of the target object.
7. A system for modifying a double chin, the system comprising: the device comprises an image acquisition device, a processing device and a storage device;
the image acquisition equipment is used for acquiring a preview frame image or image data;
the storage means having stored thereon a computer program which, when executed by the processing apparatus, performs the method of any of claims 1 to 5.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processing device, carries out the steps of the method of any one of the preceding claims 1 to 5.
CN201811207299.2A 2018-10-16 2018-10-16 Method, device and system for modifying double chin Active CN109410138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811207299.2A CN109410138B (en) 2018-10-16 2018-10-16 Method, device and system for modifying double chin

Publications (2)

Publication Number Publication Date
CN109410138A CN109410138A (en) 2019-03-01
CN109410138B true CN109410138B (en) 2021-10-01

Family

ID=65468211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811207299.2A Active CN109410138B (en) 2018-10-16 2018-10-16 Method, device and system for modifying double chin

Country Status (1)

Country Link
CN (1) CN109410138B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111240A (en) * 2019-04-30 2019-08-09 北京市商汤科技开发有限公司 A kind of image processing method based on strong structure, device and storage medium
CN113012031A (en) * 2020-10-30 2021-06-22 北京达佳互联信息技术有限公司 Image processing method and image processing apparatus
CN113781355B (en) * 2021-09-18 2024-05-03 厦门美图之家科技有限公司 Method, device, equipment and storage medium for modifying double chin in image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894446A (en) * 2016-05-09 2016-08-24 西安北升信息科技有限公司 Automatic face outline modification method for video

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510255A (en) * 2009-03-30 2009-08-19 北京中星微电子有限公司 Method for identifying and positioning human face, apparatus and video processing chip
US8687039B2 (en) * 2011-06-06 2014-04-01 Cisco Technology, Inc. Diminishing an appearance of a double chin in video communications
CN102592260B (en) * 2011-12-26 2013-09-25 广州商景网络科技有限公司 Certificate image cutting method and system
CN107844748B (en) * 2017-10-17 2019-02-05 平安科技(深圳)有限公司 Auth method, device, storage medium and computer equipment
CN107862673B (en) * 2017-10-31 2021-08-24 北京小米移动软件有限公司 Image processing method and device

Also Published As

Publication number Publication date
CN109410138A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN109325437B (en) Image processing method, device and system
US11775056B2 (en) System and method using machine learning for iris tracking, measurement, and simulation
US10616475B2 (en) Photo-taking prompting method and apparatus, an apparatus and non-volatile computer storage medium
CN105335722B (en) Detection system and method based on depth image information
CN108229369B (en) Image shooting method and device, storage medium and electronic equipment
EP3323249B1 (en) Three dimensional content generating apparatus and three dimensional content generating method thereof
CN106056064B (en) A kind of face identification method and face identification device
JP4950787B2 (en) Image processing apparatus and method
CN109934847B (en) Method and device for estimating posture of weak texture three-dimensional object
CN108463823B (en) Reconstruction method and device of user hair model and terminal
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
EP2608108A1 (en) Face feature vector construction
US20210334998A1 (en) Image processing method, apparatus, device and medium for locating center of target object region
CN109410138B (en) Method, device and system for modifying double chin
US20180357819A1 (en) Method for generating a set of annotated images
WO2016089529A1 (en) Technologies for learning body part geometry for use in biometric authentication
WO2016107638A1 (en) An image face processing method and apparatus
CN111586424B (en) Video live broadcast method and device for realizing multi-dimensional dynamic display of cosmetics
WO2019011073A1 (en) Human face live detection method and related product
CN112633221A (en) Face direction detection method and related device
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
CN109711287B (en) Face acquisition method and related product
CN107368817A (en) Face identification method and device
KR20160046399A (en) Method and Apparatus for Generation Texture Map, and Database Generation Method
US11361467B2 (en) Pose selection and animation of characters using video data and training techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant