CN113591562A - Image processing method, image processing device, electronic equipment and computer readable storage medium

Info

Publication number: CN113591562A
Application number: CN202110701338.XA
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 辛琪 (Xin Qi), 孙宇超 (Sun Yuchao), 魏文 (Wei Wen), 姚聪 (Yao Cong)
Applicant/Assignee: Beijing Kuangshi Technology Co Ltd; Beijing Megvii Technology Co Ltd
Legal status: Pending
Related PCT application: PCT/CN2022/087744 (WO2022267653A1)
Prior art keywords: face, point, pixel, information, moving

Classifications

    • G06F18/214 (Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting)
    • G06N3/045 (Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks)
    • G06N3/08 (Computing arrangements based on biological models; Neural networks; Learning methods)


Abstract

The invention relates to an image processing method, an image processing device, electronic equipment and a computer readable storage medium, and belongs to the field of image processing. When face thinning needs to be performed on an image to be processed, the face key point information of the image is first acquired. According to this information, a moving reference point used when the pixel points in the face region are moved is determined, and the face region is divided into different local regions. During face thinning, the pixel points in each local region are moved toward the moving reference point according to the pixel movement policy parameters corresponding to the local region they belong to. The resulting face-thinning effect is therefore as natural as possible, which further improves the visual effect presented after face thinning.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present application belongs to the field of image processing, and in particular, relates to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the growth of the entertainment industry, face beautification and face thinning technology is used in application scenarios such as video conferencing, live streaming, photographing, and photo editing.
In existing face beautification and thinning techniques, a face contour template is usually preset by a technician, and the face contour in the acquired image to be processed is then moved a fixed distance according to the fixed parameters provided by the template. Although this operation can give the image a face-thinning effect, it optimizes the face curve of every image toward a uniform, pointed face shape, so the face presents an unnatural visual effect and may even appear distorted.
Disclosure of Invention
In view of the above, an object of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, which can improve the visual effect of an image after face thinning processing.
The embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method includes: acquiring face key point information of an image to be processed, wherein the image to be processed comprises a face area; determining a moving reference point used when the pixel points in the face region are moved according to the face key point information; dividing the face area into a plurality of local areas according to the face key point information; and for each local area, moving each pixel point in the local area toward the moving reference point based on the pixel movement policy parameter corresponding to the local area.
With reference to the embodiment of the first aspect, in a possible implementation manner, the pixel movement policy parameter includes a pixel position adjustment ratio; correspondingly, the moving each pixel point in the local region towards the direction of the moving reference point based on the pixel movement policy parameter corresponding to the local region includes: determining a first distance between the pixel point and the moving reference point for each pixel point in the local area; determining a second distance between the pixel point and the moving reference point based on the first distance and the corresponding pixel position adjustment ratio; and moving the pixel point towards the direction of the moving reference point, so that the distance between the pixel point and the moving reference point after the movement is equal to the second distance.
With reference to the embodiment of the first aspect, in a possible implementation manner, the determining a second distance between the pixel point and the moving reference point based on the first distance and the corresponding pixel position adjustment ratio includes: determining a first product value of the first distance and a pixel position adjustment proportion corresponding to a local area to which the pixel point belongs; determining the first product value as the second distance.
With reference to the embodiment of the first aspect, in a possible implementation manner, after moving each pixel point in the local region toward the direction of the moving reference point based on the pixel movement policy parameter corresponding to the local region, the method further includes: responding to a face adjusting instruction triggered by a user, and acquiring position information before movement and position information after movement of each pixel point in the face area; the face adjustment instruction carries a face thinning intensity coefficient; determining target position information corresponding to each pixel point which is moved in the face area based on the face thinning strength coefficient, the position information before the movement and the position information after the movement; and moving each pixel point which is moved in the face area to the target position information.
With reference to the embodiment of the first aspect, in a possible implementation manner, the position information is characterized by coordinate information; determining target position information corresponding to each pixel point which is moved in the face region based on the face thinning strength coefficient, the position information before moving and the position information after moving, including: determining a first coordinate difference value between second coordinate information and first coordinate information aiming at each pixel point which is moved in the face area; the position information before movement is represented by the first coordinate information, and the position information after movement is represented by the second coordinate information; calculating a second product value between the face-thinning intensity coefficient and the first coordinate difference value; determining a sum of the first coordinate information and the second product value as the target position information.
With reference to the embodiment of the first aspect, in a possible implementation manner, the determining, according to the face key point information, a moving reference point when a pixel point in the face region is moved includes: determining a face key point corresponding to the center position of the two eyes in the face region, and determining the face key point corresponding to the center position of the two eyes as the moving reference point;
or determining a face key point corresponding to the nose tip position in the face region, and determining the face key point corresponding to the nose tip position as the moving reference point.
With reference to the embodiment of the first aspect, in a possible implementation manner, the determining, according to the face key point information, a moving reference point when a pixel point in the face region is moved includes: determining a reference line based on the face key points on the central line in the vertical direction of the face region; for each pixel point in the face region, determining a face key point with the minimum distance between the reference line and the pixel point as a moving reference point corresponding to the pixel point; correspondingly, the moving each pixel point in the local area towards the direction of the moving reference point includes: and moving each pixel point in the local area towards the direction of the corresponding moving reference point.
With reference to the embodiment of the first aspect, in a possible implementation manner, the face keypoint information includes identification information of a face keypoint, and the dividing the face region into a plurality of local regions according to the face keypoint information includes: determining a face key point set corresponding to each face organ according to the corresponding relation between the acquired identification information of the face key points and the face organs; and determining a region surrounded by the face key points in each face key point set as a local region.
With reference to the embodiment of the first aspect, in a possible implementation manner, the adjustment ratios of the pixel positions corresponding to different local areas are different.
With reference to the embodiment of the first aspect, in a possible implementation manner, the acquiring face key point information of an image to be processed includes: inputting the image to be processed into a face key point detection model, and carrying out face key point detection on the image to be processed through the face key point detection model; and acquiring the face key point information output by the face key point detection model.
With reference to the embodiment of the first aspect, in a possible implementation manner, the face key point information includes coordinate information of a face positioning frame and coordinate information of the face key point; before determining a moving reference point when moving a pixel point in the face region according to the face key point information, the method further includes:
and carrying out normalization processing on the coordinate information of the face key points based on the coordinate information of the face positioning frame to obtain the normalized coordinate information of the face key points.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: the device comprises an acquisition module, a determination module, a division module and an adjustment module.
The acquisition module is used for acquiring face key point information of an image to be processed, wherein the image to be processed comprises a face area; the determining module is used for determining a moving reference point when the pixel points in the face area are moved according to the face key point information; the dividing module is used for dividing the face area into a plurality of local areas according to the face key point information; and the adjusting module is used for moving each pixel point in each local area toward the moving reference point based on the pixel movement policy parameter corresponding to the local area.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a memory and a processor, the memory and the processor connected; the memory is used for storing programs; the processor calls a program stored in the memory to perform the method of the first aspect embodiment and/or any possible implementation manner of the first aspect embodiment.
In a fourth aspect, the present application further provides a non-transitory computer-readable storage medium (hereinafter, referred to as a computer-readable storage medium), on which a computer program is stored, where the computer program is executed by a computer to perform the method in the foregoing first aspect and/or any possible implementation manner of the first aspect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments are briefly described below. The drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort. The foregoing and other objects, features, and advantages of the application will be apparent from the accompanying drawings. Like reference numerals refer to like parts throughout the drawings. The drawings are not drawn to scale; emphasis is placed instead on illustrating the subject matter of the present application.
Fig. 1 shows a flowchart of an image processing method provided in an embodiment of the present application.
Fig. 2 shows a block diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 3 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Reference numerals: 100-an electronic device; 110-a processor; 120-a memory; 130-display screen; 400-an image processing device; 410-an obtaining module; 420-a determination module; 430-a partitioning module; 440-adjustment module.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, relational terms such as "first" and "second" may be used in this description solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Further, the term "and/or" in the present application merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
In recent years, technical research based on artificial intelligence, such as computer vision, deep learning, machine learning, image processing, and image recognition, has developed actively. Artificial Intelligence (AI) is an emerging science and technology that studies and develops theories, methods, techniques, and application systems for simulating and extending human intelligence. AI is a comprehensive discipline involving many technical categories, such as chips, big data, cloud computing, the Internet of Things, distributed storage, deep learning, machine learning, and neural networks. Computer vision, an important branch of AI, uses machines to perceive and recognize the world. Computer vision technology generally includes face recognition, liveness detection, fingerprint recognition and anti-counterfeiting verification, biometric recognition, face detection, pedestrian detection, object detection, pedestrian recognition, image processing, image recognition, image semantic understanding, image retrieval, character recognition, video processing, video content recognition, behavior recognition, three-dimensional reconstruction, virtual reality, augmented reality, simultaneous localization and mapping (SLAM), computational photography, and robot navigation and localization. With the research and progress of AI technology, it has been applied to many fields, such as security, city management, traffic management, building management, park management, face-based access, face-based attendance, logistics management, warehouse management, robots, intelligent marketing, computational photography, mobile phone imaging, cloud services, smart homes, wearable devices, unmanned and autonomous driving, smart healthcare, face payment, face unlocking, fingerprint unlocking, identity verification, smart screens, smart televisions, cameras, the mobile Internet, live webcasts, beauty applications, medical aesthetics, and intelligent temperature measurement.
With the growth of the entertainment industry, the demand for face beautification and face thinning is increasing in many fields, but current face thinning technology suffers from facial deformation and distortion and cannot meet users' requirements.
In order to solve the above problem, embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, which can improve the visual effect of an image after face thinning processing. In addition, the defects of the prior-art face thinning technology (an unnatural visual effect, and even facial distortion) were identified by the applicant after careful practice and study; the discovery of these defects and the solution proposed below for them should therefore be regarded as the applicant's contribution to the present application.
The technology can be realized by adopting corresponding software, hardware and a combination of software and hardware. The following describes embodiments of the present application in detail.
First, an embodiment of the present application provides an image processing method, which is used for performing face thinning processing on an image to be processed including a face region. Referring to fig. 1, the method may include the following steps:
step S110: the method comprises the steps of obtaining face key point information of an image to be processed, wherein the image to be processed comprises a face area.
Step S120: determining a moving reference point when a pixel point in the face region is moved according to the face key point information;
step S130: dividing the face area into a plurality of local areas according to the face key point information;
step S140: and for each local area, moving each pixel point in the local area towards the direction of the moving reference point based on the pixel moving strategy parameter corresponding to the local area.
In the embodiment of the present application, the image to be processed is not adjusted by the fixed distance specified by a face contour template. Instead, the face region is divided into different local regions, and each local region has its own pixel movement policy parameters. During adjustment, the pixel points in each local region are moved toward the moving reference point according to the parameters corresponding to the local region they belong to, so the final face-thinning effect is as natural as possible, which further improves the visual effect presented after face thinning.
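For illustration only, the four steps above can be sketched as the following Python pipeline. The detector callback, the two helper callables, the key point id 63, and the ratio values are assumptions introduced for readability, not details fixed by the embodiment:

```python
import numpy as np

def face_thinning_pipeline(image, detect_keypoints, divide_regions, warp_pixels):
    # Step S110: acquire face key point information of the image to be
    # processed (detect_keypoints is an assumed external detector).
    keypoint_info = detect_keypoints(image)          # {id: (x, y)}

    # Step S120: determine the moving reference point; here it is assumed
    # to be the key point at the center of the two eyes (id 63 below).
    ref_point = np.asarray(keypoint_info[63], dtype=float)

    # Step S130: divide the face area into local regions according to the
    # key point information (divide_regions is an assumed helper that
    # returns {region_name: (N, 2) array of pixel coordinates}).
    local_regions = divide_regions(keypoint_info)

    # Step S140: per local region, move each pixel toward the moving
    # reference point by that region's pixel position adjustment ratio T,
    # so that the new distance to the reference point becomes d' = d * T.
    region_ratios = {"face_contour": 0.95, "mouth": 0.99}   # illustrative values
    for name, pixels in local_regions.items():
        t = region_ratios.get(name, 1.0)             # T = 1.0: no movement
        targets = ref_point + t * (pixels - ref_point)
        warp_pixels(image, pixels, targets)          # assumed warping helper
    return image
```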
The following will explain each step in fig. 1 in detail.
Step S110: the method comprises the steps of obtaining face key point information of an image to be processed, wherein the image to be processed comprises a face area.
The image processing method provided by the embodiment of the application can be used for processing the image to be processed in real time and can also be used for post-processing, namely non-real-time processing, of the image to be processed.
When the image to be processed is processed in real time, the image processing method is applicable to application scenarios such as live video streaming, video conferencing, and portrait shooting. In this case, the image to be processed can be determined from the pictures and/or the video stream captured by the camera in real time.
When post-processing is performed on the image to be processed, the image processing method is applicable to image editing application scenarios. In this case, the image to be processed may be determined from a previously downloaded picture and/or video stream, or from a picture and/or video stream previously captured by a camera.
Of course, the camera may be a built-in component of the electronic device that executes or calls the image processing method, or an external component connected to it. In an optional implementation of step S110, the face key point information may be obtained for the image to be processed from a third-party application, a piece of software, a face key point detection model, or another device with a face key point detection function; alternatively, it may be produced by performing face key point detection on the image to be processed through a face key point model. That is, when the method provided in the embodiment of the present application is executed, the original input may be either the face key point information itself or an image to be processed containing a face region; the specific implementation can be chosen according to the actual application scenario, which is not limited by the embodiment of the present application.
In an optional implementation, the image processing method provided in this embodiment of the present application also includes performing face key point detection on the image to be processed. That is, the original input acquired by the electronic device executing the image processing method is an image to be processed containing a face region, and the acquired image is then input to a face key point detection model with a face key point detection function to obtain the face key point information.
In this embodiment, in order to enable the face keypoint detection model to detect face keypoints, the model needs to be trained in advance. Specifically, before images are processed with the method provided by the embodiment of the present application, the face keypoint detection model may be trained as follows.
A large number of pictures containing face regions are obtained, and each picture is labeled, so that a training set S comprising a plurality of samples is formed.
For the i-th sample x_i in the training set S, assume that its corresponding label is y_i. Then y_i may include the position information G of each face key point in x_i, where G = [(a_i1, b_i1), (a_i2, b_i2), ..., (a_in, b_in)], n is the identification information of a face key point, such as a number or an ID (Identity Document), and (a_in, b_in) is the coordinate information of the face key point labeled n in the i-th sample x_i.
It is worth pointing out that, in the embodiment of the present application, the encoding rule for the identification information of the face key points in each sample is set in advance, so that face key points with the same identification information have the same meaning across different samples, and the identification information of the face key points belonging to a specific local area or a specific facial organ of the face is confined to the identification information range corresponding to that area or organ.
For example, in some embodiments, the identification information is a number of face key points, and when 81 face key points need to be labeled for each face, the preset identification information encoding rule may be: taking the forehead above the eyes including the eyes as an area, wherein the number range of key points of the face belonging to the area is 1-20; taking a chin area below a mouth including the mouth as an area, wherein the number range of key points of the face belonging to the area is 21-40; taking the left face as an area, wherein the number range of key points of the face belonging to the area is 41-55; taking the right face as an area, wherein the number range of key points of the face belonging to the area is 56-70; the hairline is taken as a region, and the number range of the key points of the face belonging to the region is 71-81.
For another example, in some embodiments, the identification information is a number of face key points, and when 81 face key points need to be labeled for each face, the preset identification information encoding rule may be: the number range of face key points belonging to the face contour in the face organ is 1-20, the number range of face key points belonging to the mouth in the face organ is 21-40, the number range of face key points belonging to the nose in the face organ is 41-55, the number range of face key points belonging to the eyes in the face organ is 56-70, and the number range of face key points belonging to the eyebrow in the face organ is 71-81.
Of course, the above identification information encoding rule is merely an example, and it can be understood that in other embodiments, other similar schemes may also be adopted for the identification information encoding rule.
After the labeling is finished, training a deep learning model through a training set S, wherein the training process is as follows: and inputting each sample picture in the training set S into the deep learning model, obtaining corresponding output (the face key points and the coordinate information thereof of the sample pictures), and enabling the deep learning model to automatically learn the internal association between the sample pictures and the output, thereby obtaining the face key point detection model.
Generally speaking, in the labeling stage, N face key points need to be labeled for each sample, and the face key point information output by the subsequently trained face key point detection model for an input image to be processed includes N face key points with identification information and their coordinate information. For example, if 81 face key points are labeled for each sample in the labeling stage, then when the face key point detection model performs face key point detection on an input image to be processed, the output face key point information includes 81 face key points with identification information and their coordinate information.
Further, in some embodiments, during the annotation phase, the label y_i may also include the information K = (u_i, v_i, m, f) of the face positioning frame in sample x_i, where (u_i, v_i) is the coordinate information of the face positioning frame, generally the coordinates of one of its vertices (e.g., the point at the lower-left corner), and (m, f) respectively represent the width and height of the face positioning frame.
It should be noted that the coordinate information of the lower-left corner of the face positioning frame and the coordinate information of each face key point belong to the same rectangular coordinate system (called the first coordinate system for ease of distinction). The first coordinate system generally takes one vertex of sample x_i (e.g., the point at the lower-left corner) as the origin, with the two edges meeting at that vertex as the X-axis and the Y-axis, respectively.
In this embodiment, after the face keypoint detection model obtained by training performs face keypoint detection on the image to be processed, the output face keypoint information may include, in addition to each face keypoint with identification information and coordinate information thereof included in the image to be processed, information of a face positioning frame included in the image to be processed, that is, the output face keypoint information includes G and K.
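As a concrete illustration, one training label y_i combining G and K might be stored as below; the field names and the numeric values are assumptions of this sketch, not part of the embodiment:

```python
# Hypothetical label for sample x_i: key point coordinates G plus the
# face positioning frame K = (u_i, v_i, m, f).
label_yi = {
    "G": [                # (a_in, b_in): coordinates of the key point numbered n
        (102.0, 215.0),   # key point n = 1
        (110.0, 230.0),   # key point n = 2
        # ... up to key point n = N (e.g. N = 81)
    ],
    "K": {
        "u": 80.0, "v": 150.0,    # (u_i, v_i): lower-left vertex of the frame
        "m": 160.0, "f": 200.0,   # width m and height f of the frame
    },
}
```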
After the face key point detection model is obtained, the obtained image to be processed can be input into the face key point detection model, face key point detection is carried out on the image to be processed through the face key point detection model, and then face key point information output by the face key point detection model is obtained.
In some embodiments, the acquired image to be processed may be a face image only including a face region, or may be a large image including the face region and other regions of the human body.
In general, when the image to be processed is a large image, the subsequent face thinning processing process may be directly based on the large image, and the face area for the large image is the processing object.
In addition, because the face thinning processing is mainly performed on the face region, in some embodiments, when the image to be processed is a large image, and the face key point information obtained after the image to be processed is input to the face key point detection model includes information of the face positioning frame, the face image corresponding to the face positioning frame may be further extracted from the image to be processed (i.e., the large image) according to the obtained information of the face positioning frame, so that the face region of the face image may be subsequently used as a processing object of the subsequent face thinning processing directly on the basis of the face image without processing the rest regions of the large image.
It can be understood that the data size of the face image including the same face area is smaller than that of the image to be processed, so that when the face image is taken as a processing object, the time delay generated in the face thinning processing process is favorably reduced.
Of course, since the coordinate origin of the coordinate system (the first coordinate system) in which the coordinate information output by the face key point detection model is expressed is a vertex of the large image, when the face image is taken as the processing object, the coordinate origin most likely lies outside the face image.
In order to facilitate operation only for the pixel points in the face region in the face image, in some embodiments, normalization processing may be performed on various coordinate information output by the face key point detection model, so that the coordinate information of the face key point output by the face key point detection model is converted into new coordinate information in the intercepted face region. In a specific implementation, the normalization process may be performed before the step of determining, according to the face key point information, a moving reference point when the pixel point in the face region is moved in step S120. Of course, the normalization process may be performed before the step of dividing the face region into a plurality of local regions according to the face key point information in the step S130. The embodiment of the present application does not limit the specific execution process of the normalization processing operation.
In specific implementation, the coordinate information of the key points of the face can be normalized based on the coordinate information of the face positioning frame, so as to obtain the normalized coordinate information of the key points of the face.
In this way, the subsequent steps of the method provided by the embodiment of the application are executed based on the normalized coordinate information of the face key points. That is, the coordinates of the face key points are converted into coordinates within the face region of the intercepted face image. The face thinning operation then only needs to operate on the pixel points inside the face region, while the pixel points outside the face region remain unchanged; after the face thinning operation is completed, the face region is redrawn from the adjusted pixel points and replaces the original face region.
The origin of the coordinate system (first coordinate system) corresponding to the coordinate information before the normalization operation is one of the vertexes of the image to be processed, and the origin of the coordinate system (second coordinate system) corresponding to the coordinate information after the normalization operation is one of the vertexes of the face positioning frame referred to by the normalization operation.
Correspondingly, when the face image is taken as a processing object, the processing can be carried out based on the normalized face key point information.
The following will describe the process of normalization conversion.
Optionally, a coordinate difference between the coordinate information of the face positioning frame and the coordinate information of the face key point may be calculated, and then the coordinate difference is used to update and replace the original coordinate information of the face key point, so as to obtain normalized face key point information.
Specifically, assume that the information of the face positioning frame in the i-th image to be processed is (u_i, v_i, m, f) and the coordinate information of the face key points is ((a_i1, b_i1), (a_i2, b_i2), ..., (a_in, b_in)); the coordinate information of the face positioning frame is then (u_i, v_i). After the normalization conversion, the coordinate information of each face key point included in the face image is ((a_i1 - u_i, b_i1 - v_i), (a_i2 - u_i, b_i2 - v_i), ..., (a_in - u_i, b_in - v_i)); at this time, the coordinate origin is the point indicated by the coordinate information of the face positioning frame.
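A minimal sketch of this normalization, assuming NumPy as the implementation vehicle (the function name is illustrative):

```python
import numpy as np

def normalize_keypoints(keypoints, box_origin):
    # Shift key point coordinates so that the face positioning frame's
    # vertex (u_i, v_i) becomes the origin of the second coordinate system.
    pts = np.asarray(keypoints, dtype=float)        # shape (N, 2): (a_in, b_in)
    origin = np.asarray(box_origin, dtype=float)    # (u_i, v_i)
    return pts - origin                             # (a_in - u_i, b_in - v_i)

# Example: frame vertex at (80, 150); the key point (102, 215)
# becomes (22, 65) in the normalized (second) coordinate system.
print(normalize_keypoints([(102.0, 215.0)], (80.0, 150.0)))
```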
Of course, it is understood that the foregoing embodiments corresponding to the normalization may be combined with any of the foregoing or following embodiments (the implementation of any of the embodiments needs not conflict with the implementation of the present embodiment), for example: the embodiment may be combined with the embodiment that corresponds to the direction movement of each pixel point in the local area toward the movement reference point based on the pixel movement policy parameter; the embodiment may be combined with an embodiment corresponding to the movement of each pixel point in the local area toward the direction of the moving reference point based on the pixel position adjustment ratio included in the pixel movement policy parameter, which will be described later; the embodiment may also be combined with an embodiment that corresponds to target position information corresponding to each pixel point that has moved in the face area, which is determined according to a face thinning intensity coefficient included in a face adjustment instruction triggered by a user and the acquired position information before and after the movement of each pixel point in the face area, and the like.
Step S120: and determining a moving reference point when the pixel points in the face area are moved according to the face key point information.
Wherein, the moving reference point is used for guiding the subsequent face thinning processing.
In the embodiment of the application, all pixel points in the whole face region can correspond to one and the same moving reference point; alternatively, different pixel points in the face region can correspond to different moving reference points.
As mentioned above, the meaning of the face key points corresponding to the respective identification information is determined in advance.
In some embodiments, a face key point corresponding to a specific position (e.g., a center position of eyes or a tip position of nose, etc.) may be determined from the face region according to the identification information of each face key point, and the face key point may be determined as a moving reference point.
In a specific implementation, the step S120: determining a moving reference point when a pixel point in a face region is moved according to the face key point information, wherein the moving reference point at least comprises the following two implementation modes:
determining a face key point corresponding to the center position of the two eyes in the face region, and determining the face key point corresponding to the center position of the two eyes as the moving reference point; or determining a face key point corresponding to the nose tip position in the face region, and determining the face key point corresponding to the nose tip position as the moving reference point.
Of course, the specific positions are only exemplified as the center positions of the eyes and the tip of the nose, and besides, the specific positions may be other positions, such as the middle point of the connecting line between the center positions of the eyes and the tip of the nose, and the like; the embodiments of the present application are not described in detail.
For example, in one embodiment, if the predefined number range of the face key points belonging to the eye region is 56-70, the face key point with number 63 is used to characterize the center position of the eyes, and then the key point of the face with number 63 can be determined as the moving reference point.
In this embodiment, the same moving reference points corresponding to the respective pixel points in the face region are all face key points corresponding to the positions for representing the centers of the eyes in the face region.
Of course, it can be understood that the above-mentioned embodiment that uses the specific position as the moving reference point when the pixel point in the face region moves may be combined with any of the foregoing or following embodiments (the implementation of any of the embodiments needs not to conflict with the implementation of the embodiment), for example: the embodiment may be combined with the embodiment that corresponds to the direction movement of each pixel point in the local area toward the movement reference point based on the pixel movement policy parameter; the embodiment may be combined with an embodiment corresponding to the movement of each pixel point in the local area toward the direction of the moving reference point based on the pixel position adjustment ratio included in the pixel movement policy parameter, which will be described later; the embodiment may also be combined with an embodiment that corresponds to target position information corresponding to each pixel point that has moved in the face area, which is determined according to a face thinning intensity coefficient included in a face adjustment instruction triggered by a user and the acquired position information before and after the movement of each pixel point in the face area, and the like.
In some embodiments, a reference line may also be determined, and for different pixel points in the face region, a moving reference point corresponding to the pixel point is determined from the reference line according to a set rule. Therefore, the above step S120: according to the face key point information, determining a moving reference point when a pixel point in a face area is moved, and the method can be realized through the following processes:
determining a reference line based on a face key point on a central line in the vertical direction of the face region; and aiming at each pixel point in the face area, determining a face key point with the minimum distance between the reference line and the pixel point as a moving reference point corresponding to the pixel point.
In a specific embodiment, the face key point corresponding to the center position of the two eyes on the center line may be used as the first face key point, the face key point corresponding to the center position of the chin on the center line may be used as the second face key point, and the line segment formed by the first face key point and the second face key point may be determined as the reference line.
For example, in one embodiment, it is assumed that the number range of the face key points belonging to the eye region is 56-70, the number range of the face key points belonging to the face contour region is 1-20, the key point of the face with number 63 is the center of both eyes, and the key point of the face with number 10 is the center of chin. At this time, a connection line between the face key point numbered 10 and the face key point numbered 63 is determined as a reference line.
Of course, in another specific embodiment, the key point corresponding to the position of the eyebrow center on the center line may be used as the first face key point, the face key point corresponding to the position of the center of the lips on the center line may be used as the second face key point, and the line segment formed based on the first face key point and the second face key point may be determined as the reference line.
On the basis of the obtained reference line, for each pixel point in the face region, the point on the reference line with the minimum distance to the pixel point is determined as the moving reference point corresponding to that pixel point.
In this embodiment, the moving reference points corresponding to the pixels in the face region are different, that is, the pixels in the face region have corresponding moving reference points.
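One plausible reading of "the point on the reference line with the minimum distance to the pixel point" is a perpendicular projection clamped to the segment; the following sketch works under that assumption, and the geometry helper is not taken from the embodiment:

```python
import numpy as np

def moving_reference_points(pixels, line_a, line_b):
    # For each pixel, return the point on the segment [line_a, line_b]
    # (e.g. eye-center key point to chin-center key point) closest to it.
    p = np.asarray(pixels, dtype=float)              # (N, 2)
    a = np.asarray(line_a, dtype=float)
    b = np.asarray(line_b, dtype=float)
    ab = b - a
    t = ((p - a) @ ab) / (ab @ ab)                   # projection parameter
    t = np.clip(t, 0.0, 1.0)                         # stay on the segment
    return a + t[:, None] * ab                       # one reference point per pixel

# Pixels on either cheek project onto the vertical center line:
refs = moving_reference_points([(40.0, 70.0), (120.0, 70.0)],
                               line_a=(80.0, 40.0), line_b=(80.0, 180.0))
print(refs)   # both map to (80, 70)
```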
Of course, it can be understood that the above-mentioned embodiment of taking the face key point on the reference line as the moving reference point when the pixel point in the face region moves may be combined with any embodiment (implementation of any embodiment needs not to conflict with implementation of this embodiment) in the foregoing or following, for example: the embodiment may be combined with the embodiment that corresponds to the direction movement of each pixel point in the local area toward the movement reference point based on the pixel movement policy parameter; the embodiment may be combined with an embodiment corresponding to the movement of each pixel point in the local area toward the direction of the moving reference point based on the pixel position adjustment ratio included in the pixel movement policy parameter, which will be described later; the embodiment may also be combined with an embodiment that corresponds to target position information corresponding to each pixel point that has moved in the face area, which is determined according to a face thinning intensity coefficient included in a face adjustment instruction triggered by a user and the acquired position information before and after the movement of each pixel point in the face area, and the like.
Step S130: and dividing the face area into a plurality of local areas according to the face key point information.
In the foregoing, the face key point information includes identification information of a face key point, and correspondingly, in step S130, according to the face key point information, the dividing the face region into a plurality of local regions specifically includes:
determining a face key point set corresponding to each face organ according to the corresponding relation between the acquired identification information of the face key points and the face organs; and determining a region surrounded by the face key points in each face key point set as a local region.
The above human face organs can be eyes, nose, mouth, eyebrows, face, etc.
Of course, in some other embodiments, the division of the partial region may not be performed according to the region corresponding to the above-mentioned human face organ, or may be performed according to other methods, for example, the forehead region above the eyes including the eyes is regarded as one region, the chin region below the mouth including the mouth is regarded as one region, the left face region (including a part of the nose) located between the forehead region and the chin region is regarded as one region, and the right face region (including a part of the nose) located between the forehead region and the lower chin region is regarded as one region; of course, in specific implementation, the region may be divided in other manners, and the embodiments of the present application are not described in detail again.
For ease of understanding, the following description will be given by way of example.
For example, dividing the face key points with the numbers belonging to the number range of 1-20 into a group, and determining that the face key points included in the group belong to the face contour region; dividing the face key points with the numbers belonging to the number range 21-40 into a group, and determining that the face key points included in the group belong to the mouth area; dividing the face key points with numbers belonging to the number range of 41-55 into a group, and determining that the face key points included in the group belong to a nose area; dividing the face key points with the numbers belonging to the number range of 56-70 into a group, and determining that the face key points included in the group belong to the eye area; dividing the face key points with numbers belonging to the number ranges 71-81 into a group, and determining that the face key points included in the group belong to the eyebrow area.
After a plurality of groups are obtained, each group is a face key point set. And the area range surrounded by the coordinate information of each face key point in each face key point set is a local area.
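A sketch of this grouping, using the illustrative number ranges from the description (the region names and the dictionary layout are assumptions of this sketch):

```python
# Illustrative key point number ranges taken from the example encoding rule.
REGION_RANGES = {
    "face_contour": range(1, 21),
    "mouth":        range(21, 41),
    "nose":         range(41, 56),
    "eyes":         range(56, 71),
    "eyebrows":     range(71, 82),
}

def group_keypoints(keypoints):
    # keypoints: {number: (x, y)} -> {region_name: [(x, y), ...]}.
    # The area enclosed by each returned point set is one local region.
    regions = {name: [] for name in REGION_RANGES}
    for number, coord in keypoints.items():
        for name, ids in REGION_RANGES.items():
            if number in ids:
                regions[name].append(coord)
                break
    return regions
```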
Of course, it can be understood that the face key point set corresponding to each face organ is determined according to the obtained correspondence between the identification information of the face key points and the face organs; the embodiment of determining the region surrounded by the face key points in each face key point set as a local region may be combined with any of the foregoing or following embodiments (the implementation of any of the foregoing embodiments needs not to conflict with the implementation of the present embodiment), for example: the embodiment may be combined with the embodiment that corresponds to the direction movement of each pixel point in the local area toward the movement reference point based on the pixel movement policy parameter; the embodiment may be combined with an embodiment corresponding to the movement of each pixel point in the local area toward the direction of the moving reference point based on the pixel position adjustment ratio included in the pixel movement policy parameter, which will be described later; the embodiment may also be combined with an embodiment that corresponds to target position information corresponding to each pixel point that has moved in the face area, which is determined according to a face thinning intensity coefficient included in a face adjustment instruction triggered by a user and the acquired position information before and after the movement of each pixel point in the face area, and the like.
Step S140: and for each local area, moving each pixel point in the local area towards the direction of the moving reference point based on the pixel moving strategy parameter corresponding to the local area.
The face thinning operation is then performed, that is, pixel adjustment is performed on each of the local regions into which the face region requiring face thinning has been divided.
As mentioned above, each local area has a corresponding pixel movement policy parameter, and the pixel movement policy parameters corresponding to different local areas are different.
In some embodiments, the pixel movement policy parameter is the pixel position adjustment ratio. In this embodiment, each pixel point included in each local area may be moved toward the moving reference point to achieve the face-thinning effect, and the degree of adjustment is determined by the pixel position adjustment ratio corresponding to the local area where the pixel point is located.
In this embodiment, when each pixel point in each local region is moved toward the direction of the moving reference point, a first distance d between the pixel point and the moving reference point may be determined for each pixel point in each local region; then, based on the first distance d and the pixel position adjustment proportion T corresponding to the local area where the pixel point is located, determining a second distance d' between the pixel point and the moving reference point; and then moving the pixel point towards the direction of the moving reference point, so that the distance between the moved pixel point and the moving reference point is equal to the second distance.
In some embodiments, a first product value of the first distance d and the pixel position adjustment ratio T corresponding to the local region to which the pixel point belongs may be calculated, and the first product value may be determined as the second distance d'.
For example, the above procedure can be implemented based on the formula d' = d × T, where T ∈ [0, 1].
Take the case where the moving reference point of all pixel points in the face region is the same, namely the face key point 63 at the center position of both eyes, and assume that the pixel position adjustment ratio corresponding to the face contour region is T1 and that corresponding to the mouth region is T2. During pixel movement, each pixel point in the face contour region is moved toward the moving reference point represented by face key point 63 until its distance to the key point equals the product of its original distance d to face key point 63 and T1; likewise, each pixel point in the mouth contour region is moved toward the moving reference point represented by face key point 63 until its distance to the key point equals the product of its original distance d and T2.
Of course, it can be understood that this specific example also involves other local regions, not enumerated here, with their own corresponding pixel position adjustment ratios; their pixel points are moved toward the moving reference point represented by face key point 63 in the same manner as described above.
In addition, it should be noted that, when the moving reference points corresponding to the pixel points in the face region differ, each pixel point moves toward its own corresponding moving reference point during the pixel movement process.
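A sketch combining the ratio rule d' = d × T with per-pixel moving reference points (the helper name and the example values are assumptions):

```python
import numpy as np

def move_toward_references(pixels, ref_points, t):
    # Move each pixel P toward its own moving reference point R so that
    # the new distance is d' = d * T; in vector form, P' = R + T * (P - R).
    p = np.asarray(pixels, dtype=float)
    r = np.asarray(ref_points, dtype=float)          # same shape as pixels
    return r + t * (p - r)

# A face-contour pixel 40 units from its reference point, with T = 0.95,
# ends up 38 units away:
print(move_toward_references([(40.0, 70.0)], [(80.0, 70.0)], t=0.95))
# [[42. 70.]]
```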
In addition, in the embodiment of the present application, the pixel movement policy parameter may be the ratio between the distance from the pixel's post-move position to the moving reference point and the distance from its pre-move position to the moving reference point, so that when the pixel is moved, its target position information can be determined according to this ratio.
Of course, in some embodiments, the pixel movement policy parameter may instead be the distance the pixel moves, or some other parameter; these variants are not described in detail in this embodiment.
For the determination of the moving reference points corresponding to the pixels, please refer to the related description above, which is not repeated herein.
Of course, the adjustment ratios of the pixel positions corresponding to the local regions are independent of each other, and may be partially the same or completely different.
In the embodiment of the application, in order to ensure that a natural effect is exhibited after face thinning, background workers configure corresponding pixel position adjustment proportions for each local area of a human face in advance. The background staff can take different pixel position adjustment proportions for each local area to test, and observe corresponding visual effects, so that the optimal pixel position adjustment proportions corresponding to the local areas are determined and stored. In the actual use process, a user cannot directly adjust the pixel position adjustment proportion of each local area independently.
Of course, in some embodiments, the effect after the pixel moving operation may not reach a degree that satisfies the user. In this case, the user may trigger a custom adjustment through a virtual key or a physical key, thereby issuing a face adjustment instruction.
The face adjustment instruction comprises a face-thinning intensity coefficient k, whose magnitude can be adjusted by the user, so that the user can adjust the current face-thinning degree through k.
In this embodiment of the application, when the electronic device running the image processing method obtains and responds to a face adjustment instruction (which carries the face-thinning intensity coefficient k), it may acquire the position information before movement and the position information after movement of each pixel point that has moved in the face region of the processed image (which may be the to-be-processed image or a face image cropped from it). Based on the face-thinning intensity coefficient k, the position information before movement, and the position information after movement, the target position information corresponding to each moved pixel point in the face region is then determined, and each moved pixel point is moved to the target position corresponding to its target position information.
In one embodiment, the position information is represented by coordinate information; correspondingly, determining the target position information corresponding to each moved pixel point in the face region of the processed image based on the face-thinning intensity coefficient k, the position information before movement, and the position information after movement may be realized by the following process:
for each pixel point that has moved in the face region, determining a first coordinate difference between the second coordinate information (xi', yi') and the first coordinate information (xi, yi); then calculating a second product value between the face-thinning intensity coefficient k and the first coordinate difference, and determining the sum of the first coordinate information (xi, yi) and the second product value as the target position information. The position information before movement is represented by the first coordinate information, and the position information after movement is represented by the second coordinate information.

For example, the above process may be implemented according to the formula

    (xi_target, yi_target) = (xi, yi) + k * (xi' - xi, yi' - yi), with k ∈ [0, 1].
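As a minimal sketch of this interpolation (the array layout and function name are assumptions of this example):

    import numpy as np

    def apply_strength(before, after, k):
        # before, after : (N, 2) arrays of pre-move / post-move coordinates
        # k             : face-thinning intensity coefficient in [0, 1];
        #                 k = 0 restores the original face, k = 1 keeps the
        #                 full default thinning.
        before = np.asarray(before, dtype=np.float32)
        after = np.asarray(after, dtype=np.float32)
        # target = before + k * (after - before), per the formula above
        return before + k * (after - before)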
Of course, it should be noted that, when the processing object of the face-thinning processing is the to-be-processed image, the coordinate information mentioned above belongs to the first coordinate system; when the processing object is a face image cropped from the to-be-processed image, the coordinate information belongs to the second coordinate system.
Optionally, in some embodiments, movement information (including a movement distance and a movement direction) along the X axis and the Y axis may be determined for each moved pixel point in the face region of the processed image (which may be the to-be-processed image or a face image cropped from it) based on the face-thinning intensity coefficient k, the position information before movement, and the position information after movement; each moved pixel point in the face region is then moved based on the movement information.
Optionally, in a specific embodiment, the position information may be represented by coordinate information, and correspondingly, the movement information corresponding to each pixel point may be determined through the following process:
for each pixel point that has moved in the face region, determining a first coordinate difference between the second coordinate information (xi', yi') and the first coordinate information (xi, yi); then calculating the difference between the set value and the face-thinning intensity coefficient k, calculating the product of that difference and the first coordinate difference, and determining the product as the movement information, where the sign of the product indicates the direction of movement.
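Equivalently, the movement information can be computed per axis; the sketch below assumes the set value is 1, which is consistent with k ∈ [0, 1] and with the target-position formula above:

    import numpy as np

    def movement_info(before, after, k):
        # First coordinate difference: (xi' - xi, yi' - yi).
        diff = np.asarray(after, np.float32) - np.asarray(before, np.float32)
        # Product of (set value - k) and the difference; the sign of each
        # component encodes the movement direction along that axis.
        # Subtracting this offset from the current (post-move) position
        # lands each pixel on the same target as before + k * (after - before).
        return (1.0 - k) * diff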
Of course, the embodiment of the present application only exemplifies possible implementation manners when face thinning is performed based on the face thinning strength coefficient, and in addition, face thinning may be performed by using other manners based on the face thinning strength coefficient, which is not described herein again.
In addition, in some embodiments, when the processing object of the face-thinning processing is a face image cropped from the to-be-processed image, the original face image contained in the to-be-processed image needs to be replaced with the thinned face image after the face-thinning processing, so that the to-be-processed image exhibits the face-thinning effect.
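As a minimal sketch of this replacement step (the (x0, y0, x1, y1) frame format is an assumption of this example):

    import numpy as np

    def paste_back(image, thinned_face, box):
        # Replace the original face crop in `image` with the thinned crop;
        # `box` is the face positioning frame the crop was taken from, so
        # the crop and the replaced region share the same shape.
        x0, y0, x1, y1 = box
        out = image.copy()
        out[y0:y1, x0:x1] = thinned_face
        return out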
In addition, referring to fig. 2, an embodiment of the present application further provides an image processing apparatus 400, where the image processing apparatus 400 may include: an acquisition module 410, a determination module 420, a partitioning module 430, and an adjustment module 440.
An obtaining module 410, configured to obtain face key point information of an image to be processed, where the image to be processed includes a face region;
a determining module 420, configured to determine, according to the face key point information, a moving reference point when a pixel point in the face region is moved;
a dividing module 430, configured to divide the face region into a plurality of local regions according to the face key point information;
an adjusting module 440, configured to, for each local region, move each pixel point in the local region toward the direction of the moving reference point based on the pixel movement policy parameter corresponding to the local region.
In one possible embodiment, the pixel movement policy parameter comprises a pixel position adjustment ratio; the adjusting module 440 is configured to determine, for each pixel point in the local region, a first distance between the pixel point and the moving reference point; determine a second distance between the pixel point and the moving reference point based on the first distance and the corresponding pixel position adjustment ratio; and move the pixel point toward the moving reference point, so that the distance between the moved pixel point and the moving reference point is equal to the second distance.
In a possible implementation manner, the adjusting module 440 is configured to determine a first product value of the first distance and a pixel position adjustment ratio corresponding to the local area to which the pixel point belongs; determining the first product value as the second distance.
In a possible implementation manner, the adjusting module 440 is further configured to, in response to a face adjustment instruction triggered by a user, obtain the position information before movement and the position information after movement of each pixel point that has moved in the face region, the face adjustment instruction carrying a face-thinning intensity coefficient; determine the target position information corresponding to each moved pixel point in the face region based on the face-thinning intensity coefficient, the position information before movement, and the position information after movement; and move each moved pixel point in the face region to the target position corresponding to the target position information.
In one possible embodiment, the position information is characterized by coordinate information; the adjusting module 440 is configured to determine, for each pixel point that has moved in the face region, a first coordinate difference between the second coordinate information and the first coordinate information; the position information before movement is represented by the first coordinate information, and the position information after movement is represented by the second coordinate information; calculating a second product value between the face-thinning intensity coefficient and the first coordinate difference value; determining a sum of the first coordinate information and the second product value as the target position information.
In a possible implementation manner, the determining module 420 is configured to determine a face key point corresponding to a center position of two eyes in the face region, and determine the face key point corresponding to the center position of the two eyes as the moving reference point;
or configured to determine a face key point corresponding to the nose tip position in the face region, and determine the face key point corresponding to the nose tip position as the moving reference point.
In a possible implementation manner, the determining module 420 is configured to determine a reference line based on a face key point located on a center line in a vertical direction of the face region; for each pixel point in the face region, determining a face key point with the minimum distance between the reference line and the pixel point as a moving reference point corresponding to the pixel point;
correspondingly, the adjusting module 440 is configured to move each pixel point in the local area toward the direction of the corresponding moving reference point.
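A small sketch of this nearest-key-point assignment (array shapes and names are assumptions of this example):

    import numpy as np

    def assign_reference_points(pixels, centerline_keypoints):
        # pixels               : (N, 2) array of pixel coordinates
        # centerline_keypoints : (M, 2) array of key points on the reference line
        # Returns, for each pixel, the index of its moving reference point.
        pixels = np.asarray(pixels, dtype=np.float32)
        kps = np.asarray(centerline_keypoints, dtype=np.float32)
        # Pairwise distances (N, M); the argmin over key points selects, per
        # pixel, the key point on the reference line with minimum distance.
        d = np.linalg.norm(pixels[:, None, :] - kps[None, :, :], axis=-1)
        return np.argmin(d, axis=1)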
In a possible implementation manner, the face key point information includes identification information of face key points, and the dividing module 430 is configured to determine a face key point set corresponding to each face organ according to a correspondence between the acquired identification information of the face key points and the face organs; and determining a region surrounded by the face key points in each face key point set as a local region.
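As an illustration of grouping key points by organ, the sketch below uses hypothetical identifier ranges; the patent does not fix a particular numbering scheme, so the ranges are assumptions of this example:

    # Hypothetical identifier ranges for a 106-point layout (assumed).
    ORGAN_KEYPOINT_IDS = {
        "face_contour": range(0, 33),
        "left_eye": range(33, 43),
        "right_eye": range(43, 53),
        "nose": range(53, 64),
        "mouth": range(64, 84),
    }

    def split_local_regions(keypoints_by_id):
        # keypoints_by_id maps identifier -> (x, y); each organ's key point
        # set encloses one local region.
        return {
            organ: [keypoints_by_id[i] for i in ids if i in keypoints_by_id]
            for organ, ids in ORGAN_KEYPOINT_IDS.items()
        }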
In one possible implementation, the pixel position adjustment ratios corresponding to different local areas are different.
In a possible implementation manner, the obtaining module 410 is configured to input the image to be processed into a face key point detection model, and perform face key point detection on the image to be processed through the face key point detection model; and acquiring the face key point information output by the face key point detection model.
In a possible implementation manner, the face key point information includes coordinate information of a face positioning frame and coordinate information of the face key point; the device also comprises a normalization module which is used for carrying out normalization processing on the coordinate information of the face key points based on the coordinate information of the face positioning frame to obtain the normalized coordinate information of the face key points.
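A minimal sketch of this normalization, assuming the common convention of mapping the positioning frame to the unit square (the patent does not specify the exact mapping):

    import numpy as np

    def normalize_keypoints(keypoints, box):
        # keypoints : (N, 2) array of face key point coordinates
        # box       : (x0, y0, x1, y1) face positioning frame
        kps = np.asarray(keypoints, dtype=np.float32)
        x0, y0, x1, y1 = box
        # Translate by the frame origin, then divide by the frame size.
        return (kps - np.array([x0, y0])) / np.array([x1 - x0, y1 - y0])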
The image processing apparatus 400 provided in the embodiment of the present application has the same implementation principle and technical effect as those of the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments for the part of the embodiment of the apparatus that is not mentioned.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a computer, the computer program executes the steps included in the image processing method.
In addition, referring to fig. 3, an electronic device 100 for implementing the image processing method and apparatus is also provided in the embodiments of the present application.
The electronic device 100 may be a mobile phone, a smart camera, a tablet computer, a Personal Computer (PC), or the like. A user can take photos, stream live video, process images, and perform other activities through the electronic device 100.
The electronic device 100 may include a processor 110, a memory 120, and a display 130.
It should be noted that the components and structure of electronic device 100 shown in FIG. 3 are exemplary only, and not limiting, and electronic device 100 may have other components and structures as desired. For example, in some cases, electronic device 100 may also include a camera for capturing images to be processed in real-time.
The processor 110, memory 120, display 130, and other components that may be present in the electronic device 100 are electrically connected to each other, directly or indirectly, to enable the transfer or interaction of data. For example, the processor 110, the memory 120, the display 130, and other components that may be present may be electrically connected to each other via one or more communication buses or signal lines.
The memory 120 is used for storing programs, such as programs corresponding to the image processing methods mentioned above or the image processing apparatuses mentioned above. Alternatively, when the image processing apparatus is stored in the memory 120, the image processing apparatus includes at least one software functional module that can be stored in the memory 120 in the form of software or firmware (firmware).
Alternatively, the software function module included in the image processing apparatus may also be solidified in an Operating System (OS) of the electronic device 100.
The processor 110 is used to execute executable modules stored in the memory 120, such as software functional modules or computer programs included in the image processing apparatus. When the processor 110 receives the execution instruction, it may execute the computer program, for example, to perform: acquiring face key point information of an image to be processed, wherein the image to be processed comprises a face area; determining a moving reference point when a pixel point in the face region is moved according to the face key point information; dividing the face area into a plurality of local areas according to the face key point information; and for each local area, moving each pixel point in the local area towards the direction of the moving reference point based on the pixel moving strategy parameter corresponding to the local area.
Of course, the method disclosed in any of the embodiments of the present application can be applied to the processor 110, or implemented by the processor 110.
In summary, according to the image processing method, image processing apparatus, electronic device, and computer-readable storage medium provided in the embodiments of the present application, when face-thinning processing is required for a to-be-processed image, the face key point information of the image is first obtained; a moving reference point is then determined according to the face key point information, and the face region is divided into different local regions. During the face-thinning processing, the pixel points included in each local region are moved toward the moving reference point according to the pixel movement policy corresponding to the local region to which they belong, so that the resulting face-thinning effect is as natural as possible and the visual effect exhibited after face thinning is further improved.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a notebook computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (14)

1. An image processing method, characterized in that the method comprises:
acquiring face key point information of an image to be processed, wherein the image to be processed comprises a face area;
determining a moving reference point when a pixel point in the face region is moved according to the face key point information;
dividing the face area into a plurality of local areas according to the face key point information;
and for each local area, moving each pixel point in the local area towards the direction of the moving reference point based on the pixel movement policy parameter corresponding to the local area.
2. The method of claim 1, wherein the pixel movement policy parameter comprises a pixel position adjustment ratio;
correspondingly, the moving each pixel point in the local region towards the direction of the moving reference point based on the pixel movement policy parameter corresponding to the local region includes:
determining a first distance between the pixel point and the moving reference point for each pixel point in the local area;
determining a second distance between the pixel point and the moving reference point based on the first distance and a corresponding pixel position adjustment ratio;
and moving the pixel point towards the direction of the moving reference point, so that the distance between the pixel point and the moving reference point after the pixel point is moved is equal to the second distance.
3. The method of claim 2, wherein determining a second distance between the pixel point and the moving reference point based on the first distance and the corresponding pixel location adjustment scale comprises:
determining a first product value of the first distance and a pixel position adjustment proportion corresponding to a local area to which the pixel point belongs;
determining the first product value as the second distance.
4. The method according to claim 1, wherein after moving each pixel point in the local region towards the direction of the moving reference point based on the pixel movement policy parameter corresponding to the local region, the method further comprises:
responding to a face adjusting instruction triggered by a user, and acquiring position information before movement and position information after movement of each pixel point that has moved in the face area; the face adjustment instruction carries a face-thinning intensity coefficient;
determining target position information corresponding to each pixel point which is moved in the face area based on the face-thinning intensity coefficient, the position information before the movement and the position information after the movement;
and moving each pixel point which is moved in the face area to a target position corresponding to the target position information.
5. The method of claim 4, wherein the location information is characterized by coordinate information;
determining target position information corresponding to each pixel point which is moved in the face region based on the face-thinning intensity coefficient, the position information before moving and the position information after moving, including:
determining a first coordinate difference value between second coordinate information and first coordinate information aiming at each pixel point which is moved in the face area; the position information before movement is represented by the first coordinate information, and the position information after movement is represented by the second coordinate information;
calculating a second product value between the face-thinning intensity coefficient and the first coordinate difference value;
determining a sum of the first coordinate information and the second product value as the target position information.
6. The method according to any one of claims 1 to 5, wherein the determining, according to the face key point information, a moving reference point when moving a pixel point in the face region includes:
determining a face key point corresponding to the center position of the two eyes in the face region, and determining the face key point corresponding to the center position of the two eyes as the moving reference point;
or,
determining a face key point corresponding to the nose tip position in the face region, and determining the face key point corresponding to the nose tip position as the moving reference point.
7. The method according to any one of claims 1 to 5, wherein the determining, according to the face key point information, a moving reference point when moving a pixel point in the face region includes:
determining a reference line based on the face key points on the central line in the vertical direction of the face region;
for each pixel point in the face region, determining a face key point with the minimum distance between the reference line and the pixel point as a moving reference point corresponding to the pixel point;
correspondingly, the moving each pixel point in the local area towards the direction of the moving reference point includes:
and moving each pixel point in the local area towards the direction of the corresponding moving reference point.
8. The method according to any one of claims 1 to 5, wherein the face key point information includes identification information of face key points, and the dividing the face region into a plurality of local regions according to the face key point information includes:
determining a face key point set corresponding to each face organ according to the corresponding relation between the acquired identification information of the face key points and the face organs;
and determining a region surrounded by the face key points in each face key point set as a local region.
9. The method according to claim 2 or 3, wherein the pixel position adjustment ratios for different local regions are different.
10. The method according to any one of claims 1 to 5, wherein the obtaining of the face key point information of the image to be processed comprises:
inputting the image to be processed into a face key point detection model, and carrying out face key point detection on the image to be processed through the face key point detection model;
and acquiring the face key point information output by the face key point detection model.
11. The method according to any one of claims 1 to 5, wherein the face key point information includes coordinate information of a face location box and coordinate information of the face key point; before determining a moving reference point when moving a pixel point in the face region according to the face key point information, the method further includes:
and carrying out normalization processing on the coordinate information of the face key points based on the coordinate information of the face positioning frame to obtain the normalized coordinate information of the face key points.
12. An image processing apparatus, characterized in that the apparatus comprises:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring face key point information of an image to be processed, and the image to be processed comprises a face area;
the determining module is used for determining a moving reference point when the pixel point in the face area is moved according to the face key point information;
the dividing module is used for dividing the face area into a plurality of local areas according to the face key point information;
and the adjusting module is used for moving each pixel point in the local area towards the direction of the moving reference point based on the pixel movement policy parameter corresponding to the local area.
13. An electronic device, comprising: a memory and a processor, the memory and the processor connected;
the memory is used for storing programs;
the processor calls a program stored in the memory to perform the method of any of claims 1-11.
14. A computer-readable storage medium, on which a computer program is stored which, when executed by a computer, performs the method of any one of claims 1-11.
CN202110701338.XA 2021-06-23 2021-06-23 Image processing method, image processing device, electronic equipment and computer readable storage medium Pending CN113591562A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110701338.XA CN113591562A (en) 2021-06-23 2021-06-23 Image processing method, image processing device, electronic equipment and computer readable storage medium
PCT/CN2022/087744 WO2022267653A1 (en) 2021-06-23 2022-04-19 Image processing method, electronic device, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110701338.XA CN113591562A (en) 2021-06-23 2021-06-23 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113591562A true CN113591562A (en) 2021-11-02

Family

ID=78244528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110701338.XA Pending CN113591562A (en) 2021-06-23 2021-06-23 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN113591562A (en)
WO (1) WO2022267653A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022267653A1 (en) * 2021-06-23 2022-12-29 北京旷视科技有限公司 Image processing method, electronic device, and computer readable storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115994947B (en) * 2023-03-22 2023-06-02 万联易达物流科技有限公司 Positioning-based intelligent card punching estimation method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198141A (en) * 2017-12-28 2018-06-22 北京奇虎科技有限公司 Realize image processing method, device and the computing device of thin face special efficacy
KR20180108048A (en) * 2017-03-23 2018-10-04 박귀현 Apparatus and method that automatically interacts with the subject of smart mirror, and smart mirror using the same
CN109359618A (en) * 2018-10-30 2019-02-19 北京市商汤科技开发有限公司 A kind of image processing method and its device, equipment and storage medium
CN111652794A (en) * 2019-07-05 2020-09-11 广州虎牙科技有限公司 Face adjusting method, face live broadcasting method, face adjusting device, live broadcasting device, electronic equipment and storage medium
CN111652025A (en) * 2019-07-05 2020-09-11 广州虎牙科技有限公司 Face processing method, live broadcast method, device, electronic equipment and storage medium
WO2021012596A1 (en) * 2019-07-24 2021-01-28 广州视源电子科技股份有限公司 Image adjustment method, device, storage medium, and apparatus
CN112488909A (en) * 2019-09-11 2021-03-12 广州虎牙科技有限公司 Multi-face image processing method, device, equipment and storage medium
CN112784773A (en) * 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652795A (en) * 2019-07-05 2020-09-11 广州虎牙科技有限公司 Face shape adjusting method, face shape adjusting device, live broadcast method, live broadcast device, electronic equipment and storage medium
US11256956B2 (en) * 2019-12-02 2022-02-22 Qualcomm Incorporated Multi-stage neural network process for keypoint detection in an image
CN113591562A (en) * 2021-06-23 2021-11-02 北京旷视科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180108048A (en) * 2017-03-23 2018-10-04 박귀현 Apparatus and method that automatically interacts with the subject of smart mirror, and smart mirror using the same
CN108198141A (en) * 2017-12-28 2018-06-22 北京奇虎科技有限公司 Realize image processing method, device and the computing device of thin face special efficacy
CN109359618A (en) * 2018-10-30 2019-02-19 北京市商汤科技开发有限公司 A kind of image processing method and its device, equipment and storage medium
CN111652794A (en) * 2019-07-05 2020-09-11 广州虎牙科技有限公司 Face adjusting method, face live broadcasting method, face adjusting device, live broadcasting device, electronic equipment and storage medium
CN111652025A (en) * 2019-07-05 2020-09-11 广州虎牙科技有限公司 Face processing method, live broadcast method, device, electronic equipment and storage medium
WO2021012596A1 (en) * 2019-07-24 2021-01-28 广州视源电子科技股份有限公司 Image adjustment method, device, storage medium, and apparatus
CN112488909A (en) * 2019-09-11 2021-03-12 广州虎牙科技有限公司 Multi-face image processing method, device, equipment and storage medium
CN112784773A (en) * 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIN FENG; LU FANGFANG; LIN JIANGNAN: "Face Beautification System Based on SDM and Weighted Mean Filtering", Journal of Shanghai University of Electric Power, no. 04 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022267653A1 (en) * 2021-06-23 2022-12-29 北京旷视科技有限公司 Image processing method, electronic device, and computer readable storage medium

Also Published As

Publication number Publication date
WO2022267653A1 (en) 2022-12-29

Similar Documents

Publication Publication Date Title
Fischer et al. Rt-gene: Real-time eye gaze estimation in natural environments
US11928800B2 (en) Image coordinate system transformation method and apparatus, device, and storage medium
CN112241731B (en) Attitude determination method, device, equipment and storage medium
US10679041B2 (en) Hybrid deep learning method for recognizing facial expressions
JP6207210B2 (en) Information processing apparatus and method
CN111091075B (en) Face recognition method and device, electronic equipment and storage medium
CN110163211B (en) Image recognition method, device and storage medium
JP2012155391A (en) Posture state estimation device and posture state estimation method
JP6859765B2 (en) Image processing equipment, image processing methods and programs
WO2022267653A1 (en) Image processing method, electronic device, and computer readable storage medium
CN106033539A (en) Meeting guiding method and system based on video face recognition
CN111079625A (en) Control method for camera to automatically rotate along with human face
Budiman et al. Student attendance with face recognition (LBPH or CNN): Systematic literature review
CN112446322A (en) Eyeball feature detection method, device, equipment and computer-readable storage medium
WO2022063321A1 (en) Image processing method and apparatus, device and storage medium
Yang et al. Development of a fast panoramic face mosaicking and recognition system
KR20220000851A (en) Dermatologic treatment recommendation system using deep learning model and method thereof
CN115410240A (en) Intelligent face pockmark and color spot analysis method and device and storage medium
CN112766065A (en) Mobile terminal examinee identity authentication method, device, terminal and storage medium
CN111275610A (en) Method and system for processing face aging image
CN112655021A (en) Image processing method, image processing device, electronic equipment and storage medium
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN112819984B (en) Classroom multi-person roll-call sign-in method based on face recognition
Vivek et al. A Way to Mark Attentance using Face Recognition using PL
CN116434253A (en) Image processing method, device, equipment, storage medium and product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination