WO2022267653A1 - Image processing method, electronic device, and computer readable storage medium - Google Patents

Image processing method, electronic device, and computer readable storage medium Download PDF

Info

Publication number
WO2022267653A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
pixel
point
information
moving
Prior art date
Application number
PCT/CN2022/087744
Other languages
French (fr)
Chinese (zh)
Inventor
辛琪
孙宇超
魏文
姚聪
Original Assignee
北京旷视科技有限公司
北京迈格威科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202110701338.XA external-priority patent/CN113591562B/en
Application filed by 北京旷视科技有限公司 and 北京迈格威科技有限公司
Publication of WO2022267653A1 publication Critical patent/WO2022267653A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • the disclosure belongs to the field of image processing, and in particular relates to an image processing method, electronic equipment, and a computer-readable storage medium.
  • in the related art, a staff member pre-sets a facial contour template, and the facial contour included in the acquired image to be processed is then moved by a fixed distance according to the fixed parameters provided by the facial contour template.
  • the present disclosure provides an image processing method, device, electronic equipment, and computer-readable storage medium, which can improve the visual effect of the image to be processed obtained after face-thinning processing.
  • An embodiment of the present disclosure provides an image processing method. The method may include: acquiring face key point information of an image to be processed, where the image to be processed may include a face area; determining, according to the face key point information, a moving reference point for moving the pixels in the face area; dividing the face area into a plurality of local areas according to the face key point information; and, for each local area, moving each pixel in the local area toward the moving reference point based on the pixel movement strategy parameter corresponding to that local area.
  • the pixel movement strategy parameter may include a pixel position adjustment ratio. Correspondingly, moving each pixel in the local area toward the moving reference point based on the pixel movement strategy parameter corresponding to the local area may include: for each pixel in the local area, determining a first distance between the pixel and the moving reference point; determining a second distance between the pixel and the moving reference point based on the first distance and the corresponding pixel position adjustment ratio; and moving the pixel toward the moving reference point so that the distance between the moved pixel and the moving reference point equals the second distance.
  • the second distance between the pixel point and the moving reference point is determined based on the first distance and the corresponding pixel position adjustment ratio.
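The first-distance/second-distance rule above can be sketched in a few lines (an illustration only; the function name and coordinate representation are assumptions, not taken from the disclosure):

```python
import math

def move_pixel(pixel, ref, ratio):
    """Move a pixel toward the moving reference point.

    The first distance is the original distance between the pixel and the
    moving reference point; the second distance is that distance scaled by
    the pixel position adjustment ratio (assumed here to lie in (0, 1]).
    The pixel is placed on the segment from ref to pixel, at the second
    distance from ref.
    """
    px, py = pixel
    rx, ry = ref
    d1 = math.hypot(px - rx, py - ry)   # first distance
    if d1 == 0:
        return pixel                     # pixel coincides with the reference point
    t = (d1 * ratio) / d1                # second distance as a fraction of the first
    return (rx + (px - rx) * t, ry + (py - ry) * t)
```

A ratio below 1 pulls the pixel inward toward the reference point, which is the face-thinning direction of movement described above.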
  • the method may further include: in response to a face adjustment instruction triggered by the user, acquiring the position information, before and after the move, of each pixel in the face area that has been moved, where the face adjustment instruction carries a face-thinning strength coefficient; determining, based on the face-thinning strength coefficient, the position information before the move, and the position information after the move, the target position information corresponding to each moved pixel in the face area; and moving each moved pixel in the face area to the position indicated by its target position information.
  • the size of the face-thinning strength coefficient is adjustable.
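One plausible reading of how the adjustable face-thinning strength coefficient combines the before-move and after-move positions is linear interpolation; the disclosure does not fix the formula, so this sketch and its names are assumptions:

```python
def apply_strength(before, after, k):
    """Blend the pre-move and post-move positions of a pixel.

    k is the face-thinning strength coefficient carried by the face
    adjustment instruction: k = 0 restores the original position, k = 1
    keeps the full default move, and intermediate values weaken the effect.
    """
    bx, by = before
    ax, ay = after
    return (bx + (ax - bx) * k, by + (ay - by) * k)
```

Because the before and after positions of every moved pixel are retained, the user can re-tune the coefficient without re-running the whole pipeline.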
  • determining the moving reference point for moving the pixel points in the face area according to the face key point information may include: determining the face key point corresponding to the center position of the eyes in the face area, and determining that face key point as the moving reference point; alternatively, the face key point corresponding to the position of the tip of the nose in the face area may be determined and used as the moving reference point.
  • determining the moving reference point for moving the pixel points in the face area according to the face key point information may include: determining a reference line based on the face key points on the vertical center line of the face area; and, for each pixel point in the face area, determining the face key point on the reference line with the smallest distance to the pixel point as the moving reference point corresponding to that pixel point. Correspondingly, moving each pixel point in the local area toward the moving reference point may include: moving each pixel point in the local area toward its corresponding moving reference point.
  • the face key point information may include identification information of the face key points, and dividing the face area into a plurality of local areas according to the face key point information may include: determining, according to the correspondence between the obtained identification information of the face key points and the facial organs, the set of face key points corresponding to each facial organ; and determining the area enclosed by the face key points in each set as a local area.
  • pixel position adjustment ratios corresponding to different local regions are different.
  • acquiring the face key point information of the image to be processed may include: inputting the image to be processed into a face key point detection model; performing face key point detection on the image to be processed through the face key point detection model; and obtaining the face key point information output by the face key point detection model.
  • the face key point information may include the coordinate information of the face positioning frame and the coordinate information of the face key points. Before determining the moving reference point for moving the pixel points in the face area according to the face key point information, the method may also include: normalizing the coordinate information of the face key points based on the coordinate information of the face positioning frame to obtain normalized coordinate information of the face key points.
  • the embodiment of the present disclosure also provides an image processing device, which includes: an acquisition module, a determination module, a division module, and an adjustment module.
  • the acquisition module is used to acquire the face key point information of the image to be processed, where the image to be processed includes the face area; the determination module is used to determine, according to the face key point information, the moving reference point for moving the pixels in the face area; the division module is used to divide the face area into a plurality of local areas according to the face key point information; the adjustment module is used to move, for each local area, each pixel in the local area toward the moving reference point based on the pixel movement strategy parameter corresponding to that local area.
  • An embodiment of the present disclosure also provides an electronic device, which may include a memory and a processor, where the memory is connected to the processor; the memory is used to store a program, and the processor calls the program stored in the memory to execute the method provided by the above-mentioned embodiments of the present disclosure and/or any possible implementation manner thereof.
  • the embodiment of the present disclosure also provides a non-volatile computer-readable storage medium (hereinafter referred to as the computer-readable storage medium), on which a computer program is stored; when the computer program is run by a computer, the method provided by the above-mentioned embodiments of the present disclosure and/or any possible implementation manner thereof is executed.
  • Embodiments of the present disclosure also provide a computer program product including code instructions, which, when executed by a processor, cause the processor to execute the method provided by any of the above-mentioned embodiments of the present disclosure and/or any possible implementation manner thereof.
  • the embodiment of the present disclosure also provides a computer program, which, when executed by a computer, executes the method provided by the embodiments of the present disclosure and/or any possible implementation manner thereof.
  • FIG. 1 shows a flow chart of an image processing method provided by an embodiment of the present disclosure.
  • Fig. 2 shows a structural block diagram of an image processing apparatus provided by an embodiment of the present disclosure.
  • Fig. 3 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Artificial Intelligence is an emerging science and technology that researches and develops theories, methods, technologies and application systems for simulating and extending human intelligence.
  • the subject of artificial intelligence is a comprehensive subject that involves many technologies such as chips, big data, cloud computing, Internet of Things, distributed storage, deep learning, machine learning, and neural networks.
  • computer vision, specifically, aims to allow machines to recognize the world.
  • Computer vision technology usually includes face recognition, liveness detection, fingerprint recognition and anti-counterfeiting verification, biometric recognition, face detection, pedestrian detection, target detection, etc.
  • the embodiments of the present disclosure provide an image processing method, device, electronic equipment and computer-readable storage medium, which can improve the visual effect of the image to be processed obtained after face-thinning processing.
  • the defects in the face-thinning technology of the related art were discovered as a result of the applicant's practice and careful research; therefore, the discovery process of the above-mentioned defects and the solutions proposed hereinafter by the embodiments of the present disclosure for those defects should be recognized as the applicant's contribution to the present disclosure.
  • an embodiment of the present disclosure provides an image processing method for performing face thinning processing on an image to be processed including a human face area.
  • the method may include the following steps:
  • Step S110 Acquiring face key point information of an image to be processed, where the image to be processed includes a face area.
  • Step S120 According to the face key point information, determine a moving reference point when moving pixels in the face area;
  • Step S130 Divide the face area into a plurality of local areas according to the key point information of the face;
  • Step S140 For each local area, based on the pixel movement strategy parameter corresponding to the local area, move each pixel in the local area toward the moving reference point.
  • the face area is first divided into different local areas, and different local areas have corresponding pixel movement strategy parameters; then, during adjustment, the pixels included in each local area move toward the moving reference point according to the pixel movement strategy parameters corresponding to the local area to which they belong, so that the final face-thinning effect is as natural as possible, which improves the visual effect after face-thinning.
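The per-region adjustment just described, where every local area carries its own strategy parameter and all pixels are pulled toward a common moving reference point, can be sketched as follows. This is a minimal illustration under assumptions: the function and parameter names are my own, and a simple per-region scaling ratio stands in for the pixel movement strategy parameter.

```python
def thin_face(pixels, ref_point, ratios):
    """Move every face-area pixel toward the moving reference point,
    scaled by the adjustment ratio of the local area it belongs to.

    pixels:    dict mapping (x, y) -> local-area name (the step S130 result)
    ref_point: the moving reference point (x, y) from step S120
    ratios:    dict mapping local-area name -> adjustment ratio in (0, 1]
    Returns a dict mapping each old position to its new position.
    """
    rx, ry = ref_point
    moved = {}
    for (px, py), region in pixels.items():
        r = ratios[region]                       # per-region strategy parameter
        moved[(px, py)] = (rx + (px - rx) * r,   # keep the direction toward ref,
                           ry + (py - ry) * r)   # shrink the distance by the ratio
    return moved
```

Giving each region its own ratio is what lets, say, the cheeks move more than the nose, which is the source of the more natural result claimed above.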
  • Step S110 Acquiring face key point information of an image to be processed, where the image to be processed includes a face area.
  • the image processing method provided by the embodiments of the present disclosure may perform real-time processing on the image to be processed, or may perform post-processing on the image to be processed, that is, non-real-time processing.
  • the image processing method can be applied to application scenarios such as video live broadcast, video conference, and portrait photography.
  • the image to be processed can be determined according to the pictures and/or video streams collected by the camera in real time.
  • the image processing method may be applicable to an image processing application scenario.
  • the image to be processed may be determined according to the pre-downloaded picture and/or video stream, or the picture and/or video stream taken by the camera in advance.
  • the camera may be a built-in component of the electronic device that executes or invokes the image processing method, or may be an external component of the electronic device.
  • the process of acquiring face key point information may be a process in which a third-party application program, software, face key point detection model, or other device obtains the face key point information for the image to be processed; it may also be a process of performing face key point detection on the image to be processed through the face key point detection model. That is, when executing the method provided by the embodiment of the present disclosure, the acquired original parameter may be the face key point information, or an image to be processed including a face area; the specific implementation may be selected according to the actual application scenario, which is not limited in the embodiments of the present disclosure.
  • the image processing method provided by the embodiment of the present disclosure may also include the process of performing face key point detection on the image to be processed; that is to say, the original parameter obtained by the electronic device is the image to be processed including the face area, and the obtained image to be processed is input into a face key point detection model with a face key point detection function for detection, so as to obtain the face key point information.
  • in order to make the face key point detection model have the function of detecting face key points, it needs to be trained in advance.
  • the training process is as follows.
  • y_i may include the position information G of each face key point in x_i.
  • G = [(a_i1, b_i1), (a_i2, b_i2), …, (a_in, b_in)]
  • where n is the identification information of a face key point, such as a number or ID (identity document), and (a_in, b_in) represents the coordinate information of the face key point identified as n in the i-th sample x_i.
  • the encoding rules of the identification information of the face key points in each sample are set in advance, so that face key points with the same identification information in different samples represent the same meaning, and the identification information of the face key points belonging to a specific local area or a specific facial organ is limited to the identification information range corresponding to that local area or facial organ.
  • the identification information can be the number of the key points of the face.
  • the preset identification information encoding rule can be: the forehead area above the eyes, including the eyes, is regarded as one area, and the numbers of the face key points belonging to this area range from 1-20; the chin area below the mouth, including the mouth, is regarded as one area, and the numbers of the face key points belonging to this area range from 21-40; the left face is regarded as one area, with face key point numbers 41-55; the right face is regarded as one area, with face key point numbers 56-70; and the hairline is regarded as one area, with face key point numbers 71-81.
  • the identification information may be the number of the face key points.
  • the preset identification information encoding rule may be: the numbering range of the face key points of the face contour is 1-20, the numbering range of the face key points belonging to the mouth among the facial organs is 21-40, the numbering range of the face key points belonging to the nose is 41-55, the numbering range of the face key points belonging to the eyes is 56-70, and the numbering range of the face key points belonging to the eyebrows is 71-81.
  • the identification information encoding rule above is only an example; it can be understood that in other implementation manners, other similar schemes may be adopted for the identification information encoding rule.
  • the deep learning model can be trained through the training set S.
  • the training process can be as follows: input each sample picture in the training set S into the deep learning model to obtain the corresponding output (the face key points of the sample picture and their coordinate information), and let the deep learning model automatically learn the internal correlation between the sample picture and the output, so as to obtain the face key point detection model.
  • in the labeling stage, N face key points need to be labeled for each sample; the face key point detection model obtained by subsequent training detects face key points for the input image to be processed, and the output face key point information may include N face key points with identification information and their coordinate information.
  • for example, 81 face key points are marked for each sample; the face key point detection model performs face key point detection on the input image to be processed, and the output face key point information includes 81 face key points with identification information and their coordinate information.
  • the coordinate information of the lower left corner of the face positioning frame and the coordinate information of each key point of the face belong to the same rectangular coordinate system (for the sake of distinction, it is called the first coordinate system).
  • a vertex of x i (such as the point where the lower left corner is located) is used as the origin, and the two edges connected to the vertex are used as the X axis and the Y axis respectively.
  • in addition to each face key point with identification information and its coordinate information included in the image to be processed, the output face key point information may also include the information of the face positioning frame included in the image to be processed; that is, the output face key point information includes G and K.
  • the acquired image to be processed may be a face image including only the face area, or a large image including the face area and other areas of the human body.
  • the subsequent face thinning process can be directly based on the large image, targeting the face area of the large image as the processing object.
  • since the face thinning process mainly performs image processing on the face area, when the image to be processed is a large image, the image to be processed is input into the face key point detection model, and the obtained face key point information includes the information of the face positioning frame, the face image corresponding to the face positioning frame can also be intercepted from the image to be processed (i.e., the large image) according to the obtained information of the face positioning frame, so that the subsequent face thinning process can be performed directly on the basis of the face image, with the face area of the face image as the processing object, without processing the rest of the large image.
  • the data volume of face images including the same face area is smaller than the data volume of the image to be processed. Therefore, when the face image is used as the processing object, it is beneficial to reduce the time delay generated in the face thinning process.
  • since the coordinate origin of the coordinate system (the first coordinate system) in which the coordinate information output by the face key point detection model is located is a vertex of the large image, when the face image is used as the processing object, the coordinate origin is most likely outside the face image.
  • therefore, the coordinate information output by the face key point detection model can also be normalized, so that the coordinate information of the face key points output by the face key point detection model is converted into new coordinate information within the intercepted face area.
  • the above normalization processing operation may be performed before performing the above step S120 of determining the moving reference point when moving the pixel points in the face area according to the key point information of the face.
  • the above normalization processing operation may be performed before performing the above step S130 of dividing the human face area into a plurality of local areas according to the key point information of the human face.
  • the embodiment of the present disclosure does not limit the specific execution process of the above normalization processing operation.
  • the coordinate information of the key points of the face can be normalized based on the coordinate information of the face positioning frame, and the normalized coordinate information of the key points of the face can be obtained.
  • the subsequent steps of the method provided by the embodiments of the present disclosure are executed based on the normalized coordinate information of key points of the human face. That is, the coordinates of the key points of the human face are converted into the coordinates in the human face area of the intercepted human face image, so that only the pixels in the human face area need to be operated when performing the face thinning operation, and the human face area The pixels outside remain unchanged. After the face thinning operation is completed, the face area can be redrawn according to the pixels after the operation, and the original face area can be replaced.
  • the origin of the coordinate system (the first coordinate system) corresponding to the coordinate information before the normalization operation can be one of the vertices of the image to be processed, while the origin of the coordinate system (the second coordinate system) corresponding to the coordinate information after the normalization operation can be one of the vertices of the face positioning frame referenced by the normalization operation.
  • processing can be performed based on the normalized key point information of the face.
  • the coordinate difference between the coordinate information of the face positioning frame and the coordinate information of the face key points can be calculated, and then used to update and replace the original coordinate information of the face key points to obtain the normalized face key point information.
  • for example, the information of the face positioning frame is (u_i, v_i, m, f), and the coordinate information of the face key points is ((a_i1, b_i1), (a_i2, b_i2)…(a_in, b_in)); then the coordinate information of the face positioning frame is (u_i, v_i).
  • the coordinate information of each face key point included in the face image can then be ((a_i1 - u_i, b_i1 - v_i), (a_i2 - u_i, b_i2 - v_i)…(a_in - u_i, b_in - v_i)); at this time, the coordinate origin is the point indicated by the coordinate information of the face positioning frame.
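The normalization just described, subtracting the face positioning frame's corner coordinates (u_i, v_i) from each key point, can be sketched as follows (the function name is an assumption):

```python
def normalize_keypoints(box_origin, keypoints):
    """Re-express key-point coordinates relative to the face box corner.

    box_origin: (u, v), the corner of the face positioning frame that
                becomes the new origin (the second coordinate system)
    keypoints:  list of (a, b) coordinates in the large-image (first)
                coordinate system
    """
    u, v = box_origin
    return [(a - u, b - v) for (a, b) in keypoints]
```

After this step, all key-point coordinates fall inside the cropped face image, so the face-thinning operation only ever touches pixels of the face area.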
  • Step S120 According to the key point information of the human face, determine a moving reference point when moving the pixel points in the human face area.
  • the moving reference point is configured to play a guiding role in the subsequent face thinning process.
  • all pixel points in the entire face area may correspond to the same moving reference point; in addition, each pixel point in the entire face area may also correspond to different moving reference points.
  • the meaning represented by the key points of the face corresponding to each identification information is determined in advance.
  • the face key point corresponding to a certain position can be determined from the face area, and that face key point can be determined as the moving reference point.
  • step S120 according to the key point information of the human face, determine the moving reference point when moving the pixels in the human face area, which may include at least the following two implementations:
  • the face key point corresponding to the center position of the eyes in the face area can be determined and used as the above-mentioned moving reference point; alternatively, the face key point corresponding to the position of the tip of the nose in the face area can be determined and used as the above-mentioned moving reference point.
  • the above is just an example of the specific position being the center position of the eyes and the position of the tip of the nose.
  • the specific position above can also be another position, such as the midpoint of the line connecting the center position of the eyes and the position of the tip of the nose; the embodiments of the present disclosure will not describe them one by one.
  • for example, if the face key point numbered 63 is used to represent the center position of the eyes, then the face key point numbered 63 can be determined as the moving reference point.
  • in this case, the moving reference point corresponding to each pixel point in the face area is the same, namely the face key point used to represent the center position of the eyes in the face area.
  • this embodiment can be combined with the embodiment corresponding to moving each pixel point in the local area toward the moving reference point based on the pixel movement strategy parameter; it can be combined with the embodiment corresponding to moving each pixel point in the local area toward the moving reference point based on the pixel position adjustment ratio included in the pixel movement strategy parameter; and it can also be combined with the embodiment corresponding to determining the target position information of each moved pixel in the face area according to the face-thinning strength coefficient included in the face adjustment instruction triggered by the user and the acquired position information, before and after the move, of each moved pixel, and the like.
  • a reference line can also be determined, and for different pixel points in the face area, the moving reference point corresponding to each pixel point can be determined from the reference line according to set rules. Therefore, the above step S120, determining the moving reference point for moving the pixels in the face area according to the face key point information, can also be realized through the following process:
  • the key point corresponding to the center position of the eyes on the center line may be used as the first face key point, the face key point corresponding to the center position of the chin on the center line may be used as the second face key point, and the line segment formed by the first face key point and the second face key point may be determined as the above-mentioned reference line.
  • the number range of the face key points belonging to the eye area is 56-70
  • the number range of the face key points belonging to the face contour area is 1-20
  • the face key point numbered 63 is the center of the eyes
  • the key point of the face numbered 10 is the center of the chin.
  • the line connecting the face key point numbered 10 and the face key point numbered 63 is determined as the reference line.
  • alternatively, the key point corresponding to the brow center position on the center line can be used as the first face key point, the face key point corresponding to the lip center position on the center line can be used as the second face key point, and the reference line can be determined based on the line segment formed by the first face key point and the second face key point.
  • for each pixel point, the face key point on the reference line with the smallest distance to the pixel point can be determined as the moving reference point corresponding to that pixel point.
  • the moving reference points corresponding to each pixel in the face area are different, that is, each pixel in the face area has a corresponding moving reference point.
  • this embodiment can be combined with the above-mentioned embodiment corresponding to moving each pixel point in the local area toward the moving reference point based on the pixel movement strategy parameter; it can be combined with the embodiment, appearing later, corresponding to moving each pixel point in the local area toward the moving reference point based on the pixel position adjustment ratio included in the pixel movement strategy parameter; and it can also be combined with the embodiment, appearing later, corresponding to determining the target position information of each moved pixel in the face area according to the face-thinning strength coefficient included in the face adjustment instruction triggered by the user and the acquired position information, before and after the move, of each moved pixel, and the like.
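Selecting the per-pixel moving reference point from the reference line amounts to picking the closest center-line key point for each pixel; a sketch (function name assumed, not from the disclosure):

```python
import math

def nearest_reference_point(pixel, line_keypoints):
    """Return the reference-line key point closest to the given pixel.

    line_keypoints: list of (x, y) face key points lying on the reference
                    line (e.g. the segment from eye center to chin center)
    """
    px, py = pixel
    return min(line_keypoints,
               key=lambda kp: math.hypot(kp[0] - px, kp[1] - py))
```

Each pixel then moves toward its own nearest reference point rather than a single shared one, which is what distinguishes this implementation from the eye-center/nose-tip variant above.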
  • Step S130: Divide the face area into a plurality of local areas according to the face key point information.
  • the face key point information may include the identification information of the face key point.
  • the above step S130, dividing the face area into a plurality of local areas according to the face key point information, may optionally include:
  • determining the face key point set corresponding to each face organ according to the correspondence between the identification information of the face key points and the face organs, and determining the area enclosed by the face key points in each set as a local area.
  • the above-mentioned face organs can be the eyes, nose, mouth, eyebrows, face contour, etc.
  • the division into local areas need not follow the areas corresponding to the face organs above; other divisions are possible. For example, the forehead area above the eyes (including the eyes) can be one area, the chin area below the mouth (including the mouth) can be one area, the left face area between the forehead area and the chin area (including part of the nose) can be one area, and the right face area between the forehead area and the chin area (including part of the nose) can be one area. Of course, in a specific implementation, the area division can also be performed in other ways, which the embodiments of the present disclosure do not enumerate one by one.
  • the face key points numbered 1-20 can be grouped together and determined to belong to the face contour area; the face key points numbered 21-40 can be grouped together and determined to belong to the mouth area; the face key points numbered 41-55 can be grouped together and determined to belong to the nose area; the face key points numbered 56-70 can be grouped together and determined to belong to the eye area; and the face key points numbered 71-81 can be grouped together and determined to belong to the eyebrow area.
  • each group is a set of face key points.
  • the area surrounded by the coordinate information of each face key point in each face key point set is a local area.
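The grouping by key-point number described above can be sketched as follows. The key-point numbers follow the ranges given in the text; the coordinates are hypothetical and do not come from any particular detection model.

```python
# Number ranges for each local region, as described above.
REGION_RANGES = {
    "contour":  range(1, 21),
    "mouth":    range(21, 41),
    "nose":     range(41, 56),
    "eyes":     range(56, 71),
    "eyebrows": range(71, 82),
}

def group_key_points(key_points):
    """Group face key points into local regions by their numbers.

    key_points: dict mapping key-point number -> (x, y) coordinate.
    Returns a dict mapping region name -> {number: coordinate}.
    """
    regions = {name: {} for name in REGION_RANGES}
    for number, coord in key_points.items():
        for name, id_range in REGION_RANGES.items():
            if number in id_range:
                regions[name][number] = coord
                break
    return regions

# Illustrative key points: one on the contour, one on the mouth, one eye.
points = {5: (10, 200), 30: (120, 260), 63: (115, 130)}
regions = group_key_points(points)
print(sorted(regions["eyes"]))  # -> [63]
```

The polygon enclosed by each group's coordinates would then serve as the local area for that region.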
  • the face key point set corresponding to each face organ can thus be determined;
  • the embodiment in which the area enclosed by the face key points is determined as a local area can be combined with any of the preceding or following embodiments (provided their implementations do not conflict). For example, it can be combined with the embodiment of moving each pixel in the local area toward the moving reference point based on the pixel movement strategy parameter; it can be combined with the embodiment, described later, of moving each pixel in the local area toward the moving reference point based on the pixel position adjustment ratio included in the pixel movement strategy parameter; and it can also be combined with the embodiment, described later, of determining the target position information corresponding to each moved pixel in the face area according to the face-thinning intensity coefficient included in the user-triggered face adjustment instruction and the obtained pre-move and post-move position information of each moved pixel.
  • Step S140: For each local area, move each pixel in the local area toward the moving reference point based on the pixel movement strategy parameter corresponding to that local area.
  • face-thinning operations can then be performed, that is, pixel adjustments are performed on each local area into which the face area that needs thinning has been divided.
  • each local region has a corresponding pixel movement strategy parameter, and the pixel movement strategy parameters corresponding to different local regions are different.
  • the pixel movement strategy parameter may be a pixel position adjustment ratio.
  • each pixel included in each local area can be moved and adjusted toward the direction of the moving reference point, so as to achieve the effect of thinning the face.
  • the degree of adjustment is determined by the pixel position adjustment ratio corresponding to the local area where the pixel is located.
  • a first product value of the first distance d and the pixel position adjustment ratio T corresponding to the local area to which the pixel belongs may be calculated, and the first product value is determined as the second distance d′.
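The step above (d′ = d × T, then moving the pixel so its distance to the reference point equals d′) can be sketched as follows. This is a minimal illustration; the coordinate convention and the ratio value are assumptions for the example.

```python
def move_pixel(pixel, reference, ratio):
    """Move a pixel toward the moving reference point so that its new
    distance to the reference equals ratio * original distance (d' = d*T).
    Scaling the offset vector by the ratio achieves exactly that.
    """
    px, py = pixel
    rx, ry = reference
    return (rx + (px - rx) * ratio, ry + (py - ry) * ratio)

# A ratio below 1 pulls the pixel inward, producing the slimming effect.
print(move_pixel((200.0, 100.0), (100.0, 100.0), 0.9))  # -> (190.0, 100.0)
```

Different local areas would use different ratios, which is what keeps the overall deformation gradual rather than uniform.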
  • the above-mentioned pixel movement strategy parameter may also be the ratio between the post-move distance from the pixel to the moving reference point and the pre-move distance from the pixel to the moving reference point; in this way, when a pixel is moved, its target position information can be determined according to that ratio.
  • the above-mentioned pixel movement strategy parameter may also be the distance of the pixel movement, etc.;
  • the pixel position adjustment ratios corresponding to the respective local regions are independent of each other, and may be partly the same or completely different.
  • background staff configure corresponding pixel position adjustment ratios for each local area of the face in advance.
  • the background staff can test different pixel position adjustment ratios for each local area and observe the corresponding visual effect, so as to determine and save the optimal pixel position adjustment ratio for each local area.
  • the user cannot directly adjust the pixel position adjustment ratio of each local area separately.
  • the effect may not be satisfactory to the user.
  • the user can also trigger a custom adjustment through a virtual button or physical button, thereby triggering a face adjustment instruction.
  • the face-thinning intensity coefficient k may be included in the face adjustment instruction, and its size can be adjusted by the user, so that the user can adjust the current degree of face thinning based on k.
  • when the electronic device running the image processing method acquires and responds to the face adjustment instruction (which carries the face-thinning intensity coefficient), it can acquire the pre-move position information and post-move position information of each moved pixel in the face area of the processed image (which may be the image to be processed, or the face image cut out from the image to be processed). Then, based on the face-thinning intensity coefficient k, the pre-move position information and the post-move position information, it determines the target position information corresponding to each moved pixel in the face area of the processed image, and moves each moved pixel in the face area to the target position corresponding to the target position information.
  • the above position information can be characterized by coordinate information; correspondingly, determining the target position information corresponding to each moved pixel in the face area of the processed image based on the face-thinning intensity coefficient k, the pre-move position information and the post-move position information can be realized through the following process:
  • the location information before the movement can be represented by the first coordinate information
  • the location information after the movement can be represented by the second coordinate information
  • the coordinate information involved above may be coordinate information belonging to the first coordinate system;
  • the above-mentioned coordinate information may be coordinate information belonging to the second coordinate system.
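The process above amounts to blending each pixel's pre-move and post-move coordinates by the intensity coefficient: target = first coordinates + k × (second coordinates − first coordinates). A minimal sketch, with illustrative coordinate values:

```python
def apply_intensity(before, after, k):
    """Blend a pixel's pre-move (first) and post-move (second) coordinate
    information by the face-thinning intensity coefficient k:
        target = before + k * (after - before)
    k = 0 restores the original position; k = 1 keeps the full move.
    """
    bx, by = before
    ax, ay = after
    return (bx + k * (ax - bx), by + k * (ay - by))

# Halving the intensity lands the pixel midway between the two positions.
print(apply_intensity((200.0, 100.0), (190.0, 100.0), 0.5))  # -> (195.0, 100.0)
```

This is why exposing k to the user lets them scale the face-thinning degree continuously without recomputing the per-region ratios.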
  • the movement information of each pixel in the face area of the processed image (which may be the image to be processed, or the face image cut out from the image to be processed) on the X-axis and the Y-axis can be determined (the movement information includes the moving distance and the moving direction), and each pixel in the face area is then moved based on the above-mentioned movement information.
  • the above position information may be represented by coordinate information, and correspondingly, the movement information corresponding to each of the above pixel points may be determined through the following process:
  • when the processing object of the face-thinning process is a face image cut out from the image to be processed,
  • and the face-thinning process is performed on that face image,
  • it is also necessary to replace the original face image included in the image to be processed with the face image after the face-thinning process, so that the image to be processed presents the face-thinning effect.
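The replacement step above (pasting the thinned crop back into the full image) can be sketched as follows. The bounding-box format `(x0, y0, x1, y1)` and the array shapes are assumptions for the example.

```python
import numpy as np

def paste_back(image, face_thinned, box):
    """Replace the original face crop in the full image with the
    face-thinned crop, so the whole image shows the thinning effect."""
    x0, y0, x1, y1 = box
    out = image.copy()
    out[y0:y1, x0:x1] = face_thinned  # overwrite the original face region
    return out

# Tiny illustrative arrays: an 8x8 image with a 4x4 "face" pasted back.
img = np.zeros((8, 8), dtype=np.uint8)
face = np.full((4, 4), 255, dtype=np.uint8)
result = paste_back(img, face, (2, 2, 6, 6))
print(int(result[3, 3]), int(result[0, 0]))  # -> 255 0
```

In practice the crop would be the face image after the per-region pixel moves, and the surrounding image stays untouched.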
  • an embodiment of the present disclosure further provides an image processing apparatus 400 , which may include: an acquisition module 410 , a determination module 420 , a division module 430 and an adjustment module 440 .
  • the obtaining module 410 is configured to obtain face key point information of an image to be processed, and the image to be processed may include a face area;
  • the determination module 420 is configured to determine a moving reference point when moving pixels in the human face area according to the key point information of the human face;
  • the dividing module 430 is configured to divide the human face area into a plurality of local areas according to the key point information of the human face;
  • the adjustment module 440 is configured to, for each of the local areas, move each pixel in the local area toward the moving reference point based on the pixel movement strategy parameter corresponding to the local area.
  • the pixel movement strategy parameter may include a pixel position adjustment ratio; the adjustment module 440 is configured to, for each pixel in the local area, determine a first distance between the pixel and the moving reference point; determine a second distance between the pixel and the moving reference point based on the first distance and the corresponding pixel position adjustment ratio; and move the pixel toward the moving reference point so that the distance between the moved pixel and the moving reference point equals the second distance.
  • the adjustment module 440 is configured to determine a first product value of the first distance and the pixel position adjustment ratio corresponding to the local area to which the pixel belongs, and to determine the first product value as the second distance.
  • the adjustment module 440 is further configured to, in response to a user-triggered face adjustment instruction, obtain the pre-move position information and post-move position information of each moved pixel in the face area, wherein the face adjustment instruction may carry a face-thinning intensity coefficient; determine, based on the face-thinning intensity coefficient, the pre-move position information and the post-move position information, the target position information corresponding to each moved pixel in the face area; and move each moved pixel in the face area to the target position corresponding to the target position information.
  • the position information is represented by coordinate information; the adjustment module 440 is configured to, for each moved pixel in the face area, determine a first coordinate difference between the second coordinate information and the first coordinate information, wherein the pre-move position information can be represented by the first coordinate information and the post-move position information can be represented by the second coordinate information; calculate a second product value between the face-thinning intensity coefficient and the first coordinate difference; and determine the sum of the first coordinate information and the second product value as the target position information.
  • the determination module 420 is configured to determine the face key point corresponding to the center position of the eyes in the face area, and to determine that face key point as the moving reference point;
  • alternatively, it is configured to determine the face key point corresponding to the position of the tip of the nose in the face area, and to determine that face key point as the moving reference point.
  • the determination module 420 is configured to determine a reference line based on the face key points located on the vertical center line of the face area, and, for each pixel in the face area, to determine the face key point on the reference line with the minimum distance to the pixel as the moving reference point corresponding to that pixel;
  • the adjustment module 440 is configured to move each pixel in the local area toward its corresponding moving reference point.
  • the face key point information may include identification information of the face key points, and the division module 430 is configured to determine the face key point set corresponding to each face organ according to the correspondence between the identification information and the face organs, and to determine the area enclosed by the face key points in each set as a local area.
  • pixel position adjustment ratios corresponding to different local regions are different.
  • the acquisition module 410 is configured to input the image to be processed into a face key point detection model, perform face key point detection on the image to be processed through the face key point detection model, and obtain the face key point information output by the model.
  • the face key point information may include the coordinate information of the face positioning frame and the coordinate information of the face key points; the device may also include a normalization module configured to normalize the coordinate information of the face key points based on the coordinate information of the face positioning frame, obtaining the normalized coordinate information of the face key points.
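One common way to realize such normalization is to express each key point relative to the face positioning frame. This sketch assumes the frame is given as `(x0, y0, x1, y1)`; the frame format and coordinate values are illustrative assumptions, not the claimed scheme.

```python
def normalize_key_points(key_points, box):
    """Normalize key-point coordinates into [0, 1] relative to the face
    positioning frame (x0, y0, x1, y1).

    key_points: dict mapping key-point number -> (x, y) in image coords.
    """
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    return {n: ((x - x0) / w, (y - y0) / h) for n, (x, y) in key_points.items()}

norm = normalize_key_points({63: (150.0, 120.0)}, (100.0, 100.0, 300.0, 300.0))
print(norm[63])  # -> (0.25, 0.1)
```

Normalized coordinates make the subsequent per-region processing independent of the face's absolute position and scale in the image.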
  • the image processing device 400 provided by the embodiments of the present disclosure has the same implementation principle and technical effect as the aforementioned method embodiments.
  • for the parts not mentioned in the device embodiments, please refer to the corresponding content in the aforementioned method embodiments.
  • an embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps included in the above-mentioned image processing method are executed.
  • an embodiment of the present disclosure further provides an electronic device 100 for implementing an image processing method and apparatus.
  • the electronic device 100 may be a mobile phone, a smart camera, a tablet computer, a personal computer (Personal computer, PC) and other devices. Users can use the electronic device 100 to perform activities such as taking pictures, live video broadcasting, and image processing.
  • the electronic device 100 may include: a processor 110 , a memory 120 , and a display screen 130 .
  • the components and structure of the electronic device 100 shown in FIG. 3 are only exemplary rather than limiting, and the electronic device 100 may also have other components and structures as required.
  • the electronic device 100 may further include a camera configured to capture images to be processed in real time.
  • the processor 110 , the memory 120 , the display screen 130 and other components that may appear in the electronic device 100 are electrically connected to each other directly or indirectly to realize data transmission or interaction.
  • the processor 110, the memory 120, the display screen 130 and other possible components may be electrically connected to each other through one or more communication buses or signal lines.
  • the memory 120 is used to store a program, for example, a program corresponding to the above-mentioned image processing method or the above-mentioned image processing device is stored.
  • the image processing device may include at least one software function module that can be stored in the memory 120 in the form of software or firmware.
  • the software function modules included in the image processing apparatus may also be solidified in an operating system (operating system, OS) of the electronic device 100 .
  • the processor 110 is configured to execute executable modules stored in the memory 120, such as software function modules or computer programs included in the image processing apparatus. After the processor 110 receives the execution instruction, it can execute the computer program, for example, execute: acquire the face key point information of the image to be processed, the image to be processed includes the face area; according to the face key point information, determine A moving reference point when moving the pixels in the face area; according to the key point information of the face, dividing the face area into a plurality of local areas; for each of the local areas, based on the The parameter of the pixel movement strategy corresponding to the local area is used to move each pixel in the local area toward the direction of the moving reference point.
  • any embodiment of the present disclosure may be applied to the processor 110 or implemented by the processor 110 .
  • Embodiments of the present disclosure also provide a computer program product including code instructions which, when executed by a processor, cause the processor to execute the method provided by any one of the above-mentioned embodiments of the present disclosure and/or any possible implementation manner thereof.
  • the embodiments of the present disclosure also provide a computer program which, when run by a computer, executes the method provided by the embodiments of the present disclosure and/or any possible implementation manner thereof.
  • with the image processing method, device, electronic device, and computer-readable storage medium proposed by the embodiments of the present disclosure, when the image to be processed needs face thinning, the face key point information of the image to be processed can first be obtained; then, according to the face key point information, the moving reference point is determined and the face area is divided into different local areas; when performing the face-thinning processing, the pixels included in each local area are moved toward the moving reference point according to the pixel movement strategy parameters corresponding to the local areas to which they belong,
  • so that the final face-thinning effect is as natural as possible and the visual effect after face thinning is improved.
  • each embodiment in this specification is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to each other.
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
  • each functional module in each embodiment of the present disclosure may be integrated together to form an independent part, each module may exist independently, or two or more modules may be integrated to form an independent part.
  • the functions are realized in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the essence of the technical solution of the present disclosure, or the part that contributes to the related technology, can be embodied in the form of a software product; the computer software product can be stored in a storage medium and includes several instructions used to make a computer device (which may be a personal computer, a notebook computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage medium can include various media that can store program code, such as a USB flash drive, mobile hard disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disk, or optical disc.
  • the disclosure provides an image processing method, device, electronic equipment and computer-readable storage medium.
  • the face key point information of the image to be processed is obtained first; then, according to the face key point information, the moving reference point for moving pixels in the face area is determined and the face
  • area is divided into different local areas.
  • during face-thinning processing, the pixels included in each local area are moved toward the moving reference point according to the pixel movement strategy parameters corresponding to the local areas to which they belong, so that the final
  • face-thinning effect is as natural as possible, improving the visual effect after face thinning.
  • the image processing method, device, electronic device, and computer-readable storage medium of the present disclosure are reproducible and can be used in various industrial applications, for example, in the field of image processing.


Abstract

An image processing method, an electronic device, and a computer-readable storage medium, relating to the field of image processing. The method comprises: obtaining face key point information of an image to be processed (S110); determining, according to the face key point information, a moving reference point for moving pixels in a face area (S120); dividing the face area into different local areas (S130); and, during face thinning, moving the pixels included in each local area toward the moving reference point according to the pixel movement strategy parameters corresponding to the local areas to which the pixels belong (S140). The final face-thinning effect can thus be as natural as possible, improving the visual effect after face thinning.

Description

Image processing method, electronic device, and computer-readable storage medium
Cross-References to Related Applications
This disclosure claims the priority of the Chinese patent application with application number 202110701338X, titled "Image processing method, device, electronic device, and computer-readable storage medium", submitted to the State Intellectual Property Office of China on June 23, 2021, the entire contents of which are incorporated into this disclosure by reference.
Technical Field
The disclosure belongs to the field of image processing, and in particular relates to an image processing method, electronic device, and computer-readable storage medium.
Background
With the development of the pan-entertainment trend, face beautification and face-thinning technologies are involved in application scenarios such as video conferencing, live streaming, photography, and photo processing.
In related face-beautifying and face-thinning techniques, staff generally pre-set a facial contour template, and the face contour included in the acquired image to be processed is then moved a fixed distance according to the fixed parameters provided by the template. Although this operation can give the image a face-thinning effect, it optimizes the face curve of the image to be processed into a uniform, pointed face shape, which gives the face an unnatural visual appearance and may even distort the face out of shape.
Summary
In view of this, the present disclosure provides an image processing method, device, electronic device, and computer-readable storage medium, which can improve the visual effect of the image to be processed after face-thinning processing.
Some embodiments of the present disclosure are implemented as follows:
An embodiment of the present disclosure provides an image processing method. The method may include: acquiring face key point information of an image to be processed, the image to be processed including a face area; determining, according to the face key point information, a moving reference point for moving pixels in the face area; dividing the face area into a plurality of local areas according to the face key point information; and, for each local area, moving each pixel in the local area toward the moving reference point based on the pixel movement strategy parameter corresponding to that local area.
With reference to the embodiments of the present disclosure, in a possible implementation manner, the pixel movement strategy parameter may include a pixel position adjustment ratio; correspondingly, moving each pixel in the local area toward the moving reference point based on the pixel movement strategy parameter corresponding to the local area may include: for each pixel in the local area, determining a first distance between the pixel and the moving reference point; determining a second distance between the pixel and the moving reference point based on the first distance and the corresponding pixel position adjustment ratio; and moving the pixel toward the moving reference point so that the distance between the moved pixel and the moving reference point equals the second distance.
With reference to the embodiments of the present disclosure, in a possible implementation manner, determining the second distance between the pixel and the moving reference point based on the first distance and the corresponding pixel position adjustment ratio may include: determining a first product value of the first distance and the pixel position adjustment ratio corresponding to the local area to which the pixel belongs; and determining the first product value as the second distance.
With reference to the embodiments of the present disclosure, in a possible implementation manner, after moving each pixel in the local area toward the moving reference point based on the pixel movement strategy parameter corresponding to the local area, the method may further include: in response to a user-triggered face adjustment instruction, acquiring the pre-move position information and post-move position information of each moved pixel in the face area, wherein the face adjustment instruction carries a face-thinning intensity coefficient; determining, based on the face-thinning intensity coefficient, the pre-move position information and the post-move position information, the target position information corresponding to each moved pixel in the face area; and moving each moved pixel in the face area to the target position information.
With reference to the embodiments of the present disclosure, in a possible implementation manner, the position information may be characterized by coordinate information; determining, based on the face-thinning intensity coefficient, the pre-move position information and the post-move position information, the target position information corresponding to each moved pixel in the face area may include: for each moved pixel in the face area, determining a first coordinate difference between the second coordinate information and the first coordinate information, wherein the pre-move position information may be represented by the first coordinate information and the post-move position information may be represented by the second coordinate information; calculating a second product value between the face-thinning intensity coefficient and the first coordinate difference; and determining the sum of the first coordinate information and the second product value as the target position information.
结合本公开实施例,在一种可能的实施方式中,所述瘦脸强度系数的大小可调。With reference to the embodiments of the present disclosure, in a possible implementation manner, the size of the face-thinning strength coefficient is adjustable.
结合本公开实施例，在一种可能的实施方式中，所述根据所述人脸关键点信息，确定对所述人脸区域中的像素点进行移动时的移动参考点，可以包括：确定所述人脸区域中双眼中心位置处所对应的人脸关键点；将所述双眼中心位置处所对应的人脸关键点确定为所述移动参考点；With reference to the embodiments of the present disclosure, in a possible implementation manner, the determining, according to the face key point information, the moving reference point when moving the pixel points in the face area may include: determining the face key point corresponding to the center position between the two eyes in the face area, and determining the face key point corresponding to the center position between the two eyes as the moving reference point;
或者,可以确定所述人脸区域中鼻尖位置处所对应的人脸关键点,将所述鼻尖位置处所对应的人脸关键点确定为所述移动参考点。Alternatively, a key point of the human face corresponding to the position of the tip of the nose in the face area may be determined, and the key point of the human face corresponding to the position of the tip of the nose may be determined as the moving reference point.
结合本公开实施例，在一种可能的实施方式中，所述根据所述人脸关键点信息，确定对所述人脸区域中的像素点进行移动时的移动参考点，可以包括：基于位于所述人脸区域竖直方向上的中心线上的人脸关键点确定参考线；针对所述人脸区域中的每个像素点，将所述参考线上与所述像素点之间的距离最小的人脸关键点确定为所述像素点对应的移动参考点；相应地，所述将所述局部区域内的各个像素点朝向所述移动参考点的方向移动，可以包括：将所述局部区域内的各个像素点朝向与其对应的移动参考点的方向移动。With reference to the embodiments of the present disclosure, in a possible implementation manner, the determining, according to the face key point information, the moving reference point when moving the pixel points in the face area may include: determining a reference line based on the face key points located on the vertical center line of the face area; and, for each pixel point in the face area, determining the face key point on the reference line with the smallest distance to the pixel point as the moving reference point corresponding to that pixel point. Correspondingly, the moving each pixel point in the local area toward the moving reference point may include: moving each pixel point in the local area toward its corresponding moving reference point.
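The per-pixel reference-point assignment in this implementation reduces to a nearest-neighbour lookup against the key points on the vertical center line. A hedged sketch (the helper name and the example key points are illustrative):

```python
import math

def assign_reference_points(pixels, centerline_keypoints):
    """For each pixel, pick the center-line key point nearest to it."""
    assignments = {}
    for px in pixels:
        assignments[px] = min(
            centerline_keypoints,
            key=lambda kp: math.dist(px, kp),  # Euclidean distance
        )
    return assignments

# Hypothetical center-line key points, e.g. forehead, nose tip, chin.
centerline = [(50, 10), (50, 40), (50, 80)]
print(assign_reference_points([(20, 12), (70, 75)], centerline))
# {(20, 12): (50, 10), (70, 75): (50, 80)}
```

Each pixel then moves toward its own assigned reference point rather than a single global one, which is what distinguishes this variant from the eye-center or nose-tip implementations above.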
结合本公开实施例，在一种可能的实施方式中，所述人脸关键点信息可以包括人脸关键点的标识信息，所述根据所述人脸关键点信息，将所述人脸区域划分为多个局部区域，可以包括：根据获取到的人脸关键点的标识信息与人脸器官的对应关系，确定各个人脸器官所对应的人脸关键点集合；将每个人脸关键点集合中的各个人脸关键点所围成的区域确定为一个局部区域。With reference to the embodiments of the present disclosure, in a possible implementation manner, the face key point information may include identification information of the face key points, and the dividing the face area into a plurality of local areas according to the face key point information may include: determining, according to a correspondence between the acquired identification information of the face key points and facial organs, a set of face key points corresponding to each facial organ; and determining the area enclosed by the face key points in each set as one local area.
结合本公开实施例,在一种可能的实施方式中,不同的局部区域所对应的像素位置调整比例不同。With reference to the embodiments of the present disclosure, in a possible implementation manner, pixel position adjustment ratios corresponding to different local regions are different.
结合本公开实施例，在一种可能的实施方式中，所述获取待处理图像的人脸关键点信息，可以包括：将所述待处理图像输入至人脸关键点检测模型，通过所述人脸关键点检测模型对所述待处理图像进行人脸关键点检测；获取所述人脸关键点检测模型输出的所述人脸关键点信息。With reference to the embodiments of the present disclosure, in a possible implementation manner, the acquiring face key point information of the image to be processed may include: inputting the image to be processed into a face key point detection model, performing face key point detection on the image to be processed through the face key point detection model, and acquiring the face key point information output by the face key point detection model.
结合本公开实施例，在一种可能的实施方式中，所述人脸关键点信息可以包括人脸定位框的坐标信息以及所述人脸关键点的坐标信息；所述根据所述人脸关键点信息，确定对所述人脸区域中的像素点进行移动时的移动参考点之前，所述方法还可以包括：With reference to the embodiments of the present disclosure, in a possible implementation manner, the face key point information may include coordinate information of a face positioning frame and coordinate information of the face key points; before the determining, according to the face key point information, the moving reference point when moving the pixel points in the face area, the method may further include:
基于所述人脸定位框的坐标信息对所述人脸关键点的坐标信息进行归一化处理,得到归一化后的人脸关键点的坐标信息。The coordinate information of the key points of the face is normalized based on the coordinate information of the face positioning frame, and the normalized coordinate information of the key points of the face is obtained.
本申请实施例还提供了一种图像处理装置,所述装置包括:获取模块、确定模块、划分模块以及调整模块。The embodiment of the present application also provides an image processing device, which includes: an acquisition module, a determination module, a division module, and an adjustment module.
获取模块，用于获取待处理图像的人脸关键点信息，所述待处理图像包括人脸区域；确定模块，用于根据所述人脸关键点信息，确定对所述人脸区域中的像素点进行移动时的移动参考点；划分模块，用于根据所述人脸关键点信息，将所述人脸区域划分为多个局部区域；调整模块，用于针对每个所述局部区域，基于所述局部区域对应的像素移动策略参数，将所述局部区域内的各个像素点朝向所述移动参考点的方向移动。The acquiring module is configured to acquire face key point information of an image to be processed, where the image to be processed includes a face area; the determining module is configured to determine, according to the face key point information, a moving reference point when moving the pixel points in the face area; the dividing module is configured to divide the face area into a plurality of local areas according to the face key point information; and the adjusting module is configured to, for each local area, move each pixel point in the local area toward the moving reference point based on the pixel movement strategy parameter corresponding to the local area.
本公开实施例还提供了一种电子设备，可以包括：存储器和处理器，所述存储器和所述处理器连接；所述存储器用于存储程序；所述处理器调用存储于所述存储器中的程序，以执行上述本公开实施例和/或结合本公开实施例的任一种可能的实施方式所提供的方法。An embodiment of the present disclosure further provides an electronic device, which may include a memory and a processor, where the memory is connected to the processor; the memory is configured to store a program; and the processor calls the program stored in the memory to execute the method provided by the above embodiments of the present disclosure and/or any possible implementation manner in combination with the embodiments of the present disclosure.
本公开实施例还提供了一种非易失性计算机可读取存储介质（以下简称计算机可读存储介质），其上存储有计算机程序，所述计算机程序被计算机运行时执行上述本公开实施例和/或结合本公开实施例的任一种可能的实施方式所提供的方法。An embodiment of the present disclosure further provides a non-volatile computer-readable storage medium (hereinafter referred to as the computer-readable storage medium), on which a computer program is stored; when run by a computer, the computer program executes the method provided by the above embodiments of the present disclosure and/or any possible implementation manner in combination with the embodiments of the present disclosure.
本公开实施例还提供了一种包括代码指令的计算机程序产品，所述代码指令在被处理器执行时使所述处理器执行上述本公开实施例和/或结合本公开实施例的任一种可能的实施方式所提供的方法。An embodiment of the present disclosure further provides a computer program product including code instructions which, when executed by a processor, cause the processor to execute the method provided by the above embodiments of the present disclosure and/or any possible implementation manner in combination with the embodiments of the present disclosure.
本公开实施例还提供了一种计算机程序，所述计算机程序在被计算机运行时执行上述本公开实施例和/或结合本公开实施例的任一种可能的实施方式所提供的方法。An embodiment of the present disclosure further provides a computer program which, when run by a computer, executes the method provided by the above embodiments of the present disclosure and/or any possible implementation manner in combination with the embodiments of the present disclosure.
本公开的其他特征和优点将在随后的说明书阐述,并且,部分地从说明书中变得显而易见,或者通过实施本公开实施例而了解。本公开的目的和其他优点可通过在所写的说明书以及附图中所特别指出的结构来实现和获得。Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the disclosure. The objectives and other advantages of the disclosure may be realized and attained by the structure particularly pointed out in the written description and appended drawings.
附图说明Description of drawings
为了更清楚地说明本公开实施例或相关技术中的技术方案，下面将对实施例中所需要使用的附图作简单地介绍，显而易见地，下面描述中的附图仅仅是本公开的一些实施例，对于本领域普通技术人员来讲，在不付出创造性劳动的前提下，还可以根据这些附图获得其他的附图。通过附图所示，本公开的上述及其它目的、特征和优势将更加清晰。在全部附图中相同的附图标记指示相同的部分。并未刻意按实际尺寸等比例缩放绘制附图，重点在于示出本公开的主旨。In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the related art, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort. The above and other objects, features and advantages of the present disclosure will be more clearly illustrated by the accompanying drawings. Like reference numerals designate like parts throughout the drawings. The drawings are not deliberately drawn to scale; the emphasis is on illustrating the gist of the present disclosure.
图1示出本公开实施例提供的一种图像处理方法的流程图。FIG. 1 shows a flow chart of an image processing method provided by an embodiment of the present disclosure.
图2示出本公开实施例提供的一种图像处理装置的结构框图。Fig. 2 shows a structural block diagram of an image processing apparatus provided by an embodiment of the present disclosure.
图3示出本公开实施例提供的一种电子设备的结构示意图。Fig. 3 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
附图标记：100-电子设备；110-处理器；120-存储器；130-显示屏；400-图像处理装置；410-获取模块；420-确定模块；430-划分模块；440-调整模块。Reference numerals: 100—electronic device; 110—processor; 120—memory; 130—display screen; 400—image processing device; 410—acquiring module; 420—determining module; 430—dividing module; 440—adjusting module.
具体实施方式Detailed Description
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行描述。The technical solutions in the embodiments of the present disclosure will be described below with reference to the drawings in the embodiments of the present disclosure.
应注意到：相似的标号和字母在下面的附图中表示类似项，因此，在某一项在一个附图中被定义后，则在随后的附图中不需要对其进行进一步定义和解释。同时，在本公开的描述中诸如“第一”、“第二”等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来，而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且，术语“包括”或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下，由语句“包括一个……”限定的要素，并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。It should be noted that similar numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not require further definition and explanation in subsequent drawings. Meanwhile, relational terms such as "first" and "second" in the description of the present disclosure are only used to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the term "comprising" or any other variation thereof is intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent in such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus comprising said element.
再者，本公开中术语“和/或”，仅仅是一种描述关联对象的关联关系，表示可以存在三种关系，例如，A和/或B，可以表示：单独存在A，同时存在A和B，单独存在B这三种情况。Furthermore, the term "and/or" in the present disclosure only describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may indicate three situations: A alone, both A and B, and B alone.
近年来，基于人工智能的计算机视觉、深度学习、机器学习、图像处理、图像识别等技术研究取得了重要进展。人工智能（Artificial Intelligence，AI）是研究、开发用于模拟、延伸人的智能的理论、方法、技术及应用系统的新兴科学技术。人工智能学科是一门综合性学科，涉及芯片、大数据、云计算、物联网、分布式存储、深度学习、机器学习、神经网络等诸多技术种类。计算机视觉作为人工智能的一个重要分支，具体是让机器识别世界，计算机视觉技术通常包括人脸识别、活体检测、指纹识别与防伪验证、生物特征识别、人脸检测、行人检测、目标检测、行人识别、图像处理、图像识别、图像语义理解、图像检索、文字识别、视频处理、视频内容识别、三维重建、虚拟现实、增强现实、同步定位与地图构建（SLAM）、计算摄影、机器人导航与定位等技术。随着人工智能技术的研究和进步，该项技术在众多领域展开了应用，例如安全防控、城市管理、交通管理、楼宇管理、园区管理、人脸通行、人脸考勤、物流管理、仓储管理、机器人、智能营销、计算摄影、手机影像、云服务、智能家居、穿戴设备、无人驾驶、自动驾驶、智能医疗、人脸支付、人脸解锁、指纹解锁、人证核验、智慧屏、智能电视、摄像机、移动互联网、网络直播、美颜、美妆、医疗美容、智能测温等领域。In recent years, research on artificial-intelligence-based technologies such as computer vision, deep learning, machine learning, image processing and image recognition has made important progress. Artificial Intelligence (AI) is an emerging science and technology that researches and develops theories, methods, technologies and application systems for simulating and extending human intelligence. Artificial intelligence is a comprehensive discipline involving many technologies such as chips, big data, cloud computing, the Internet of Things, distributed storage, deep learning, machine learning and neural networks. Computer vision, as an important branch of artificial intelligence, aims at letting machines perceive the world; computer vision technologies usually include face recognition, liveness detection, fingerprint recognition and anti-counterfeiting verification, biometric recognition, face detection, pedestrian detection, target detection, pedestrian recognition, image processing, image recognition, image semantic understanding, image retrieval, text recognition, video processing, video content recognition, 3D reconstruction, virtual reality, augmented reality, simultaneous localization and mapping (SLAM), computational photography, robot navigation and positioning, and other technologies. With the research and progress of artificial intelligence technology, it has been applied in many fields, such as security prevention and control, urban management, traffic management, building management, park management, face-based access, face-based attendance, logistics management, warehouse management, robots, intelligent marketing, computational photography, mobile imaging, cloud services, smart home, wearable devices, unmanned driving, automatic driving, smart medical care, face payment, face unlock, fingerprint unlock, identity verification, smart screens, smart TVs, cameras, the mobile Internet, live streaming, beauty filters, makeup, medical cosmetology, intelligent temperature measurement, and other fields.
随着泛娱乐化趋势的发展，各个领域对美颜瘦脸的需求也越来越多，但是目前的瘦脸技术存在人脸走形失真的问题，无法满足用户的需求。With the development of the pan-entertainment trend, the demand for beauty and face-thinning effects in various fields is growing; however, current face-thinning technologies suffer from face-shape distortion and cannot meet users' needs.
为了解决上述问题，本公开实施例提供了一种图像处理方法、装置、电子设备及计算机可读存储介质，可以提高进行瘦脸处理后所得到的待处理图像的视觉效果。此外，针对相关技术中的瘦脸技术存在的缺陷（呈现出不自然的视觉效果，甚至会导致人脸走形失真）是申请人在经过实践并仔细研究后得出的结果，因此，上述缺陷的发现过程以及在下文中本公开实施例针对上述缺陷所提出的解决方案，都应该被认定为申请人对本公开做出的贡献。In order to solve the above problems, the embodiments of the present disclosure provide an image processing method, an apparatus, an electronic device and a computer-readable storage medium, which can improve the visual effect of the processed image obtained after face thinning. In addition, the defects of the face-thinning technology in the related art (unnatural visual effects, and even face-shape distortion) were discovered by the applicant after practice and careful research; therefore, both the discovery process of the above defects and the solutions proposed below by the embodiments of the present disclosure for these defects should be regarded as the applicant's contribution to the present disclosure.
该技术可采用相应的软件、硬件以及软硬结合的方式实现。以下对本公开实施例进行详细介绍。This technology can be realized by using corresponding software, hardware and a combination of software and hardware. The embodiments of the present disclosure will be described in detail below.
首先,本公开实施例提供一种图像处理方法,用于对包括人脸区域的待处理图像进行瘦脸处理。请参照图1,该方法可以包括以下步骤:First, an embodiment of the present disclosure provides an image processing method for performing face thinning processing on an image to be processed including a human face area. Referring to Figure 1, the method may include the following steps:
步骤S110:获取待处理图像的人脸关键点信息,所述待处理图像包括人脸区域。Step S110: Acquiring face key point information of an image to be processed, where the image to be processed includes a face area.
步骤S120:根据所述人脸关键点信息,确定对所述人脸区域中的像素点进行移动时的移动参考点;Step S120: According to the face key point information, determine a moving reference point when moving pixels in the face area;
步骤S130:根据所述人脸关键点信息,将所述人脸区域划分为多个局部区域;Step S130: Divide the face area into a plurality of local areas according to the key point information of the face;
步骤S140:针对每个所述局部区域,基于所述局部区域所对应的像素移动策略参数,将所述局部区域内的各个像素点朝向所述移动参考点的方向移动。Step S140: For each local area, based on the pixel movement strategy parameter corresponding to the local area, move each pixel in the local area toward the moving reference point.
在本公开实施例中，不再将所有的待处理图像按照面部轮廓模板所规定的固定距离进行调整，而是先将人脸区域划分为不同的局部区域，且不同的局部区域存在对应的像素移动策略参数，那么在进行调整时，各个局部区域所包括的像素点按照其所属的局部区域所对应的像素移动策略参数往移动参考点的方向移动，从而可以使得最终得到的瘦脸效果尽可能的自然，进而可以提高瘦脸后所呈现出的视觉效果。In the embodiments of the present disclosure, instead of adjusting every image to be processed by the fixed distance specified by a facial contour template, the face area is first divided into different local areas, and different local areas have corresponding pixel movement strategy parameters. During adjustment, the pixel points included in each local area move toward the moving reference point according to the pixel movement strategy parameter corresponding to the local area to which they belong, so that the resulting face-thinning effect is as natural as possible, thereby improving the visual effect presented after face thinning.
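The procedure of steps S110-S140 can be sketched as a per-region warp toward the reference point. The code below is an illustrative simplification (a uniform per-region move ratio stands in for the disclosure's pixel movement strategy parameters, and the region partition and key-point detection are assumed inputs):

```python
def thin_face(regions, reference_point, strategy_params):
    """Move every pixel of each local region toward the reference point.

    regions:         {region_name: [(x, y), ...]} pixel coordinates
    reference_point: (x, y), e.g. the key point between the eyes
    strategy_params: {region_name: move ratio in [0, 1]}
    """
    rx, ry = reference_point
    moved = {}
    for name, pixels in regions.items():
        ratio = strategy_params[name]  # per-region strategy parameter
        moved[name] = [
            # step each pixel part of the way toward the reference point
            (x + ratio * (rx - x), y + ratio * (ry - y))
            for (x, y) in pixels
        ]
    return moved

out = thin_face(
    {"left_cheek": [(10.0, 50.0)], "chin": [(50.0, 90.0)]},
    reference_point=(50.0, 50.0),
    strategy_params={"left_cheek": 0.25, "chin": 0.5},
)
print(out)  # {'left_cheek': [(20.0, 50.0)], 'chin': [(50.0, 70.0)]}
```

Because each region carries its own parameter, the cheeks can be pulled in more aggressively than, say, the chin, which is the mechanism the paragraph above credits for the more natural result.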
下面将针对图1中的各个步骤进行详细说明。Each step in FIG. 1 will be described in detail below.
步骤S110:获取待处理图像的人脸关键点信息,所述待处理图像包括人脸区域。Step S110: Acquiring face key point information of an image to be processed, where the image to be processed includes a face area.
其中,本公开实施例提供的图像处理方法可以针对待处理图像进行实时处理,也可以针对待处理图像进行后处理,即非实时处理。Wherein, the image processing method provided by the embodiments of the present disclosure may perform real-time processing on the image to be processed, or may perform post-processing on the image to be processed, that is, non-real-time processing.
当针对待处理图像进行实时处理时,图像处理方法可以适用于视频直播、视频会议以及人像拍照等应用场景。在这种实施方式下,可以根据摄像头实时采集的图片和/或采集视频流确定出待处理图像。When real-time processing is performed on the image to be processed, the image processing method can be applied to application scenarios such as video live broadcast, video conference, and portrait photography. In this embodiment, the image to be processed can be determined according to the pictures and/or video streams collected by the camera in real time.
当针对待处理图像进行后处理时,图像处理方法可以适用于图像处理应用场景。在这种实施方式下,可以根据预先下载的图片和/或视频流、预先通过摄像头拍摄的图片和/或视频流确定出待处理图像。When performing post-processing on the image to be processed, the image processing method may be applicable to an image processing application scenario. In this embodiment, the image to be processed may be determined according to the pre-downloaded picture and/or video stream, or the picture and/or video stream taken by the camera in advance.
当然，摄像头可以是执行或者调用该图像处理方法的电子设备自带的组件，也可以是电子设备的外接组件。在一种可选的实施方式中，上述步骤S110中，获取人脸关键点信息的过程，可以是从具备人脸关键点检测功能的第三方应用程序、软件、人脸关键点检测模型或者其他设备获取针对待处理图像的人脸关键点信息的过程，也可以是通过人脸关键点检测模型对待处理图像进行人脸关键点检测的过程；也即，在执行本公开实施例所提供的方法时，所获取的原始参数可以为人脸关键点信息，也可以为包括人脸区域的待处理图像，其具体实现过程可以根据实际应用场景进行选择，本公开实施例并不对此进行限定。Certainly, the camera may be a built-in component of the electronic device that executes or invokes the image processing method, or an external component of the electronic device. In an optional implementation manner, in the above step S110, the process of acquiring the face key point information may be a process of acquiring the face key point information of the image to be processed from a third-party application, software, face key point detection model or other device that has a face key point detection function, or a process of performing face key point detection on the image to be processed through a face key point detection model. That is, when the method provided by the embodiments of the present disclosure is executed, the acquired original parameter may be the face key point information, or the image to be processed including the face area; the specific implementation can be selected according to the actual application scenario, which is not limited by the embodiments of the present disclosure.
在一种可选的实施方式中，本公开实施例所提供的图像处理方法自身也可以包括对待处理图像进行人脸关键点检测的流程，也就是说，执行图像处理方法的电子设备获取到的原始参数为包括人脸区域的待处理图像，然后将获取到的待处理图像输入至具备人脸关键点检测功能的人脸关键点检测模型进行检测，以得到人脸关键点信息。In an optional implementation manner, the image processing method provided by the embodiments of the present disclosure may itself include the process of performing face key point detection on the image to be processed. That is to say, the original parameter acquired by the electronic device executing the image processing method is the image to be processed including the face area, and the acquired image to be processed is then input to a face key point detection model with a face key point detection function for detection, so as to obtain the face key point information.
在这种实施方式下，为了使得人脸关键点检测模型具备检测人脸关键点的功能，需要预先对其进行训练，可选地，可以在采用本公开实施例提供的方法对图像进行处理之前，训练上述人脸关键点检测模型，训练过程如下。In this implementation, in order for the face key point detection model to be capable of detecting face key points, it needs to be trained in advance. Optionally, the above face key point detection model may be trained before the method provided by the embodiments of the present disclosure is used to process images; the training process is as follows.
获取大量包含人脸区域的图片,并对每张图片进行标注,从而构成包括多个样本的训练集S。Obtain a large number of pictures containing face regions, and label each picture to form a training set S including multiple samples.
其中，针对训练集S中的第i个样本x_i，假设与其对应的标注为y_i，那么y_i可以包括x_i中的各个人脸关键点的位置信息G。其中，G=[(a_i1, b_i1), (a_i2, b_i2), …, (a_in, b_in)]，n为人脸关键点的标识信息，例如为编号、ID（Identity Document，身份标识）等，(a_in, b_in)表示在第i个样本x_i中标识为n的人脸关键点的坐标信息。Here, for the i-th sample x_i in the training set S, let its corresponding label be y_i; then y_i may include the position information G of each face key point in x_i, where G = [(a_i1, b_i1), (a_i2, b_i2), …, (a_in, b_in)], n is the identification information of a face key point, for example a number or an ID (Identity Document), and (a_in, b_in) is the coordinate information of the face key point identified as n in the i-th sample x_i.
值得指出的是，在本公开实施例中，预先为各个样本中的人脸关键点的标识信息的编码规则进行设置，从而使得在不同的样本中具备相同标识信息的人脸关键点所表征的含义相同，以及将属于人脸中某一特定局部区域或者某一特定人脸器官的人脸关键点的标识信息限制在与该特定局部区域或者该特定人脸器官对应的标识信息范围区间内。It is worth pointing out that, in the embodiments of the present disclosure, the encoding rule for the identification information of the face key points in each sample is set in advance, so that face key points with the same identification information in different samples carry the same meaning, and the identification information of the face key points belonging to a specific local area or a specific facial organ is restricted to the identification-information range corresponding to that local area or facial organ.
例如，在一些实施方式中，标识信息可以为人脸关键点的编号，当需要针对每张人脸标注81个人脸关键点时，预先设置的标识信息编码规则可以为：将包括眼睛在内的眼睛以上的额头作为一个区域，且属于该区域的人脸关键点的编号范围为1-20；将包括嘴巴在内的嘴巴以下的下巴区域作为一个区域，且属于该区域的人脸关键点的编号范围为21-40；将左脸作为一个区域，且属于该区域的人脸关键点的编号范围为41-55；将右脸作为一个区域，且属于该区域的人脸关键点的编号范围为56-70；将发际线作为一个区域，且属于该区域的人脸关键点的编号范围为71-81。For example, in some implementations, the identification information may be the numbers of the face key points. When 81 face key points need to be labeled for each face, the preset identification-information encoding rule may be: the forehead area above and including the eyes is one region, and the face key points belonging to it are numbered 1-20; the chin area below and including the mouth is one region, numbered 21-40; the left face is one region, numbered 41-55; the right face is one region, numbered 56-70; and the hairline is one region, numbered 71-81.
再例如，在一些实施方式中，标识信息可以为人脸关键点的编号，当需要针对每张人脸标注81个人脸关键点时，预先设置的标识信息编码规则可以为：属于人脸器官中的脸部轮廓的人脸关键点的编号范围为1-20，属于人脸器官中的嘴巴的人脸关键点的编号范围为21-40，属于人脸器官中的鼻子的人脸关键点的编号范围为41-55，属于人脸器官中的眼睛的人脸关键点的编号范围为56-70，属于人脸器官中的眉毛的人脸关键点的编号范围为71-81。For another example, in some implementations, the identification information may be the numbers of the face key points. When 81 face key points need to be labeled for each face, the preset identification-information encoding rule may be: the face key points belonging to the facial contour are numbered 1-20, those belonging to the mouth are numbered 21-40, those belonging to the nose are numbered 41-55, those belonging to the eyes are numbered 56-70, and those belonging to the eyebrows are numbered 71-81.
当然,以上标识信息编码规则仅为举例,可以理解,在其他实施方式中,标识信息编码规则还可以采用其他类似的方案。Certainly, the above identification information encoding rule is only an example, and it can be understood that in other implementation manners, other similar schemes may be adopted for the identification information encoding rule.
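Either numbering rule makes the later region partition a simple range lookup. The sketch below uses the second example's organ ranges; the ranges follow the example above, while the helper itself and the region names are illustrative:

```python
# Key-point number ranges per facial organ (second example rule, 81 points).
ORGAN_RANGES = {
    "contour":  range(1, 21),   # facial contour: 1-20
    "mouth":    range(21, 41),  # mouth: 21-40
    "nose":     range(41, 56),  # nose: 41-55
    "eyes":     range(56, 71),  # eyes: 56-70
    "eyebrows": range(71, 82),  # eyebrows: 71-81
}

def group_keypoints(keypoints):
    """Split {keypoint_id: (x, y)} into per-organ key-point sets."""
    groups = {name: {} for name in ORGAN_RANGES}
    for kp_id, coord in keypoints.items():
        for name, ids in ORGAN_RANGES.items():
            if kp_id in ids:
                groups[name][kp_id] = coord
                break
    return groups

pts = {1: (0, 0), 45: (5, 6), 60: (7, 8)}
print(group_keypoints(pts)["nose"])  # {45: (5, 6)}
```

Each resulting key-point set then encloses one local area, matching the region-division step described earlier.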
在标注完成后，可以通过训练集S来训练深度学习模型，训练过程可以为：向深度学习模型输入训练集S中的各个样本图片，并得到对应的输出（样本图片的人脸关键点及其坐标信息），并让深度学习模型自动学习样本图片和输出之间的内在关联，从而得到人脸关键点检测模型。After the labeling is completed, a deep learning model can be trained with the training set S. The training process can be: input each sample picture in the training set S to the deep learning model, obtain the corresponding output (the face key points of the sample picture and their coordinate information), and let the deep learning model automatically learn the internal correlation between the sample pictures and the outputs, so as to obtain the face key point detection model.
一般而言，在标注阶段，需要针对每个样本标注出N个人脸关键点，后续训练得到的人脸关键点检测模型针对输入的待处理图像进行人脸关键点检测，所输出的人脸关键点信息就可以包括N个带有标识信息的人脸关键点及其坐标信息。例如在标注阶段，针对每个样本标注出81个人脸关键点，人脸关键点检测模型针对输入的待处理图像进行人脸关键点检测，其输出的人脸关键点信息包括81个带有标识信息的人脸关键点及其坐标信息。Generally speaking, in the labeling stage, N face key points need to be labeled for each sample; the face key point detection model obtained by subsequent training then performs face key point detection on the input image to be processed, and the output face key point information may include N face key points with identification information and their coordinate information. For example, if 81 face key points are labeled for each sample in the labeling stage, the face key point detection model performs face key point detection on the input image to be processed and outputs face key point information including 81 face key points with identification information and their coordinate information.
此外，在一些可选的实施方式中，在标注阶段，标注y_i还可以包括样本x_i中的人脸定位框的信息K=(u_i, v_i, m, f)，其中，(u_i, v_i)表示人脸定位框的坐标信息，一般为人脸定位框的一个顶点（例如左下角所在的点）的坐标信息，(m, f)分别表示人脸定位框的宽度和高度。In addition, in some optional implementations, in the labeling stage, the label y_i may further include information K = (u_i, v_i, m, f) of the face positioning frame in the sample x_i, where (u_i, v_i) is the coordinate information of the face positioning frame, generally the coordinates of one vertex of the frame (for example, the lower-left corner), and (m, f) are the width and height of the face positioning frame, respectively.
值得指出的是，人脸定位框左下角的坐标信息与各个人脸关键点的坐标信息属于同一直角坐标系（为了便于区分，称之为第一坐标系），该第一坐标系一般以样本x_i的一个顶点（例如左下角所在的点）作为原点，以与该顶点相连的两条边分别作为X轴和Y轴。It is worth pointing out that the coordinate information of the lower-left corner of the face positioning frame and the coordinate information of each face key point belong to the same rectangular coordinate system (called the first coordinate system for ease of distinction). The first coordinate system generally takes one vertex of the sample x_i (for example, the lower-left corner) as the origin, and the two edges connected to that vertex as the X axis and the Y axis, respectively.
在这种实施方式下，训练所得到的人脸关键点检测模型针对待处理图像进行人脸关键点检测后，所输出的人脸关键点信息除了可以包括待处理图像所包括的带有标识信息的各个人脸关键点及其坐标信息外，还可以包括待处理图像所包括的人脸定位框的信息，即输出的人脸关键点信息包括G以及K。In this implementation, after the trained face key point detection model performs face key point detection on the image to be processed, the output face key point information may include, in addition to each face key point with identification information and its coordinate information, the information of the face positioning frame included in the image to be processed; that is, the output face key point information includes both G and K.
在得到人脸关键点检测模型后，即可以把获取到的待处理图像输入人脸关键点检测模型，并通过人脸关键点检测模型对待处理图像进行人脸关键点检测，然后获取人脸关键点检测模型输出的人脸关键点信息。After the face key point detection model is obtained, the acquired image to be processed can be input into the face key point detection model, face key point detection can be performed on the image to be processed through the model, and the face key point information output by the model can then be acquired.
其中,在一些实施方式中,获取到的待处理图像可以是只包含人脸区域在内的人脸图像,也可以为包括人脸区域以及人身体的其他区域在内的大图。Wherein, in some implementations, the acquired image to be processed may be a face image including only the face area, or a large image including the face area and other areas of the human body.
一般而言,当待处理图像为大图时,后续的瘦脸处理过程可以直接在大图的基础上,针对大图的人脸区域为处理对象。Generally speaking, when the image to be processed is a large image, the subsequent face thinning process can be directly based on the large image, targeting the face area of the large image as the processing object.
此外，由于瘦脸处理主要是针对脸部区域进行图像处理，因此，在一些可选的实施方式中，当待处理图像为大图，且将待处理图像输入至人脸关键点检测模型后所获取到的人脸关键点信息包括人脸定位框的信息时，还可以根据得到的人脸定位框的信息，从待处理图像（即大图）中截取出与人脸定位框对应的人脸图像，以便后续可以直接在人脸图像的基础上，以人脸图像的人脸区域作为后续的瘦脸处理的处理对象，而无需对大图的其余区域进行处理。In addition, since face thinning mainly performs image processing on the face area, in some optional implementations, when the image to be processed is a large image and the face key point information obtained after inputting it into the face key point detection model includes the information of the face positioning frame, a face image corresponding to the face positioning frame may further be cropped from the image to be processed (i.e., the large image) according to that information, so that subsequent face thinning can be performed directly on the face image, with the face area of the face image as the processing object, without processing the remaining areas of the large image.
可以理解,包括同一个人脸区域的人脸图像的数据量小于待处理图像的数据量,因此,以人脸图像为处理对象时,有利于降低瘦脸处理过程所产生的时延。It can be understood that the data volume of face images including the same face area is smaller than the data volume of the image to be processed. Therefore, when the face image is used as the processing object, it is beneficial to reduce the time delay generated in the face thinning process.
当然，由于人脸关键点检测模型输出的各种坐标信息所处的坐标系（第一坐标系）的坐标原点为大图的一个顶点，当以人脸图像为处理对象时，该坐标原点大概率在人脸图像之外。Of course, since the origin of the coordinate system (the first coordinate system) in which the various coordinate information output by the face key point detection model is located is a vertex of the large image, when the face image is used as the processing object, that origin is most likely outside the face image.
To make it convenient to operate only on the pixels inside the face region of the face image, in some implementations the various coordinates output by the face key point detection model can additionally be normalized, so that the key point coordinates output by the model are converted into new coordinates within the cropped face region. In a specific implementation, this normalization can be performed before the above step S120 of determining, according to the face key point information, the moving reference point used when moving the pixels in the face region; of course, it can also be performed before the above step S130 of dividing the face region into a plurality of local regions according to the face key point information. The embodiments of the present disclosure do not limit the specific point at which the normalization is performed.
In a specific implementation, the coordinates of the face key points can be normalized based on the coordinates of the face bounding box, yielding normalized face key point coordinates.
The subsequent steps of the method provided by the embodiments of the present disclosure are then executed on the basis of the normalized key point coordinates. That is, the key point coordinates are converted into coordinates within the face region of the cropped face image, so that the face-thinning operation only needs to act on the pixels inside the face region while the pixels outside it remain unchanged; after the face-thinning operation is complete, the face region can be redrawn from the moved pixels and substituted for the original face region.
Here, the origin of the coordinate system corresponding to the coordinates before normalization (the first coordinate system) may be one vertex of the image to be processed, and the origin of the coordinate system corresponding to the coordinates after normalization (the second coordinate system) may be one vertex of the face bounding box used as the normalization reference.
Correspondingly, when the face image is subsequently used as the processing target, processing can be performed based on the normalized face key point information.
The normalization conversion process is described below.
Optionally, the coordinate difference between the coordinates of the face bounding box and the coordinates of each face key point can be computed, and the original key point coordinates can then be replaced with this difference, yielding the normalized face key point information.
Optionally, suppose that in the i-th image to be processed the face bounding box information is (u_i, v_i, m, f) and the face key point coordinates are ((a_i1, b_i1), (a_i2, b_i2), ..., (a_in, b_in)); the bounding box coordinates are then (u_i, v_i). After the normalization conversion, the coordinates of the face key points contained in the face image become ((a_i1 - u_i, b_i1 - v_i), (a_i2 - u_i, b_i2 - v_i), ..., (a_in - u_i, b_in - v_i)); at this point, the coordinate origin is the point given by the bounding box coordinates.
Of course, it can be understood that the above normalization embodiment can be combined with any of the preceding or following embodiments, provided their implementations do not conflict with this one. For example, this embodiment can be combined with the earlier embodiment in which each pixel in a local region is moved toward the moving reference point based on the pixel movement strategy parameters; with the later embodiment in which each pixel in a local region is moved toward the moving reference point based on the pixel position adjustment ratio included in the pixel movement strategy parameters; or with the later embodiment in which the target position of each moved pixel in the face region is determined from the face-thinning intensity coefficient included in a user-triggered face adjustment instruction together with the acquired pre-movement and post-movement positions of each moved pixel in the face region; and so on.
Step S120: according to the face key point information, determine the moving reference point used when moving the pixels in the face region.
The moving reference point is configured to guide the subsequent face-thinning process.
In the embodiments of the present disclosure, all pixels in the entire face region may correspond to one and the same moving reference point; alternatively, different pixels in the face region may correspond to different moving reference points.
As mentioned above, the meaning represented by the face key point corresponding to each piece of identification information is determined in advance.
In some optional implementations, the face key point corresponding to a particular position (such as the midpoint between the eyes or the tip of the nose) can be identified in the face region according to the identification information of each face key point, and that key point can be determined as the moving reference point.
In a specific implementation, the above step S120 of determining, according to the face key point information, the moving reference point used when moving the pixels in the face region can be realized in at least the following two ways:
The face key point corresponding to the midpoint between the eyes in the face region can be determined and used as the moving reference point; or the face key point corresponding to the tip of the nose in the face region can be determined and used as the moving reference point.
Of course, the midpoint between the eyes and the tip of the nose merely serve as examples of the particular position; the particular position may also be something else, such as the midpoint of the line connecting the midpoint between the eyes and the tip of the nose. The embodiments of the present disclosure will not enumerate every possibility.
For example, in an optional implementation, if the predefined numbering range of the face key points belonging to the eye region is 56-70 and key point 63 represents the midpoint between the eyes, then face key point 63 can be determined as the moving reference point.
In this implementation, the moving reference point corresponding to every pixel in the face region is the same, namely the face key point representing the midpoint between the eyes in the face region.
Of course, it can be understood that the above embodiment of using a particular position as the moving reference point for the pixels in the face region can be combined with any of the preceding or following embodiments, provided their implementations do not conflict with this one. For example, this embodiment can be combined with the earlier embodiment in which each pixel in a local region is moved toward the moving reference point based on the pixel movement strategy parameters; with the later embodiment in which each pixel in a local region is moved toward the moving reference point based on the pixel position adjustment ratio included in the pixel movement strategy parameters; or with the later embodiment in which the target position of each moved pixel in the face region is determined from the face-thinning intensity coefficient included in a user-triggered face adjustment instruction together with the acquired pre-movement and post-movement positions of each moved pixel in the face region; and so on.
In some implementations, a reference line can also be determined, and for each pixel in the face region the moving reference point corresponding to that pixel is determined from the reference line according to a set rule. Accordingly, the above step S120 of determining, according to the face key point information, the moving reference point used when moving the pixels in the face region can also be realized through the following process:
Determine a reference line based on the face key points located on the vertical center line of the face region; for each pixel in the face region, determine the face key point on the reference line with the smallest distance to that pixel as the pixel's moving reference point.
In an optional implementation, the key point corresponding to the midpoint between the eyes on the center line can be taken as the first face key point, the key point corresponding to the center of the chin on the center line can be taken as the second face key point, and the line segment formed by the first and second face key points can be determined as the reference line.
For example, in an optional implementation, suppose that the face key points belonging to the eye region are numbered 56-70, those belonging to the facial contour region are numbered 1-20, face key point 63 is the midpoint between the eyes, and face key point 10 is the center of the chin. In this case, the line connecting key point 10 and key point 63 is determined as the reference line.
Of course, in another optional implementation, the key point corresponding to the midpoint between the eyebrows on the center line can be taken as the first face key point, the key point corresponding to the center of the lips on the center line can be taken as the second face key point, and the line segment formed by the first and second face key points can be determined as the reference line.
Given the reference line, for each pixel in the face region, the face key point on the reference line with the smallest distance to that pixel can be determined as the pixel's corresponding moving reference point.
In this implementation, the moving reference points corresponding to the various pixels in the face region are not the same; that is, each pixel in the face region has its own corresponding moving reference point.
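Under the scheme above, the "closest key point on the reference line" rule can be sketched as follows; the coordinates are hypothetical and not taken from the disclosure:

```python
import math

def nearest_reference_point(pixel, line_keypoints):
    """Return the face key point on the reference line (e.g. the segment
    from the between-eyes key point to the chin-center key point) that is
    closest to the given pixel; it serves as that pixel's moving
    reference point."""
    return min(line_keypoints, key=lambda p: math.dist(pixel, p))

line = [(50, 20), (50, 40), (50, 60)]  # hypothetical key points on the line
print(nearest_reference_point((70, 42), line))  # (50, 40)
```

Because each pixel independently picks its nearest key point, different pixels can end up with different moving reference points, as the paragraph above notes.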
Of course, it can be understood that the above embodiment of using a face key point on the reference line as the moving reference point for the pixels in the face region can be combined with any of the preceding or following embodiments, provided their implementations do not conflict with this one. For example, this embodiment can be combined with the earlier embodiment in which each pixel in a local region is moved toward the moving reference point based on the pixel movement strategy parameters; with the later embodiment in which each pixel in a local region is moved toward the moving reference point based on the pixel position adjustment ratio included in the pixel movement strategy parameters; or with the later embodiment in which the target position of each moved pixel in the face region is determined from the face-thinning intensity coefficient included in a user-triggered face adjustment instruction together with the acquired pre-movement and post-movement positions of each moved pixel in the face region; and so on.
Step S130: according to the face key point information, divide the face region into a plurality of local regions.
As mentioned above, the face key point information may include the identification information of the face key points. Correspondingly, the above step S130 of dividing the face region into a plurality of local regions according to the face key point information optionally includes:
According to the correspondence between the acquired identification information of the face key points and the facial features, determine the face key point set corresponding to each facial feature; determine the region enclosed by the face key points in each set as one local region.
The above facial features can be the eyes, nose, mouth, eyebrows, facial contour, and so on.
Of course, in some other optional implementations, the local regions need not be divided according to the regions corresponding to the above facial features; other divisions are also possible. For example, the forehead area above and including the eyes can serve as one region, the chin area below and including the mouth as another, the left-face area between the forehead region and the chin region (containing part of the nose) as a third, and the right-face area between the forehead region and the chin region (containing part of the nose) as a fourth. Of course, in a specific implementation, the regions can also be divided in other ways; the embodiments of the present disclosure will not enumerate them one by one.
For ease of understanding, an example is given below.
For example, the face key points numbered within the range 1-20 can be grouped together, and the key points in this group determined to belong to the facial contour region; those numbered within 21-40 can be grouped together and determined to belong to the mouth region; those numbered within 41-55 can be grouped together and determined to belong to the nose region; those numbered within 56-70 can be grouped together and determined to belong to the eye region; and those numbered within 71-81 can be grouped together and determined to belong to the eyebrow region.
After the multiple groups are obtained, each group is one face key point set. The area enclosed by the coordinates of the face key points in each set is one local region.
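The grouping by numbering range in this example can be sketched as below. The ranges follow the example above; an actual key point detection model may number its points differently:

```python
# Numbering ranges taken from the example above (hypothetical layout).
REGION_RANGES = {
    "contour": range(1, 21),
    "mouth": range(21, 41),
    "nose": range(41, 56),
    "eyes": range(56, 71),
    "eyebrows": range(71, 82),
}

def group_keypoints(keypoints):
    """keypoints: dict mapping key point number -> (x, y) coordinates.
    Returns one face key point set (one local region) per group."""
    groups = {name: {} for name in REGION_RANGES}
    for number, coord in keypoints.items():
        for name, numbers in REGION_RANGES.items():
            if number in numbers:
                groups[name][number] = coord
    return groups

sample = {10: (40, 90), 25: (45, 70), 63: (50, 30)}
groups = group_keypoints(sample)
print(sorted(groups["contour"]), sorted(groups["mouth"]), sorted(groups["eyes"]))
# [10] [25] [63]
```

Each returned group is one face key point set; the polygon enclosed by its coordinates would form the corresponding local region.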
Of course, it can be understood that the above embodiment, in which the face key point set corresponding to each facial feature is determined according to the correspondence between the acquired identification information of the face key points and the facial features, and the region enclosed by the key points in each set is determined as one local region, can be combined with any of the preceding or following embodiments, provided their implementations do not conflict with this one. For example, this embodiment can be combined with the earlier embodiment in which each pixel in a local region is moved toward the moving reference point based on the pixel movement strategy parameters; with the later embodiment in which each pixel in a local region is moved toward the moving reference point based on the pixel position adjustment ratio included in the pixel movement strategy parameters; or with the later embodiment in which the target position of each moved pixel in the face region is determined from the face-thinning intensity coefficient included in a user-triggered face adjustment instruction together with the acquired pre-movement and post-movement positions of each moved pixel in the face region; and so on.
Step S140: for each local region, based on the pixel movement strategy parameters corresponding to that local region, move each pixel in the local region toward the moving reference point.
The face-thinning operation can then be performed; that is, pixel adjustments are carried out separately for each of the local regions into which the face region requiring thinning has been divided.
As mentioned above, each local region has its own corresponding pixel movement strategy parameters, and the parameters corresponding to different local regions are not the same.
In some implementations, the pixel movement strategy parameter can be a pixel position adjustment ratio. In this case, each pixel included in each local region is shifted toward the moving reference point to achieve the face-thinning effect; the degree of adjustment is determined by the pixel position adjustment ratio corresponding to the local region in which the pixel lies.
In this implementation, when moving the pixels of each local region toward the moving reference point, the process can be as follows: for each pixel in the local region, determine the first distance d between the pixel and the moving reference point; then, based on d and the pixel position adjustment ratio T corresponding to the local region in which the pixel lies, determine the second distance d′ between the pixel and the moving reference point; finally, move the pixel toward the moving reference point so that its distance to the moving reference point after the move equals the second distance.
In some implementations, the first product of the first distance d and the pixel position adjustment ratio T corresponding to the local region to which the pixel belongs can be computed, and this first product determined as the second distance d′.
For example, the above process can be realized based on the formula d′ = d × T, with T ∈ [0, 1].
Taking as an example the case in which every pixel in the face region shares the same moving reference point, namely face key point 63 at the midpoint between the eyes: suppose the pixel position adjustment ratio corresponding to the facial contour region is T1 and that corresponding to the mouth region is T2. When the pixels are moved, each pixel included in the facial contour region is moved toward the moving reference point represented by key point 63 so that its resulting distance from key point 63 equals the product of T1 and that pixel's original distance d from key point 63; likewise, each pixel included in the mouth contour region is moved toward the moving reference point represented by key point 63 so that its resulting distance equals the product of T2 and that pixel's original distance d from key point 63.
Of course, it can be understood that this specific example may also include other, unenumerated local regions and their corresponding pixel position adjustment ratios; the pixels of those regions would likewise be moved toward the moving reference point represented by face key point 63 in the manner described above.
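A minimal sketch of the movement rule d′ = d × T: the pixel stays on the ray from the moving reference point through its original position, with its distance to the reference point scaled by the local region's adjustment ratio. The coordinates and the ratio value are illustrative only:

```python
import math

def move_pixel(pixel, reference, ratio):
    """Move a pixel toward its moving reference point so that its new
    distance to the reference point is d' = d * T, with T in [0, 1]."""
    px, py = pixel
    rx, ry = reference
    if math.dist(pixel, reference) == 0:
        return pixel  # already at the reference point
    # Scaling the vector from the reference point to the pixel by T keeps
    # the direction unchanged and makes the new distance equal to d * T.
    return (rx + (px - rx) * ratio, ry + (py - ry) * ratio)

# Contour pixel at (90, 30), reference point (e.g. key point 63) at (50, 30),
# contour ratio T1 = 0.75: the distance shrinks from 40 to 30.
print(move_pixel((90, 30), (50, 30), 0.75))  # (80.0, 30.0)
```

With T < 1 every pixel draws closer to the reference point, which is what produces the thinning effect; T = 1 leaves the pixel where it was.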
In addition, it is worth pointing out that when the moving reference points corresponding to the pixels of the face region are not the same, each pixel in the above pixel movement process moves toward its own corresponding moving reference point.
It should also be noted that, in the embodiments of the present disclosure, the pixel movement strategy parameter may instead be the ratio between a pixel's distance from the reference point after the move and its distance from the reference point before the move; in that case, the pixel's target position can be determined according to this ratio when the pixel is moved.
Of course, in some optional implementations, the pixel movement strategy parameter may also be the distance by which a pixel is moved, among other possibilities; the embodiments of the present disclosure will not enumerate them one by one.
For the way in which the moving reference point corresponding to each pixel is determined, refer to the relevant description above, which is not repeated here.
Of course, the pixel position adjustment ratios corresponding to the various local regions are mutually independent; some may be the same and some may be entirely different.
In the embodiments of the present disclosure, to ensure a natural result after face thinning, back-end staff configure a corresponding pixel position adjustment ratio for each local region of the face in advance. Specifically, the staff can test different pixel position adjustment ratios for each local region, observe the corresponding visual effects, determine the best pixel position adjustment ratio for each local region, and save it. During actual use, users cannot directly adjust the pixel position adjustment ratio of an individual local region.
Of course, in some implementations the effect after the above pixel movement may still fall short of the user's expectations. In that case, the user can trigger a custom adjustment via a virtual or physical button, thereby triggering a face adjustment instruction.
The face adjustment instruction can include a face-thinning intensity coefficient k, whose value can be adjusted by the user, so that the user can tune the current degree of face thinning based on k.
In the embodiments of the present disclosure, when the electronic device running the image processing method acquires and responds to a face adjustment instruction (which carries the face-thinning intensity coefficient), it can obtain, for each moved pixel in the face region of the processed image (either the image to be processed or the face image cropped from it), the pixel's position before the move and its position after the move. Based on the face-thinning intensity coefficient k, the pre-movement position, and the post-movement position, it then determines the target position corresponding to each moved pixel in the face region of the processed image, and moves each such pixel to the target position indicated by that target position information.
In an optional implementation, the above position information can be represented by coordinates; correspondingly, determining, based on the face-thinning intensity coefficient k, the pre-movement position, and the post-movement position, the target position corresponding to each moved pixel in the face region of the processed image can be realized through the following process:
For each moved pixel in the face region, determine the first coordinate difference between the second coordinates (x′_i, y′_i) and the first coordinates (x_i, y_i); then compute the second product of the face-thinning intensity coefficient k and the first coordinate difference, and determine the sum of the first coordinates (x_i, y_i) and the second product as the target position:
(x_i + k(x′_i - x_i), y_i + k(y′_i - y_i))
Here, the pre-movement position is represented by the first coordinates, and the post-movement position is represented by the second coordinates.
例如上述过程可以基于公式
Figure PCTCN2022087744-appb-000002
来实现,k∈[0,1]。
For example the above procedure could be based on the formula
Figure PCTCN2022087744-appb-000002
To achieve, k∈[0,1].
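As an illustrative, non-authoritative sketch of this interpolation (the function name `blend_position` and the tuple layout are assumptions for illustration, not taken from the patent), the computation can be written as:

```python
def blend_position(p_before, p_after, k):
    """Interpolate between a pixel's pre-move and post-move positions.

    p_before: (x_i, y_i), the first coordinate information (before the move).
    p_after:  (x'_i, y'_i), the second coordinate information (after the move).
    k:        face-thinning intensity coefficient, k in [0, 1].
    Returns (x_i + k*(x'_i - x_i), y_i + k*(y'_i - y_i)).
    """
    if not 0.0 <= k <= 1.0:
        raise ValueError("k must lie in [0, 1]")
    x, y = p_before
    xp, yp = p_after
    return (x + k * (xp - x), y + k * (yp - y))

# k = 0 keeps the original position; k = 1 applies the full-strength move.
print(blend_position((10.0, 20.0), (14.0, 16.0), 0.5))  # (12.0, 18.0)
```

With k = 0.5 the pixel lands halfway between its original and fully thinned positions, which is exactly the intensity-scaling behavior described above.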
Of course, it is worth pointing out that when the object processed during face thinning is the image to be processed, the coordinate information involved above may be coordinate information belonging to a first coordinate system; when the object processed is a face image cropped from the image to be processed, the coordinate information involved above may be coordinate information belonging to a second coordinate system.
Optionally, in some implementations, the movement information of each pixel in the face region of the processed image (which may be the image to be processed, or a face image cropped from the image to be processed) along the X axis and the Y axis may also be determined based on the face-thinning intensity coefficient k, the position information before the movement, and the position information after the movement (the movement information including a movement distance and a movement direction), and each pixel in the face region is then moved based on this movement information.
Optionally, in an optional implementation, the above position information may be represented by coordinate information. Correspondingly, the movement information corresponding to each pixel may be determined as follows:
For each moved pixel in the face region, determine the first coordinate difference between the second coordinate information (x′ᵢ, y′ᵢ) and the first coordinate information (xᵢ, yᵢ); then compute the difference between a set value and the face-thinning intensity coefficient k, compute the product of that difference and the first coordinate difference, and determine the product as the movement information. The product may be positive or negative; its sign indicates the direction of movement.
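A minimal sketch of this variant, assuming the set value is 1 (the patent does not fix the set value; the names below are illustrative):

```python
def movement_info(p_before, p_after, k, set_value=1.0):
    """Per-axis movement information: (set_value - k) * (after - before).

    The sign of each component encodes the movement direction along that
    axis; the magnitude is the movement distance.  With set_value = 1,
    moving the pixel from p_after by this amount undoes the fraction
    (1 - k) of the full displacement.
    """
    dx = p_after[0] - p_before[0]  # full face-thinning displacement, X axis
    dy = p_after[1] - p_before[1]  # full face-thinning displacement, Y axis
    return ((set_value - k) * dx, (set_value - k) * dy)

# With k = 0.25, 75% of the displacement remains to be undone:
mx, my = movement_info((10.0, 20.0), (14.0, 16.0), 0.25)
print(mx, my)  # 3.0 -3.0
```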
Of course, the embodiments of the present disclosure only list, by way of example, possible implementations of face thinning based on the face-thinning intensity coefficient; face thinning may also be performed in other ways based on the face-thinning intensity coefficient, which are not enumerated here.
In addition, in some implementations, when the object processed during face thinning is a face image cropped from the image to be processed, after the face image has been thinned, the thinned face image further needs to replace the original face image included in the image to be processed, so that the image to be processed presents the face-thinning effect.
In addition, referring to FIG. 2, an embodiment of the present disclosure further provides an image processing apparatus 400. The image processing apparatus 400 may include: an acquisition module 410, a determination module 420, a division module 430, and an adjustment module 440.

The acquisition module 410 is configured to acquire face key point information of an image to be processed, where the image to be processed may include a face region.

The determination module 420 is configured to determine, according to the face key point information, a movement reference point used when moving pixels in the face region.

The division module 430 is configured to divide the face region into a plurality of local regions according to the face key point information.

The adjustment module 440 is configured to, for each local region, move each pixel in the local region toward the movement reference point based on a pixel movement strategy parameter corresponding to the local region.
In a possible implementation, the pixel movement strategy may include a pixel position adjustment ratio. The adjustment module 440 is configured to: for each pixel in the local region, determine a first distance between the pixel and the movement reference point; determine a second distance between the pixel and the movement reference point based on the first distance and the corresponding pixel position adjustment ratio; and move the pixel toward the movement reference point, so that the distance between the moved pixel and the movement reference point is equal to the second distance.

In a possible implementation, the adjustment module 440 is configured to determine a first product value of the first distance and the pixel position adjustment ratio corresponding to the local region to which the pixel belongs, and to determine the first product value as the second distance.
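The distance scaling described for the adjustment module 440 might be sketched as follows (function names and coordinate conventions are assumptions for illustration, not part of the patent):

```python
import math

def move_toward_reference(pixel, ref, ratio):
    """Move a pixel toward a movement reference point.

    first distance  d1 = |pixel - ref|
    second distance d2 = d1 * ratio   (the 'first product value')
    The pixel is placed on the ray from ref through pixel at distance d2.
    """
    dx, dy = pixel[0] - ref[0], pixel[1] - ref[1]
    d1 = math.hypot(dx, dy)
    if d1 == 0.0:
        return pixel  # the pixel already sits on the reference point
    # d2 / d1 == ratio, so scaling the offset by the ratio suffices
    return (ref[0] + dx * ratio, ref[1] + dy * ratio)

# A ratio below 1 pulls the pixel closer to the reference point:
print(move_toward_reference((13.0, 24.0), (10.0, 20.0), 0.9))
```

Per-region ratios (e.g. a smaller ratio for the cheeks than for the eyes) then produce the differentiated thinning effect described above.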
In a possible implementation, the adjustment module 440 is further configured to: in response to a user-triggered face adjustment instruction, acquire the position information before the movement and the position information after the movement of each moved pixel in the face region, where the face adjustment instruction may carry a face-thinning intensity coefficient; determine, based on the face-thinning intensity coefficient, the position information before the movement, and the position information after the movement, the target position information corresponding to each moved pixel in the face region; and move each moved pixel in the face region to the target position corresponding to the target position information.

In a possible implementation, the position information is represented by coordinate information. The adjustment module 440 is configured to: for each moved pixel in the face region, determine a first coordinate difference between the second coordinate information and the first coordinate information, where the position information before the movement may be represented by the first coordinate information, and the position information after the movement by the second coordinate information; compute a second product value of the face-thinning intensity coefficient and the first coordinate difference; and determine the sum of the first coordinate information and the second product value as the target position information.
In a possible implementation, the determination module 420 is configured to determine the face key point corresponding to the center position between the two eyes in the face region, and to determine that face key point as the movement reference point;

or, it is configured to determine the face key point corresponding to the nose-tip position in the face region, and to determine that face key point as the movement reference point.

In a possible implementation, the determination module 420 is configured to determine a reference line based on the face key points located on the vertical center line of the face region, and, for each pixel in the face region, to determine the face key point on the reference line with the smallest distance to the pixel as the movement reference point corresponding to that pixel;

correspondingly, the adjustment module 440 is configured to move each pixel in the local region toward its corresponding movement reference point.
In a possible implementation, the face key point information may include identification information of the face key points. The division module 430 is configured to determine, according to the correspondence between the acquired identification information of the face key points and the facial organs, a face key point set corresponding to each facial organ, and to determine the region enclosed by the face key points in each face key point set as one local region.
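A hedged sketch of this grouping step; the key point IDs and the ID-to-organ correspondence below are hypothetical, since the concrete numbering depends on the key point detection model used:

```python
# Hypothetical correspondence between key point IDs and facial organs;
# a real detection model defines its own numbering scheme.
ORGAN_OF_ID = {0: "left_eye", 1: "left_eye", 2: "right_eye",
               3: "right_eye", 4: "nose", 5: "nose"}

def local_regions(keypoints):
    """Group key points by facial organ.

    keypoints: dict mapping key point ID -> (x, y) coordinate.
    Returns a dict mapping organ name -> list of (x, y) vertices; the
    region enclosed by each list is one local region.
    """
    regions = {}
    for point_id, xy in keypoints.items():
        organ = ORGAN_OF_ID[point_id]
        regions.setdefault(organ, []).append(xy)
    return regions

pts = {0: (1, 1), 1: (2, 1), 4: (3, 3), 5: (3, 4)}
print(sorted(local_regions(pts)))  # ['left_eye', 'nose']
```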
In a possible implementation, different local regions correspond to different pixel position adjustment ratios.

In a possible implementation, the acquisition module 410 is configured to input the image to be processed into a face key point detection model, perform face key point detection on the image to be processed through the face key point detection model, and acquire the face key point information output by the face key point detection model.
In a possible implementation, the face key point information may include coordinate information of a face positioning frame and coordinate information of the face key points. The apparatus may further include a normalization module configured to normalize the coordinate information of the face key points based on the coordinate information of the face positioning frame, to obtain normalized coordinate information of the face key points.
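One plausible form of this normalization, assuming the positioning frame is given as (left, top, width, height) and the transform maps coordinates into the unit square (the patent does not specify the exact transform):

```python
def normalize_keypoints(keypoints, box):
    """Normalize key point coordinates by the face positioning frame.

    keypoints: list of (x, y) in image coordinates.
    box:       (left, top, width, height) of the face positioning frame.
    Returns the key points expressed in [0, 1] relative to the frame.
    """
    left, top, width, height = box
    return [((x - left) / width, (y - top) / height) for x, y in keypoints]

print(normalize_keypoints([(60.0, 40.0)], (50.0, 30.0, 20.0, 40.0)))
# [(0.5, 0.25)]
```

Normalizing by the frame makes the subsequent geometry independent of where and how large the face appears in the image.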
The image processing apparatus 400 provided by the embodiments of the present disclosure has the same implementation principles and technical effects as the foregoing method embodiments. For brevity, where the apparatus embodiments do not mention a point, reference may be made to the corresponding content of the foregoing method embodiments.

In addition, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a computer, the steps included in the above image processing method are executed.

In addition, referring to FIG. 3, an embodiment of the present disclosure further provides an electronic device 100 for implementing the image processing method and apparatus.

The electronic device 100 may be a mobile phone, a smart camera, a tablet computer, a personal computer (PC), or a similar device. A user may use the electronic device 100 for activities such as taking photos, live video streaming, and image processing.

The electronic device 100 may include: a processor 110, a memory 120, and a display screen 130.
It should be noted that the components and structure of the electronic device 100 shown in FIG. 3 are merely exemplary rather than limiting; the electronic device 100 may also have other components and structures as required. For example, in some cases the electronic device 100 may further include a camera configured to capture images to be processed in real time.

The processor 110, the memory 120, the display screen 130, and other components that may be present in the electronic device 100 are electrically connected to one another, directly or indirectly, to enable data transmission or interaction. For example, these components may be electrically connected to one another through one or more communication buses or signal lines.

The memory 120 is used to store a program, for example a program corresponding to the image processing method described above, or the image processing apparatus described above. Optionally, when the image processing apparatus is stored in the memory 120, it may include at least one software function module stored in the memory 120 in the form of software or firmware.

Optionally, the software function modules included in the image processing apparatus may also be solidified in the operating system (OS) of the electronic device 100.

The processor 110 is configured to execute the executable modules stored in the memory 120, such as the software function modules or computer programs included in the image processing apparatus. After receiving an execution instruction, the processor 110 may execute the computer program, for example to: acquire face key point information of an image to be processed, the image to be processed including a face region; determine, according to the face key point information, a movement reference point used when moving pixels in the face region; divide the face region into a plurality of local regions according to the face key point information; and, for each local region, move each pixel in the local region toward the movement reference point based on the pixel movement strategy parameter corresponding to the local region.

Of course, the method disclosed in any embodiment of the present disclosure may be applied to, or implemented by, the processor 110.
An embodiment of the present disclosure further provides a computer program product including code instructions which, when executed by a processor, cause the processor to execute the method provided by the above embodiments of the present disclosure and/or by any possible implementation combined with the embodiments of the present disclosure.

An embodiment of the present disclosure further provides a computer program which, when run by a computer, executes the method provided by the embodiments of the present disclosure and/or by any possible implementation combined with the embodiments of the present disclosure.

In summary, with the image processing method, apparatus, electronic device, and computer-readable storage medium proposed by the embodiments of the present disclosure, when face thinning needs to be performed on an image to be processed, the face key point information of the image may first be acquired; then, according to the face key point information, the movement reference point is determined and the face region is divided into different local regions. During face thinning, the pixels included in each local region are moved toward the movement reference point according to the pixel movement strategy corresponding to the local region to which they belong, so that the resulting face-thinning effect is as natural as possible, thereby improving the visual effect presented after face thinning.

It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of the apparatuses, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions configured to implement the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

In addition, the functional modules in the embodiments of the present disclosure may be integrated together to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.

If the functions are implemented in the form of software function modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present disclosure, or the part that contributes to the related art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product may be stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a notebook computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above are merely optional implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can readily conceive of changes or replacements within the technical scope disclosed in the present disclosure, and such changes or replacements shall all fall within the protection scope of the present disclosure.
Industrial Applicability
The present disclosure provides an image processing method, apparatus, electronic device, and computer-readable storage medium. When face thinning needs to be performed on an image to be processed, the face key point information of the image is first acquired; then, according to the face key point information, the movement reference point used when moving pixels in the face region is determined, and the face region is divided into different local regions. During face thinning, the pixels included in each local region are moved toward the movement reference point according to the pixel movement strategy parameter corresponding to the local region to which they belong, so that the resulting face-thinning effect is as natural as possible, thereby improving the visual effect presented after face thinning.

In addition, it can be understood that the image processing method, apparatus, electronic device, and computer-readable storage medium of the present disclosure are reproducible and can be used in a variety of industrial applications, for example in the field of image processing.

Claims (15)

  1. An image processing method, characterized in that the method comprises:
    acquiring face key point information of an image to be processed, the image to be processed comprising a face region;
    determining, according to the face key point information, a movement reference point used when moving pixels in the face region;
    dividing the face region into a plurality of local regions according to the face key point information;
    for each local region, moving each pixel in the local region toward the movement reference point based on a pixel movement strategy parameter corresponding to the local region.
  2. The method according to claim 1, characterized in that the pixel movement strategy parameter comprises a pixel position adjustment ratio;
    correspondingly, the moving each pixel in the local region toward the movement reference point based on the pixel movement strategy parameter corresponding to the local region comprises:
    for each pixel in the local region, determining a first distance between the pixel and the movement reference point;
    determining a second distance between the pixel and the movement reference point based on the first distance and the corresponding pixel position adjustment ratio;
    moving the pixel toward the movement reference point, so that the distance between the moved pixel and the movement reference point is equal to the second distance.
  3. The method according to claim 2, characterized in that the determining a second distance between the pixel and the movement reference point based on the first distance and the corresponding pixel position adjustment ratio comprises:
    determining a first product value of the first distance and the pixel position adjustment ratio corresponding to the local region to which the pixel belongs;
    determining the first product value as the second distance.
  4. The method according to claim 1, characterized in that, after the moving each pixel in the local region toward the movement reference point based on the pixel movement parameter corresponding to the local region, the method further comprises:
    in response to a user-triggered face adjustment instruction, acquiring position information before movement and position information after movement of each moved pixel in the face region, wherein the face adjustment instruction carries a face-thinning intensity coefficient;
    determining, based on the face-thinning intensity coefficient, the position information before movement, and the position information after movement, target position information corresponding to each moved pixel in the face region;
    moving each moved pixel in the face region to a target position corresponding to the target position information.
  5. The method according to claim 4, characterized in that the position information is represented by coordinate information;
    the determining, based on the face-thinning intensity coefficient, the position information before movement, and the position information after movement, target position information corresponding to each moved pixel in the face region comprises:
    for each moved pixel in the face region, determining a first coordinate difference between second coordinate information and first coordinate information, wherein the position information before movement is represented by the first coordinate information, and the position information after movement is represented by the second coordinate information;
    computing a second product value of the face-thinning intensity coefficient and the first coordinate difference;
    determining a sum of the first coordinate information and the second product value as the target position information.
  6. The method according to claim 4 or 5, characterized in that the magnitude of the face-thinning intensity coefficient is adjustable.
  7. The method according to any one of claims 1 to 6, characterized in that the determining, according to the face key point information, a movement reference point used when moving pixels in the face region comprises:
    determining a face key point corresponding to a center position between the two eyes in the face region, and determining the face key point corresponding to the center position between the two eyes as the movement reference point;
    or,
    determining a face key point corresponding to a nose-tip position in the face region, and determining the face key point corresponding to the nose-tip position as the movement reference point.
  8. The method according to any one of claims 1 to 6, characterized in that the determining, according to the face key point information, a movement reference point used when moving pixels in the face region comprises:
    determining a reference line based on face key points located on a vertical center line of the face region;
    for each pixel in the face region, determining the face key point on the reference line with the smallest distance to the pixel as the movement reference point corresponding to the pixel;
    correspondingly, the moving each pixel in the local region toward the movement reference point comprises:
    moving each pixel in the local region toward its corresponding movement reference point.
  9. The method according to any one of claims 1 to 6, characterized in that the face key point information comprises identification information of face key points, and the dividing the face region into a plurality of local regions according to the face key point information comprises:
    determining, according to a correspondence between the acquired identification information of the face key points and facial organs, a face key point set corresponding to each facial organ;
    determining a region enclosed by the face key points in each face key point set as one local region.
  10. The method according to claim 2 or 3, characterized in that different local regions correspond to different pixel position adjustment ratios.
  11. 根据权利要求1至6中任一项所述的方法,其特征在于,所述获取待处理图像的 人脸关键点信息,包括:The method according to any one of claims 1 to 6, wherein said acquisition of the face key point information of the image to be processed comprises:
    将所述待处理图像输入至人脸关键点检测模型,通过所述人脸关键点检测模型对所述待处理图像进行人脸关键点检测;The image to be processed is input to a human face key point detection model, and the human face key point detection is carried out to the image to be processed by the human face key point detection model;
    获取所述人脸关键点检测模型输出的所述人脸关键点信息。The human face key point information output by the human face key point detection model is acquired.
  12. The method according to any one of claims 1 to 6, wherein the face key point information includes coordinate information of a face positioning frame and coordinate information of the face key points; and before determining, according to the face key point information, the moving reference point used when moving the pixels in the face region, the method further comprises:
    normalizing the coordinate information of the face key points based on the coordinate information of the face positioning frame to obtain normalized coordinate information of the face key points.
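The normalization step of claim 12 can be sketched as follows. The `(x, y, width, height)` frame layout and the mapping into [0, 1] are assumptions for illustration; the patent only states that key-point coordinates are normalized using the positioning-frame coordinates.

```python
# Sketch of claim 12: normalize key-point coordinates with the face
# positioning frame before computing the moving reference point.
def normalize_keypoints(keypoints, frame):
    """Map absolute (x, y) key points into [0, 1] relative to the frame."""
    fx, fy, fw, fh = frame
    return [((x - fx) / fw, (y - fy) / fh) for x, y in keypoints]

frame = (100, 50, 200, 200)          # hypothetical face positioning frame
kps = [(150, 100), (300, 250)]       # absolute key-point coordinates
print(normalize_keypoints(kps, frame))  # [(0.25, 0.25), (1.0, 1.0)]
```

Normalizing this way makes the subsequent reference-point computation independent of the face's absolute size and position in the image.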
  13. An electronic device, comprising a memory and a processor, the memory being connected to the processor;
    the memory being configured to store a program;
    the processor invoking the program stored in the memory to perform the method according to any one of claims 1 to 12.
  14. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when run by a computer, performs the method according to any one of claims 1 to 12.
  15. A computer program product comprising code instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 12.
PCT/CN2022/087744 2021-06-23 2022-04-19 Image processing method, electronic device, and computer readable storage medium WO2022267653A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110701338.XA CN113591562B (en) 2021-06-23 Image processing method, device, electronic equipment and computer readable storage medium
CN202110701338.X 2021-06-23

Publications (1)

Publication Number Publication Date
WO2022267653A1

Family

ID=78244528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/087744 WO2022267653A1 (en) 2021-06-23 2022-04-19 Image processing method, electronic device, and computer readable storage medium

Country Status (1)

Country Link
WO (1) WO2022267653A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115994947A (en) * 2023-03-22 2023-04-21 万联易达物流科技有限公司 Positioning-based intelligent card punching estimation method
CN115994947B (en) * 2023-03-22 2023-06-02 万联易达物流科技有限公司 Positioning-based intelligent card punching estimation method
CN118052723A (en) * 2023-12-08 2024-05-17 深圳市石代科技集团有限公司 Intelligent design system for face replacement
CN117974902A (en) * 2024-02-26 2024-05-03 杭州万物互云科技有限公司 Digital three-dimensional face modeling method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198141A (en) * 2017-12-28 2018-06-22 北京奇虎科技有限公司 Realize image processing method, device and the computing device of thin face special efficacy
CN111652794A (en) * 2019-07-05 2020-09-11 广州虎牙科技有限公司 Face adjusting method, face live broadcasting method, face adjusting device, live broadcasting device, electronic equipment and storage medium
CN111652795A (en) * 2019-07-05 2020-09-11 广州虎牙科技有限公司 Face shape adjusting method, face shape adjusting device, live broadcast method, live broadcast device, electronic equipment and storage medium
CN112488909A (en) * 2019-09-11 2021-03-12 广州虎牙科技有限公司 Multi-face image processing method, device, equipment and storage medium
US20210166070A1 (en) * 2019-12-02 2021-06-03 Qualcomm Incorporated Multi-Stage Neural Network Process for Keypoint Detection In An Image
CN113591562A (en) * 2021-06-23 2021-11-02 北京旷视科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium


Also Published As

Publication number Publication date
CN113591562A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN109359538B (en) Training method of convolutional neural network, gesture recognition method, device and equipment
WO2022267653A1 (en) Image processing method, electronic device, and computer readable storage medium
US10832039B2 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
US10599914B2 (en) Method and apparatus for human face image processing
WO2019218824A1 (en) Method for acquiring motion track and device thereof, storage medium, and terminal
WO2022156640A1 (en) Gaze correction method and apparatus for image, electronic device, computer-readable storage medium, and computer program product
WO2021213067A1 (en) Object display method and apparatus, device and storage medium
CN108388882B (en) Gesture recognition method based on global-local RGB-D multi-mode
Liu et al. Real-time robust vision-based hand gesture recognition using stereo images
CN102332095B (en) Face motion tracking method, face motion tracking system and method for enhancing reality
WO2020125499A1 (en) Operation prompting method and glasses
US20140153832A1 (en) Facial expression editing in images based on collections of images
WO2020078119A1 (en) Method, device and system for simulating user wearing clothing and accessories
WO2019075666A1 (en) Image processing method and apparatus, terminal, and storage medium
WO2021218293A1 (en) Image processing method and apparatus, electronic device and storage medium
CN111563502A (en) Image text recognition method and device, electronic equipment and computer storage medium
WO2023050992A1 (en) Network training method and apparatus for facial reconstruction, and device and storage medium
WO2022042624A1 (en) Information display method and device, and storage medium
CN112101208A (en) Feature series fusion gesture recognition method and device for elderly people
CN112446322B (en) Eyeball characteristic detection method, device, equipment and computer readable storage medium
CN111079625A (en) Control method for camera to automatically rotate along with human face
WO2023098635A1 (en) Image processing
CN114549557A (en) Portrait segmentation network training method, device, equipment and medium
US20220207917A1 (en) Facial expression image processing method and apparatus, and electronic device
Purps et al. Reconstructing facial expressions of hmd users for avatars in vr

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22827155

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22827155

Country of ref document: EP

Kind code of ref document: A1