CN110688962B - Face image processing method, user equipment, storage medium and device - Google Patents

Face image processing method, user equipment, storage medium and device

Info

Publication number
CN110688962B
CN110688962B (application CN201910939068.9A)
Authority
CN
China
Prior art keywords
face
image
building block
height
width
Prior art date
Legal status
Active
Application number
CN201910939068.9A
Other languages
Chinese (zh)
Other versions
CN110688962A (en)
Inventor
孙碧亮
刘凯能
Current Assignee
Wuhan Ar Show Software Co ltd
Original Assignee
Wuhan Ar Show Software Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Ar Show Software Co ltd
Priority to CN201910939068.9A
Publication of CN110688962A
Application granted
Publication of CN110688962B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face image processing method, user equipment, a storage medium and a device. The method acquires a target image to be processed, identifies the position of the face in the target image, segments the identified face according to that position to generate a cutout image, and maps the cutout image to the corresponding building block particle colors to generate a face building block image. Because the identified face is segmented before being mapped to the building block particle colors, the method solves the technical problem that images are easily and seriously distorted during face building block processing.

Description

Face image processing method, user equipment, storage medium and device
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, a user equipment, a storage medium, and an apparatus for processing a face image.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. In this family of technologies, also called portrait recognition or facial recognition, a camera or video camera captures an image or video stream containing a face, the face is automatically detected and tracked in the image, and recognition is then performed on the detected face.
A face building block image is a special image formed by identifying the face in an image and processing its colors and other properties; it is used for making portrait pictures, souvenirs, gifts and the like.
In the existing face building block process, the image is produced through processing such as image binarization and image edge detection, and serious image distortion easily occurs.
The above is only for the purpose of assisting understanding of the technical solution of the present invention, and does not represent an admission that the above is the prior art.
Disclosure of Invention
The invention mainly aims to provide a face image processing method, user equipment, a storage medium and a device, so as to solve the technical problem in the prior art that images are easily and seriously distorted during face building block processing.
In order to achieve the above object, the present invention provides a face image processing method, comprising the following steps:
acquiring a target image to be processed;
identifying the position of the face in the target image;
segmenting the identified face according to the position to generate a cutout image;
and mapping the cutout image to the corresponding building block particle colors to generate a face building block image.
Preferably, the recognizing the position of the face in the target image includes:
and identifying the face in the target image based on a preset convolutional neural network model, and acquiring the position of the face.
Preferably, the segmenting the identified face according to the position to generate a cutout image includes:
identifying the connected region where the face is located according to the position of the face;
and segmenting the face based on the identified connected region to generate the cutout image.
Preferably, the segmenting the face based on the identified connected region to generate the cutout image includes:
acquiring the maximum height and the maximum width of the connected region;
determining a compression ratio based on the maximum height, the maximum width, and the height and width of the building block baseplate particle pixels;
determining the compressed height and width of the facial features in the target image according to the compression ratio;
and when the compressed height is smaller than the height of a baseplate particle pixel and/or the compressed width is smaller than the width of a baseplate particle pixel, adjusting the maximum height and the maximum width of the connected region and taking the adjusted connected region as the cutout image.
Preferably, the adjusting the maximum height and the maximum width of the connected region and taking the adjusted connected region as the cutout image includes:
reducing the maximum height and the maximum width of the connected region according to a preset rule;
and when the compressed height is greater than the height of a baseplate particle pixel and the compressed width is greater than the width of a baseplate particle pixel, taking the current connected region as the cutout image.
Preferably, the mapping the cutout image to the corresponding building block particle colors to generate a face building block image includes:
dividing the cutout image into a plurality of feature sub-regions;
determining the color feature value of each feature sub-region according to a preset formula;
matching the color feature value with a building block color according to the color feature value and the color value intervals of the building blocks to be spliced;
and performing threshold segmentation on each feature sub-region according to the color feature value to generate the face building block image.
Preferably, the feature sub-regions include an eye feature sub-region, an eyebrow feature sub-region, a nose feature sub-region, a mouth feature sub-region, and a cheek feature sub-region.
In order to achieve the above object, the present invention further provides a user equipment, comprising a memory, a processor, and a face image processing program stored on the memory and executable on the processor, wherein the face image processing program, when executed by the processor, implements the steps of the face image processing method described above.
In order to achieve the above object, the present invention further provides a storage medium, in which a face image processing program is stored, and the face image processing program, when executed by a processor, implements the steps of the face image processing method as described above.
In order to achieve the above object, the present invention further provides a face image processing apparatus, including:
the acquisition module is used for acquiring a target image to be processed;
the recognition module is used for recognizing the position of the face in the target image;
the segmentation module is used for segmenting the identified face according to the position to generate a cutout image;
and the mapping module is used for mapping the cutout image to the corresponding building block particle colors to generate a face building block image.
According to the technical scheme, a target image to be processed is acquired, the position of the face in the target image is identified, the identified face is segmented according to that position to generate a cutout image, and the cutout image is mapped to the corresponding building block particle colors to generate a face building block image. Because the segmented face is mapped to the building block particle colors, the technical problem that images are easily and seriously distorted during face building block processing is solved.
Drawings
FIG. 1 is a schematic diagram of a user equipment architecture of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of a face image processing method according to the present invention;
FIG. 3 is a detailed flowchart of step S30 in FIG. 2;
FIG. 4 is a detailed flowchart of step S32 in FIG. 3;
FIG. 5 is a detailed flowchart of step S324 in FIG. 4;
FIG. 6 is a detailed flowchart of step S40 in FIG. 2;
FIG. 7 is a functional block diagram of a face image processing apparatus according to a first embodiment of the present invention;
FIG. 8 is a diagram of a target image in the first embodiment of the face image processing method of the present invention;
FIG. 9 is a cutout image in the first embodiment of the face image processing method of the present invention;
FIG. 10 is a face building block image in the first embodiment of the face image processing method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1 is a schematic structural diagram of the user equipment in a hardware operating environment according to an embodiment of the present invention. The user equipment comprises a processor 1001, a network interface 1004, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory), and may alternatively be a storage device separate from the processor 1001. The network interface 1004 may include a standard wired interface or a wireless interface (e.g., a WI-FI interface). Optionally, the user device may also include a user interface, which may include an I/O (Input/Output) interface such as a USB interface, and a video interface such as an HDMI (High Definition Multimedia Interface), SDI (Serial Digital Interface), VGA (Video Graphics Array) or DVI (Digital Visual Interface) interface. An I/O device and a display device can be connected to the user device through the I/O interface and the video interface, respectively; the I/O device may be an input device such as a keyboard or a mouse. Those skilled in the art will appreciate that the configuration shown in Fig. 1 does not constitute a limitation of the user equipment, which may include more or fewer components than those shown.
The memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, and an access display program. The user equipment can be electronic equipment such as a personal computer or a smart phone.
In the user equipment shown in fig. 1, the network interface 1004 is mainly used for connecting a terminal and communicating data with the terminal; and the processor 1001 may be configured to call the access display program stored in the memory 1005 and perform the following operations:
acquiring a target image to be processed;
identifying the position of the face in the target image;
segmenting the identified face according to the position to generate a cutout image;
and mapping the cutout image to the corresponding building block particle colors to generate a face building block image.
Preferably, the recognizing the position of the face in the target image includes:
and identifying the face in the target image based on a preset convolutional neural network model, and acquiring the position of the face.
Preferably, the segmenting the identified face according to the position to generate a cutout image includes:
identifying the connected region where the face is located according to the position of the face;
and segmenting the face based on the identified connected region to generate the cutout image.
Preferably, the segmenting the face based on the identified connected region to generate the cutout image includes:
acquiring the maximum height and the maximum width of the connected region;
determining a compression ratio based on the maximum height, the maximum width, and the height and width of the building block baseplate particle pixels;
determining the compressed height and width of the facial features in the target image according to the compression ratio;
and when the compressed height is smaller than the height of a baseplate particle pixel and/or the compressed width is smaller than the width of a baseplate particle pixel, adjusting the maximum height and the maximum width of the connected region and taking the adjusted connected region as the cutout image.
Preferably, the adjusting the maximum height and the maximum width of the connected region and taking the adjusted connected region as the cutout image includes:
reducing the maximum height and the maximum width of the connected region according to a preset rule;
and when the compressed height is greater than the height of a baseplate particle pixel and the compressed width is greater than the width of a baseplate particle pixel, taking the current connected region as the cutout image.
Preferably, the mapping the cutout image to the corresponding building block particle colors to generate a face building block image includes:
dividing the cutout image into a plurality of feature sub-regions;
determining the color feature value of each feature sub-region according to a preset formula;
matching the color feature value with a building block color according to the color feature value and the color value intervals of the building blocks to be spliced;
and performing threshold segmentation on each feature sub-region according to the color feature value to generate the face building block image.
Based on the hardware structure, the embodiment of the face image processing method is provided.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of a face image processing method according to the present invention.
In a first embodiment, the face image processing method includes the steps of:
step S10: acquiring a target image to be processed;
it is worth mentioning that the image may be uploaded locally from the user device or may be obtained by instant shooting. In this embodiment, referring to the face image shown in fig. 8, the target image includes a face, so as to facilitate subsequent processing of the face.
Step S20: identifying the position of the face in the target image;
in order to extract a face image from a target image, the position of the face needs to be recognized first. The position in this embodiment refers to a region where the face is located in the target image. The face usually includes hair, forehead, eyebrow, cheek, nose, ear, mouth, and even ornaments, cap, neck, etc.
Step S30: segmenting the recognized face according to the position to generate a sectional image;
in this embodiment, 1500 images including only a human face and having a large background difference are used to obtain a database of human face images, so as to train a preset convolutional neural network model to recognize the human face. Because the final face is mapped to the building block bottom plate with limited area, too large non-face areas, such as too long hat, hair, too many neck, shoulders and the like, can reduce the features of the face, and finally the effect of building blocks is poor. Therefore, the human face needs to be segmented.
The principle of segmentation is that the proportion of five sense organs of the face is as large as possible, the proportion of features above the head is larger than that below the head, and the features of the five sense organs can still be kept from losing after being reduced to the size of pixels corresponding to the building block bottom plate. In general, a human face is characterized by a height from the chin to the forehead that is greater than the width of the cheeks. Therefore, the height from the chin to the forehead is used as the height of the divided seeds for searching the connected domain.
It is worth mentioning that, in order to meet diversified requirements of customers and improve user experience, the obtained cutout image can be grayed, and in the graying process, the RGB channel is converted according to the standard. Refer to the matte image shown in fig. 9.
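For illustration, a minimal sketch of this optional graying step follows. The patent only says the RGB channels are "converted according to the standard"; the ITU-R BT.601 luma weights used below, and the assumption that the cutout arrives as an OpenCV-style BGR array, are ours.

import numpy as np

def gray_cutout(cutout_bgr: np.ndarray) -> np.ndarray:
    """Gray a BGR uint8 cutout image (BT.601 weights assumed)."""
    b = cutout_bgr[..., 0].astype(np.float32)
    g = cutout_bgr[..., 1].astype(np.float32)
    r = cutout_bgr[..., 2].astype(np.float32)
    # ITU-R BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)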
Step S40: mapping the cutout image to the corresponding building block particle colors to generate a face building block image.
Because the building block particles come in a limited set of colors while the cutout image is full-color, the cutout image needs to be mapped to the corresponding building block particle colors; this embodiment performs the face building block conversion by such a mapping. Refer to the face building block image shown in fig. 10.
According to this technical scheme, the face building block image is generated by mapping the segmented face to the building block particle colors, which solves the technical problem that images are easily and seriously distorted during face building block processing.
In a first embodiment, the identifying a position of a face in the target image includes:
and identifying the face in the target image based on a preset convolutional neural network model, and acquiring the position of the face.
The convolutional neural network model is a feedforward neural network that includes convolution calculations and has a deep structure. It is one of the representative algorithms of deep learning and is widely applied to visual recognition, image processing and the like.
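The patent's own network, trained on about 1500 curated face images, is not published. As a hedged stand-in that yields the same output this step needs (a face position), the sketch below uses OpenCV's ResNet-SSD face detector; the model file names are the ones OpenCV distributes and are assumed to be available locally.

import cv2
import numpy as np

def detect_face(image_bgr, proto="deploy.prototxt",
                weights="res10_300x300_ssd_iter_140000.caffemodel"):
    """Return the (x1, y1, x2, y2) box of the most confident face, or None."""
    net = cv2.dnn.readNetFromCaffe(proto, weights)
    h, w = image_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(image_bgr, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()                       # shape (1, 1, N, 7)
    idx = int(np.argmax(detections[0, 0, :, 2]))     # most confident detection
    best = detections[0, 0, idx]
    if best[2] < 0.5:                                # confidence threshold
        return None
    x1, y1, x2, y2 = (best[3:7] * np.array([w, h, w, h])).astype(int)
    return x1, y1, x2, y2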
Referring to fig. 3, in the first embodiment, the segmenting the identified face according to the position to generate the cutout image includes:
Step S31: identifying the connected region where the face is located according to the position of the face;
It should be noted that a connected component generally refers to an image region (Blob) formed by foreground pixels that have the same pixel value and are adjacent to one another.
Step S32: segmenting the face based on the identified connected region to generate the cutout image.
After the face position has been identified by the preset convolutional neural network model, the face is further segmented via the connected region to remove excessive non-face areas and improve the final building block image.
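A sketch of the connected-region step under stated assumptions: the patent grows the connected domain from a seed spanning chin to forehead, whereas this illustration simply takes the connected component of a given binary foreground mask that contains the centre of the detected face box.

import cv2
import numpy as np

def face_region(mask: np.ndarray, face_box):
    """mask: binary (0/1) foreground mask; face_box: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = face_box
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        mask.astype(np.uint8), connectivity=8)
    label = labels[(y1 + y2) // 2, (x1 + x2) // 2]   # component under face centre
    # Wmax and Hmax of the connected region, recorded for step S321.
    x = stats[label, cv2.CC_STAT_LEFT]
    y = stats[label, cv2.CC_STAT_TOP]
    w_max = stats[label, cv2.CC_STAT_WIDTH]
    h_max = stats[label, cv2.CC_STAT_HEIGHT]
    return x, y, w_max, h_max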
Referring to fig. 4, the segmenting the face based on the identified connected region to generate the cutout image includes:
Step S321: acquiring the maximum height and the maximum width of the connected region;
After the connected region is found, its height Hmax and width Wmax at that moment are recorded.
Step S322: determining a compression ratio based on the maximum height, the maximum width, and the height and width of the building block baseplate particle pixels;
The height Hbdst and width Wbdst of the baseplate particle pixels are obtained, a first ratio of the maximum height of the connected region to the baseplate particle pixel height is calculated, a second ratio of the maximum width of the connected region to the baseplate particle pixel width is calculated, and the compression ratio R is obtained from the first ratio and the second ratio.
Step S323: determining the compressed height and width of the facial features in the target image according to the compression ratio;
The height and width of each facial feature are obtained, and the compressed values are calculated according to R. For example, the compressed eye has height Heye and width Weye.
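Steps S322 and S323 can be sketched as below. The patent does not say how R is combined from the two ratios; taking the larger one, so that the region fits the baseplate in both dimensions, is our assumption.

def compression_ratio(h_max: float, w_max: float,
                      h_bdst: float, w_bdst: float) -> float:
    # First ratio: Hmax / Hbdst; second ratio: Wmax / Wbdst.
    # Combining them with max() is an assumption; the patent leaves it open.
    return max(h_max / h_bdst, w_max / w_bdst)

def compressed_size(feature_h: float, feature_w: float, r: float):
    # e.g. Heye, Weye for the eye region after compression by ratio R.
    return feature_h / r, feature_w / r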
Step S324: when the compressed height is smaller than the height of a baseplate particle pixel and/or the compressed width is smaller than the width of a baseplate particle pixel, adjusting the maximum height and the maximum width of the connected region, and taking the adjusted connected region as the cutout image.
If Heye is less than the baseplate particle pixel height and Weye is less than the baseplate particle pixel width, the scaling ratio is considered too large and details are lost, so Hmax and Wmax need to be reduced. If Heye and Weye are both larger than one pixel, the facial features can be considered preserved and the current segmentation is acceptable.
Referring to fig. 5, the adjusting the maximum height and the maximum width of the connected region and taking the adjusted connected region as the cutout image includes:
Step S3241: reducing the maximum height and the maximum width of the connected region according to a preset rule;
When Heye or Weye is less than one pixel, the values of Hmax and Wmax are reduced and the above process of computing the ratio R, Heye and Weye is repeated.
Step S3242: when the compressed height is greater than the height of a baseplate particle pixel and the compressed width is greater than the width of a baseplate particle pixel, taking the current connected region as the cutout image.
Once the resulting Heye and Weye are both larger than one pixel, the facial features can be preserved and Hmax and Wmax are deemed appropriate.
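The adjustment loop of steps S3241 and S3242 can be sketched as follows, under the assumption that the "preset rule" multiplies Hmax and Wmax by a fixed shrink factor each round; the patent does not spell the rule out.

def adjust_region(h_max, w_max, h_bdst, w_bdst, eye_h, eye_w, shrink=0.9):
    """Shrink Hmax/Wmax until the compressed eye spans at least one pixel."""
    while True:
        r = max(h_max / h_bdst, w_max / w_bdst)   # recompute the ratio R
        heye, weye = eye_h / r, eye_w / r         # compressed eye size
        if heye >= 1.0 and weye >= 1.0:           # feature survives: accept
            return h_max, w_max
        h_max *= shrink                           # preset rule (assumed): fixed shrink
        w_max *= shrink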
In some special cases, if the aspect ratio of the face differs too much from that of the building block baseplate (the aspect ratio of a face is usually 3:4 or 4:5), the face may be deformed after zooming. In that case, the face region can be extracted by cutting directly with a small outward extension of the outline, ignoring the cutout result obtained in the steps above.
Referring to fig. 6, in a second embodiment, the mapping the cutout image to the corresponding building block particle colors to generate the face building block image includes:
Step S41: dividing the cutout image into a plurality of feature sub-regions;
It should be noted that the facial feature region can be roughly divided into five areas (eyes, eyebrows, nose, mouth and cheeks), so in this embodiment the cutout image is divided into five feature sub-regions.
Step S42: determining the color feature value of each feature sub-region according to a preset formula;
It should be noted that the color values of the feature sub-regions differ from one another. In this embodiment the color feature values are calculated with a formula that is well established in the art.
The preset formula for calculating the color feature value is the weighted RGB color distance below (the widely used "redmean" form, consistent with the channel differences defined underneath):

r̄ = (C1,R + C2,R) / 2

ΔR = C1,R − C2,R
ΔG = C1,G − C2,G
ΔB = C1,B − C2,B

ΔC = sqrt( (2 + r̄/256)·ΔR² + 4·ΔG² + (2 + (255 − r̄)/256)·ΔB² )
where C1 and C2 denote color 1 and color 2; C1,R denotes the R channel of color 1 and C2,R the R channel of color 2. Similarly, C1,G and C2,G denote the G channels, and C1,B and C2,B the B channels, of colors 1 and 2. ΔC is the color feature value of the region where color 1 is located.
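A direct transcription of the formula above (per the reconstruction given there); colors are (R, G, B) tuples in the 0 to 255 range.

import math

def color_feature_value(c1, c2):
    """Weighted ("redmean") RGB distance between colors c1 and c2."""
    r_mean = (c1[0] + c2[0]) / 2.0
    dr = c1[0] - c2[0]
    dg = c1[1] - c2[1]
    db = c1[2] - c2[2]
    return math.sqrt((2 + r_mean / 256) * dr * dr
                     + 4 * dg * dg
                     + (2 + (255 - r_mean) / 256) * db * db)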
Step S43: matching the color feature value with a building block color according to the color feature value and the color value intervals of the building blocks to be spliced;
In this embodiment, when the color feature value falls within the color value interval of a building block to be spliced, the color feature value is matched to that building block color.
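A sketch of this matching rule follows. The palette names and interval bounds below are illustrative assumptions; the patent does not list the actual block colors or intervals.

# Hypothetical block palette: name -> half-open interval of color feature values.
BLOCK_COLOR_INTERVALS = {
    "white":  (0.0, 100.0),
    "yellow": (100.0, 250.0),
    "red":    (250.0, 450.0),
    "black":  (450.0, float("inf")),
}

def match_block_color(delta_c: float) -> str:
    for name, (lo, hi) in BLOCK_COLOR_INTERVALS.items():
        if lo <= delta_c < hi:
            return name       # feature value falls inside this block's interval
    raise ValueError("no block color interval covers this value")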
Step S44: performing threshold segmentation on each feature sub-region according to the color feature value to generate the face building block image.
Since threshold segmentation maps the original rich color information onto a few color intervals, some color information is inevitably lost; if that loss destroys the facial features, it is unacceptable, so the color values used in the multi-threshold segmentation must be adjusted. Whether a facial feature has shrunk or disappeared is judged from the color values after threshold segmentation. A feature is considered to have disappeared when its region no longer contains the corresponding block color value or the region has been overwritten by other colors, and to have shrunk when the number of pixels of the corresponding color in the region is lower than before threshold segmentation. If the percentage reduction exceeds the empirical value for that feature sub-region (for example, a mouth reduced by 50% cannot be recognized at all, whereas a nose reduced by 60% still leaves a discernible nose tip or nostril), the threshold is considered unreasonable and must be adjusted; the adjustment direction is toward the reduced pixel values within the sub-region. This threshold adjustment is repeated, and the adjustment is complete once every region satisfies its sub-region empirical value. If all regions cannot be satisfied simultaneously, they are adjusted with different priorities in the order eyes, mouth, eyebrows, nose, cheeks; once the eyebrows can be satisfied, the result is considered partially satisfied and threshold adjustment ends. If none of the requirements can be met, the face building block conversion is deemed unsuccessful and the user is prompted to regenerate the picture.
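The disappearance/shrinkage test that drives this adjustment can be sketched as below. The mouth and nose limits echo the empirical examples in the text; the remaining limits, and treating "reduction" as the drop in pixel count of the region's block color, are assumptions.

# Priority order for adjustment, from the description above.
PRIORITY = ["eyes", "mouth", "eyebrows", "nose", "cheeks"]
# Maximum tolerated pixel reduction per feature sub-region (mouth and nose
# values follow the text's examples; the others are illustrative assumptions).
MAX_REDUCTION = {"eyes": 0.5, "mouth": 0.5, "eyebrows": 0.6,
                 "nose": 0.6, "cheeks": 0.7}

def feature_ok(region: str, pixels_before: int, pixels_after: int) -> bool:
    """True if the feature neither disappeared nor shrank past its limit."""
    if pixels_after == 0:                 # block color gone: feature disappeared
        return False
    reduction = 1.0 - pixels_after / pixels_before
    return reduction <= MAX_REDUCTION[region]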
Referring to fig. 7, based on the above face image processing method, the present invention further provides a face image processing apparatus, which includes:
An obtaining module 100, configured to obtain a target image to be processed;
It is worth mentioning that the target image may be uploaded from local storage on the user device or captured on the spot. In this embodiment, the target image contains a face so as to facilitate subsequent face processing.
The recognition module 200 is configured to recognize the position of the face in the target image;
In order to extract a face image from the target image, the position of the face must be recognized first. The position in this embodiment refers to the region of the target image where the face is located; this region usually includes the hair, forehead, eyebrows, cheeks, nose, ears and mouth, and may even include ornaments, a hat, the neck, and so on.
A segmentation module 300, configured to segment the identified face according to the position to generate a cutout image;
In this embodiment, a database of 1500 images, each containing only a face against widely varying backgrounds, is used to train the preset convolutional neural network model to recognize faces. Because the final face is mapped onto a building block baseplate of limited area, overly large non-face areas (for example, too much hat, hair, neck or shoulders) shrink the facial features, and the final building block effect is poor. Therefore, the face needs to be segmented.
The principle of segmentation is that the facial features should occupy as large a proportion as possible, the features above the head should occupy a larger proportion than those below, and the facial features should still survive after being reduced to the pixel size corresponding to the building block baseplate. In general, the height of a face from chin to forehead is greater than the width of the cheeks, so the chin-to-forehead height is used as the seed height for searching the connected domain.
It is worth mentioning that, to meet diverse customer requirements and improve the user experience, the obtained cutout image can be grayed; during graying, the RGB channels are converted according to the standard.
And the mapping module 400, configured to map the cutout image to the corresponding building block particle colors to generate the face building block image.
Because the building block particles come in a limited set of colors while the cutout image is full-color, the cutout image needs to be mapped to the corresponding building block particle colors; this embodiment performs the face building block conversion by such a mapping.
Preferably, the identification module 200 is further configured to:
and identifying the face in the target image based on a preset convolutional neural network model, and acquiring the position of the face.
The convolutional neural network model is a feedforward neural network that includes convolution calculations and has a deep structure. It is one of the representative algorithms of deep learning and is widely applied to visual recognition, image processing and the like.
Preferably, the segmentation module 300 is further configured to:
identify the connected region where the face is located according to the position of the face;
It should be noted that a connected component generally refers to an image region (Blob) formed by foreground pixels that have the same pixel value and are adjacent to one another.
and segment the face based on the identified connected region to generate a cutout image.
After the face position has been identified by the preset convolutional neural network model, the face is further segmented via the connected region to remove excessive non-face areas and improve the final building block image.
Preferably, the segmentation module 300 is further configured to:
acquire the maximum height and the maximum width of the connected region;
After the connected region is found, its height Hmax and width Wmax are recorded.
determine a compression ratio based on the maximum height, the maximum width, and the height and width of the building block baseplate particle pixels;
The height Hbdst and width Wbdst of the baseplate particle pixels are obtained, a first ratio of the maximum height of the connected region to the baseplate particle pixel height is calculated, a second ratio of the maximum width of the connected region to the baseplate particle pixel width is calculated, and the compression ratio R is obtained from the first ratio and the second ratio.
determine the compressed height and width of the facial features in the target image according to the compression ratio;
The height and width of each facial feature are obtained, and the compressed values are calculated according to R. For example, the compressed eye has height Heye and width Weye.
and when the compressed height is smaller than the height of a baseplate particle pixel and/or the compressed width is smaller than the width of a baseplate particle pixel, adjust the maximum height and the maximum width of the connected region and take the adjusted connected region as the cutout image.
If Heye is less than the baseplate particle pixel height and Weye is less than the baseplate particle pixel width, the scaling ratio is considered too large and details are lost, so Hmax and Wmax need to be reduced. If Heye and Weye are both larger than one pixel, the facial features can be considered preserved and the current segmentation is acceptable.
Preferably, the segmentation module 300 is further configured to:
reduce the maximum height and the maximum width of the connected region according to a preset rule;
When Heye or Weye is less than one pixel, the values of Hmax and Wmax are reduced and the above process of computing the ratio R, Heye and Weye is repeated.
and when the compressed height is greater than the height of a baseplate particle pixel and the compressed width is greater than the width of a baseplate particle pixel, take the current connected region as the cutout image.
Once the resulting Heye and Weye are both larger than one pixel, the facial features can be preserved and Hmax and Wmax are deemed appropriate.
In some special cases, if the aspect ratio of the face differs too much from that of the building block baseplate (the aspect ratio of a face is usually 3:4 or 4:5), the face may be deformed after zooming. In that case, the face region can be extracted by cutting directly with a small outward extension of the outline, ignoring the cutout result obtained in the steps above.
Preferably, the mapping module 400 is further configured to:
divide the cutout image into a plurality of feature sub-regions;
It should be noted that the facial feature region can be roughly divided into five areas (eyes, eyebrows, nose, mouth and cheeks), so in this embodiment the cutout image is divided into five feature sub-regions.
determine the color feature value of each feature sub-region according to a preset formula;
It should be noted that the color values of the feature sub-regions differ from one another. In this embodiment the color feature values are calculated with a formula that is well established in the art.
match the color feature value with a building block color according to the color feature value and the color value intervals of the building blocks to be spliced;
In this embodiment, when the color feature value falls within the color value interval of a building block to be spliced, the color feature value is matched to that building block color.
and perform threshold segmentation on each feature sub-region according to the color feature value to generate the face building block image.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or system that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or system that comprises the element.
The above serial numbers of the embodiments of the present invention are for description only and do not represent the merits of the embodiments. The words first, second, third and so on do not denote any order and may be interpreted as names.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk or optical disk) and including instructions for enabling a terminal device (such as a mobile phone, a computer, a user equipment, an air conditioner or a network device) to execute the methods according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention; any equivalent structure or equivalent process transformation derived from the present invention, whether applied directly or indirectly in other related technical fields, is likewise included in the scope of the present invention.

Claims (7)

1. A face image processing method, characterized by comprising the following steps:
acquiring a target image to be processed;
identifying the position of the face in the target image;
segmenting the identified face according to the position to generate a cutout image;
and mapping the cutout image to the corresponding building block particle colors to generate a face building block image;
wherein the identifying the position of the face in the target image comprises:
identifying the face in the target image based on a preset convolutional neural network model, and acquiring the position of the face;
the segmenting the identified face according to the position to generate a cutout image comprises:
identifying the connected region where the face is located according to the position of the face;
segmenting the face based on the identified connected region to generate the cutout image;
and the segmenting the face based on the identified connected region to generate the cutout image comprises:
acquiring the maximum height and the maximum width of the connected region;
determining a compression ratio based on the maximum height, the maximum width, and the height and width of the building block baseplate particle pixels;
determining the compressed height and width of the facial features in the target image according to the compression ratio;
and when the compressed height is smaller than the height of a baseplate particle pixel and/or the compressed width is smaller than the width of a baseplate particle pixel, adjusting the maximum height and the maximum width of the connected region, and taking the adjusted connected region as the cutout image.
2. The face image processing method according to claim 1, characterized in that the adjusting the maximum height and the maximum width of the connected region and taking the adjusted connected region as the cutout image comprises:
reducing the maximum height and the maximum width of the connected region according to a preset rule;
and when the compressed height is greater than the height of a baseplate particle pixel and the compressed width is greater than the width of a baseplate particle pixel, taking the current connected region as the cutout image.
3. The face image processing method according to claim 2, characterized in that the mapping the cutout image to the corresponding building block particle colors to generate a face building block image comprises:
dividing the cutout image into a plurality of feature sub-regions;
determining the color feature value of each feature sub-region according to a preset formula;
matching the color feature value with a building block color according to the color feature value and the color value intervals of the building blocks to be spliced;
and performing threshold segmentation on the matched feature sub-regions according to the color feature values to generate the face building block image.
4. The face image processing method according to claim 3, characterized in that the feature sub-regions comprise an eye feature sub-region, an eyebrow feature sub-region, a nose feature sub-region, a mouth feature sub-region, and a cheek feature sub-region.
5. A user equipment, characterized by comprising: a memory, a processor, and a face image processing program stored on the memory and executable on the processor, the face image processing program, when executed by the processor, implementing the steps of the face image processing method according to any one of claims 1 to 4.
6. A storage medium, characterized in that the storage medium has stored thereon a face image processing program, which when executed by a processor implements the steps of the face image processing method according to any one of claims 1 to 4.
7. A face image processing device, characterized by comprising:
an acquisition module, used for acquiring a target image to be processed;
a recognition module, used for recognizing the position of the face in the target image;
a segmentation module, used for segmenting the identified face according to the position to generate a cutout image;
and a mapping module, used for mapping the cutout image to the corresponding building block particle colors to generate a face building block image;
wherein the recognition module is further used for recognizing the face in the target image based on a preset convolutional neural network model and acquiring the position of the face;
the segmentation module is further used for identifying the connected region where the face is located according to the position of the face,
and segmenting the face based on the identified connected region to generate the cutout image;
and the segmentation module is further used for acquiring the maximum height and the maximum width of the connected region;
determining a compression ratio based on the maximum height, the maximum width, and the height and width of the building block baseplate particle pixels;
determining the compressed height and width of the facial features in the target image according to the compression ratio;
and when the compressed height is smaller than the height of a baseplate particle pixel and/or the compressed width is smaller than the width of a baseplate particle pixel, adjusting the maximum height and the maximum width of the connected region and taking the adjusted connected region as the cutout image.
CN201910939068.9A 2019-09-29 2019-09-29 Face image processing method, user equipment, storage medium and device Active CN110688962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910939068.9A CN110688962B (en) 2019-09-29 2019-09-29 Face image processing method, user equipment, storage medium and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910939068.9A CN110688962B (en) 2019-09-29 2019-09-29 Face image processing method, user equipment, storage medium and device

Publications (2)

Publication Number Publication Date
CN110688962A CN110688962A (en) 2020-01-14
CN110688962B (en) 2022-05-20

Family

ID=69111233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910939068.9A Active CN110688962B (en) 2019-09-29 2019-09-29 Face image processing method, user equipment, storage medium and device

Country Status (1)

Country Link
CN (1) CN110688962B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111885306B (en) * 2020-07-28 2021-12-07 重庆虚拟实境科技有限公司 Target object adjusting method, computer device, and storage medium
CN112967301A (en) * 2021-04-08 2021-06-15 北京华捷艾米科技有限公司 Self-timer image matting method and device
CN113269170A (en) * 2021-07-20 2021-08-17 北京拍立拼科技有限公司 Intelligent portrait building block matching method and system based on feature similarity measurement


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101166240A (en) * 2006-10-19 2008-04-23 索尼株式会社 Image processing device, image forming device and image processing method
CN103116763A (en) * 2013-01-30 2013-05-22 宁波大学 Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics
CN104715449A (en) * 2015-03-31 2015-06-17 百度在线网络技术(北京)有限公司 Method and device for generating mosaic image
CN105654420A (en) * 2015-12-21 2016-06-08 小米科技有限责任公司 Face image processing method and device
CN106023081A (en) * 2016-05-21 2016-10-12 广东邦宝益智玩具股份有限公司 Mosaic processing method of 2D picture
CN106446781A (en) * 2016-08-29 2017-02-22 厦门美图之家科技有限公司 Face image processing method and face image processing device
CN108010102A (en) * 2017-12-19 2018-05-08 刘邵宏 Mosaic image generation method, device, terminal device and storage medium
CN108717719A (en) * 2018-05-23 2018-10-30 腾讯科技(深圳)有限公司 Generation method, device and the computer storage media of cartoon human face image
CN110136142A (en) * 2019-04-26 2019-08-16 微梦创科网络科技(中国)有限公司 A kind of image cropping method, apparatus, electronic equipment
CN110246244A (en) * 2019-05-16 2019-09-17 珠海华园信息技术有限公司 Intelligent foreground management system based on recognition of face

Also Published As

Publication number Publication date
CN110688962A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN110688962B (en) Face image processing method, user equipment, storage medium and device
CN109952594B (en) Image processing method, device, terminal and storage medium
US8515136B2 (en) Image processing device, image device, image processing method
CN104361131B (en) The method for building up of four-dimensional faceform's database
WO2014186422A1 (en) Image masks for face-related selection and processing in images
CN111241975B (en) Face recognition detection method and system based on mobile terminal edge calculation
CN105049911A (en) Video special effect processing method based on face identification
US10586098B2 (en) Biometric method
KR102198360B1 (en) Eye tracking system and method based on face images
JPH04101280A (en) Face picture collating device
CN113344837B (en) Face image processing method and device, computer readable storage medium and terminal
Li et al. An efficient face normalization algorithm based on eyes detection
Sheu et al. Automatic generation of facial expression using triangular geometric deformation
Zhang et al. A skin color model based on modified GLHS space for face detection
JP3578321B2 (en) Image normalizer
Prinosil et al. Automatic hair color de-identification
Juang et al. Vision-based human body posture recognition using support vector machines
CN115601807A (en) Face recognition method suitable for online examination system and working method thereof
CN114972014A (en) Image processing method and device and electronic equipment
CN109145875B (en) Method and device for removing black frame glasses in face image
Ghimire et al. A lighting insensitive face detection method on color images
Saeed Comparative analysis of lip features for person identification
Pachoud et al. Macro-cuboid based probabilistic matching for lip-reading digits
Li et al. Automatic facial expression recognition using 3D faces
Srikantaswamy et al. A novel face segmentation algorithm from a video sequence for real-time face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant