CN111008935A - Face image enhancement method, device, system and storage medium
- Publication number: CN111008935A
- Application number: CN201911060534.2A
- Authority: CN (China)
- Prior art keywords: face, image, dimensional, dimensional face, information
- Legal status: Granted
Classifications
- G06T5/73: Deblurring; Sharpening (G06T5/00 Image enhancement or restoration)
- G06T2207/10004: Still image; Photographic image
- G06T2207/10012: Stereo images
- G06T2207/20081: Training; Learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/20221: Image fusion; Image merging (G06T2207/20212 Image combination)
- G06T2207/30201: Face (G06T2207/30196 Human being; Person)
Abstract
The invention provides a face image enhancement method, device, system and storage medium. The method includes: performing face detection on an original (raw) image to obtain a face detection frame; determining, according to the face detection frame and a three-dimensional face model, three-dimensional face information corresponding to the face image in the face detection frame; and processing the original image according to the three-dimensional face information to obtain an output image. Because the face image in the original image is processed based on the original image and the three-dimensional face model, the three-dimensional face information can be fully utilized and the accuracy of image processing is improved; in addition, performing the processing on the original image greatly reduces face distortion and blurring, which is beneficial to the image processing effect.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to face image enhancement.
Background
Face fusion obtains a face enhancement result with richer information by fusing face information from multiple frames; the enhancement includes noise reduction, resolution improvement, richer details, and the like. Existing face fusion methods fuse RGB images that have already been processed by an image signal processor (ISP), and such RGB images have lost a great deal of information. In addition, existing methods do not take the shape and expression information of the original face into account, so the fused face may be distorted relative to the original person due to factors such as face shape and expression, or may be incorrect because of pose estimation errors.
Therefore, the face fusion technology in the prior art suffers from excessive information loss before fusion and from face distortion and blurring after fusion.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides a method, a device and a system for enhancing a face image and a computer storage medium, which are used for processing the face image in an original image based on the original image and a three-dimensional model of the face, can fully utilize three-dimensional information of the face and improve the accuracy of image processing.
According to a first aspect of the present invention, there is provided a face image enhancement method, including:
carrying out face detection on the original image to obtain a face detection frame;
determining three-dimensional face information corresponding to a face image in the face detection frame according to the face detection frame and the three-dimensional face model;
and processing the original image according to the three-dimensional face information to obtain an output image.
According to a second aspect of the present invention, there is provided a face image enhancement apparatus, comprising:
the face detection module is used for carrying out face detection on the original image to obtain a face detection frame;
the three-dimensional face module is used for determining three-dimensional face information corresponding to a face image in the face detection frame according to the face detection frame and the three-dimensional face model;
and the fusion module is used for processing the original image according to the three-dimensional face information to obtain an output image.
According to a third aspect of the present invention, there is provided a face image enhancement system comprising a memory, a processor and a computer program stored on the memory and running on the processor, wherein the steps of the method of the first aspect are implemented when the computer program is executed by the processor.
According to a fourth aspect of the present invention, there is provided a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a computer, implements the steps of the method of the first aspect.
According to the face image enhancement method, device, system and computer storage medium of the invention, the face image in the original image is processed based on the original image and the three-dimensional face model, so that the three-dimensional face information can be fully utilized and the accuracy of image processing is improved; in addition, performing the processing on the original image greatly reduces face distortion and blurring, which is beneficial to the image processing effect.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a schematic block diagram of an exemplary electronic device for implementing a method and apparatus for enhancing a face image according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a face image enhancement method according to an embodiment of the invention;
FIG. 3 is an example of a face image enhancement method according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a face image enhancement apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a face image enhancement system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
First, an exemplary electronic device 100 for implementing the face image enhancement method and apparatus according to an embodiment of the present invention is described with reference to fig. 1.
As shown in FIG. 1, electronic device 100 includes one or more processors 101, one or more memory devices 102, an input device 103, an output device 104, and an image sensor 105, which are interconnected via a bus system 106 or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 101 may be a Central Processing Unit (CPU) or other form of processing unit having facial image enhancement capabilities or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage device 102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 101 to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 103 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 104 may output various information (e.g., images or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, and the like.
The image sensor 105 may take an image (e.g., a photograph, a video, etc.) desired by the user and store the taken image in the storage device 102 for use by other components.
Exemplarily, an exemplary electronic device for implementing the method and apparatus for enhancing a facial image according to an embodiment of the present invention may be implemented as a smart phone, a tablet computer, a computer device, or the like.
Next, a face image enhancement method 200 according to an embodiment of the present invention will be described with reference to fig. 2. As shown in fig. 2, a method 200 for enhancing a face image includes:
firstly, in step S210, performing face detection on an original image to obtain a face detection frame;
in step S220, determining three-dimensional face information corresponding to a face image in the face detection frame according to the face detection frame and the three-dimensional face model;
finally, in step S230, processing the original image according to the three-dimensional face information to obtain an original image with an enhanced face image.
The original image, i.e., the raw image, is the unprocessed data obtained by the image sensor converting the captured light signal into a digital signal. It has rich tonal levels, can make maximum use of the full data space, and contains a large amount of information; it therefore gives the user ample room for processing, can be processed almost losslessly with good results, and is well suited as the basis for image processing.
For face image processing, compared with the conventional approach of performing face fusion enhancement on RGB images already processed by an image signal processor (ISP), performing face fusion enhancement directly on the original (raw) image makes maximum use of the available information. A 3D face model is constructed from the original image, making full use of the shape, expression, pose and other information of the face in the original image; the 3D face model is then projected to 2D to obtain a fused 2D face image, which is fused back into the original image. This greatly reduces the distortion of face fusion enhancement and improves the enhancement effect. Because the 3D face model does not depend on color information, once it is constructed it is less affected by occlusion of the face and by objective factors such as lighting, which further reduces distortion of the fused face and ensures the face fusion enhancement effect. The method is applicable to various scenarios in which face images need to be enhanced, helps to save time and cost, makes full use of the original information to reduce face image distortion, and can further improve the accuracy of subsequent face image processing.
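As a structural illustration of this flow (not the patented implementation itself), the three stages can be expressed as a small pipeline in which the detection network, the 3D-model stage and the projection/fusion stage are passed in as callables; the function names and signatures below are assumptions made for the sketch only:

    from typing import Callable, Sequence, Tuple
    import numpy as np

    Box = Tuple[int, int, int, int]  # (x, y, w, h) face detection frame

    def enhance_faces(raw_image: np.ndarray,
                      detect: Callable[[np.ndarray], Sequence[Box]],
                      build_3d_info: Callable[[np.ndarray, Box], object],
                      render_and_fuse: Callable[[np.ndarray, object, Box], np.ndarray]) -> np.ndarray:
        """Outline of steps S210-S230; the three callables stand in for the
        detection network, the 3D face model stage and the projection/fusion stage."""
        output = raw_image.copy()
        for box in detect(raw_image):                       # S210: face detection frames
            face_3d = build_3d_info(raw_image, box)         # S220: 3D face information
            output = render_and_fuse(output, face_3d, box)  # S230: project to 2D and fuse
        return output

Passing the stages in as functions keeps the outline independent of any particular detector or 3D face model.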
Illustratively, the face image enhancement method according to the embodiment of the present invention can be implemented in a device, apparatus or system having a memory and a processor.
The face image enhancement method according to an embodiment of the present invention can be deployed at a face image acquisition end, for example at the image acquisition end of an access control system, or at a personal terminal such as a smart phone, tablet computer or personal computer.
The face image enhancement method according to the embodiment of the invention can be deployed at a personal terminal such as a smart phone, a tablet computer, a personal computer and the like or a server side (or a cloud side). For example, the original image data may be acquired and face image enhancement may be performed at a personal terminal or at a server (or cloud).
For example, the face image enhancement method according to the embodiment of the present invention may also be deployed in a distributed manner across a personal terminal and a server side (or cloud side). For instance, the original image data may be acquired at the server side (or cloud side) and transmitted to the personal terminal, which then performs face image enhancement on the received original image data. As another example, the raw image data may be acquired at a personal terminal and transmitted to the server side (or cloud side), which then performs the face image enhancement.
According to the face image enhancement method provided by the embodiment of the invention, a 3D face model is constructed based on the raw-domain face image and used to fuse and enhance the face image, so that the face shape, expression and pose are preserved to the greatest extent and distortion and blurring of the face image are greatly reduced.
According to an embodiment of the present invention, before the step S210, the method 200 may further include: an original image is acquired.
Illustratively, acquiring the original image may include acquiring the original image through an image acquisition device.
The image acquisition device may acquire a single-frame image, or may acquire multi-frame images or video data. When the image acquisition device acquires a single-frame image, the single-frame image can be used directly as the original image without further processing; when the image acquisition device acquires multi-frame images or video data, the acquired multi-frame images or video data can be split into frames to obtain at least one frame as the original image.
Illustratively, the raw image may also be obtained by acquiring image data from other data sources. The image data may include video data and/or non-video data; the non-video data may include a single-frame image, which can be used directly as the original image without frame splitting.
It should be noted that the original image may be data acquired in real time or non-real time; the original image is not necessarily all images containing human faces in the image data, but can be only a part of image frames in the image data; on the other hand, at least one of the original images may be a continuous multi-frame image, or may be a discontinuous multi-frame image arbitrarily selected, which is not limited herein.
According to the embodiment of the present invention, in step S210, performing face detection on the original image to obtain a face detection frame may include:
and inputting the original image into a trained face detection and/or face tracking network to obtain a face detection frame of the original image.
Illustratively, the face detection box is a detection box (bounding box) of an image containing a target face, which is determined by performing face detection and face tracking processing on at least one frame of the original image. Specifically, the size and the position of the target face may be determined in the original image containing the target face through various face detection and tracking methods commonly used in the art, such as template matching, SVM (support vector machine), neural network, and the like, and then the target face is tracked and/or located based on color information, local features, motion information, and the like of the target face, so as to determine an image containing the target face and a detection frame thereof in at least one frame of the original image. The above processing for determining an image containing a target face and a detection frame thereof through a face detection and/or face tracking network is a common processing in the field of image processing, and will not be described in detail here.
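The embodiments leave the concrete detector open; as a hedged stand-in (not the trained network of the patent), a classical OpenCV cascade produces face detection frames of the kind described. The input file name and the preliminary conversion to grayscale are assumptions for the example; a true raw image would first need demosaicing, which is omitted here:

    import cv2

    # Illustrative substitute for the face detection step (not the patent's trained network).
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("frame.png")                      # assumed input file
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in boxes:                           # each box is a face detection frame
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)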
It should be understood that the present invention is not limited by the specifically adopted face detection method, face tracking method, or detection frame positioning method, and that the present face detection method, face tracking method, or detection frame positioning method, or the future developed face detection method, face tracking method, or detection frame positioning method, can be applied to the face image enhancement method according to the embodiment of the present invention, and shall also be included in the scope of the present invention.
In some embodiments, performing face detection on the original image to obtain the face detection frame of the original image may further include:
and displaying the face detection frame of the face image in the original image.
According to an embodiment of the present invention, the method 200 may further include: and allocating corresponding identification information to each face detection frame.
The identification information may be any information for distinguishing different faces, such as a face ID, and the like, which is not limited herein. The original image may include one or more faces, identification information is set for the detected face image and/or face detection frame to distinguish different faces in the original image, and the face image and/or face detection frame with the same identification information represents the same person.
In one embodiment, the identification information includes an ID, and assigning a corresponding ID to each face detection box may include:
inputting the original image into a trained face detection and/or face tracking network;
the face detection and tracking network detects face detection frames of n face images in the original image;
judging whether the face image in the i-th face detection frame and the face image in a face detection frame that has already been assigned an ID belong to the same face, where i = 1, 2, 3, ..., n and n is a positive integer;
if the face image in the i-th face detection frame does not belong to the same face as the face image in any face detection frame that has already been assigned an ID, assigning a new ID to the i-th face detection frame;
if the face image in the i-th face detection frame and the face image in a face detection frame that has already been assigned an ID belong to the same face, assigning the ID of that face detection frame to the i-th face detection frame.
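The patent does not state how "the same face" is decided between frames; a minimal sketch, assuming a simple intersection-over-union criterion against previously tracked detection frames (the threshold value is an illustrative choice), could look like this:

    def iou(a, b):
        """Intersection-over-union of two boxes given as (x, y, w, h)."""
        ax2, ay2 = a[0] + a[2], a[1] + a[3]
        bx2, by2 = b[0] + b[2], b[1] + b[3]
        iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
        ih = max(0, min(ay2, by2) - max(a[1], b[1]))
        inter = iw * ih
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    def assign_ids(detections, tracked, next_id, iou_threshold=0.5):
        """tracked: dict mapping ID -> last known box. Returns (ids, next_id)."""
        ids = []
        for box in detections:
            # Reuse the ID of the best-overlapping previously tracked face, if any.
            best_id, best_iou = None, 0.0
            for face_id, prev_box in tracked.items():
                overlap = iou(box, prev_box)
                if overlap > best_iou:
                    best_id, best_iou = face_id, overlap
            if best_id is not None and best_iou >= iou_threshold:
                ids.append(best_id)            # same face: reuse the existing ID
            else:
                ids.append(next_id)            # new face: assign a new ID
                next_id += 1
            tracked[ids[-1]] = box
        return ids, next_id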
According to the embodiment of the present invention, in step S220, determining three-dimensional face information corresponding to a face image in the face detection frame according to the face detection frame and the three-dimensional face model may include:
acquiring morphological parameters of the face image in the face detection frame;
and obtaining three-dimensional face information corresponding to the face image according to the morphological parameters and the three-dimensional face model.
Illustratively, the obtaining of the morphological parameters of the face image in the face detection frame may include:
and inputting the face image into a trained three-dimensional parameter detection network to obtain the morphological parameters of the face image.
The three-dimensional parameter detection network can estimate, based on an average face model, the three-dimensional parameters of the face image from the face image itself, and a 3D face image corresponding to the face image can then be constructed from these three-dimensional parameters.
In some embodiments, the 3D face network may be a residual neural network, such as ResNets, for example, but the embodiments of the present application are not limited thereto. Illustratively, the training of the three-dimensional parameter detection network comprises:
taking the 2D face training image as input layer data;
the three-dimensional parameter detection network builds, based on the three-dimensional face model, a three-dimensional face training image for the 2D face training image and obtains the three-dimensional parameters of that three-dimensional face training image; the three-dimensional parameters are such that the Euclidean distance between the points (such as key points) of the three-dimensional face training image projected onto the 2D plane coordinate system and the corresponding points (such as key points) of the 2D face training image is minimized.
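As a hedged illustration of this training objective, the per-image loss below measures the Euclidean distance between the projected 3D key points and the corresponding 2D key points; a scaled orthographic projection is assumed, since the patent does not fix the camera model:

    import numpy as np

    def keypoint_reprojection_loss(points_3d, keypoints_2d, rotation, translation, scale):
        """Mean squared Euclidean distance between projected 3D key points
        and the ground-truth 2D key points (scaled orthographic projection assumed)."""
        projected = scale * (points_3d @ rotation.T)[:, :2] + translation  # (N, 2)
        return np.mean(np.sum((projected - keypoints_2d) ** 2, axis=1))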
Illustratively, the morphological parameters include at least one of: shape parameters and expression parameters.
The shape parameters can represent the shape of the face (such as the face contour) and the angle of the face, while the expression parameters can represent the state of the facial features. By estimating the three-dimensional parameters from the original image, the three-dimensional face image can be constructed from factors such as the shape, expression and angle of the face. Compared with conventional face fusion enhancement, which relies too heavily on the color information of the image, this yields richer face information, alleviates the problem of excessive loss of face information before fusion, provides a more accurate data basis for the face fusion process, and is beneficial to the face fusion effect.
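The shape/expression parameterization described here matches the standard linear (morphable-model style) formulation; the sketch below assumes pre-built mean and basis matrices, which the patent does not name, and is only an illustration of how such parameters generate 3D geometry:

    import numpy as np

    def build_3d_face(mean_shape, shape_basis, expr_basis, shape_params, expr_params):
        """Linear 3D face model: mean geometry plus shape and expression offsets.

        mean_shape:  (3N,) mean face vertices, flattened
        shape_basis: (3N, Ks) shape basis vectors
        expr_basis:  (3N, Ke) expression basis vectors
        Returns an (N, 3) array of vertex positions.
        """
        vertices = mean_shape + shape_basis @ shape_params + expr_basis @ expr_params
        return vertices.reshape(-1, 3)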
Illustratively, the method 200 further comprises:
acquiring texture information of the face image in the face detection frame;
the obtaining of the three-dimensional face information corresponding to the face image according to the form parameters and the three-dimensional face model includes:
obtaining a three-dimensional face posture corresponding to the face image according to the morphological parameters and the three-dimensional face model;
and fusing the texture information of the face image to the corresponding position of the three-dimensional face posture to obtain the three-dimensional face information corresponding to the face image.
Illustratively, the obtaining texture information of the face image in the face detection frame may include: and acquiring pixel information of the face image as the texture information, or extracting the features of the face image to obtain the texture information.
Texture information is expressed by the gray-level distribution of pixels and their surrounding spatial neighborhoods, and describes the surface properties of the face in the face image.
In some embodiments, the feature extraction method used to obtain the texture information may include a statistical method, a model-based method or a signal processing method. A statistical method starts from the gray-level attributes of a pixel and its neighborhood and studies the statistical characteristics of a texture region, or the first-order, second-order or higher-order statistics of the gray levels in the pixel and its neighborhood, to obtain the texture information. A model-based method assumes that the texture follows a distribution model controlled by certain parameters and estimates those parameters from the observed texture image. A signal processing method applies a transform (such as a wavelet transform) to a region of the texture image and extracts feature values that remain relatively stable under the transform as the texture information.
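As one minimal instance of the statistical approach mentioned above (not the specific feature used by the embodiments), first-order gray-level statistics of each pixel's neighborhood can serve as a simple texture description:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def first_order_texture(gray, window=7):
        """Per-pixel neighborhood mean and variance of a grayscale face image,
        a simple first-order statistical texture description."""
        gray = gray.astype(np.float64)
        local_mean = uniform_filter(gray, size=window)
        local_sq_mean = uniform_filter(gray ** 2, size=window)
        local_var = local_sq_mean - local_mean ** 2
        return np.dstack([local_mean, local_var])   # (H, W, 2) texture features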
Illustratively, obtaining a three-dimensional face pose corresponding to the face image according to the morphological parameters and the three-dimensional face model includes:
acquiring the three-dimensional face model with a preset angle;
adjusting the outline of the three-dimensional face model at the preset angle based on the morphological parameters;
and adjusting the facial features of the contour-adjusted three-dimensional face model based on the expression parameters to obtain the three-dimensional face pose corresponding to the face image.
Illustratively, the obtaining a three-dimensional face pose corresponding to the face image according to the morphological parameters and the three-dimensional face model includes:
acquiring identification information of the face image and a three-dimensional face model corresponding to the identification information;
and adjusting the facial features of the three-dimensional face model corresponding to the identification information based on the expression parameters to obtain the three-dimensional face pose corresponding to the face image.
According to the embodiment of the present invention, after step S210, the method 200 may further include:
judging whether the face in the face detection frame has corresponding identification information or not; if the identification information exists, directly acquiring three-dimensional face information corresponding to the face image;
if the identification information does not exist, step S220 is performed.
If the identification information exists, the face image in the current frame has already appeared in a previous image frame, and the three-dimensional face model of that face may already have been constructed from the previous frame. To speed up the overall face fusion process and save computing resources, the three-dimensional face model of the same person can be reused: it is adjusted according to the expression parameters, fused with the texture information, and then projected, so that the model does not need to be rebuilt for the face image in every frame of the original image. This saves a large amount of computation and increases the speed of face fusion.
If the identification information does not exist, the face image in the current frame has not appeared in previous image frames, i.e., it is a newly added face; new identification information needs to be assigned to distinguish it from other face images, and a three-dimensional face model is constructed for it according to the morphological parameters.
Exemplarily, fusing the texture information of the face image to a corresponding position of the three-dimensional face pose to obtain three-dimensional face information corresponding to the face image, including:
obtaining a face angle of the three-dimensional face image according to the pose parameters;
rotating the three-dimensional face image to a preset angle according to the face angle of the three-dimensional face image;
copying the texture information of the face image to a corresponding position in the three-dimensional face image based on the key point of the face image to obtain the three-dimensional face information corresponding to the face image.
Because the angle of the face may differ between face images, the distribution of the face texture information also differs. To avoid an adverse effect of these factors on the construction of the 3D face image, the 3D face model can be rotated to a uniform preset angle (such as a frontal view) before the texture information is fused, so as to improve the accuracy of the texture fusion. It should be understood that the preset angle can be set as required and is not limited here.
In some embodiments, the fusing the texture information of the face image to the corresponding position of the three-dimensional face pose to obtain the three-dimensional face information corresponding to the face image includes:
the three-dimensional face pose comprises a plurality of unit areas;
obtaining corresponding areas of the plurality of unit areas in the face image according to the key points of the three-dimensional face pose and the corresponding key points of the face image;
and respectively fusing the texture information of the unit areas in the corresponding areas of the face image to the unit areas to obtain the three-dimensional face information corresponding to the face image.
The three-dimensional face pose is put into correspondence with the face image through the key points, and the texture information at or near a key point in the face image is copied to the unit area corresponding to the same key point in the three-dimensional face pose, thereby obtaining the three-dimensional face information corresponding to the face image. The key points may be key points used for recognizing the face, such as facial feature key points and/or face contour key points; the facial feature key points may include at least one of brow-and-eye key points, nose key points, lip key points and ear key points. In some embodiments, the brow-and-eye key points include eyebrow key points and eye key points, the eye key points include upper-eyelid key points and lower-eyelid key points, the lip key points include upper-lip key points and lower-lip key points, and so on.
In one embodiment, the unit region includes a triangular region.
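One concrete way to copy texture between key-point-aligned triangular unit areas is a per-triangle affine warp, as is common in face morphing; this is an illustrative sketch using OpenCV and is not asserted to be the patent's exact procedure (three-channel images are assumed):

    import cv2
    import numpy as np

    def warp_triangle(src_img, dst_img, src_tri, dst_tri):
        """Copy the texture of one triangular unit area from src_img into dst_img.

        src_tri, dst_tri: (3, 2) arrays of corresponding key-point coordinates.
        """
        # Bounding rectangles of the two triangles.
        sx, sy, sw, sh = cv2.boundingRect(np.float32([src_tri]))
        dx, dy, dw, dh = cv2.boundingRect(np.float32([dst_tri]))

        src_tri_local = np.float32(src_tri - [sx, sy])
        dst_tri_local = np.float32(dst_tri - [dx, dy])

        # Affine transform between the two triangles, applied to the source patch.
        matrix = cv2.getAffineTransform(src_tri_local, dst_tri_local)
        patch = cv2.warpAffine(src_img[sy:sy + sh, sx:sx + sw], matrix, (dw, dh),
                               flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT)

        # Mask limits the copy to the destination triangle only.
        mask = np.zeros((dh, dw, 3), dtype=np.float32)
        cv2.fillConvexPoly(mask, np.int32(dst_tri_local), (1.0, 1.0, 1.0))

        roi = dst_img[dy:dy + dh, dx:dx + dw].astype(np.float32)
        blended = roi * (1.0 - mask) + patch.astype(np.float32) * mask
        dst_img[dy:dy + dh, dx:dx + dw] = blended.astype(dst_img.dtype)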
According to the embodiment of the present invention, in step S230, processing the original image according to the three-dimensional face information to obtain the face-image-enhanced original image includes:
projecting the three-dimensional face information to a 2D coordinate system to obtain a 2D face image corresponding to the three-dimensional face information;
and fusing the 2D face image and the original image to obtain the original image enhanced by the face image.
Exemplarily, projecting the three-dimensional face information to a 2D coordinate system to obtain a 2D face image corresponding to the three-dimensional face information includes:
determining the face angle of the face image according to the morphological parameters;
calculating an angle transformation matrix between the three-dimensional face information and the face image according to the face angle and the three-dimensional face information;
rotating the three-dimensional face information according to the angle transformation matrix;
and projecting the rotated three-dimensional face information to a 2D coordinate system to obtain the 2D face image.
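A minimal sketch of the rotation and projection just listed, assuming the angle transformation matrix is built from yaw/pitch/roll angles and that a scaled orthographic projection is used (the patent does not specify the projection model):

    import numpy as np

    def euler_to_rotation(yaw, pitch, roll):
        """Angle transformation matrix built from yaw/pitch/roll (radians)."""
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        r_roll = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
        return r_yaw @ r_pitch @ r_roll

    def project_to_2d(vertices_3d, rotation, scale=1.0, translation=(0.0, 0.0)):
        """Rotate the 3D face vertices and drop the depth axis (orthographic projection)."""
        rotated = vertices_3d @ rotation.T                  # (N, 3)
        return scale * rotated[:, :2] + np.asarray(translation)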
Since the three-dimensional face image was rotated to a uniform preset angle when the texture information was fused, the 2D face image obtained for fusion back into the original image needs to be consistent in angle with the face image before fusion enhancement in order to ensure the fusion effect. The three-dimensional face image is therefore rotated back to be consistent with the corresponding face image in the original image, and then projected to obtain the fused and enhanced face image.
Illustratively, the fusing the 2D face image with the original image to obtain an output image includes:
obtaining a mask image of the original image according to the face image in the detection frame;
and fusing the 2D face image and the mask image to obtain the output image.
After the enhanced 2D face image is obtained from the three-dimensional face model built from the original image, it is fused into the original image to replace the face image there, so as to obtain a good face image processing effect. Specifically, the pixels of the face image inside the detection frame of the original image may be set to 0, which is equivalent to removing the face image to be enhanced and yields a mask image of the original image; the enhanced 2D face image, obtained from the information of the original image, is then fused with the mask image to obtain the face-image-enhanced original image as the output image.
In some embodiments, the 2D face image and the mask image may be fused by a weighted-average image fusion method, an HSI color-space image fusion method, a principal component analysis image fusion method, a pseudo-color image fusion method, a pyramid-transform-based fusion method, a wavelet-transform-based image fusion method, or the like. It should be noted that the face image enhancement method according to the embodiment of the present invention is not limited by the image fusion method used; both existing image fusion methods and image fusion methods developed in the future are applicable to the face image enhancement method according to the embodiment of the present invention.
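As a hedged example of the mask-and-fuse step, using the weighted-average fusion mentioned above (the hard rectangular mask and the weight value are illustrative choices, and the enhanced 2D face image is assumed to have the same size as the detection frame):

    import numpy as np

    def fuse_with_mask(raw_image, face_2d, box, weight=1.0):
        """Zero out the face region of the raw image (mask image) and blend the
        enhanced 2D face image into it with a weighted average."""
        x, y, w, h = box
        masked = raw_image.astype(np.float64).copy()
        masked[y:y + h, x:x + w] = 0.0                         # mask image: face pixels set to 0

        region = raw_image[y:y + h, x:x + w].astype(np.float64)
        blended = weight * face_2d + (1.0 - weight) * region   # weighted-average fusion
        masked[y:y + h, x:x + w] = blended
        return masked.astype(raw_image.dtype)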
In some embodiments, the method further comprises performing image signal processing on the face-image-enhanced original image. The image signal processing may include, for example, feature extraction. After the face image of the original image has been enhanced by the face image enhancement method according to the embodiment of the invention, it contains richer information and provides an accurate and effective data basis for subsequent image processing such as face recognition, which improves the face fusion effect and the accuracy of the whole image processing pipeline.
In one embodiment, referring to fig. 3, fig. 3 shows an example of a face image enhancement method according to an embodiment of the present invention. As shown in fig. 3, the method includes:
firstly, collecting an original image through an image sensor;
then, face detection and tracking are performed on each frame of the original image acquired by the image acquisition device, which may specifically include: inputting the original image into a trained face detection and/or face tracking network; the face detection and tracking network detecting face detection frames of n face images in the original image; judging whether the face image in the i-th face detection frame and the face image in a face detection frame that has already been assigned an ID belong to the same face; if not, assigning a new ID to the i-th face detection frame; if so, assigning the ID of the face detection frame belonging to the same face to the i-th face detection frame;
assume that face detection frames for all n face images in the original image are obtained, and that the corresponding IDs are set to N1, N2, ..., Nn, where n is a positive integer;
then, according to the face detection frame and the three-dimensional face model, determining three-dimensional face information corresponding to a face image in the face detection frame, which may specifically include:
inputting the face image into a trained three-dimensional parameter detection network to obtain morphological parameters of the face image; wherein the morphological parameters comprise shape parameters and expression parameters;
then, judging whether the ID of the face image exists or not;
if the ID exists, acquiring the three-dimensional face model corresponding to the ID, and adjusting the facial features of the three-dimensional face model corresponding to the identification information based on the expression parameters to obtain a three-dimensional face pose corresponding to the face image;
if the ID does not exist, establishing a three-dimensional face model of the face image according to the morphological parameters, and obtaining a three-dimensional face pose corresponding to the face image according to the morphological parameters and the three-dimensional face model, which specifically includes: acquiring the three-dimensional face model at a preset angle; adjusting the contour of the three-dimensional face model at the preset angle based on the morphological parameters; and adjusting the facial features of the contour-adjusted three-dimensional face model based on the expression parameters to obtain the three-dimensional face pose corresponding to the face image;
for example, if the ID is N2, directly acquiring the three-dimensional face model corresponding to N2 (the ID of the corresponding three-dimensional face model may also be N2), and if the ID is N5, establishing the three-dimensional face model corresponding to N5 according to the three-dimensional morphological parameters of the face image with the ID of N5 (or setting the ID of the three-dimensional face model to be N5);
the three-dimensional face pose comprises a plurality of unit areas;
then, obtaining corresponding areas of the plurality of unit areas in the face image according to the key points of the three-dimensional face pose and the corresponding key points of the face image;
respectively fusing the texture information of the unit areas in the corresponding areas of the face image to the unit areas to obtain three-dimensional face information corresponding to the face image;
then, according to the morphological parameters, determining the face angle of the face image; calculating an angle transformation matrix between the three-dimensional face information and the face image according to the face angle and the three-dimensional face information; rotating the three-dimensional face information according to the angle transformation matrix; projecting the rotated three-dimensional face information to a 2D coordinate system to obtain a 2D face image;
then, according to the face image in the detection frame, obtaining a mask image of the original image; fusing the 2D face image and the mask image to obtain the output image;
and finally, inputting the original image enhanced by the face image into an image signal processor for image processing.
Therefore, according to the face image enhancement method provided by the embodiment of the invention, a three-dimensional face model is constructed on the basis of the original image and used to perform fusion enhancement on the face image in the original image, so that the face shape, expression and pose are preserved to the greatest extent and face distortion and blurring are greatly reduced.
Fig. 4 shows a schematic block diagram of a face image enhancement apparatus 400 according to an embodiment of the present invention. As shown in fig. 4, the face image enhancement apparatus 400 according to the embodiment of the present invention includes:
a face detection module 410, configured to perform face detection on the original image to obtain a face detection frame;
a three-dimensional face module 420, configured to determine, according to the face detection frame and the three-dimensional face model, three-dimensional face information corresponding to a face image in the face detection frame;
and the fusion module 430 is configured to process the original image according to the three-dimensional face information to obtain the original image enhanced by the face image.
It should be noted that the modules in the face image enhancement apparatus 400 according to the embodiment of the present invention can respectively execute the steps/functions of the face image enhancement method described above in conjunction with FIG. 2. Only the main functions of the parts of the face image enhancement apparatus 400 are described here; details already described above are omitted.
FIG. 5 shows a schematic block diagram of a face image enhancement system 500 according to an embodiment of the present invention. The face image enhancement system 500 includes an image sensor 510, a storage device 520, and a processor 530.
The image sensor 510 is used to collect image data.
The storage 520 stores program codes for implementing the corresponding steps in the face image enhancement method according to the embodiment of the present invention.
The processor 530 is configured to run the program codes stored in the storage device 520 to execute the corresponding steps of the human face image enhancement method according to the embodiment of the present invention, and is configured to implement the corresponding modules in the human face image enhancement device according to the embodiment of the present invention.
Furthermore, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored, which when executed by a computer or a processor are used for executing the corresponding steps of the face image enhancement method according to an embodiment of the present invention, and for implementing the corresponding modules in the face image enhancement device according to an embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer readable storage medium may be any combination of one or more computer readable storage media, such as one containing computer readable program code for randomly generating sequences of action instructions and another containing computer readable program code for performing facial image enhancement.
In one embodiment, the computer program instructions, when executed by a computer, may implement the functional modules of the face image enhancement apparatus according to the embodiment of the present invention, and/or may execute the face image enhancement method according to the embodiment of the present invention.
The modules in the face image enhancement system according to the embodiment of the present invention can be implemented by a processor of the electronic device for face image enhancement according to the embodiment of the present invention running computer program instructions stored in a memory, or can be implemented when computer instructions stored in a computer readable storage medium of a computer program product according to the embodiment of the present invention are run by a computer.
According to the face image enhancement method, device, system and storage medium of the embodiments of the invention, a 3D face model is constructed from the raw-domain face image and used to perform fusion enhancement on the face image, so that the face shape, expression and pose are preserved to the greatest extent and face distortion and blurring are greatly reduced.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules in a face image enhancement apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (12)
1. A face image enhancement method is characterized by comprising the following steps:
carrying out face detection on the original image to obtain a face detection frame;
determining three-dimensional face information corresponding to a face image in the face detection frame according to the face detection frame and the three-dimensional face model;
and processing the original image according to the three-dimensional face information to obtain an output image.
2. The method of claim 1, wherein determining three-dimensional face information corresponding to the face image in the face detection frame according to the face detection frame and the three-dimensional face model comprises:
acquiring morphological parameters of the face image in the face detection frame;
and obtaining three-dimensional face information corresponding to the face image according to the morphological parameters and the three-dimensional face model.
3. The method of claim 2, further comprising:
acquiring texture information of the face image in the face detection frame;
the obtaining of the three-dimensional face information corresponding to the face image according to the form parameters and the three-dimensional face model includes:
obtaining a three-dimensional face posture corresponding to the face image according to the morphological parameters and the three-dimensional face model;
and fusing the texture information of the face image to the corresponding position of the three-dimensional face posture to obtain the three-dimensional face information corresponding to the face image.
4. The method of claim 3, wherein the morphological parameters comprise shape parameters and expression parameters.
5. The method of claim 4, wherein obtaining the three-dimensional face pose corresponding to the face image according to the morphological parameters and the three-dimensional face model comprises:
acquiring the three-dimensional face model with a preset angle;
adjusting the outline of the three-dimensional face model at the preset angle based on the shape parameters;
and adjusting the facial features of the contour-adjusted three-dimensional face model based on the expression parameters to obtain the three-dimensional face pose corresponding to the face image.
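A sketch of claims 4-5 under a 3DMM-style assumption: the preset-angle model is a mean shape plus linear shape and expression bases, so the contour is adjusted by the shape parameters and the facial features by the expression parameters. The basis matrices below are stand-ins, not an actual face model.

```python
# 3DMM-style adjustment (an assumption about the model form, not fixed by the claim):
# contour from the shape basis and parameters, facial features from the expression
# basis and parameters, giving the 3D face pose as (N, 3) vertices.
import numpy as np

def pose_from_parameters(mean_shape, shape_basis, exp_basis, shape_params, exp_params):
    """
    mean_shape  : (3N,)  neutral 3D face model at the preset angle
    shape_basis : (3N, Ks), exp_basis : (3N, Ke)
    shape_params: (Ks,),    exp_params: (Ke,)
    """
    vertices = mean_shape + shape_basis @ shape_params + exp_basis @ exp_params
    return vertices.reshape(-1, 3)
```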
6. The method of claim 4, wherein obtaining the three-dimensional face pose corresponding to the face image according to the morphological parameters and the three-dimensional face model comprises:
acquiring identification information of the face image and a three-dimensional face model corresponding to the identification information;
and adjusting the facial features of the three-dimensional face model corresponding to the identification information based on the expression parameters to obtain the three-dimensional face pose corresponding to the face image.
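A sketch of claim 6 under the same model assumption: a person-specific three-dimensional face model is looked up by the identification information of the face image, and only the expression parameters are applied to its facial features. The registry dictionary is hypothetical.

```python
# Hypothetical registry mapping identification information to a person-specific
# model (mean shape plus expression basis); only expression parameters are applied.
import numpy as np

MODEL_REGISTRY = {}  # identification -> (mean_shape (3N,), exp_basis (3N, Ke))

def pose_from_identity(identification, exp_params):
    mean_shape, exp_basis = MODEL_REGISTRY[identification]
    vertices = mean_shape + exp_basis @ exp_params
    return vertices.reshape(-1, 3)  # 3D face pose for the identified person
```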
7. The method of any one of claims 1-6, wherein processing the original image according to the three-dimensional face information to obtain an output image comprises:
projecting the three-dimensional face information to a 2D coordinate system to obtain a 2D face image corresponding to the three-dimensional face information;
and fusing the 2D face image and the original image to obtain the output image.
8. The method of claim 7, wherein projecting the three-dimensional face information to a 2D coordinate system to obtain a 2D face image corresponding to the three-dimensional face information comprises:
determining the face angle of the face image according to the morphological parameters;
calculating an angle transformation matrix between the three-dimensional face information and the face image according to the face angle and the three-dimensional face information;
rotating the three-dimensional face information according to the angle transformation matrix;
and projecting the rotated three-dimensional face information to a 2D coordinate system to obtain the 2D face image.
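A sketch of claim 8, assuming the face angle is expressed as yaw, pitch and roll and the projection is orthographic: an angle transformation (rotation) matrix is built from the angles, the three-dimensional face information is rotated, and the depth axis is dropped to obtain 2D coordinates. The Z·Y·X composition order and the orthographic camera are assumptions.

```python
# Build the angle transformation matrix from yaw/pitch/roll (radians), rotate the
# 3D face vertices, then project orthographically onto the 2D coordinate system.
import numpy as np

def rotation_from_angles(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    r_yaw = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    r_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    r_roll = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    return r_roll @ r_yaw @ r_pitch  # assumed Z*Y*X composition order

def project_to_2d(vertices_3d, yaw, pitch, roll, scale=1.0, tx=0.0, ty=0.0):
    """vertices_3d: (N, 3); returns (N, 2) 2D face image coordinates."""
    rotated = vertices_3d @ rotation_from_angles(yaw, pitch, roll).T
    return rotated[:, :2] * scale + np.array([tx, ty])
```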
9. The method of claim 7, wherein fusing the 2D face image with the original image to obtain the output image comprises:
obtaining a mask image of the original image according to the face image in the face detection frame;
and fusing the 2D face image and the mask image to obtain the output image.
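A sketch of claim 9 in which the mask image is derived from the face detection frame and the fusion is a feathered alpha blend; the claim does not fix the blending operator, so this choice is an assumption.

```python
# Derive a mask from the face detection frame, feather its edge, and fuse the
# 2D face image into the original image only where the mask is non-zero.
import cv2
import numpy as np

def fuse_with_mask(original_bgr, face_2d_bgr, frame, feather_sigma=15.0):
    """frame = (x, y, w, h); face_2d_bgr must be the same size as original_bgr."""
    x, y, w, h = frame
    mask = np.zeros(original_bgr.shape[:2], dtype=np.float32)
    mask[y:y + h, x:x + w] = 1.0
    mask = cv2.GaussianBlur(mask, (0, 0), feather_sigma)  # soften the mask boundary
    mask = mask[..., None]                                # broadcast over colour channels
    fused = (mask * face_2d_bgr.astype(np.float32)
             + (1.0 - mask) * original_bgr.astype(np.float32))
    return fused.astype(np.uint8)  # output image
```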
10. An apparatus for enhancing a face image, the apparatus comprising:
the face detection module is used for carrying out face detection on the original image to obtain a face detection frame;
the three-dimensional face module is used for determining three-dimensional face information corresponding to a face image in the face detection frame according to the face detection frame and the three-dimensional face model;
and the fusion module is used for processing the original image according to the three-dimensional face information to obtain an output image.
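For completeness, a sketch of how the three modules of claim 10 might be wired together; the injected callables correspond to the helpers sketched under the method claims and are hypothetical.

```python
# The apparatus of claim 10 as three cooperating components: face detection
# module, three-dimensional face module, and fusion module.
class FaceImageEnhancer:
    def __init__(self, face_detector, face_3d_builder, fusion):
        self.face_detector = face_detector      # face detection module
        self.face_3d_builder = face_3d_builder  # three-dimensional face module
        self.fusion = fusion                    # fusion module

    def __call__(self, original_bgr):
        frame = self.face_detector(original_bgr)
        face_3d = self.face_3d_builder(original_bgr, frame)
        return self.fusion(original_bgr, face_3d)
```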
11. A face image enhancement system, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the steps of the method of any one of claims 1 to 9 are implemented when the computer program is executed by the processor.
12. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a computer, implements the steps of the method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911060534.2A CN111008935B (en) | 2019-11-01 | 2019-11-01 | Face image enhancement method, device, system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111008935A (en) | 2020-04-14 |
CN111008935B CN111008935B (en) | 2023-12-12 |
Family
ID=70111434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911060534.2A Active CN111008935B (en) | 2019-11-01 | 2019-11-01 | Face image enhancement method, device, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111008935B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111508050A (en) * | 2020-04-16 | 2020-08-07 | 北京世纪好未来教育科技有限公司 | Image processing method and device, electronic equipment and computer storage medium |
CN112257552A (en) * | 2020-10-19 | 2021-01-22 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and storage medium |
CN112381927A (en) * | 2020-11-19 | 2021-02-19 | 北京百度网讯科技有限公司 | Image generation method, device, equipment and storage medium |
CN112712004A (en) * | 2020-12-25 | 2021-04-27 | 英特灵达信息技术(深圳)有限公司 | Face detection system, face detection method and device and electronic equipment |
CN114120414A (en) * | 2021-11-29 | 2022-03-01 | 北京百度网讯科技有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
US11688177B2 (en) | 2020-05-29 | 2023-06-27 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Obstacle detection method and device, apparatus, and storage medium |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104751144A (en) * | 2015-04-02 | 2015-07-01 | 山东大学 | Frontal face quick evaluation method for video surveillance |
CN104809687A (en) * | 2015-04-23 | 2015-07-29 | 上海趣搭网络科技有限公司 | Three-dimensional human face image generation method and system |
CN107146199A (en) * | 2017-05-02 | 2017-09-08 | 厦门美图之家科技有限公司 | A kind of fusion method of facial image, device and computing device |
WO2019128508A1 (en) * | 2017-12-28 | 2019-07-04 | Oppo广东移动通信有限公司 | Method and apparatus for processing image, storage medium, and electronic device |
CN110096925A (en) * | 2018-01-30 | 2019-08-06 | 普天信息技术有限公司 | Enhancement Method, acquisition methods and the device of Facial Expression Image |
CN108921795A (en) * | 2018-06-04 | 2018-11-30 | 腾讯科技(深圳)有限公司 | A kind of image interfusion method, device and storage medium |
CN109242760A (en) * | 2018-08-16 | 2019-01-18 | Oppo广东移动通信有限公司 | Processing method, device and the electronic equipment of facial image |
CN109325437A (en) * | 2018-09-17 | 2019-02-12 | 北京旷视科技有限公司 | Image processing method, device and system |
CN109712080A (en) * | 2018-10-12 | 2019-05-03 | 迈格威科技有限公司 | Image processing method, image processing apparatus and storage medium |
CN109949386A (en) * | 2019-03-07 | 2019-06-28 | 北京旷视科技有限公司 | A kind of Method for Texture Image Synthesis and device |
Non-Patent Citations (1)
Title |
---|
唐超影; 浦世亮; 叶鹏钊; 肖飞; 冯华君: "Low-illumination visible and near-infrared image fusion based on convolutional neural networks" (基于卷积神经网络的低照度可见光与近红外图像融合), 光学学报 (Acta Optica Sinica), no. 16 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111508050A (en) * | 2020-04-16 | 2020-08-07 | 北京世纪好未来教育科技有限公司 | Image processing method and device, electronic equipment and computer storage medium |
CN111508050B (en) * | 2020-04-16 | 2022-05-13 | 北京世纪好未来教育科技有限公司 | Image processing method and device, electronic equipment and computer storage medium |
US11688177B2 (en) | 2020-05-29 | 2023-06-27 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Obstacle detection method and device, apparatus, and storage medium |
CN112257552A (en) * | 2020-10-19 | 2021-01-22 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and storage medium |
CN112257552B (en) * | 2020-10-19 | 2023-09-05 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and storage medium |
CN112381927A (en) * | 2020-11-19 | 2021-02-19 | 北京百度网讯科技有限公司 | Image generation method, device, equipment and storage medium |
CN112712004A (en) * | 2020-12-25 | 2021-04-27 | 英特灵达信息技术(深圳)有限公司 | Face detection system, face detection method and device and electronic equipment |
CN112712004B (en) * | 2020-12-25 | 2023-09-12 | 英特灵达信息技术(深圳)有限公司 | Face detection system, face detection method and device and electronic equipment |
CN114120414A (en) * | 2021-11-29 | 2022-03-01 | 北京百度网讯科技有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
CN114120414B (en) * | 2021-11-29 | 2022-11-01 | 北京百度网讯科技有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
Also Published As
Publication number | Publication date |
---|---|
CN111008935B (en) | 2023-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111008935B (en) | Face image enhancement method, device, system and storage medium | |
CN110532984B (en) | Key point detection method, gesture recognition method, device and system | |
US10872420B2 (en) | Electronic device and method for automatic human segmentation in image | |
CN106778928B (en) | Image processing method and device | |
CN108875524B (en) | Sight estimation method, device, system and storage medium | |
WO2018137623A1 (en) | Image processing method and apparatus, and electronic device | |
CN111243093B (en) | Three-dimensional face grid generation method, device, equipment and storage medium | |
US20210279971A1 (en) | Method, storage medium and apparatus for converting 2d picture set to 3d model | |
CN108876804B (en) | Matting model training and image matting method, device and system and storage medium | |
CN108961149B (en) | Image processing method, device and system and storage medium | |
CN109815843B (en) | Image processing method and related product | |
CN106650662B (en) | Target object shielding detection method and device | |
WO2018176938A1 (en) | Method and device for extracting center of infrared light spot, and electronic device | |
JP5873442B2 (en) | Object detection apparatus and object detection method | |
CN107808111B (en) | Method and apparatus for pedestrian detection and attitude estimation | |
CN108875542B (en) | Face recognition method, device and system and computer storage medium | |
JP6544900B2 (en) | Object identification device, object identification method and program | |
KR20170008638A (en) | Three dimensional content producing apparatus and three dimensional content producing method thereof | |
CN108875517B (en) | Video processing method, device and system and storage medium | |
CN107959798B (en) | Video data real-time processing method and device and computing equipment | |
JP6351243B2 (en) | Image processing apparatus and image processing method | |
CN109274891B (en) | Image processing method, device and storage medium thereof | |
CN108921070B (en) | Image processing method, model training method and corresponding device | |
CN107767358B (en) | Method and device for determining ambiguity of object in image | |
CN107734207B (en) | Video object transformation processing method and device and computing equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||