CN113012030A - Image splicing method, device and equipment - Google Patents


Info

Publication number
CN113012030A
Authority
CN
China
Prior art keywords
images
spliced
image
splicing
matching
Prior art date
Legal status
Pending
Application number
CN201911335071.6A
Other languages
Chinese (zh)
Inventor
李虎
Current Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd filed Critical Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN201911335071.6A
Publication of CN113012030A
Legal status: Pending

Classifications

    • G06T 3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06V 40/168 - Human faces: feature extraction; face representation
    • G06T 2207/20081 - Training; learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/20221 - Image fusion; image merging
    • G06T 2207/30201 - Subject of image: human face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image splicing method, device and equipment in the technical field of image processing. In the image splicing method, two images to be spliced are obtained; the matching and splicing position of the two images to be spliced is determined; two analytic images of the two images to be spliced are obtained; fusion parameters respectively corresponding to the two images to be spliced are obtained according to the two analytic images; and the two images to be spliced are spliced according to the matching and splicing position and the fusion parameters respectively corresponding to the two images to be spliced. The method achieves a good splicing effect even when the brightness difference between the images to be spliced is large, and is suitable for scenes in which partial images of facial features are spliced.

Description

Image splicing method, device and equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an image splicing method, device and equipment.
Background
Image stitching techniques have been applied in various fields such as post-processing of photographs, criminal investigation, and entertainment. Taking the splicing of face images as an example, partial images of a face, such as partial images of the facial features, can be spliced onto a face through a specific algorithm, thereby realizing the splicing of the face.
In the prior art, image splicing is usually realized with a feature point matching algorithm or a distance-proportion matching algorithm. However, the feature point matching algorithm places a high requirement on the brightness distribution of the images to be spliced: it is only suitable for scenes in which the images to be spliced have similar brightness distributions, and the splicing effect in other scenes is poor. The distance-proportion matching algorithm requires a distance threshold to be set during image splicing; for irregular images to be spliced the distance is difficult to measure, so the threshold is difficult to set, deviations easily occur, and the splicing effect is poor.
Disclosure of Invention
The embodiment of the invention aims to provide an image splicing method, device and equipment so as to improve the image splicing effect.
In a first aspect, an embodiment of the present invention provides an image stitching method, where the method includes:
acquiring two images to be spliced;
determining the matching and splicing positions of the two images to be spliced;
acquiring two analytic images of the two images to be spliced;
acquiring fusion parameters corresponding to the two images to be spliced according to the two analytic images;
and splicing the two images to be spliced according to the matching splicing position and the fusion parameters respectively corresponding to the two images to be spliced.
In some embodiments, determining the matched stitching location of the two images to be stitched comprises:
carrying out characteristic point detection on the two images to obtain a plurality of characteristic point pairs;
and determining the matching and splicing positions of the two images according to the obtained characteristic point pairs.
In some embodiments, after the step of performing feature point detection on the two images to obtain a plurality of feature point pairs, the method further includes:
removing outliers from the characteristic point pairs by using an outlier detection method;
determining the matching and splicing positions of the two images according to the obtained characteristic point pairs comprises the following steps:
and determining the matching and splicing positions of the two images according to the characteristic point pairs with the outliers removed.
In some embodiments, the acquiring two analysis images of the two images to be stitched includes:
and respectively inputting the two images into a pre-trained analytical model, and outputting analytical images of the two images.
In some embodiments, the obtaining, according to the two analysis images, corresponding fusion parameters of the two images to be stitched respectively includes:
performing morphological closed operation processing on the two analytic images to obtain two analytic images after the closed operation processing;
performing Gaussian blur processing on the two analytic images after the closed operation processing to obtain the two analytic images after the Gaussian blur processing;
and determining the pixel values of the two analytic images after the Gaussian blur processing as fusion parameters.
In some embodiments, the step of performing stitching processing on the two images to be stitched according to the matching stitching position and the fusion parameters respectively corresponding to the two images to be stitched includes:
and calculating the pixel value of the splicing matching position on the spliced image according to the fusion parameters respectively corresponding to the two images to be spliced.
In some embodiments, the method further comprises:
an analytical model is trained and formed using a pre-constructed analytical data set.
In some embodiments, the two images to be stitched comprise partial images of a human face;
training an analytical model using a pre-constructed analytical data set, comprising:
obtaining face local analysis data to form a face local analysis data set;
and training the initial network model by using the face local analysis data set to obtain an analysis model.
In some embodiments, the initial network model is a Deeplab V3 Plus network.
In a second aspect, an embodiment of the present invention provides an image stitching apparatus, where the apparatus includes:
the image to be spliced acquisition module is used for acquiring two images to be spliced;
the image matching position determining module is used for determining the matching and splicing positions of the two images to be spliced;
the analysis image acquisition module is used for acquiring two analysis images of the two images to be spliced;
a fusion parameter obtaining module, configured to obtain, according to the two analysis images, fusion parameters corresponding to the two images to be stitched respectively;
and the image splicing module is used for splicing the two images to be spliced according to the matching splicing position and the fusion parameters respectively corresponding to the two images to be spliced.
In a third aspect, an embodiment of the present invention provides an image stitching apparatus, where the apparatus includes a memory and a processor; the memory stores a computer program capable of running on the processor, and when the processor executes the computer program, the steps of the image stitching method described above are implemented.
In a fourth aspect, an embodiment of the present invention provides a computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the steps of the image stitching method described above.
The embodiment of the invention provides an image splicing method, device and equipment. After the two analytic images of the two images to be spliced are obtained, the two images to be spliced are spliced according to the matching and splicing position and the two analytic images. With the image splicing method, device and equipment provided by the embodiment of the invention, the analytic images of the two images to be spliced are obtained, the fusion parameters are obtained according to the analytic images, and the splicing processing is realized based on the fusion parameters. Because the analytic images can reflect the brightness difference between the images to be spliced, obtaining the fusion parameters from the analytic images and splicing on that basis effectively ensures a natural transition in the spliced region of the resulting image, so that a good splicing effect is still achieved even when the brightness difference between the images to be spliced is large.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of an image stitching method according to an embodiment of the present invention;
fig. 2 is a flowchart of step S102 in the image stitching method according to the embodiment of the present invention;
fig. 3 is a flowchart of step S104 in the image stitching method according to the embodiment of the present invention;
FIG. 4 is a flowchart illustrating training of an analysis model formed by using a pre-configured analysis dataset in the image stitching method according to the embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present invention;
fig. 7 is an exemplary schematic diagram of analyzing an image in the image stitching method according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the embodiments, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Image stitching techniques have been applied in various fields such as post-processing of photographs, criminal investigation, and entertainment. Taking face splicing as an example: when a group photo is taken, lens distortion can deform the faces of the people near the edge of the frame, and face splicing is needed in post-processing to repair them; in criminal investigation, various incomplete photos need to be spliced to finally obtain a person's complete face, which helps solve the case; in entertainment scenes, a face-splicing application can be developed to splice the facial features of different people, realizing a face-changing-like process for entertainment.
From the above application scenes it can be seen that, in the process of splicing face images, the face images to be spliced are mostly not shot in the same scene but at different times and in different places, so the brightness difference between the faces in the images to be spliced is usually large, which places high requirements on the splicing process if a natural, unobtrusive splicing effect is to be obtained.
As for the algorithms used in image stitching, the prior art usually adopts a feature point matching method or a distance-proportion algorithm. However, the feature point matching method is only suitable for stitching images with similar brightness distributions, and obvious stitching traces appear when the brightness distributions of the images differ; because the images to be spliced are often shot at different times and in different places, their brightness difference is often large and the final splicing effect is poor. The distance-proportion algorithm needs a threshold to be set during splicing and is only suitable for regular images to be spliced; when the image shape is irregular the distance cannot be measured accurately, and since images to be spliced are often irregular, setting the threshold is complex, deviations easily occur, and the splicing effect is poor.
Aiming at the problem of poor splicing effect in the existing image splicing process, the embodiment of the invention provides an image splicing method, device and equipment.
It should be noted that the image stitching method provided by the embodiment of the present invention may be applied to stitching one image into another image so as to replace part of that other image with the first image. In one application scenario, the first image may be cut out of the other image and, after image processing such as beautification, the processed image is stitched back into the other image.
The embodiment of the invention provides an image splicing method, as shown in fig. 1, the method comprises the following steps:
step S101: and acquiring two images to be spliced.
The images to be stitched may be digital images acquired by devices such as a camera or a scanner. Furthermore, at least one of the images to be stitched may be a previously processed image, such as a beautified or enhanced image.
The two images to be stitched may be an image of a main subject and an image of a component contained in that subject. For example, if the images to be stitched are a face image and an eye image, the stitching result is that the eye image is stitched into the face image to replace the eye part of the original face image.
For a conventional splicing scene, the images to be spliced need to share the same subject, and the desired splicing result is that subject. For example, if one of the two images to be spliced shows one of the facial features and the other is a face image containing the facial features, the subject of the spliced image is the face; if the images to be spliced contain a car tire and a car headlight, the subject of the spliced image is the car.
The two images to be spliced may contain the same region; for example, both may contain the person's two eyes, so that the common eye part can be used as a reference for merging and the splicing precision is higher. If the two images to be spliced do not contain the same region, feature descriptions can be attached to them to facilitate the picture layout in the subsequent splicing. For example, if the left eye and the right eye of a face appear in the two images respectively and the faces in the two images have no overlapping region, the left eye and the right eye in the images can be distinguished and labeled, which provides a basis for the subsequent splicing process and prevents the two similar eyes from being merged repeatedly.
Step S102: determining the matching and splicing position of the two images to be spliced.
The matching and splicing position refers to a position or a region where one of the two images to be spliced is spliced into the other image, and may be a corresponding relationship of pixel points.
Optionally, in this step, a feature point detection mode may be used to find corresponding feature point pairs in the two images to be stitched, and based on the feature point pairs, the matching stitching position of the two images is determined.
Determining the matching and splicing position is the key to image splicing. For two images with an overlapping region, the splicing position is the overlapping region: the overlapping regions in the two images are matched, one image can be kept unchanged during the subsequent splicing while the other image is spliced to it, and the overlapping regions of the two images are fused during splicing, thereby completing the image splicing.
For two images without an overlapping region, the splicing position is determined by calculating relevant feature points. Feature points are points in an image that play a special marking role, and they can be obtained with related digital image processing algorithms.
In digital images this process can be realized through image registration. Image registration is a technical means of determining the overlapping region and overlapping position between the images to be spliced; a transformation matrix between the image sequence can be constructed by matching feature points, thereby completing the final splicing of the images. In this process, solving the transformation matrix is the core of image registration, and the key is how to obtain the most suitable feature points.
Specifically, when solving the transformation matrix, the feature points in the two images are first detected and the matching between them is computed; then an initial value of the transformation matrix between the two images is calculated and the matrix is refined iteratively. In a guided-matching manner, a search region is defined near the epipolar lines using the transformation matrix so as to further determine the correspondence of feature points, and these steps are repeated until the number of corresponding points is stable. Finally, the two images can be transformed using the transformation matrix between them to determine their overlapping region.
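As an illustration of the registration relation described above (the notation is ours; the patent does not prescribe a particular matrix form), the transformation matrix can be written as a 3x3 homography H that maps homogeneous pixel coordinates of one image into the coordinate frame of the other:

```latex
% Illustrative homography relation between matched points of the two images
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} \sim
H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad
H = \begin{pmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{pmatrix}
```

Each matched feature point pair contributes two constraints on the eight free parameters of H, which is why a small set of reliable pairs is sufficient to estimate an initial matrix that can then be refined iteratively.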
Step S103: acquiring two analytic images of the two images to be spliced.
An analytic image is a binary image generated by segmenting the feature information of interest in an image.
For the two images to be stitched, it can be understood that their target stitching region has been determined in advance. For example, when a human-eye image is stitched into a face image, the eye regions on the two images are the target stitching regions. Therefore, in order to achieve natural stitching of the target stitching regions, this step obtains the analytic images of the two images to be stitched. Specifically, each image to be stitched is segmented according to the target stitching region to generate a binary image in which the target stitching region takes one range of pixel values and the other regions take another range, so that the target stitching region can be effectively distinguished from the other regions. Referring to fig. 7, the analytic image of a face image is a black-and-white image in which the facial-feature regions are separated out, and the facial-feature region (white, pixel value 255) is clearly distinguished from the other regions (black, pixel value 1).
In one embodiment, in this step, the two images may be input into a pre-trained analytical model, and the analytical images of the two images may be output.
The process of obtaining the analytic image can be realized with a related neural network model from the field of machine learning. For example, when the analytic model is used to parse local features of a face, different regions of the face, such as the facial features, are used as input data during training, the training is carried out with a convolutional neural network model, and the trained model can then be used to parse the facial features. A Convolutional Neural Network (CNN) is a feed-forward neural network that involves convolution computation and has a deep structure, and is one of the representative algorithms of deep learning. The convolutional neural network selected in this embodiment may be any of various derived network models chosen according to the actual situation, which is not described in detail here.
The parsing process may be regarded as further processing of the images to be stitched. For example, even if the brightness of the two input images to be stitched is inconsistent, after the two images are input into the trained model, both analytic images output by the model are binary images.
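As a minimal sketch of how such a pre-trained analytic (parsing) model could be applied, assuming a PyTorch model that outputs a single-channel probability map for the target region; the input size, threshold and 0/255 mask convention are illustrative assumptions, not values fixed by the patent:

```python
import cv2
import numpy as np
import torch

def parse_image(model: torch.nn.Module, bgr: np.ndarray, size: int = 256) -> np.ndarray:
    """Run an assumed pre-trained parsing model and return a binary analytic image."""
    h, w = bgr.shape[:2]
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    x = cv2.resize(rgb, (size, size)).astype(np.float32) / 255.0
    x = torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)   # shape 1 x 3 x size x size
    with torch.no_grad():
        prob = torch.sigmoid(model(x))[0, 0].numpy()        # assumed 1-channel output
    mask = (cv2.resize(prob, (w, h)) > 0.5).astype(np.uint8) * 255
    return mask                                             # target region 255, rest 0
```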
Step S104: acquiring fusion parameters respectively corresponding to the two images to be spliced according to the two analytic images.
Further, after the analytic images are obtained, the fusion parameters of the images to be spliced are obtained in this step. Specifically, the analytic images of the two images to be spliced can be subjected to Gaussian blur processing to obtain Gaussian-blurred analytic images; these are gray-scale images that can reflect the brightness difference between the two images to be spliced, so the fusion parameters can be obtained from these gray-scale images and the splicing can then be performed based on the fusion parameters. In this way the transition region where the two images to be spliced are joined is smoothed during splicing, so that splicing traces in the transition region are reduced when the images are finally combined.
Specifically, the fusion parameters may be the pixel values of the two gray-scale images. In another embodiment of the present invention, the fusion parameters of the two images may also be obtained by further calculation from these pixel values.
Step S105: splicing the two images to be spliced according to the matching and splicing position and the fusion parameters respectively corresponding to the two images to be spliced.
In this step, specifically, the pixel values at the matching and splicing position on the spliced image may be calculated according to the fusion parameters respectively corresponding to the two images to be spliced.
That is, the process of splicing the two images to be spliced includes calculating the pixel values at the matching and splicing position on the spliced image according to the fusion parameters respectively corresponding to the two images to be spliced.
Suppose the images to be spliced are A and B, the fusion parameter of A at the matching and splicing position is m, and the fusion parameter of B at that position is n; then the pixel value at the matching and splicing position on the spliced image is A·m/(m + n) + B·n/(m + n). That is, the pixel value at the matched splicing position after splicing equals the weighted average of the pixel values at the corresponding positions of the images to be spliced, with the weights obtained from the fusion parameters.
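A minimal sketch of this weighted fusion, where m and n are the per-pixel fusion parameters of A and B (the function and variable names are ours):

```python
import numpy as np

def fuse(a: np.ndarray, b: np.ndarray, m: np.ndarray, n: np.ndarray,
         eps: float = 1e-6) -> np.ndarray:
    """Per-pixel weighted fusion: out = A*m/(m+n) + B*n/(m+n)."""
    m = m.astype(np.float32)[..., None]   # add a channel axis so weights broadcast over color
    n = n.astype(np.float32)[..., None]
    w = m + n + eps                       # eps avoids division by zero where both weights are 0
    out = a.astype(np.float32) * (m / w) + b.astype(np.float32) * (n / w)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Where one weight dominates, for example in the centre of the target splicing region, the output approaches the corresponding source image, and along the blurred border the two images blend gradually.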
It should be noted that the fusion parameters are mainly used to handle the joint between the two images. For face image splicing, for example, differences in illumination and color between the two partial face images make the transition in their splicing region unnatural, so this part needs weighted fusion: the overlapped part transitions gradually from the first partial face image to the second partial face image, which makes the transition region more natural. The part that easily looks obtrusive during splicing is the border of the matching and splicing position, so at the border the fusion parameters can be taken directly from the pixel values of the gray-scale images; for the central part of the matching and splicing position, assuming that A replaces part of image B, the weight of A can be set to 1 and the weight of B to 0, so that the pixel value of the central part of the matching and splicing position on the spliced image equals the pixel value of A. In this way A is spliced into B while the splicing boundary transitions smoothly, ensuring a good splicing effect.
For example, if in the two partial face images the luminance n of the first image is higher than the luminance m of the second image, the weighting parameter value of the first analytic image can be made lower than that of the second analytic image, for example m/(n + m) for the first analytic image and n/(n + m) for the second. The brightness contributed by the first image to the splicing region is thereby reduced during fusion, improving the visual effect of the splicing region.
The splicing of the two images is then realized through the matched feature point pairs: first the positions of the best-matching point pairs are found, then the position to which each point of one image is projected in the other image is obtained through the mapping-matrix transformation, and finally the two images are joined at the corresponding matching points in the other image, completing the splicing of the images.
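A sketch of this projection step with OpenCV, assuming a homography H has already been estimated from the matched point pairs (for example as in the matching and outlier-removal sketches below):

```python
import cv2
import numpy as np

def warp_to_target(src: np.ndarray, src_weight: np.ndarray,
                   H: np.ndarray, target_shape) -> tuple:
    """Project a partial image and its fusion-weight map into the target image's frame."""
    h, w = target_shape[:2]
    warped = cv2.warpPerspective(src, H, (w, h))
    warped_weight = cv2.warpPerspective(src_weight, H, (w, h))
    return warped, warped_weight
```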
By adopting the image splicing method provided by the embodiment of the invention, the splicing processing is realized after the matching splicing positions of the two images to be spliced and the two analytic images are obtained. The analytic image can reflect the brightness difference of the image to be spliced, so that the fusion parameters are obtained based on the analytic image, the splicing processing is realized based on the fusion parameters, the natural transition of the splicing area in the image obtained after splicing can be effectively ensured, and a good splicing effect can be still achieved even if the brightness difference of the image to be spliced is large.
In some embodiments, the step S102 of determining a matching and splicing position of two images to be spliced, as shown in fig. 2, includes:
step S201, feature point detection is carried out on the two images to obtain a plurality of feature point pairs;
The feature point detection process can use any related feature point detection algorithm in the field. If feature points are extracted from the two images based on color features, a color histogram algorithm, color set algorithm, color moments algorithm, color coherence vector algorithm, MPEG-7 color layout descriptor algorithm and the like can be adopted; if feature points are extracted based on texture features, a Tamura texture feature algorithm, autoregressive texture model algorithm, Gabor transform algorithm, wavelet transform algorithm, MPEG-7 edge histogram algorithm and the like can be adopted; if feature points are extracted based on shape features, a Fourier shape descriptor algorithm, invariant moments algorithm, wavelet contour descriptor algorithm and the like can be adopted. The algorithm is selected according to the requirements of the actual scene, and a combination of several algorithms can be used for feature extraction to improve the extraction effect.
For example, in a specific implementation, a related feature extraction algorithm in OpenCV may be selected to extract the features of the two input images. The related feature extraction algorithms in OpenCV include edge detection, corner detection, line detection, circle detection, SURF (Speeded-Up Robust Features), SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF), FAST (Features from Accelerated Segment Test) feature point extraction and description, Harris corners, HOG (Histogram of Oriented Gradients), LBP (Local Binary Patterns) and the like; the corresponding method can be called through the corresponding OpenCV function and is not described again here.
After the feature points of the two images are obtained, all the feature points need to be matched to form one-to-one corresponding feature point pairs, which requires a related feature point matching algorithm. For example, in OpenCV this can be realized with FLANN (Fast Library for Approximate Nearest Neighbors), a planar object recognition algorithm, the KAZE, AKAZE or BRISK algorithms and the like; the corresponding method can be called through the corresponding OpenCV function and is not described again here. A number of feature point pairs are obtained through the feature point matching algorithm, and the splicing position is obtained by summarizing these feature point pairs.
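A minimal sketch of feature detection and matching with OpenCV, using SIFT together with the FLANN matcher mentioned above; the ratio-test threshold and FLANN parameters are illustrative choices, not values given in the patent:

```python
import cv2

def match_feature_points(img1, img2, ratio: float = 0.75):
    """Detect SIFT keypoints in two grayscale images and match them with FLANN.

    Returns two lists of (x, y) coordinates forming the matched feature point pairs.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des1, des2, k=2)

    pts1, pts2 = [], []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:  # Lowe's ratio test
            pts1.append(kp1[pair[0].queryIdx].pt)
            pts2.append(kp2[pair[0].trainIdx].pt)
    return pts1, pts2
```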
Step S202: determining the matching and splicing position of the two images according to the obtained feature point pairs.
The above process can be realized in digital images through image registration: a transformation matrix between the image sequence is constructed by matching the feature points, thereby completing the final splicing of the images. Specifically, when solving the transformation matrix, the feature points in the two images are first detected and the matching between them is computed; then an initial value of the transformation matrix between the two images is calculated and the matrix is refined iteratively. In a guided-matching manner, a search region is defined near the epipolar lines using the transformation matrix so as to further determine the correspondence of feature points, and these steps are repeated until the number of corresponding points is stable. Finally, the two images can be transformed using the transformation matrix between them to determine their matching and splicing position.
In some embodiments, after the step S201 of performing feature point detection on the two images to obtain a plurality of feature point pairs, the method further includes: and removing outliers from the plurality of characteristic point pairs by using an outlier detection method.
Since the contents of the two images may differ greatly, the matching results between feature points may contain large errors, so the feature points with large errors need to be removed. The removal can be realized by an outlier detection method: when the standard deviation is known, the Nair test can be adopted; when the standard deviation is unknown and the number of outliers is 1, the Pauta (3σ) criterion, the 4d test, the Chauvenet criterion, the t test, the Grubbs test, the Dixon test (for sample sizes 3 ≤ n ≤ 30) or the Q test can be adopted; if the number of outliers is greater than 1, the skewness-kurtosis test, the Dixon test, the Grubbs test and the like can be adopted.
After the outliers are removed from the feature point pairs obtained by feature point detection on the two images, the matching and splicing position of the two images is determined according to the obtained feature point pairs, that is, according to the feature point pairs from which the outliers have been removed.
Outlier detection plays a key role in analyzing the spatial distribution of data features and can pick out special data among the feature points. The main methods include statistical methods based on set statistics and test levels; detection methods based on distance measures of a given feature space and on neighborhood distances; methods that determine local reachable regions and the density of the corresponding density parameters; and outlier detection methods that discriminate according to class center points through unsupervised clustering. Because the number of acquired feature points is not fixed, when there are many feature points it cannot be guaranteed that all the extracted feature points meet the requirements, so the feature points need to be screened, and an outlier detection method is adopted for this screening.
After outlier detection is performed on the feature points, which feature points are abnormal is judged according to the actual situation and those feature points are rejected. The rejected feature points are those that differ greatly from the other feature points; after rejection, the differences between the remaining feature points are smaller, which better meets the requirements of feature extraction and helps improve the accuracy of the matching position.
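The patent lists statistical criteria (Pauta/3σ, Grubbs, Dixon, etc.); as one concrete illustration, which is our example rather than a test prescribed by the patent, outlying pairs can be rejected with a simple 3σ-style rule applied to the displacement vectors of the matched pairs:

```python
import numpy as np

def remove_outlier_pairs(pts1, pts2, k: float = 3.0):
    """Reject matched pairs whose displacement deviates from the mean by more
    than k standard deviations (k = 3 mimics the 3-sigma criterion; illustrative)."""
    p1 = np.asarray(pts1, dtype=np.float32)
    p2 = np.asarray(pts2, dtype=np.float32)
    d = p2 - p1                                     # displacement vector of each pair
    mean, std = d.mean(axis=0), d.std(axis=0) + 1e-6
    keep = np.all(np.abs(d - mean) <= k * std, axis=1)
    return p1[keep], p2[keep]
```

In practice a robust estimator such as RANSAC inside cv2.findHomography is also commonly used for the same purpose; either way, the surviving pairs are then used to solve the transformation matrix.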
In some embodiments, the acquiring two analysis images of the two images to be stitched includes:
and respectively inputting the two images into a pre-trained analytical model, and outputting analytical images of the two images.
The analytic model in this step is used to parse the content contained in an image, and different types of data sets are selected as input data for training. For example, for a model that parses human faces, the training data set can be obtained by selecting pictures of the different facial features, and the analytic model obtained after training can be used to parse the facial features of a face. The analytic model in this step may be a convolutional neural network model. A Convolutional Neural Network (CNN) is a feed-forward neural network that involves convolution computation and has a deep structure, and is one of the representative algorithms of deep learning. The convolutional neural network selected in this embodiment may be any of various derived network models chosen according to the actual situation, which is not described in detail here.
The parsing process may be regarded as further processing of the two images. For example, even if the picture brightness of the two input partial face images is inconsistent, after the two partial face images are input into the trained model they undergo the relevant operations together with the other facial-feature data adopted during model training, and the model finally outputs two analytic images between which the brightness difference is small.
As shown in fig. 3, in some embodiments, the step S104 includes:
step S301, performing morphological closing operation on the two analysis images to obtain two analysis images after the closing operation.
Because the shooting environment and brightness of the images to be spliced are not fixed, the forms of the obtained analytic images also vary; such unfixed forms are unfavorable to the subsequent splicing effect, so morphological operations on the analytic images are necessary. Therefore, the method further performs morphological closing operation processing on the two analytic images to obtain the two analytic images after the closing operation processing.
Morphology here refers to digital morphology. For digital images, its main application is to extract from the image the components that can express and describe regions, so that the subsequent recognition work can obtain the essential features of objects, such as boundaries and connected regions. Morphological operations include erosion, dilation, opening, closing, the morphological gradient, top-hat, black-hat and so on. For a partial face image, the analytic image obtained from the analytic model must not contain hole points inside, otherwise the subsequent image splicing is affected. Therefore, after the two partial face images are input into the pre-trained analytic model and the two analytic images are output, the two analytic images are processed with a morphological closing operation to obtain two analytic images without internal hole points.
The closed operation in morphology can smooth the image contour, and can usually close narrow gaps and fill small holes. The above process can be implemented by using a function related to morphological closed operation in OpenCV, and is not described again.
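A one-function sketch of this closing operation with OpenCV; the kernel shape and size are illustrative assumptions:

```python
import cv2

def close_mask(mask, ksize: int = 15):
    """Morphological closing: fill small holes and close narrow gaps in a binary
    analytic image (kernel size is an illustrative choice)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```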
Performing the morphological closing operation on the two analytic images yields two analytic images without internal hole points, which helps improve the effect of the subsequent image splicing.
Because the brightness of the two input images is not necessarily identical, the brightness of the two corresponding analytic images is also inconsistent, which would leave splicing traces in the spliced image and produce a crease-like effect in a real scene. Therefore a filtering operation is also needed: after blur smoothing, the layered appearance caused by the brightness difference is reduced, which helps improve the splicing effect.
Step S302: performing Gaussian blur processing on the two analytic images after the closing operation processing to obtain the two analytic images after the Gaussian blur processing.
The filtering operation may use Gaussian blur processing. Through the Gaussian blur the analytic image is converted into a gray-scale image reflecting the brightness difference, and this gray-scale image is then used to determine the fusion parameters for splicing.
The Gaussian blur processing in this step may also be replaced by other filtering methods, such as mean filtering, median filtering or bilateral filtering, chosen according to the specific use scenario.
Step S303: determining the pixel values of the two analytic images after the Gaussian blur processing as the fusion parameters.
Using the pixel values of the two Gaussian-blurred analytic images as the fusion parameters further improves the visual effect of the splicing region.
Of course, the fusion parameters may also be obtained by further calculation based on the pixel values of the two analysis images after the gaussian blurring process.
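A sketch of steps S302 and S303 combined: the closed analytic image is blurred into a gray-scale weight map whose pixel values serve as the fusion parameters; kernel size and sigma are illustrative assumptions:

```python
import cv2

def fusion_weights(closed_mask, ksize: int = 31, sigma: float = 0):
    """Gaussian-blur a closed binary analytic image into a gray-scale weight map.

    The blurred pixel values (0..255) are used directly as fusion parameters;
    a larger kernel gives a wider, smoother transition band at the mask border.
    """
    return cv2.GaussianBlur(closed_mask, (ksize, ksize), sigma)
```

The two weight maps obtained this way are the m and n used in the fusion sketch given earlier for step S105.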
In some embodiments, the image stitching method further comprises: an analytical model is trained and formed using a pre-constructed analytical data set.
In the specific implementation process, in a face image splicing scene, two images to be spliced comprise face local images; then, the step of training and forming the analytical model by using the pre-configured analytical data set, as shown in fig. 4, may include:
step S401, obtaining face local analysis data to form a face local analysis data set.
A partial face image is a digital image that can be acquired by devices such as a camera or a scanner, and it can also be obtained by cutting a complete face image. A partial face image usually contains at least one of the facial features: eyes, nose, mouth, ears or eyebrows. If none of these is included, the partial face image needs to provide part of the face contour instead.
The two partial face images to be stitched may contain a common face part; for example, both may contain the person's pair of eyes, which makes it possible to merge them with the common eye part as a reference, giving higher stitching precision. If the two partial face images to be stitched contain no common face part, feature descriptions can be attached to them to facilitate the picture layout in the subsequent stitching.
The face local analysis data correspond to the different face parts in various face images, such as the eyes, nose, mouth, eyebrows and ears of the facial features. The face images are parsed to obtain data such as the position coordinates, contours and colors corresponding to the face parts in the images; these data represent the corresponding local face information, and the face parts are parsed through these data.
In the process of acquiring the face local analysis data, special regions, such as a mole or a birthmark, can be preferentially selected in the two partial face images; several points in such a special region can be chosen as feature points, and these feature points can be referred to for splicing in the subsequent image splicing process.
After the face local analysis data are obtained, corresponding face local analysis data sets can be formed according to the respective face parts; for example, the eye data set contains only eye analysis data, and the mouth data set contains only mouth analysis data.
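As an illustrative sketch (the file layout, image size and 0/255 mask convention are our assumptions), such a face local analysis data set can be wrapped as a PyTorch dataset of image/mask pairs:

```python
import cv2
import torch
from torch.utils.data import Dataset

class FacePartParsingDataset(Dataset):
    """Pairs of face-part images and binary parsing masks for one facial feature."""

    def __init__(self, image_paths, mask_paths, size: int = 256):
        self.image_paths, self.mask_paths, self.size = image_paths, mask_paths, size

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, i):
        img = cv2.cvtColor(cv2.imread(self.image_paths[i]), cv2.COLOR_BGR2RGB)
        msk = cv2.imread(self.mask_paths[i], cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (self.size, self.size)).astype("float32") / 255.0
        msk = (cv2.resize(msk, (self.size, self.size)) > 127).astype("float32")
        return torch.from_numpy(img).permute(2, 0, 1), torch.from_numpy(msk).unsqueeze(0)
```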
Step S402: training an initial network model with the face local analysis data set to obtain the analytic model.
After the face local analysis data set is obtained, the analysis data set is input into an initial network model for training; the model can be any of the various derived network models of convolutional neural networks, or another machine-learning network model.
Specifically, the initial network model is a Deeplab V3 Plus network. This model is a semantic segmentation network developed by Google; it adopts multi-scale atrous (dilated) convolutions, and the result is obtained by up-sampling, concatenation with feature maps from different convolution layers, and finally convolution and up-sampling.
Specifically, the face local analysis data set contains analysis data of the facial features. The data set containing the facial-feature analysis data is input into the Deeplab V3 Plus network for training to obtain an analytic model of the facial features. When the model is used, a face image containing the facial features is input, and the parsed image is output after the model operation. The facial features of the input image are identified in the parsed image, and the identified facial-feature image is finally used in splicing the face image.
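A minimal training-loop sketch, assuming the third-party segmentation_models_pytorch package is used to instantiate the DeepLab V3 Plus model; the encoder, loss function and hyper-parameters are illustrative assumptions rather than choices stated in the patent:

```python
import torch
import segmentation_models_pytorch as smp  # assumed third-party implementation of DeepLab V3 Plus
from torch.utils.data import DataLoader

def train_parsing_model(dataset, epochs: int = 20, lr: float = 1e-4, device: str = "cpu"):
    """Train a DeepLab V3 Plus parsing model on (face-part image, mask) pairs."""
    model = smp.DeepLabV3Plus(encoder_name="resnet50", encoder_weights="imagenet",
                              in_channels=3, classes=1).to(device)
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images.to(device)), masks.to(device))
            loss.backward()
            optimizer.step()
    return model
```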
Therefore, the face information in the face image is recognized by adopting the relevant model of the deep learning method, and convenience is provided for splicing the follow-up face.
As shown in fig. 5, an embodiment of the present invention provides an image stitching apparatus, including:
an image to be stitched obtaining module 501, configured to obtain two images to be stitched;
an image matching position determining module 502, configured to determine matching and splicing positions of two images to be spliced;
an analytic image obtaining module 503, configured to obtain two analytic images of the two images to be stitched;
a fusion parameter obtaining module 504, configured to obtain, according to the two analysis images, fusion parameters corresponding to the two images to be spliced, respectively;
and the image stitching module 505 is configured to perform stitching processing on the two images to be stitched according to the matching stitching position and the fusion parameters respectively corresponding to the two images to be stitched.
In some embodiments, the image matching location determining module includes:
the characteristic point acquisition module is used for detecting characteristic points of the two images to obtain a plurality of characteristic point pairs;
and the splicing position calculation module is used for determining the matching splicing position of the two images according to the obtained characteristic point pairs.
In some embodiments, the image matching location determining module further includes:
the outlier removing module is used for removing outliers from the characteristic point pairs by using an outlier detection method;
and the splicing position calculation module is used for determining the matching splicing position of the two images according to the characteristic point pairs with the outliers removed.
In some embodiments, the analytic image acquisition module includes:
and the model analysis output module is used for respectively inputting the two images into a pre-trained analysis model and outputting analysis images of the two images.
In some embodiments, the fusion parameter obtaining module further includes:
the morphological closing operation processing module is used for performing morphological closing operation processing on the two analytic images to obtain the two analytic images after the closing operation processing;
the filtering calculation module is used for carrying out Gaussian blur processing on the two analytic images after the closed operation processing to obtain the two analytic images after the Gaussian blur processing;
and the fusion calculation module is used for determining the pixel values of the two analytic images after the Gaussian blur processing as fusion parameter values.
In some embodiments, the image stitching module is configured to: and calculating the pixel value of the splicing matching position on the spliced image according to the fusion parameters respectively corresponding to the two images to be spliced.
In some embodiments, the image stitching device further comprises:
and the analysis model building module is used for training and forming an analysis model by utilizing a pre-formed analysis data set.
In some embodiments, the two images to be stitched comprise partial images of a human face; the analysis model building module in this case includes:
the human face local analysis data set acquisition module is used for acquiring human face local analysis data to form a human face local analysis data set;
and the face local analysis model training module is used for training the initial network model by using the face local analysis data set to obtain an analysis model.
In some embodiments, the initial network model is a deepab V3 Plus network.
The image stitching device provided by the embodiment of the invention has similar technical characteristics to the image stitching method provided by the embodiment, and detailed implementation modes are not repeated in the embodiment.
An embodiment of the present invention provides an image stitching apparatus, as shown in fig. 6. The apparatus includes a processor 601 and a memory 602; the memory 602 is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the steps of the image stitching method described above.
The image stitching apparatus shown in fig. 6 further includes a bus 603 and a communication interface 604, and the processor 601, the communication interface 604, and the memory 602 are connected by the bus 603.
The Memory 602 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The bus 603 may be an ISA bus, a PCI bus, or an EISA bus, etc. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
The communication interface 604 is used for connecting with at least one user terminal and other network units through a network interface, and for sending the encapsulated IPv4 messages to the user terminal through the network interface.
The processor 601 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 601. The Processor 601 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present disclosure may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 602, and the processor 601 reads the information in the memory 602, and completes the steps of the method of the foregoing embodiment in combination with the hardware thereof.
Embodiments of the present invention provide a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the steps of the method provided by the above embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. An image stitching method, characterized in that the method comprises:
acquiring two images to be stitched;
determining a matching stitching position of the two images to be stitched;
acquiring two parsed images of the two images to be stitched;
acquiring, according to the two parsed images, fusion parameters respectively corresponding to the two images to be stitched;
and stitching the two images to be stitched according to the matching stitching position and the fusion parameters respectively corresponding to the two images to be stitched.
2. The method of claim 1, wherein the determining the matching stitching position of the two images to be stitched comprises:
performing feature point detection on the two images to obtain a plurality of feature point pairs;
and determining the matching stitching position of the two images according to the obtained feature point pairs.
3. The method according to claim 2, wherein after the step of performing feature point detection on the two images to obtain a plurality of feature point pairs, the method further comprises:
removing outliers from the feature point pairs by using an outlier detection method;
the determining the matching stitching position of the two images according to the obtained feature point pairs comprises:
and determining the matching stitching position of the two images according to the feature point pairs with the outliers removed.
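(Editorial illustration, not part of the claims.) One way to realize the feature point detection and outlier removal of claims 2 and 3 is sketched below with OpenCV, assuming ORB features, brute-force matching, and RANSAC as the outlier detection method; the homography fitted to the surviving point pairs then fixes the matching stitching position. The function name and parameter values are illustrative assumptions, not details taken from the patent.

    import cv2
    import numpy as np

    def estimate_stitching_position(img_a, img_b):
        """Claims 2-3 sketch: detect feature point pairs, drop outliers with
        RANSAC, and estimate the homography aligning img_b onto img_a."""
        gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

        orb = cv2.ORB_create(nfeatures=2000)               # feature point detection
        kp_a, des_a = orb.detectAndCompute(gray_a, None)
        kp_b, des_b = orb.detectAndCompute(gray_b, None)

        # Brute-force matching produces the candidate feature point pairs.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)

        src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC plays the role of the outlier detection method: inlier_mask
        # marks the kept point pairs, and H maps img_b into img_a's frame,
        # which determines the matching stitching position.
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H, inlier_mask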
4. The method of claim 1, wherein the acquiring two parsed images of the two images to be stitched comprises:
respectively inputting the two images into a pre-trained parsing model, and outputting the parsed images of the two images.
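(Editorial illustration, not part of the claims.) Claim 4 amounts to a forward pass through a pre-trained semantic segmentation network; a minimal PyTorch sketch follows. Note that torchvision ships DeepLabV3 rather than the DeepLabV3+ variant named in claim 9, and the checkpoint path and class count are placeholders, so this is an approximation only.

    import torch
    import torchvision.transforms.functional as TF
    from torchvision.models.segmentation import deeplabv3_resnet50

    NUM_CLASSES = 11                              # hypothetical number of face-part labels
    model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
    model.load_state_dict(torch.load("face_parsing.pth", map_location="cpu"))
    model.eval()

    def parse_image(img_rgb):
        """Return a per-pixel class map (the parsed image) for one input."""
        x = TF.to_tensor(img_rgb).unsqueeze(0)            # H x W x C uint8 -> 1 x C x H x W float
        with torch.no_grad():
            logits = model(x)["out"]                      # 1 x NUM_CLASSES x H x W
        return logits.argmax(dim=1).squeeze(0).numpy()    # H x W label map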
5. The method according to claim 1, wherein the acquiring, according to the two parsed images, the fusion parameters respectively corresponding to the two images to be stitched comprises:
performing a morphological closing operation on the two parsed images to obtain two parsed images after the closing operation;
performing Gaussian blur processing on the two parsed images after the closing operation to obtain two Gaussian-blurred parsed images;
and determining the pixel values of the two Gaussian-blurred parsed images as the fusion parameters.
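(Editorial illustration, not part of the claims.) A minimal sketch of claim 5 with OpenCV, assuming the parsed image has first been binarized into an 8-bit mask of the region to keep (for example, the face region); the kernel and blur sizes are illustrative guesses rather than values from the patent.

    import cv2
    import numpy as np

    def fusion_weights(parsed_mask):
        """Claim 5 sketch: morphological closing, then Gaussian blur; the
        blurred pixel values, scaled to [0, 1], act as fusion parameters."""
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        closed = cv2.morphologyEx(parsed_mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
        blurred = cv2.GaussianBlur(closed, (31, 31), 0)                   # soften the seam region
        return blurred.astype(np.float32) / 255.0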
6. The method according to any one of claims 1 to 5, wherein the step of stitching the two images to be stitched according to the matching stitching position and the fusion parameters respectively corresponding to the two images to be stitched comprises:
and calculating the pixel values at the matching stitching position in the stitched image according to the fusion parameters respectively corresponding to the two images to be stitched.
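(Editorial illustration, not part of the claims.) Once the two images have been warped into a common frame at the matching stitching position, claim 6 reduces to a per-pixel weighted average in the overlap; w_a and w_b below stand for the fusion parameters from the claim 5 sketch, and all names are assumptions.

    import numpy as np

    def blend_overlap(aligned_a, aligned_b, w_a, w_b, eps=1e-6):
        """Claim 6 sketch: pixel values at the matching stitching position are
        a weighted combination of the two aligned images."""
        w_a = w_a[..., None]                      # H x W -> H x W x 1 for broadcasting
        w_b = w_b[..., None]
        total = np.maximum(w_a + w_b, eps)        # avoid division by zero outside both masks
        blended = (aligned_a * w_a + aligned_b * w_b) / total
        # Where one weight is near zero, the other image passes through unchanged.
        return np.clip(blended, 0, 255).astype(np.uint8)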
7. The method of claim 4, further comprising:
training the parsing model by using a pre-constructed parsing data set.
8. The method according to claim 7, wherein the two images to be stitched comprise partial images of human faces;
the step of training the parsing model by using a pre-constructed parsing data set comprises:
obtaining face local parsing data to form a face local parsing data set;
and training an initial network model by using the face local parsing data set to obtain the parsing model.
9. The method of claim 8, wherein the initial network model is a DeepLabV3+ (DeepLab V3 Plus) network.
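(Editorial illustration, not part of the claims.) Claims 7 to 9 describe training the parsing model on a pre-constructed face local parsing data set; the sketch below uses torchvision's DeepLabV3 (not the DeepLabV3+ network named in claim 9), and the dataset object, class count, and hyperparameters are assumptions made for the example.

    import torch
    from torch.utils.data import DataLoader
    from torchvision.models.segmentation import deeplabv3_resnet50

    def train_parsing_model(dataset, num_classes=11, epochs=20, lr=1e-4):
        """Claims 7-9 sketch: fit an initial segmentation network on a face
        local parsing data set; `dataset` yields (image_tensor, label_map)."""
        device = "cuda" if torch.cuda.is_available() else "cpu"
        model = deeplabv3_resnet50(weights=None, num_classes=num_classes).to(device)
        loader = DataLoader(dataset, batch_size=8, shuffle=True)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        criterion = torch.nn.CrossEntropyLoss()

        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device).long()
                logits = model(images)["out"]      # B x num_classes x H x W
                loss = criterion(logits, labels)   # labels: B x H x W class indices
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return model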
10. An image stitching device, characterized in that the device comprises:
a to-be-stitched image acquisition module, configured to acquire two images to be stitched;
a matching position determining module, configured to determine a matching stitching position of the two images to be stitched;
a parsed image acquisition module, configured to acquire two parsed images of the two images to be stitched;
a fusion parameter obtaining module, configured to obtain, according to the two parsed images, fusion parameters respectively corresponding to the two images to be stitched;
and an image stitching module, configured to stitch the two images to be stitched according to the matching stitching position and the fusion parameters respectively corresponding to the two images to be stitched.
11. Image stitching equipment, characterized in that the equipment comprises a memory and a processor, wherein a computer program executable on the processor is stored in the memory, and the processor implements the steps of the method according to any one of claims 1 to 9 when executing the computer program.
12. A computer-readable medium having non-volatile program code executable by a processor, characterized in that the program code causes the processor to perform the steps of the method of any of claims 1 to 9.
CN201911335071.6A 2019-12-20 2019-12-20 Image splicing method, device and equipment Pending CN113012030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911335071.6A CN113012030A (en) 2019-12-20 2019-12-20 Image splicing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911335071.6A CN113012030A (en) 2019-12-20 2019-12-20 Image splicing method, device and equipment

Publications (1)

Publication Number Publication Date
CN113012030A true CN113012030A (en) 2021-06-22

Family

ID=76382960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911335071.6A Pending CN113012030A (en) 2019-12-20 2019-12-20 Image splicing method, device and equipment

Country Status (1)

Country Link
CN (1) CN113012030A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111526302A (en) * 2020-04-28 2020-08-11 飞友科技有限公司 Stackable panoramic video real-time splicing method
CN113496467A (en) * 2021-06-29 2021-10-12 武汉理工大学 Tibetan image splicing method and system


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016086754A1 (en) * 2014-12-03 2016-06-09 中国矿业大学 Large-scale scene video image stitching method
CN107067003A (en) * 2017-03-09 2017-08-18 百度在线网络技术(北京)有限公司 Extracting method, device, equipment and the computer-readable storage medium of region of interest border
US20190251663A1 (en) * 2017-03-22 2019-08-15 Tencent Technology (Shenzhen) Company Limited Image splicing method, apparatus, terminal, and storage medium
CN107093166A (en) * 2017-04-01 2017-08-25 华东师范大学 The seamless joint method of low coincidence factor micro-image
CN107424120A (en) * 2017-04-12 2017-12-01 湖南源信光电科技股份有限公司 A kind of image split-joint method in panoramic looking-around system
CN107301620A (en) * 2017-06-02 2017-10-27 西安电子科技大学 Method for panoramic imaging based on camera array
CN107682645A (en) * 2017-09-11 2018-02-09 广东欧珀移动通信有限公司 Image processing method and device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CUI, LI: "MATLAB Wavelet Analysis and Applications: 30 Case Studies", 30 June 2016, Beijing: Beihang University Press, pages: 144-146 *
ZHANG, DONG ET AL.: "Image stitching method based on feature points", Computer Systems & Applications, vol. 25, no. 3 *
LI, HANG: "Research on Blind Detection Technology for Forged Digital Images", 31 January 2016, Changchun: Jilin University Press, pages: 21-22 *
YANG, FAN ET AL.: "Mastering Classic Image Processing Algorithms (MATLAB Edition, 2nd Edition)", 28 February 2018, Beijing: Beihang University Press, pages: 104-105 *
XIE, BAORONG: "Multimedia Production and Application Tutorial", 31 December 2002, Beijing Hope Electronic Press, pages: 104 *
HAN, XIAOWEI ET AL.: "Digital Image Fusion Technology", 31 December 2010, Shenyang: Northeastern University Press, pages: 61-63 *


Similar Documents

Publication Publication Date Title
Matern et al. Exploiting visual artifacts to expose deepfakes and face manipulations
Chen et al. Fsrnet: End-to-end learning face super-resolution with facial priors
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN109325954B (en) Image segmentation method and device and electronic equipment
CN110675487B (en) Three-dimensional face modeling and recognition method and device based on multi-angle two-dimensional face
JP7476428B2 (en) Image line of sight correction method, device, electronic device, computer-readable storage medium, and computer program
CN108463823B (en) Reconstruction method and device of user hair model and terminal
CN112801057B (en) Image processing method, image processing device, computer equipment and storage medium
KR20180065889A (en) Method and apparatus for detecting target
CN109472193A (en) Method for detecting human face and device
KR20070016849A (en) Method and apparatus for serving prefer color conversion of skin color applying face detection and skin area detection
CN110781770B (en) Living body detection method, device and equipment based on face recognition
CN112836625A (en) Face living body detection method and device and electronic equipment
CN113705290A (en) Image processing method, image processing device, computer equipment and storage medium
CN109063598A (en) Face pore detection method, device, computer equipment and storage medium
CN112800978A (en) Attribute recognition method, and training method and device for part attribute extraction network
CN112633221A (en) Face direction detection method and related device
CN111325107A (en) Detection model training method and device, electronic equipment and readable storage medium
CN112836653A (en) Face privacy method, device and apparatus and computer storage medium
CN111353325A (en) Key point detection model training method and device
CN113012030A (en) Image splicing method, device and equipment
CN113191189A (en) Face living body detection method, terminal device and computer readable storage medium
CN116188720A (en) Digital person generation method, device, electronic equipment and storage medium
CN111753722B (en) Fingerprint identification method and device based on feature point type
CN112949571A (en) Method for identifying age, and training method and device of age identification model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination