WO2020097834A1 - Feature processing method and device, storage medium, and program product - Google Patents

Feature processing method and device, storage medium, and program product

Info

Publication number
WO2020097834A1
WO2020097834A1 (PCT/CN2018/115473)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
feature vector
target
vector
image
Prior art date
Application number
PCT/CN2018/115473
Other languages
English (en)
French (fr)
Inventor
马熠东
Original Assignee
北京比特大陆科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京比特大陆科技有限公司 filed Critical 北京比特大陆科技有限公司
Priority to CN201880098361.0A priority Critical patent/CN112868019A/zh
Priority to PCT/CN2018/115473 priority patent/WO2020097834A1/zh
Publication of WO2020097834A1 publication Critical patent/WO2020097834A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology

Definitions

  • This application relates to the field of data processing, for example, to a feature processing method and device, storage medium, and program product.
  • Facial features have a significant impact on the accuracy of face recognition technology.
  • However, existing face feature extraction methods cause the dimension of the face feature vector to grow linearly with the number of neural network models, which greatly increases the computational cost of face recognition and adversely affects both recognition speed and recognition accuracy.
  • Embodiments of the present disclosure provide a feature processing method and device, a storage medium, and a program product to reduce the dimension of features and improve data processing speed and accuracy.
  • An embodiment of the present disclosure provides a feature processing method, including:
  • extracting features from a target image using multiple feature extraction models to obtain multiple feature vectors of the target image;
  • splicing the multiple feature vectors to obtain a spliced feature vector of the target image;
  • performing dimensionality reduction on the spliced feature vector to obtain a target feature vector.
  • An embodiment of the present disclosure also provides a feature processing device, including:
  • an extraction module, configured to extract features from a target image using multiple feature extraction models to obtain multiple feature vectors of the target image;
  • a splicing module, configured to splice the multiple feature vectors to obtain a spliced feature vector of the target image;
  • a dimensionality reduction module, configured to perform dimensionality reduction on the spliced feature vector to obtain a target feature vector.
  • An embodiment of the present disclosure also provides a computer including the above-mentioned feature processing device.
  • An embodiment of the present disclosure also provides a computer-readable storage medium that stores computer-executable instructions that are configured to perform the above-described feature processing method.
  • An embodiment of the present disclosure also provides a computer program product.
  • The computer program product includes a computer program stored on a computer-readable storage medium.
  • The computer program includes program instructions which, when executed by a computer, cause the computer to execute the feature processing method described above.
  • An embodiment of the present disclosure also provides an electronic device, including:
  • At least one processor and,
  • a memory communicatively connected to the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and when the instructions are executed by the at least one processor, the at least one processor executes the above-mentioned feature processing method.
  • In the technical solution provided by the embodiments of the present disclosure, features are extracted separately by multiple feature extraction models, the resulting feature vectors are spliced and fused together, and dimensionality reduction is performed on the spliced feature vector to obtain the target feature vector. Reducing the feature dimension suppresses the noise introduced by high-dimensional operations and improves recognition accuracy; it also shortens the feature vector, which reduces the feature comparison time in face recognition and improves processing efficiency.
  • FIG. 1 is a schematic flowchart of a feature processing method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another feature processing method according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic flowchart of another feature processing method provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of another feature processing method provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a feature processing device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a computer according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • To address the foregoing problem in the related art, the embodiments of the present disclosure adopt the following approach: extract features from a target image using multiple feature extraction models, splice the extracted feature vectors, and then perform dimensionality reduction to obtain a target feature vector. This reduces the dimension and shortens the feature vector, improving operational efficiency and accuracy.
  • An embodiment of the present disclosure provides a feature processing method. Referring to FIG. 1, the method includes:
  • S102: Extract features from the target image using multiple feature extraction models to obtain multiple feature vectors of the target image.
  • S104: Splice the multiple feature vectors to obtain a spliced feature vector of the target image.
  • S106: Perform dimensionality reduction on the spliced feature vector to obtain the target feature vector.
  • It should first be noted that the embodiments of the present disclosure place no limit on the number of target images. Specifically, when there is one target image, processing may follow the flow shown in FIG. 1. When there are multiple target images, the foregoing flow may be performed separately for each target image; alternatively, steps S102 and S104 may be performed separately for each target image, and the spliced feature vectors of all target images may then be treated as one spliced feature matrix on which step S106 is performed once, yielding a target feature matrix, that is, the target feature vector of each target image.
  • In addition, there is no particular limitation on how many target images a given picture contains. For example, if the target image is a face image, one photograph may contain a single face (one target image) or multiple face images (multiple target images).
  • The embodiments of the present disclosure likewise place no particular limitation on the application scenarios of the foregoing feature processing method. It may be applied to the face recognition scenario described in the background art, or to other feature processing scenarios, for example, feature processing of user usage habits.
  • Specifically, for the feature extraction step in S102, one implementation uses deep learning: the feature extraction model may be a deep neural network (DNN) model. The input of the DNN model is an image, and the output is a feature vector.
  • the feature vector is composed of feature values, and each feature value represents a feature in one dimension.
  • For example, the feature vector output by a DNN model may be (a, b, c), meaning the vector carries facial features in three dimensions; the definition of each dimension can be built into the DNN model.
  • Suppose a represents gender: male is encoded as feature value 0 and female as 1. Suppose b represents age: its feature value may be a specific age or an identifier for an age group. Suppose c represents face shape: feature values can be defined for different shapes, for example 1 for a round face, 2 for a square face, and 3 for an oval face. These examples merely illustrate the feature vectors involved in the embodiments of the present disclosure and are not intended to limit the application; in practice, the feature vectors output by different DNN models may differ.
  • Before S102 can be performed in this way, the DNN model must be trained.
  • For the training phase, a large number of images and their corresponding features are prepared in advance; this prepared information serves as a database.
  • The DNN model iterates and learns automatically on the database until a learning termination condition is met. At that point, training is complete and the DNN model can serve as the feature extraction model described in S102.
  • The type of DNN model used in the embodiments of the present disclosure may be chosen as needed.
  • In one possible design, the DNN model type may include, but is not limited to, a residual network (ResNet).
  • ResNet is a deep convolutional network that is well suited to optimization; its learning accuracy and overall network performance can be improved by increasing depth.
  • The multiple feature extraction models used in the embodiments of the present disclosure may all be DNN models, or only some of them may be DNN models; feature extraction may also be implemented with no DNN models at all.
  • Precisely because the models differ in algorithm, training method, or training samples, each extracts a different feature vector, capturing as many features of the target image as possible and benefiting the accuracy of subsequent applications. However, multiple models inevitably make the extracted feature dimension grow linearly with the number of models, increasing the computational cost of subsequent applications and introducing noise from high-dimensional operations. The embodiments of the present disclosure therefore fuse and splice the feature vectors obtained in the foregoing step.
  • S1042 Perform normalization processing on each feature vector separately to obtain multiple normalized feature vectors.
  • S1044 Splice multiple normalized feature vectors to obtain a spliced feature vector.
  • Given the differences among the feature vectors produced by the various extraction models, normalization is performed before splicing to minimize their effect. An embodiment of the present disclosure provides the following normalization method: obtain the normalization coefficient of the feature values in each feature vector, then take the ratio of each feature value to the normalization coefficient to obtain the normalized feature vector.
  • For example, let feature vector A = (x1, x2, ..., xn), where n is an integer greater than 1 and characterizes the dimension of A. If the normalization coefficient is K, normalizing A yields B = (x1/K, x2/K, ..., xn/K) = (v1, v2, ..., vn).
  • The normalization coefficient is generally related to the feature values in the vector. In one possible design, K is obtained as the square root of the sum of the squares of all feature values in A, that is, K = √(x1² + x2² + ... + xn²).
  • After normalization, each target image corresponds to multiple normalized feature vectors; simply splicing them together yields one spliced feature vector, so each target image corresponds to a unique spliced feature vector of large dimension.
  • In practical applications there may be multiple target images, in which case their spliced feature vectors can be stored as one matrix.
  • For example, in a face recognition scenario with m face images, each with a t-dimensional spliced feature vector, the m spliced feature vectors form an m × t spliced feature matrix; the spliced feature vector of a single target image can likewise be treated as a 1 × t spliced feature matrix.
  • Because the spliced feature vector has a high dimension, the embodiments of the present disclosure perform dimensionality reduction on it to suppress the noise of high-dimensional operations and remove redundant data. Specifically, principal component analysis (PCA) may be used to process the spliced feature vector and obtain the target feature vector.
  • PCA, also known as principal component analysis, is a statistical method that applies the idea of dimensionality reduction to turn many indicators into fewer composite indicators. The algorithm converts a set of possibly correlated variables into a set of linearly uncorrelated variables through an orthogonal transformation; the converted variables are called principal components.
  • The embodiments of the present disclosure also provide an implementation of dimensionality reduction by PCA to obtain the target feature vector. Referring to FIG. 3, the method includes the following steps:
  • S1062: Obtain the average of the feature values in the spliced feature vector.
  • S1064: Obtain the difference between each feature value and the average to obtain a de-meaned vector.
  • S1066: Obtain the covariance matrix of the de-meaned vector.
  • S1068: Solve the covariance matrix to obtain its covariance eigenvalues and covariance eigenvectors.
  • S10610: Select the eigenvectors with the largest covariance eigenvalues, in descending order of eigenvalue.
  • S10612: Construct a new feature space from the selected eigenvectors to obtain the target feature vector.
  • Besides reducing the spliced feature vector of a single target image, dimensionality reduction can be performed simultaneously on the spliced feature matrix of multiple target images. Compared with reducing images one by one, this is more efficient and determines the principal components across the multiple target images, and the reduced spliced feature matrix better removes redundant features.
  • When multiple target images are reduced together, the result is a target feature matrix, and the target feature vector of each target image can be located by that image's identifier (1 to m).
  • Performing the dimensionality reduction by PCA effectively shortens the spliced feature vector (the dimension is reduced), avoids high-dimensional operation noise, and helps improve the accuracy of subsequent operations.
  • Because the spliced feature vector becomes shorter, the data volume of subsequent applications is greatly reduced, improving operational efficiency.
  • PCA also supports online learning, which can alleviate dataset adaptation problems to some extent.
  • In the related art, PCA can also be used as a feature extractor: the feature vector formed by all pixels of the target image is reduced to a vector with fewer entries than the original pixels, thereby performing feature extraction. Its essence is learning, on the target image dataset, how to construct new feature dimensions and how to select fewer features to represent a target image. That differs from the feature processing method provided here: the embodiments of the present disclosure use PCA to reduce the dimension of already extracted high-dimensional spliced feature vectors, which is fundamentally different, in both idea and steps, from using PCA for feature extraction.
  • the dimensionality reduction processing on the spliced feature vectors may also be implemented through other means, which is not particularly limited in the embodiments of the present disclosure.
  • the target feature vector of the target image can be obtained, and embodiments of the present disclosure further provide specific application scenarios of the foregoing target feature vector.
  • In one possible design, because the target feature vector fully characterizes a target image, the target feature vector may be stored in a feature database. That is, the database is built from target feature vectors rather than directly from the target images, so subsequent recognition or other data processing need not repeat the acquisition of the target features, saving processing time.
  • Moreover, compared with storing the target images directly, building the feature database from target feature vectors saves storage resources and improves storage and retrieval efficiency.
  • an embodiment of the present disclosure also provides a method for matching and identifying using the aforementioned target feature vector.
  • The method then further includes: acquiring an image to be recognized, and matching it against the feature database constructed above to obtain the target image corresponding to the image to be recognized.
  • S1082: Acquire the image to be recognized.
  • S1084: Obtain the feature vector of the image to be recognized.
  • S1086: Obtain the distance between that feature vector and each target feature vector in the feature database.
  • S1088: For any target feature vector, if the distance is less than a preset distance threshold, determine that the target image corresponding to that target feature vector is the target image corresponding to the image to be recognized.
  • Conversely, for any target feature vector, if the distance is greater than or equal to the preset distance threshold, the target image corresponding to that target feature vector is not the target image corresponding to the image to be recognized.
  • The feature vector of the image to be recognized in S1084 may be obtained by the foregoing method, which is highly accurate, or by other means, for example feature extraction with a single DNN model.
  • the embodiments of the present disclosure have no particular limitation on this.
  • Before the flow shown in FIG. 4 is executed, a feature database must be constructed.
  • The feature database can be constructed with the foregoing feature processing method of the embodiments of the present disclosure: obtain the target feature vectors of multiple target images and store them to build the feature database.
  • When the method is applied to face recognition or face matching, the target images and the image to be recognized are face images. In other scenarios they may be other image types; in the method of FIG. 4, however, the target image and the image to be recognized are of the same type, or contain the same type of object.
  • an embodiment of the present disclosure further provides a feature processing device.
  • the feature processing device 500 includes:
  • the extraction module 51, configured to extract features from the target image using multiple feature extraction models to obtain multiple feature vectors of the target image;
  • the splicing module 52, configured to splice the multiple feature vectors to obtain the spliced feature vector of the target image;
  • the dimensionality reduction module 53, configured to perform dimensionality reduction on the spliced feature vector to obtain the target feature vector.
  • In one possible design, the splicing module 52 includes:
  • a normalization sub-module, configured to normalize each feature vector separately to obtain multiple normalized feature vectors;
  • a splicing sub-module, configured to splice the multiple normalized feature vectors to obtain the spliced feature vector.
  • The normalization sub-module may be specifically configured to:
  • obtain the normalization coefficient of the feature values in each feature vector; and
  • obtain the ratio of each feature value to the normalization coefficient to obtain the normalized feature vector.
  • In another possible design, the dimensionality reduction module 53 is specifically configured to:
  • process the spliced feature vector with principal component analysis (PCA) to obtain the target feature vector.
  • More specifically, the dimensionality reduction module 53 is configured to: obtain the average of the feature values in the spliced feature vector; obtain the difference between each feature value and the average to obtain a de-meaned vector; obtain the covariance matrix of the de-meaned vector; solve the covariance matrix to obtain its eigenvalues and eigenvectors; select the eigenvectors with the largest eigenvalues in descending order; and construct a new feature space from the selected eigenvectors to obtain the target feature vector.
  • In another possible design, the feature extraction model is a deep neural network (DNN) model.
  • the feature processing device 500 may further include:
  • the storage module (not shown in FIG. 5) is used to store the target feature vector to the feature database.
  • the feature processing device 500 may further include:
  • An acquisition module (not shown in FIG. 5), used to acquire the image to be recognized;
  • the recognition module (not shown in FIG. 5) is used to match the image to be recognized in the feature database to obtain the target image corresponding to the image to be recognized.
  • The recognition module is specifically configured to: obtain the feature vector of the image to be recognized; obtain the distance between that vector and each target feature vector in the feature database; and, for any target feature vector whose distance is less than a preset distance threshold, determine that the target image corresponding to that target feature vector is the target image corresponding to the image to be recognized.
  • When the device is applied to a face recognition scenario, the target image and the image to be recognized are both face images.
  • An embodiment of the present disclosure also provides a computer. Please refer to FIG. 6, the computer 600 includes the above-mentioned feature processing device 500.
  • An embodiment of the present disclosure also provides a computer-readable storage medium that stores computer-executable instructions that are configured to perform the above-described feature processing method.
  • An embodiment of the present disclosure also provides a computer program product.
  • the computer program product includes a computer program stored on a computer-readable storage medium.
  • The computer program includes program instructions which, when executed by a computer, cause the computer to execute the above feature processing method.
  • the aforementioned computer-readable storage medium may be a transient computer-readable storage medium or a non-transitory computer-readable storage medium.
  • An embodiment of the present disclosure also provides an electronic device, whose structure is shown in FIG. 7, and the electronic device 700 includes:
  • at least one processor 710 (one processor 710 is shown in FIG. 7 as an example) and a memory 720; the device may further include a communication interface 730 and a bus.
  • The processor 710, the communication interface 730, and the memory 720 communicate with one another through the bus.
  • the communication interface 730 may be used for information transmission.
  • the processor 710 may call logical instructions in the memory 720 to execute the feature processing method of the above-mentioned embodiment.
  • The logic instructions in the memory 720 may be implemented as software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the memory 720 is a computer-readable storage medium that can be used to store software programs and computer-executable programs, such as program instructions / modules corresponding to the methods in the embodiments of the present disclosure.
  • the processor 710 executes software applications, instructions, and modules stored in the memory 720 to execute functional applications and data processing, that is, to implement the feature processing method in the foregoing method embodiments.
  • the memory 720 may include a storage program area and a storage data area, where the storage program area may store an operating system and application programs required by at least one function; the storage data area may store data created according to the use of a terminal device, and the like.
  • the memory 720 may include a high-speed random access memory, and may also include a non-volatile memory.
  • The technical solutions of the embodiments of the present disclosure may be embodied as a software product stored in a storage medium, including one or more instructions that cause a computer device (a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure.
  • The storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code; it may also be a transitory storage medium.
  • Although the terms first, second, and so on may be used in this application to describe various elements, the elements are not limited by these terms; the terms serve only to distinguish one element from another.
  • A first element could be called a second element and, likewise, a second element could be called a first element, provided all occurrences of the "first element" and all occurrences of the "second element" are renamed consistently.
  • The first element and the second element are both elements, but they may not be the same element.
  • The various aspects, implementations, or features of the described embodiments can be used alone or in any combination.
  • Various aspects in the described embodiments may be implemented by software, hardware, or a combination of software and hardware.
  • the described embodiments may also be embodied by a computer-readable medium that stores computer-readable code including instructions executable by at least one computing device.
  • the computer-readable medium can be associated with any data storage device capable of storing data, which can be read by a computer system.
  • Examples of computer-readable media include read-only memory, random access memory, CD-ROMs, HDDs, DVDs, magnetic tape, and optical data storage devices.
  • the computer-readable medium may also be distributed in computer systems connected through a network, so that computer-readable codes can be stored and executed in a distributed manner.


Abstract

A feature processing method and device, a storage medium, and a program product. The method includes: extracting features from a target image using multiple feature extraction models to obtain multiple feature vectors of the target image (S102); splicing the multiple feature vectors to obtain a spliced feature vector of the target image (S104); and performing dimensionality reduction on the spliced feature vector to obtain a target feature vector (S106). The method can reduce the feature dimension and improve data processing speed and accuracy.

Description

Feature processing method and device, storage medium, and program product
Technical Field
This application relates to the field of data processing, for example, to a feature processing method and device, storage medium, and program product.
Background
With the development of face recognition technology, face verification and face retrieval are increasingly used in people's daily lives. Facial features have a significant impact on the accuracy of face recognition technology.
In the related art, deep learning is generally used to extract facial features, and, considering that different neural network models extract different facial features, multiple neural network models are often used for facial feature extraction, yielding multiple facial feature vectors.
However, existing facial feature extraction methods cause the dimension of the facial feature vector to grow linearly with the number of neural network models, greatly increasing the computational cost of face recognition and adversely affecting both recognition speed and recognition accuracy.
Summary
Embodiments of the present disclosure provide a feature processing method and device, a storage medium, and a program product, to reduce the dimension of features and improve data processing speed and accuracy.
An embodiment of the present disclosure provides a feature processing method, including:
extracting features from a target image using multiple feature extraction models to obtain multiple feature vectors of the target image;
splicing the multiple feature vectors to obtain a spliced feature vector of the target image; and
performing dimensionality reduction on the spliced feature vector to obtain a target feature vector.
An embodiment of the present disclosure also provides a feature processing device, including:
an extraction module, configured to extract features from a target image using multiple feature extraction models to obtain multiple feature vectors of the target image;
a splicing module, configured to splice the multiple feature vectors to obtain a spliced feature vector of the target image; and
a dimensionality reduction module, configured to perform dimensionality reduction on the spliced feature vector to obtain a target feature vector.
An embodiment of the present disclosure also provides a computer including the above feature processing device.
An embodiment of the present disclosure also provides a computer-readable storage medium storing computer-executable instructions configured to perform the above feature processing method.
An embodiment of the present disclosure also provides a computer program product, which includes a computer program stored on a computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, cause the computer to execute the above feature processing method.
An embodiment of the present disclosure also provides an electronic device, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to execute the above feature processing method.
In the technical solution provided by the embodiments of the present disclosure, features are extracted separately by multiple feature extraction models, the resulting vectors are spliced and fused together, and dimensionality reduction is performed on the spliced feature vector to obtain the target feature vector. Reducing the feature dimension suppresses the noise caused by high-dimensional operations and improves recognition accuracy; it also shortens the feature vector, which helps reduce the feature comparison time in face recognition and improves the processing efficiency of face recognition.
Brief Description of the Drawings
One or more embodiments are illustrated by way of example in the corresponding drawings; these illustrations and drawings do not limit the embodiments. Elements bearing the same reference numerals in the drawings are similar elements, and the drawings are not drawn to scale, in which:
FIG. 1 is a schematic flowchart of a feature processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of another feature processing method according to an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of another feature processing method according to an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of another feature processing method according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a feature processing device according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a computer according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To provide a fuller understanding of the features and technical content of the embodiments of the present disclosure, their implementation is described in detail below with reference to the accompanying drawings, which are for reference only and are not intended to limit the embodiments. In the following technical description, numerous details are given for ease of explanation to provide a thorough understanding of the disclosed embodiments; however, one or more embodiments may be practiced without these details. In other cases, well-known structures and devices are shown in simplified form to simplify the drawings.
To address the foregoing problem in the related art, the embodiments of the present disclosure adopt the following approach: extract features from a target image using multiple feature extraction models, splice the extracted feature vectors, and then perform dimensionality reduction to obtain a target feature vector, thereby reducing the dimension, shortening the feature vector, and improving operational efficiency and accuracy.
An embodiment of the present disclosure provides a feature processing method. Referring to FIG. 1, the method includes:
S102: Extract features from the target image using multiple feature extraction models to obtain multiple feature vectors of the target image.
S104: Splice the multiple feature vectors to obtain a spliced feature vector of the target image.
S106: Perform dimensionality reduction on the spliced feature vector to obtain the target feature vector.
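The flow of S102 and S104 can be sketched as follows. The "models" here are hypothetical random projections standing in for trained DNN feature extractors; all names and dimensions are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

# Hypothetical stand-ins for trained DNN feature extractors: each "model"
# is a fixed random projection mapping a flattened 8x8 image to a vector.
# Real embodiments would use trained networks (e.g. ResNet variants).
rng = np.random.default_rng(0)
models = [rng.standard_normal((64, d)) for d in (128, 256, 512)]

def process(image):
    # S102: one feature vector per feature extraction model.
    vectors = [image.ravel() @ m for m in models]
    # S104: splice (concatenate) the vectors into one long vector.
    return np.concatenate(vectors)

image = rng.standard_normal((8, 8))
spliced = process(image)
# The spliced dimension is the sum of the per-model dimensions:
# 128 + 256 + 512 = 896 -- the linear growth that S106 then reduces.
```

Each model may output a different dimension; the spliced vector's length is the sum of the per-model dimensions, which is exactly the linear growth that the dimensionality reduction of S106 counteracts.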
First, it should be noted that the embodiments of the present disclosure place no limit on the number of target images. Specifically, when there is one target image, processing may follow the flow shown in FIG. 1. When there are multiple target images, the foregoing flow may be performed separately for each target image; alternatively, steps S102 and S104 may be performed separately for each target image, and the spliced feature vectors of all target images may then be treated as one spliced feature matrix on which step S106 is performed once, yielding a target feature matrix, that is, the target feature vector of each target image.
In addition, there is no particular limitation on how many target images a given picture contains. For example, if the target image is a face image, one photograph may contain a single face (one target image) or multiple face images (multiple target images).
The embodiments of the present disclosure likewise place no particular limitation on the application scenarios of the foregoing feature processing method. It may be applied to the face recognition scenario described in the background art, or to other feature processing scenarios, for example, feature processing of user usage habits.
Specifically, for the feature extraction step in S102, the embodiments of the present disclosure provide one implementation: use deep learning. In this case, the feature extraction model may be a deep neural network (DNN) model, whose input is an image and whose output is a feature vector. The feature vector consists of feature values, each representing a feature in one dimension.
Taking the foregoing facial feature extraction as an example: the feature vector output by a DNN model may be (a, b, c), meaning the vector carries facial features in three dimensions, the definition of each dimension being built into the DNN model. Suppose a represents gender: male is encoded as feature value 0 and female as 1. Suppose b represents age: its feature value may be a specific age or an identifier for an age group. Suppose c represents face shape: feature values can be defined for different shapes, for example 1 for a round face, 2 for a square face, and 3 for an oval face. These examples merely illustrate the feature vectors involved in the embodiments of the present disclosure and are not intended to limit the application; in practice, the feature vectors output by different DNN models may differ.
Before S102 can be performed in this way, the DNN model must also be trained. For the training phase, a large number of images and their corresponding features are prepared in advance as a database, on which the DNN model iterates and learns automatically until a learning termination condition is met. At that point, training is complete and the DNN model can serve as the feature extraction model described in S102.
The type of DNN model used in the embodiments of the present disclosure may be chosen as needed. In one possible design, the DNN model type may include, but is not limited to, a residual network (ResNet). ResNet is a deep convolutional network that is well suited to optimization; its learning accuracy and overall network performance can be improved by increasing depth.
It should be noted that the multiple feature extraction models used in the embodiments of the present disclosure may all be DNN models, or only some of them may be DNN models; feature extraction may also be implemented with no DNN models at all.
Precisely because the feature extraction models differ in algorithm, training model, or training samples, each extracts a different feature vector; this captures as many features of the target image as possible and benefits the accuracy of subsequent applications based on these features.
However, using multiple feature extraction models inevitably makes the feature dimension extracted from the same target image grow linearly with the number of models. The increased dimension both raises the computational cost of subsequent applications and, through the noise introduced by high-dimensional operations, adversely affects their results.
Therefore, after feature extraction with multiple models, the embodiments of the present disclosure fuse and splice the feature vectors obtained in the foregoing step.
Referring to the flow shown in FIG. 2, the splicing step of S104 may be implemented as follows.
S1042: Normalize each feature vector separately to obtain multiple normalized feature vectors.
S1044: Splice the multiple normalized feature vectors to obtain a spliced feature vector.
Given the differences among the feature vectors extracted by the various models, normalization is performed before the splicing step to minimize the effect of those differences on the splice.
The embodiments of the present disclosure provide the following normalization method: obtain the normalization coefficient of the feature values in each feature vector, then take the ratio of each feature value to the normalization coefficient to obtain the normalized feature vector.
For example, let feature vector A = (x1, x2, ..., xn), where n is an integer greater than 1 and characterizes the dimension of A. If the normalization coefficient is K, normalizing A yields B = (x1/K, x2/K, ..., xn/K) = (v1, v2, ..., vn).
The normalization coefficient is generally related to the feature values in the vector. In one possible design, K is obtained as the square root of the sum of the squares of all feature values in A, that is, K = √(x1² + x2² + ... + xn²).
After the foregoing normalization, each target image corresponds to multiple normalized feature vectors; simply splicing them together yields one spliced feature vector, so each target image corresponds to a unique spliced feature vector of large dimension.
For example, suppose a target image corresponds, after normalization, to three feature vectors: B = (v1, v2, v3), C = (v4, v5), and D = (v6, v7, v8). Splicing B, C, and D yields the spliced feature vector F = (v1, v2, v3, v4, v5, v6, v7, v8).
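The normalization and splicing above can be sketched as follows; the numeric values are arbitrary stand-ins for the symbolic v1 ... v8, an illustration rather than the disclosed implementation:

```python
import numpy as np

def l2_normalize(v):
    # Normalization coefficient K: square root of the sum of squared
    # feature values of the vector; each value is divided by K.
    # (Assumes the vector is not all zeros, so K != 0.)
    k = np.sqrt(np.sum(np.square(v)))
    return v / k

# The example vectors B, C, D from the text, with arbitrary numbers
# standing in for the symbolic feature values.
B = l2_normalize(np.array([1.0, 2.0, 3.0]))
C = l2_normalize(np.array([4.0, 5.0]))
D = l2_normalize(np.array([6.0, 7.0, 8.0]))

# S1044: splicing is simple concatenation, giving a 3 + 2 + 3 = 8-dim F.
F = np.concatenate([B, C, D])
```

After normalization each sub-vector has unit length, so no single extraction model dominates the spliced vector by scale alone.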
In practical application scenarios there may be multiple target images, in which case their spliced feature vectors can be stored as one matrix.
For example, in a face recognition scenario with m face images, each with a t-dimensional spliced feature vector, the m spliced feature vectors form an m × t spliced feature matrix.
The spliced feature vector of a single target image can likewise be treated as a 1 × t spliced feature matrix.
Because the spliced feature vector has a high dimension, the embodiments of the present disclosure perform dimensionality reduction on it to reduce the noise of high-dimensional operations and remove redundant data.
Specifically, principal component analysis (PCA) may be used to process the spliced feature vector to obtain the target feature vector.
PCA, also known as principal component analysis, is a statistical method that applies the idea of dimensionality reduction to turn many indicators into fewer composite indicators. The idea of the algorithm is to convert a set of possibly correlated variables into a set of linearly uncorrelated variables through an orthogonal transformation; the converted variables are called principal components.
The embodiments of the present disclosure also provide an implementation of dimensionality reduction by PCA to obtain the target feature vector. Referring to FIG. 3, the method includes the following steps:
S1062: Obtain the average of the feature values in the spliced feature vector.
S1064: Obtain the difference between each feature value and the average to obtain a de-meaned vector.
S1066: Obtain the covariance matrix of the de-meaned vector.
S1068: Solve the covariance matrix to obtain its covariance eigenvalues and covariance eigenvectors.
S10610: Select the eigenvectors with the largest covariance eigenvalues, in descending order of eigenvalue.
S10612: Construct a new feature space from the selected eigenvectors to obtain the target feature vector.
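Steps S1062 to S10612 can be sketched for the m × t spliced feature matrix case as follows; this is a minimal illustration under assumed dimensions, not the disclosed implementation (practical code might use an SVD rather than an explicit covariance matrix):

```python
import numpy as np

def pca_reduce(spliced, k):
    """Reduce an m x t spliced feature matrix to an m x k target feature
    matrix, following S1062-S10612."""
    mean = spliced.mean(axis=0)             # S1062: average feature values
    demeaned = spliced - mean               # S1064: de-meaned matrix
    cov = np.cov(demeaned, rowvar=False)    # S1066: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # S1068: eigen-decomposition
    order = np.argsort(eigvals)[::-1]       # S10610: largest eigenvalues first
    top = eigvecs[:, order[:k]]             # leading k covariance eigenvectors
    return demeaned @ top                   # S10612: project into new space

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 12))  # e.g. m=100 images, t=12 spliced dims
target = pca_reduce(X, 4)           # m x k target feature matrix, k=4
```

Note that `np.linalg.eigh` returns eigenvalues in ascending order, so the sketch explicitly reorders them to satisfy S10610.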
As the dimensionality reduction scheme of FIG. 3 shows, besides reducing the spliced feature vector of a single target image, the spliced feature matrix of multiple target images can be reduced simultaneously. Compared with reducing images one by one, this is more efficient and determines the principal components across the multiple target images, and, compared with processing a single image, the reduced spliced feature matrix better removes redundant features.
If dimensionality reduction is performed for multiple target images at once, the result is a target feature matrix, and the target feature vector of each target image can be determined from that image's identifier (1 to m).
In the flow of FIG. 3, the dimensionality reduction of the spliced feature vector is achieved by PCA, which effectively shortens the spliced feature vector (the dimension is reduced), avoids high-dimensional operation noise, and helps improve the accuracy of subsequent operations. Because the spliced feature vector becomes shorter, the data volume of subsequent applications is also greatly reduced, improving operational efficiency. Moreover, PCA supports online learning, which can alleviate dataset adaptation problems to some extent.
In addition, in the related art PCA can also be used as a feature extractor: the feature vector formed by all pixels of the target image is reduced to a vector with fewer entries than the original pixels, thereby performing feature extraction. Its essence is learning, on the target image dataset, how to construct new feature dimensions and how to select fewer features to represent a target image. That differs from the feature processing method provided by the embodiments of the present disclosure, which use PCA to reduce the dimension of already extracted high-dimensional spliced feature vectors; the two differ fundamentally in both idea and steps.
It should also be noted that, besides the PCA method, other techniques may be used to reduce the dimension of the spliced feature vector; the embodiments of the present disclosure place no particular limitation on this.
通过前述处理,可以得到目标图像的目标特征向量,本公开实施例还进一步给出前述目标特征向量的具体应用场景。
一种可能的设计中,由于目标特征向量已将能够充分表征一个目标图像,因此,还可以:存储目标特征向量至特征数据库。也就是,由目标特征向量组建数据库,而不是直接由目标图像组建组数据库,这能够在后续进行识别或其他数据处理时,无需重复执行目标特征的前述获取过程,节省处理效率。并且,相较于直接存储目标图像的方式,以目标特征向量的方式组建特征数据库,能够节省存储资源,有利于提高存储及读取效率。
此外,本公开实施例还给出一种利用前述目标特征向量进行匹配识别的方式。此时,该方法还包括:获取待识别图像,然后,在前述构建的特征数据库中,对待识别图像进行匹配,得到待识别图像对应的目标图像。
具体的匹配过程,可以参考图4所示流程,该方法包括如下步骤:
S1082,获取待识别图像。
S1084,获取待识别图像的待识别特征向量。
S1086,获取待识别特征向量与特征数据库中的每个目标特征向量之间的距离。
S1088,针对任一目标特征向量,若距离小于预设距离阈值,确定该目标特征向量对应的目标图像为待识别图像对应的目标图像。
可知,针对任一目标特征向量,若距离大于或者等于预设距离阈值,确定该目标特征向量对应的目标图像不是待识别图像对应的目标图像。
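图4中S1086与S1088的距离比较逻辑可示意如下（此处假设距离采用欧氏距离、阈值取值仅作演示，原文并未限定具体的距离度量与阈值）：

```python
import numpy as np

def match(query_vec, feature_db, threshold):
    """S1086/S1088示意：计算待识别特征向量与库中各目标特征向量的距离，
    距离小于预设阈值则判定为匹配（距离度量与阈值均为假设）。"""
    query = np.asarray(query_vec, dtype=np.float64)
    matches = []
    for image_id, target in feature_db.items():
        dist = np.linalg.norm(query - target)   # 欧氏距离
        if dist < threshold:                    # 小于预设距离阈值 -> 匹配
            matches.append((image_id, float(dist)))
    # 按距离由小到大排序，距离最小者为最可能对应的目标图像
    return sorted(matches, key=lambda x: x[1])

db = {"img_001": np.array([0.1, 0.2, 0.3]),
      "img_002": np.array([0.9, 0.9, 0.9])}
ids = [m[0] for m in match([0.1, 0.2, 0.35], db, threshold=0.2)]
print(ids)  # ['img_001']
```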
其中,S1084步骤中获取待识别图像的待识别特征向量也可以通过前述方法实现,这种实现方式的准确率较高;或者,也可以通过其他方式实现,例如,通过单一的一种DNN模型进行特征提取。本公开实施例对此无特别限定。
此外,在如图4所示流程执行之前,还需要构建特征数据库。其中,构建特征数据库的步骤,可以按照本公开实施例前述特征处理方法实现,也就是,获取多个目标图像对应的多个目标特征向量,并将多个目标特征向量进行存储,以构建特征数据库。
当如图4所示的方法具体应用于人脸识别或人脸匹配这一实现场景时，目标图像为人脸图像，待识别图像也为人脸图像。可知，当该方法应用于其他场景时，目标图像与待识别图像可能为其他类型的图像，但是，在如图4所示方法中，目标图像与待识别图像为同一类图像，或者，为包含同一类目标物的图像。
基于本公开实施例提供的前述特征处理方法,本公开实施例还进一步提供了一种特征处理装置。
请参考图5,该特征处理装置500,包括:
提取模块51,用于利用多个特征提取模型分别对目标图像进行特征提取,得到目标图像的多个特征向量;
拼接模块52,用于将多个特征向量进行拼接,得到目标图像的拼接特征向量;
降维模块53,用于对拼接特征向量进行降维处理,得到目标特征向量。
一种可能的设计中,拼接模块52,包括:
归一化子模块,用于对每个特征向量分别进行归一化处理,得到多个归一化特征向量;
拼接子模块,用于将多个归一化特征向量进行拼接,得到拼接特征向量。
其中,归一化子模块,可具体用于:
获取每个特征向量中多个特征值的归一化系数;
获取每个特征值与归一化系数之间的比值,得到特征向量的归一化特征向量。
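上述归一化子模块的处理可示意如下（此处假设归一化系数取特征向量的L2范数，这只是一种可能的选取方式，原文未限定具体系数）：

```python
import numpy as np

def normalize(vec):
    """归一化示意：假设归一化系数为向量的L2范数（原文未限定具体系数）。"""
    vec = np.asarray(vec, dtype=np.float64)
    coeff = np.linalg.norm(vec)   # 由向量中多个特征值共同确定的归一化系数
    if coeff == 0:
        return vec                # 零向量无需归一化
    return vec / coeff            # 每个特征值与归一化系数的比值

v = normalize([3.0, 4.0])
print(v)  # [0.6 0.8]
```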
另一种可能的设计中,降维模块53,具体用于:
利用主成分分析PCA处理拼接特征向量,得到目标特征向量。
具体的,降维模块53,具体用于:
获取拼接特征向量中各特征值的平均值;
获取拼接特征向量中每个特征值与平均值之差,得到去均值向量;
获取去均值向量的协方差矩阵;
求解协方差矩阵,得到协方差矩阵的协方差特征值与协方差特征向量;
按照协方差特征值由大至小的顺序,获取协方差特征值靠前的部分特征向量;
按照部分特征向量,构建新的特征空间,得到目标特征向量。
另一种可能的设计中,特征提取模型为深度神经网络模型DNN。
另一种可能的设计中,该特征处理装置500还可以包括:
存储模块(图5未示出),用于存储目标特征向量至特征数据库。
另一种可能的设计中,该特征处理装置500还可以包括:
获取模块(图5未示出),用于获取待识别图像;
识别模块(图5未示出),用于在特征数据库中,对待识别图像进行匹配,得到待识别图像对应的目标图像。
其中,识别模块,具体用于:
获取待识别图像的待识别特征向量;
获取待识别特征向量与特征数据库中的每个目标特征向量之间的距离;
针对任一目标特征向量,若距离小于预设距离阈值,确定该目标特征向量对应的目标图像为待识别图像对应的目标图像。
在其具体应用于人脸识别的场景中时,目标图像为人脸图像;待识别图像为人脸图像。
本公开实施例还提供了一种计算机。请参考图6,该计算机600包含上述的特征处理装置500。
本公开实施例还提供了一种计算机可读存储介质,存储有计算机可执行指令,所述计算机可执行指令设置为执行上述特征处理方法。
本公开实施例还提供了一种计算机程序产品,所述计算机程序产品包括存储在计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当所述程序指令被计算机执行时,使所述计算机执行上述特征处理方法。
上述的计算机可读存储介质可以是暂态计算机可读存储介质,也可以是非暂态计算机可读存储介质。
本公开实施例还提供了一种电子设备,其结构如图7所示,该电子设备700包括:
至少一个处理器(processor)710,图7中以一个处理器710为例;和存储器(memory)720,还可以包括通信接口(Communication Interface)730和总线。其中,处理器710、通信接口730、存储器720可以通过总线完成相互间的通信。通信接口730可以用于信息传输。处理器710可以调用存储器720中的逻辑指令,以执行上述实施例的特征处理方法。
此外，上述的存储器720中的逻辑指令可以通过软件功能单元的形式实现，并且在作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。
存储器720作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序,如本公开实施例中的方法对应的程序指令/模块。处理器710通过运行存储在存储器720中的软件程序、指令以及模块,从而执行功能应用以及数据处理,即实现上述方法实施例中的特征处理方法。
存储器720可包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据终端设备的使用所创建的数据等。此外,存储器720可以包括高速随机存取存储器,还可以包括非易失性存储器。
本公开实施例的技术方案可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括一个或多个指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开实施例所述方法的全部或部分步骤。而前述的存储介质可以是非暂态存储介质,包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等多种可以存储程序代码的介质,也可以是暂态存储介质。
当用于本申请中时，虽然术语“第一”、“第二”等可能会在本申请中使用以描述各元件，但这些元件不应受到这些术语的限制。这些术语仅用于将一个元件与另一个元件区别开。比如，在不改变描述的含义的情况下，第一元件可以叫做第二元件，并且同样地，第二元件可以叫做第一元件，只要所有出现的“第一元件”一致重命名并且所有出现的“第二元件”一致重命名即可。第一元件和第二元件都是元件，但可以不是相同的元件。
本申请中使用的用词仅用于描述实施例并且不用于限制权利要求。如在实施例以及权利要求的描述中使用的，除非上下文清楚地表明，否则单数形式的“一个”(a)、“一个”(an)和“所述”(the)旨在同样包括复数形式。类似地，如在本申请中所使用的术语“和/或”是指包含一个或一个以上相关联的列出项的任何以及所有可能的组合。另外，当用于本申请中时，术语“包括”(comprise)及其变型“包括”(comprises)和/或“包括”(comprising)等指陈述的特征、整体、步骤、操作、元素和/或组件的存在，但不排除一个或一个以上其它特征、整体、步骤、操作、元素、组件和/或这些的分组的存在或添加。
所描述的实施例中的各方面、实施方式、实现或特征能够单独使用或以任意组合的方式使用。所描述的实施例中的各方面可由软件、硬件或软硬件的结合实现。所描述的实施例也可以由存储有计算机可读代码的计算机可读介质体现,该计算机可读代码包括可由至少一个计算装置执行的指令。所述计算机可读介质可与任何能够存储数据的数据存储装置相关联,该数据可由计算机系统读取。用于举例的计算机可读介质可以包括只读存储器、随机存取存储器、CD-ROM、HDD、DVD、磁带以及光数据存储装置等。所述计算机可读介质还可以分布于通过网络联接的计算机系统中,这样计算机可读代码就可以分布式存储并执行。
上述技术描述可参照附图，这些附图形成了本申请的一部分，并且通过描述在附图中示出了依照所描述的实施例的实施方式。虽然这些实施例描述得足够详细以使本领域技术人员能够实现这些实施例，但这些实施例是非限制性的；这样就可以使用其它的实施例，并且在不脱离所描述的实施例的范围的情况下还可以做出变化。比如，流程图中所描述的操作顺序是非限制性的，因此在流程图中阐释并且根据流程图描述的两个或两个以上操作的顺序可以根据若干实施例进行改变。作为另一个例子，在若干实施例中，在流程图中阐释并且根据流程图描述的一个或一个以上操作是可选的，或是可删除的。另外，某些步骤或功能可以添加到所公开的实施例中，或者两个以上步骤的顺序可以被置换。所有这些变化被认为包含在所公开的实施例以及权利要求中。
另外，上述技术描述中使用术语以提供所描述的实施例的透彻理解。然而，并不需要过于详细的细节以实现所描述的实施例。因此，实施例的上述描述是为了阐释和描述而呈现的。上述描述中所呈现的实施例以及根据这些实施例所公开的例子是单独提供的，以添加上下文并有助于理解所描述的实施例。上述说明书并非旨在穷举或将所描述的实施例限制到本公开的精确形式。根据上述教导，若干修改、选择性适用以及变化是可行的。在某些情况下，没有详细描述为人所熟知的处理步骤以避免不必要地影响所描述的实施例。

Claims (24)

  1. 一种特征处理方法,其特征在于,包括:
    利用多个特征提取模型分别对目标图像进行特征提取,得到所述目标图像的多个特征向量;
    将所述多个特征向量进行拼接,得到所述目标图像的拼接特征向量;
    对所述拼接特征向量进行降维处理,得到目标特征向量。
  2. 根据权利要求1所述的方法,其特征在于,将所述多个特征向量进行拼接,包括:
    对每个所述特征向量分别进行归一化处理,得到多个归一化特征向量;
    将所述多个归一化特征向量进行拼接,得到所述拼接特征向量。
  3. 根据权利要求2所述的方法,其特征在于,所述对每个所述特征向量分别进行归一化处理,包括:
    获取每个所述特征向量中多个特征值的归一化系数;
    获取每个所述特征值与所述归一化系数之间的比值,得到所述特征向量的所述归一化特征向量。
  4. 根据权利要求1所述的方法,其特征在于,所述对所述拼接特征向量进行降维处理,得到目标特征向量,包括:
    利用主成分分析PCA处理所述拼接特征向量,得到所述目标特征向量。
  5. 根据权利要求4所述的方法,其特征在于,所述利用主成分分析PCA处理所述拼接特征向量,包括:
    获取所述拼接特征向量中各特征值的平均值;
    获取所述拼接特征向量中每个特征值与所述平均值之差,得到去均值向量;
    获取所述去均值向量的协方差矩阵;
    求解所述协方差矩阵,得到所述协方差矩阵的协方差特征值与协方差特征向量;
    按照所述协方差特征值由大至小的顺序,获取所述协方差特征值靠前的部分特征向量;
    按照所述部分特征向量,构建新的特征空间,得到所述目标特征向量。
  6. 根据权利要求1所述的方法，其特征在于，所述特征提取模型为深度神经网络模型DNN。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    存储所述目标特征向量至特征数据库。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    获取待识别图像;
    在所述特征数据库中,对所述待识别图像进行匹配,得到所述待识别图像对应的目标图像。
  9. 根据权利要求8所述的方法,其特征在于,所述在所述特征数据库中,对所述待识别图像进行匹配,包括:
    获取所述待识别图像的待识别特征向量;
    获取所述待识别特征向量与所述特征数据库中的每个所述目标特征向量之间的距离;
    针对任一所述目标特征向量,若所述距离小于预设距离阈值,确定该目标特征向量对应的目标图像为所述待识别图像对应的目标图像。
  10. 根据权利要求8所述的方法,其特征在于,所述目标图像为人脸图像;所述待识别图像为人脸图像。
  11. 一种特征处理装置,其特征在于,包括:
    提取模块,用于利用多个特征提取模型分别对目标图像进行特征提取,得到所述目标图像的多个特征向量;
    拼接模块,用于将所述多个特征向量进行拼接,得到所述目标图像的拼接特征向量;
    降维模块,用于对所述拼接特征向量进行降维处理,得到目标特征向量。
  12. 根据权利要求11所述的装置,其特征在于,所述拼接模块,包括:
    归一化子模块,用于对每个所述特征向量分别进行归一化处理,得到多个归一化特征向量;
    拼接子模块,用于将所述多个归一化特征向量进行拼接,得到所述拼接特征向量。
  13. 根据权利要求12所述的装置,其特征在于,所述归一化子模块,具体用于:
    获取每个所述特征向量中多个特征值的归一化系数;
    获取每个所述特征值与所述归一化系数之间的比值,得到所述特征向量的所述归一化特征向量。
  14. 根据权利要求11所述的装置,其特征在于,所述降维模块,具体用于:
    利用主成分分析PCA处理所述拼接特征向量,得到所述目标特征向量。
  15. 根据权利要求14所述的装置,其特征在于,所述降维模块,具体用于:
    获取所述拼接特征向量中各特征值的平均值;
    获取所述拼接特征向量中每个特征值与所述平均值之差,得到去均值向量;
    获取所述去均值向量的协方差矩阵;
    求解所述协方差矩阵,得到所述协方差矩阵的协方差特征值与协方差特征向量;
    按照所述协方差特征值由大至小的顺序,获取所述协方差特征值靠前的部分特征向量;
    按照所述部分特征向量,构建新的特征空间,得到所述目标特征向量。
  16. 根据权利要求11所述的装置,其特征在于,所述特征提取模型为深度神经网络模型DNN。
  17. 根据权利要求11所述的装置,其特征在于,所述装置还包括:
    存储模块,用于存储所述目标特征向量至特征数据库。
  18. 根据权利要求17所述的装置,其特征在于,所述装置还包括:
    获取模块,用于获取待识别图像;
    识别模块,用于在所述特征数据库中,对所述待识别图像进行匹配,得到所述待识别图像对应的目标图像。
  19. 根据权利要求18所述的装置,其特征在于,所述识别模块,具体用于:
    获取所述待识别图像的待识别特征向量;
    获取所述待识别特征向量与所述特征数据库中的每个所述目标特征向量之间的距离;
    针对任一所述目标特征向量，若所述距离小于预设距离阈值，确定该目标特征向量对应的目标图像为所述待识别图像对应的目标图像。
  20. 根据权利要求18所述的装置,其特征在于,所述目标图像为人脸图像;所述待识别图像为人脸图像。
  21. 一种计算机,其特征在于,包含权利要求11-20任一项所述的装置。
  22. 一种电子设备,其特征在于,包括:
    至少一个处理器;以及,
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行时,使所述至少一个处理器执行权利要求1-10任一项所述的方法。
  23. 一种计算机可读存储介质,其特征在于,存储有计算机可执行指令,所述计算机可执行指令设置为执行权利要求1-10任一项所述的方法。
  24. 一种计算机程序产品,其特征在于,所述计算机程序产品包括存储在计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当所述程序指令被计算机执行时,使所述计算机执行权利要求1-10任一项所述的方法。
PCT/CN2018/115473 2018-11-14 2018-11-14 一种特征处理方法及装置、存储介质及程序产品 WO2020097834A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880098361.0A CN112868019A (zh) 2018-11-14 2018-11-14 一种特征处理方法及装置、存储介质及程序产品
PCT/CN2018/115473 WO2020097834A1 (zh) 2018-11-14 2018-11-14 一种特征处理方法及装置、存储介质及程序产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/115473 WO2020097834A1 (zh) 2018-11-14 2018-11-14 一种特征处理方法及装置、存储介质及程序产品

Publications (1)

Publication Number Publication Date
WO2020097834A1 true WO2020097834A1 (zh) 2020-05-22

Family

ID=70731024

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/115473 WO2020097834A1 (zh) 2018-11-14 2018-11-14 一种特征处理方法及装置、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN112868019A (zh)
WO (1) WO2020097834A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965791A (zh) * 2022-12-19 2023-04-14 北京字跳网络技术有限公司 图像生成方法、装置及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112018A (zh) * 2014-07-21 2014-10-22 南京大学 一种大规模图像检索方法
CN107103266A (zh) * 2016-02-23 2017-08-29 中国科学院声学研究所 二维人脸欺诈检测分类器的训练及人脸欺诈检测方法
US20180070089A1 (en) * 2016-09-08 2018-03-08 Qualcomm Incorporated Systems and methods for digital image stabilization
CN107886070A (zh) * 2017-11-10 2018-04-06 北京小米移动软件有限公司 人脸图像的验证方法、装置及设备

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764019A (zh) * 2018-04-03 2018-11-06 天津大学 一种基于多源深度学习的视频事件检测方法


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784828A (zh) * 2020-08-03 2020-10-16 腾讯科技(深圳)有限公司 三维模型的融合方法、装置及计算机可读存储介质
CN111784828B (zh) * 2020-08-03 2023-11-10 腾讯科技(深圳)有限公司 三维模型的融合方法、装置及计算机可读存储介质
CN115389882A (zh) * 2022-08-26 2022-11-25 中国南方电网有限责任公司超高压输电公司广州局 电晕放电状态评估方法、装置、计算机设备和存储介质
CN115389882B (zh) * 2022-08-26 2024-05-28 中国南方电网有限责任公司超高压输电公司广州局 电晕放电状态评估方法、装置、计算机设备和存储介质
CN115495712A (zh) * 2022-09-28 2022-12-20 支付宝(杭州)信息技术有限公司 数字作品处理方法及装置
CN115495712B (zh) * 2022-09-28 2024-04-16 支付宝(杭州)信息技术有限公司 数字作品处理方法及装置
CN117346657A (zh) * 2023-10-07 2024-01-05 上海勃傲自动化系统有限公司 一种基于5g相机的事件触发方法和系统
CN117346657B (zh) * 2023-10-07 2024-03-19 上海勃傲自动化系统有限公司 一种基于5g相机的事件触发方法和系统

Also Published As

Publication number Publication date
CN112868019A (zh) 2021-05-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18940451

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 09.09.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18940451

Country of ref document: EP

Kind code of ref document: A1