CN114283460A - Feature extraction method and device, computer equipment and storage medium


Info

Publication number: CN114283460A
Application number: CN202111128152.6A
Authority: CN (China)
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 赵胜伟, 黄迎松, 白琨
Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Application filed by Tencent Technology (Shenzhen) Co., Ltd.


Abstract

The present application provides a feature extraction method and apparatus, a computer device, and a storage medium, applicable to fields such as artificial intelligence and intelligent transportation. The method comprises the following steps: obtaining an image to be extracted; compressing the image to be extracted based on target resolution information to obtain a compressed image to be extracted, wherein the target resolution information is generated as follows: determining, for a target feature extraction model, extraction evaluation information for input images of different image resolutions, respectively, and generating the target resolution information based on the minimum of the image resolutions whose extraction evaluation information satisfies a target evaluation condition; and performing feature extraction processing on the compressed image to be extracted using the target feature extraction model to obtain target feature information of the image to be extracted, thereby improving the processing efficiency of image processing with a neural network model.

Description

Feature extraction method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for feature extraction, a computer device, and a storage medium.
Background
With the continuous development of science and technology, more and more devices use artificial-intelligence neural network models to perform intelligent image processing tasks.
Because processing an image with a neural network model is computationally complex, the processing efficiency is low and the performance requirements on the device are high. To improve the efficiency of processing images with a neural network model, the conventional approach is to reduce the complexity of the network structure of the neural network model.
However, to keep the image processing capability of the neural network model stable, the degree to which its network structure can be simplified is limited. Processing images with a neural network model therefore still suffers from low processing efficiency.
Disclosure of Invention
The embodiments of the present application provide a feature extraction method and apparatus, a computer device, and a storage medium, to solve the problem of low processing efficiency when processing images with a neural network model.
In a first aspect, a feature extraction method is provided, including:
obtaining an image to be extracted;
compressing the image to be extracted based on target resolution information to obtain a compressed image to be extracted, wherein the target resolution information is generated as follows: determining, for a target feature extraction model, extraction evaluation information for input images of different image resolutions, respectively, and generating the target resolution information based on the minimum of the image resolutions whose extraction evaluation information satisfies a target evaluation condition;
and performing feature extraction processing on the compressed image to be extracted using the target feature extraction model to obtain target feature information of the image to be extracted.
In a second aspect, there is provided a feature extraction device including:
an acquisition module, configured to obtain an image to be extracted;
a processing module, configured to compress the image to be extracted based on target resolution information to obtain a compressed image to be extracted, wherein the target resolution information is generated as follows: determining, for a target feature extraction model, extraction evaluation information for input images of different image resolutions, respectively, and generating the target resolution information based on the minimum of the image resolutions whose extraction evaluation information satisfies a target evaluation condition;
the processing module is further configured to: perform feature extraction processing on the compressed image to be extracted using the target feature extraction model to obtain target feature information of the image to be extracted.
Optionally, the processing module is further configured to:
and after performing feature extraction processing on the compressed image to be extracted using the target feature extraction model to obtain the target feature information of the image to be extracted, perform corresponding processing on the target feature information based on a target image processing task to obtain a target processing result for the image to be extracted, wherein the target image processing task comprises one or more of an image classification task, an image recognition task, or an image detection task.
Optionally, the target feature extraction model is obtained by training using the following method, and the processing module is further configured to:
acquiring a trained reference feature extraction model as a feature extraction model to be trained, wherein the reference feature extraction model is obtained by training based on a sample set, and the sample set comprises training sample images which are not subjected to compression processing;
and sequentially reducing the image resolution of the input images processed by the feature extraction model to be trained based on a resolution down-sampling strategy, and determining the extraction evaluation information of the feature extraction model to be trained, until the extraction evaluation information of the feature extraction model to be trained does not meet the target evaluation condition; then outputting the feature extraction model to be trained obtained after the previous resolution reduction as the trained target feature extraction model.
Optionally, each time the resolution of the image is reduced, the processing module is specifically configured to:
based on the resolution down-sampling strategy, respectively compressing the training sample images to obtain compressed training sample images;
performing multiple rounds of iterative training on the feature extraction model to be trained based on each compressed training sample image, and determining extraction evaluation information of the feature extraction model to be trained until the training loss of the feature extraction model to be trained reaches a training target;
and if the extraction evaluation information of the feature extraction model to be trained does not meet the target evaluation condition, outputting the feature extraction model to be trained obtained after the previous resolution reduction as the trained target feature extraction model.
Optionally, for multiple rounds of iterative training, the following operations are respectively performed, and the processing module is specifically configured to:
performing feature extraction processing on a compressed training sample image by using the feature extraction model to be trained to obtain training feature information corresponding to the compressed training sample image;
based on a target image processing task, correspondingly processing the training characteristic information to obtain a training processing result corresponding to the compressed training sample image;
and if the training loss of the feature extraction model to be trained is determined to reach the training target based on the obtained training feature information and the training processing result, determining the extraction evaluation information of the feature extraction model to be trained.
Optionally, the sample set further includes sample processing results corresponding to the training sample images for the target image processing task; the processing module is specifically configured to:
acquiring historical feature information obtained after a historical feature extraction model performs feature extraction processing on the compressed training sample image, wherein the historical feature extraction model is the feature extraction model to be trained obtained after the previous resolution reduction;
obtaining a sample processing result corresponding to a training sample image which is not subjected to compression processing and corresponds to the compressed training sample image in the sample set;
and if the training loss of the feature extraction model to be trained is determined to reach the training target based on the feature information error between the training feature information and the historical feature information and the processing result error between the training processing result and the sample processing result, determining the extraction evaluation information of the feature extraction model to be trained.
Optionally, the sample set further includes each verification sample image that is not subjected to compression processing and a sample processing result corresponding to each verification sample image for the target image processing task; the processing module is specifically configured to:
based on the resolution down-sampling strategy, respectively compressing the verification sample images to obtain the compressed verification sample images;
respectively performing feature extraction processing on each compressed verification sample image by using the feature extraction model to be trained to obtain verification feature information corresponding to each verification sample image;
performing corresponding processing on each piece of verification feature information based on the target image processing task to obtain the verification processing results corresponding to the compressed verification sample images;
and determining the extraction evaluation information of the feature extraction model to be trained based on the error between each obtained verification processing result and the corresponding sample processing result.
In a third aspect, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of the method according to the first aspect.
In a fourth aspect, there is provided a computer device comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing the method according to the first aspect according to the obtained program instructions.
In a fifth aspect, there is provided a computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform the method of the first aspect.
In the embodiments of the present application, after the image to be extracted is obtained, it is compressed, and the target feature extraction model then performs feature extraction on the compressed image to be extracted. This reduces the amount of data the target feature extraction model has to process during feature extraction, and thus improves the processing efficiency of image processing with a neural network model.
Furthermore, during compression the image to be extracted is compressed based on the target resolution information, which is generated from the minimum of the image resolutions whose extraction evaluation information satisfies the target evaluation condition. The amount of data to be processed by the target feature extraction model is therefore reduced as far as possible while its extraction capability is guaranteed, further improving the processing efficiency of image processing with a neural network model.
Drawings
FIG. 1a is a schematic diagram of a first principle of a feature extraction method in the related art;
FIG. 1b is a schematic flow chart of a feature extraction method in the related art;
FIG. 1c is a schematic diagram of a second principle of a feature extraction method in the related art;
fig. 2 is a schematic diagram of a first principle of a feature extraction method provided in an embodiment of the present application;
fig. 3 is an application scenario of the feature extraction method according to the embodiment of the present application;
fig. 4 is a first flowchart of a feature extraction method provided in the embodiment of the present application;
fig. 5a is a schematic flow chart of a feature extraction method according to an embodiment of the present application;
fig. 5b is a schematic diagram illustrating a principle of a feature extraction method according to an embodiment of the present application;
fig. 5c is a schematic flow chart of a feature extraction method according to an embodiment of the present application;
fig. 5d is a schematic flowchart of a feature extraction method according to an embodiment of the present application;
fig. 5e is a schematic diagram of a principle of a feature extraction method provided in the embodiment of the present application;
fig. 6a is a schematic diagram illustrating a principle of a feature extraction method according to an embodiment of the present application;
fig. 6b is a schematic flowchart of a feature extraction method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a feature extraction apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a feature extraction apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
(1) Knowledge distillation:
Implicit knowledge in one model, the teacher model, is migrated to another model, the student model. Compared with the teacher model, the student model has a simpler structure and fewer parameters; compared with the student model, the teacher model has a stronger feature extraction capability. Through knowledge distillation, the student model can approach, or even exceed, the feature extraction capability of the teacher model.
(2) A convolutional neural network:
A neural network is a mathematical model that mimics the structure and function of a biological neural network and is used to estimate or approximate a function. It is formed by interconnecting a large number of artificial neurons into a hierarchical structure, can change its internal structure based on external information, and is therefore an adaptive system with a learning capability.
A convolutional neural network is a neural network that uses convolutional layers; convolutional layers capture local features of the input, and such networks are used in many image-related tasks.
(3) Global pooling layer:
The pooling layer is a neural network layer that performs dimensionality reduction (down-sampling) to mimic the human visual system, representing image features at higher levels of abstraction. A global pooling layer is a pooling layer whose pool size equals the input size; it summarizes the spatial information, making the output more robust to spatial transformations of the input.
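As a minimal sketch of this property (assuming PyTorch, which this application does not name), the following shows that a backbone ending in a global pooling layer produces feature vectors of the same dimension for inputs of different resolutions:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(3, 256, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),   # global pooling: 256 x 1 x 1 regardless of input size
    nn.Flatten(),              # -> 256-dimensional feature vector
)

for side in (112, 80, 64):
    x = torch.randn(1, 3, side, side)
    print(side, tuple(backbone(x).shape))   # always (1, 256)
```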
The embodiments of the present application relate to Artificial Intelligence (AI), and the design is based on the Computer Vision (CV) and Machine Learning (ML) technologies within artificial intelligence.
Artificial intelligence is a theory, method, technique, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines can perceive, reason, and make decisions.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, automatic driving, intelligent transportation, and the like.
Computer vision is a science that studies how to make machines "see": using cameras and computers instead of human eyes to identify, track, and measure targets, and further processing the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of capturing information from images or multidimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, automatic driving, intelligent transportation, and other technologies, as well as common biometric technologies such as face recognition and fingerprint recognition.
Machine learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It specializes in studying how computers simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
With the research and progress of artificial intelligence technology, artificial intelligence has been studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, internet of vehicles, smart transportation, and the like.
The application field of the feature extraction method provided by the embodiment of the present application is briefly introduced below.
With the continuous development of science and technology, more and more devices use artificial-intelligence neural network models to perform intelligent image processing tasks. For example, a neural network model may be used to identify whether a target is present in an image, or to track a specified target in a video, and so on.
Because processing an image with a neural network model is computationally complex, the processing efficiency is low and the performance requirements on the device are high. To improve the efficiency of processing images with a neural network model, the conventional approach is to reduce the complexity of the network structure of the neural network model.
Referring to fig. 1a, the original neural network has a large number of neurons with a large number of connections between them. Please refer to fig. 1b, which is a schematic flow chart illustrating how the related art simplifies the network structure of a neural network model. S101, obtaining an original neural network model; S102, removing each neuron in turn to evaluate its importance; S103, removing the connection between each pair of neurons in turn to evaluate the importance of each connection; S104, adjusting the network structure of the model according to the evaluation results; S105, judging whether to continue adjusting the network structure of the model; S106, if the network structure does not need further adjustment, determining the adjusted neural network model.
There are various methods for assessing the importance of a neuron or connection, for example, methods based on connection magnitude. Since an output feature is a weighted sum of the inputs multiplied by the weights, the smaller the magnitude of a weight, the less it contributes to the output; the importance of a neuron or connection can therefore be assessed from the magnitude of its weights, such as the L1 or L2 norm.
Another example is methods based on the loss function. The importance of a neuron or connection is evaluated by the effect its deletion has on the loss of the neural network model; the goal is to find the set of model parameters whose deletion increases the loss function the least.
Thus, referring to fig. 1c, the complexity of the network structure of the neural network model is simplified by removing the neurons with lower importance and removing the connections with lower importance.
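A minimal sketch of the magnitude-based importance scoring described above (related art, not the method claimed by this application; PyTorch and the 30% pruning ratio are assumptions):

```python
import torch
import torch.nn as nn

layer = nn.Linear(128, 64)

importance = layer.weight.detach().abs()           # L1 magnitude per connection
k = int(0.3 * importance.numel())                  # prune the 30% smallest
threshold = importance.flatten().kthvalue(k).values
mask = (importance > threshold).float()

with torch.no_grad():
    layer.weight.mul_(mask)                        # zero out low-importance connections
print(f"kept {int(mask.sum())} of {mask.numel()} connections")
```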
However, to keep the image processing capability of the neural network model stable, the degree to which its network structure can be simplified is limited. Processing images with a neural network model therefore still suffers from low processing efficiency.
To solve the problem of low processing efficiency when processing images with a neural network model, the present application provides a feature extraction method. Referring to fig. 2, after obtaining the image to be extracted, the method compresses it based on target resolution information to obtain a compressed image to be extracted. The target resolution information is generated as follows: determining, for a target feature extraction model, extraction evaluation information for input images of different image resolutions, respectively, and generating the target resolution information based on the minimum of the image resolutions whose extraction evaluation information satisfies a target evaluation condition. The target feature extraction model then performs feature extraction processing on the compressed image to be extracted to obtain the target feature information of the image to be extracted.
In the embodiments of the present application, after the image to be extracted is obtained, it is compressed, and the target feature extraction model then performs feature extraction on the compressed image to be extracted. This reduces the amount of data the target feature extraction model has to process during feature extraction, and thus improves the processing efficiency of image processing with a neural network model.
Furthermore, during compression the image to be extracted is compressed based on the target resolution information, which is generated from the minimum of the image resolutions whose extraction evaluation information satisfies the target evaluation condition. The amount of data to be processed by the target feature extraction model is therefore reduced as far as possible while its extraction capability is guaranteed, further improving the processing efficiency of image processing with a neural network model.
An application scenario of the feature extraction method provided in the present application is described below.
Please refer to fig. 3, which is an application scenario of the feature extraction method according to the embodiment of the present application. The application scenario includes a feature extraction end 301 and an image processing end 302. The feature extraction end 301 and the image processing end 302 may communicate with each other, either through wired communication technology, for example through a network cable or a serial cable, or through wireless communication technology, for example Bluetooth or wireless fidelity (WIFI), without limitation.
The feature extraction end 301 generally refers to a device that can perform feature extraction processing on an image, for example a device containing the feature extraction part of a convolutional neural network; the feature extraction end 301 may include a global pooling layer, so that feature information of the same dimension is obtained after feature extraction on input images of different image resolutions. The image processing end 302 generally refers to a device that can perform image processing tasks on an image, for example a terminal device, a third-party application accessible by the terminal device, or a web page or server accessible by the terminal device. Terminal devices include, but are not limited to, mobile phones, computers, intelligent voice interaction devices, intelligent household appliances, vehicle-mounted terminals, and the like. Servers include, but are not limited to, cloud servers, local servers, associated third-party servers, and the like. Both the feature extraction end 301 and the image processing end 302 can use cloud computing to reduce the occupation of local computing resources, and can also use cloud storage to reduce the occupation of local storage resources.
As an embodiment, the feature extraction terminal 301 and the image processing terminal 302 may be the same device or different devices, and are not limited specifically. In the embodiment of the present application, an example in which the feature extraction terminal 301 and the image processing terminal 302 are deployed on the same server is described.
Please refer to fig. 4, which is a flowchart illustrating a feature extraction method according to an embodiment of the present disclosure.
S401, obtaining an image to be extracted.
The server may receive the image to be extracted from a client or from another device, or may retrieve an image to be extracted stored in memory, and so on, without limitation. The image to be extracted may be a photographed or created image, a frame of a photographed video, a frame of a created animation, and the like, without limitation.
S402, based on the target resolution information, compressing the image to be extracted to obtain the compressed image to be extracted.
After obtaining the image to be extracted, the server may compress it based on the target resolution information to obtain a compressed image to be extracted. The target resolution information may be obtained as follows: the server determines, for the target feature extraction model, extraction evaluation information for input images of different image resolutions, respectively, and generates the target resolution information based on the minimum of the image resolutions whose extraction evaluation information satisfies the target evaluation condition.
The target resolution information may be the minimum image resolution itself, a value interval containing the minimum image resolution, a value interval associated with the minimum image resolution, and the like, without limitation. The target resolution information may be obtained during the training of the target feature extraction model. The target feature extraction model may be trained by the server or by another device, from which the server can retrieve it when needed, and so on, without limitation.
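As a minimal sketch of the compression in S402 (assuming PyTorch; the bilinear interpolation mode and the function name `compress` are assumptions, and the target resolution 64 anticipates the embodiment described later):

```python
import torch
import torch.nn.functional as F

def compress(image: torch.Tensor, target_resolution: int) -> torch.Tensor:
    """image: (N, C, H, W) -> (N, C, target_resolution, target_resolution)."""
    return F.interpolate(image, size=(target_resolution, target_resolution),
                         mode="bilinear", align_corners=False)

image = torch.randn(1, 3, 112, 112)   # image to be extracted
compressed = compress(image, 64)      # compressed image to be extracted
print(compressed.shape)               # torch.Size([1, 3, 64, 64])
```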
The following describes a process of training a feature extraction model to be trained to obtain a target feature extraction model. Please refer to fig. 5a, which is a schematic flow chart of training a feature extraction model to be trained.
S51, the trained reference feature extraction model is obtained and used as the feature extraction model to be trained.
The reference feature extraction model is trained based on a sample set including training sample images that have not been subjected to compression processing. The server may use the network structure of the reference feature extraction model as the network structure of the feature extraction model to be trained, and use the model parameters of the reference feature extraction model as the initial model parameters of the feature extraction model to be trained, so that the server obtains the feature extraction model to be trained.
S52, sequentially reducing the image resolution of the input images processed by the feature extraction model to be trained based on a resolution down-sampling strategy, and determining the extraction evaluation information of the feature extraction model to be trained; when the extraction evaluation information of the feature extraction model to be trained no longer meets the target evaluation condition, outputting the feature extraction model to be trained obtained after the previous resolution reduction as the trained target feature extraction model.
After obtaining the feature extraction model to be trained, the server trains it. To guarantee its feature extraction capability, the server sequentially reduces the image resolution of its input images based on the resolution down-sampling strategy. Each time the image resolution is reduced, the server performs multiple rounds of iterative training on the feature extraction model to be trained, determines its extraction evaluation information, and judges whether that information meets the target evaluation condition. When the extraction evaluation information no longer meets the target evaluation condition, the server outputs the feature extraction model to be trained obtained after the previous resolution reduction as the trained target feature extraction model.
For example, please refer to fig. 5b, which is a schematic diagram of the principle of training the feature extraction model to be trained. The feature extraction model to be trained obtained after the previous resolution reduction is denoted M_{t-1}, and the feature extraction model to be trained after the current resolution reduction is denoted M_t. Each input image x_{t-1} ∈ A_{t-1} of the model M_{t-1}, with image resolution R_{t-1}, is down-sampled by d_t(·) to obtain the compressed training sample images x_t = d_t(x_{t-1}) ∈ A_t with image resolution R_t = R_{t-1} − s. The model parameters of M_{t-1} serve as the initial model parameters of M_t; the compressed training sample images x_t are used to perform multiple rounds of iterative training on M_t, and the extraction evaluation information of the feature extraction model to be trained is determined.
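A minimal sketch of one down-sampling step d_t under these definitions (assuming PyTorch; bilinear interpolation and square inputs are assumptions, with the stride s = 16 taken from the embodiment below):

```python
import torch
import torch.nn.functional as F

def d_t(x_prev: torch.Tensor, s: int = 16) -> torch.Tensor:
    r_prev = x_prev.shape[-1]          # R_{t-1}, assuming square inputs
    r_t = r_prev - s                   # R_t = R_{t-1} - s
    return F.interpolate(x_prev, size=(r_t, r_t),
                         mode="bilinear", align_corners=False)

x = torch.randn(8, 3, 112, 112)        # x_0: uncompressed training batch
for t in range(1, 4):
    x = d_t(x)                         # x_1: 96x96, x_2: 80x80, x_3: 64x64
print(x.shape)                         # torch.Size([8, 3, 64, 64])
```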
The following describes, as an example, the process after one resolution reduction; the process after each reduction is similar and is not repeated here. Please refer to fig. 5c, which is a schematic flow chart of the process after one resolution reduction.
S501, based on a resolution down-sampling strategy, compressing each training sample image respectively to obtain each compressed training sample image.
After the image resolution of the input images of the feature extraction model to be trained is reduced once based on the resolution down-sampling strategy, each training sample image in the sample set may be compressed based on that strategy to obtain the compressed training sample images. The server can then train with the compressed training sample images without having to acquire a new image set for training the feature extraction model to be trained.
And S502, performing multiple rounds of iterative training on the feature extraction model to be trained based on each compressed training sample image, and determining extraction evaluation information of the feature extraction model to be trained until the training loss of the feature extraction model to be trained reaches a training target.
After obtaining each compressed training sample image, the server may perform multiple rounds of iterative training on the feature extraction model to be trained based on each compressed training sample image, and the server may determine a training loss of the feature extraction model to be trained every time one round of iterative training is performed, and determine whether the obtained training loss reaches a training target.
The following description is given for an example of a process of one round of iterative training, and the process of each round of iterative training is similar and is not repeated herein. Fig. 5d is a schematic flow chart of a round of iterative training, and fig. 5e is a schematic principle diagram of a round of iterative training.
S5001, adopting a feature extraction model to be trained to perform feature extraction processing on the compressed training sample image to obtain training feature information corresponding to the compressed training sample image.
For each compressed training sample image, the server inputs it into the feature extraction model to be trained and performs feature extraction processing on it using that model, obtaining the training feature information corresponding to the compressed training sample image.
S5002, based on the target image processing task, correspondingly processing the training characteristic information to obtain a training processing result corresponding to the compressed training sample image.
After obtaining the training feature information corresponding to the compressed training sample image, the server may perform corresponding processing on the training feature information based on the target image processing task to obtain a training processing result corresponding to the compressed training sample image. The target image processing task includes one or more of an image classification task, an image recognition task, an image detection task, and other image-specific processing tasks, which may be specifically set according to an actual situation, and is not limited herein.
S5003, if it is determined that the training loss of the feature extraction model to be trained reaches the training target based on the obtained training feature information and the training processing result, determining extraction evaluation information of the feature extraction model to be trained.
After obtaining training feature information and a training processing result corresponding to the compressed training sample image, the server may determine whether a training loss of the feature extraction model to be trained reaches a training target based on the obtained training feature information and the training processing result.
As an embodiment, if the sample set further includes the sample processing results corresponding to the training sample images for the target image processing task, whether the training loss of the feature extraction model to be trained reaches the training target may be determined as follows. First, obtain the historical feature information produced when the historical feature extraction model performs feature extraction processing on the compressed training sample image, where the historical feature extraction model is the feature extraction model to be trained obtained after the previous resolution reduction. Then, obtain from the sample set the sample processing result corresponding to the uncompressed training sample image that corresponds to the compressed training sample image.
The server determines the feature information error between the training feature information and the historical feature information, and the processing result error between the training processing result and the sample processing result. If the training loss of the feature extraction model to be trained is determined to reach the training target based on the feature information error and the processing result error, the extraction evaluation information of the feature extraction model to be trained is determined.
The training loss may be deemed to reach the training target when the feature information error and the processing result error each converge, or when each is smaller than its corresponding error threshold, and so on, without limitation.
As an embodiment, if the sample set further includes each verification sample image that has not been subjected to compression processing and each sample processing result corresponding to each verification sample image for the target image processing task, the server may determine the extraction evaluation information of the feature extraction model to be trained based on each verification sample image and each sample processing result corresponding to each verification sample image for the target image processing task.
After the image resolution of the input images of the feature extraction model to be trained is reduced once based on the resolution down-sampling strategy, each verification sample image in the sample set may be compressed based on that strategy to obtain the compressed verification sample images. The server can then determine the extraction evaluation information of the feature extraction model to be trained using the compressed verification sample images, without having to acquire a new image set for verification.
After obtaining each compressed verification sample image, the server may use the feature extraction model to be trained to perform feature extraction processing on each compressed verification sample image respectively, so as to obtain verification feature information corresponding to each verification sample image.
The server can respectively and correspondingly process each verification characteristic information based on the target image processing task to obtain a verification processing result corresponding to each compressed verification sample image. After obtaining the verification processing results corresponding to the respective verification sample images, the server may determine extraction evaluation information of the feature extraction model to be trained based on errors between the obtained respective verification processing results and the corresponding sample processing results.
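A minimal sketch of this verification step (assuming PyTorch; `model`, `task_head`, and the structure of `verification_pairs` are assumptions), using task accuracy over the compressed verification sample images as the extraction evaluation information:

```python
import torch

@torch.no_grad()
def extraction_evaluation(model, task_head, verification_pairs) -> float:
    correct, total = 0, 0
    for compressed_image, sample_result in verification_pairs:
        features = model(compressed_image)           # verification feature information
        prediction = task_head(features).argmax(-1)  # verification processing result
        correct += (prediction == sample_result).sum().item()
        total += sample_result.numel()
    return correct / total  # extraction evaluation information: task accuracy
```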
S5004, if it is determined that the training loss of the feature extraction model to be trained does not reach the training target based on the obtained training feature information and the training processing result, adjusting the model parameters of the feature extraction model to be trained, and continuing to perform the next round of iterative training on the feature extraction model to be trained.
If, after obtaining the feature information error between the training feature information and the historical feature information and the processing result error between the training processing result and the sample processing result, the server determines that the training loss of the feature extraction model to be trained does not reach the training target, it may adjust the model parameters of the feature extraction model to be trained and continue with the next round of iterative training.
And S503, if the extraction evaluation information of the feature extraction model to be trained does not meet the target evaluation condition, outputting the feature extraction model to be trained obtained after the image resolution is reduced last time, and taking the feature extraction model as the trained target feature extraction model.
If the extraction evaluation information of the feature extraction model to be trained does not meet the target evaluation condition, the model cannot guarantee its feature extraction capability at the current input image resolution. The image resolution before the current reduction is therefore the minimum image resolution at which the feature extraction capability can be guaranteed, and the feature extraction model to be trained obtained after the previous resolution reduction can be output as the trained target feature extraction model.
S504, if the extraction evaluation information of the feature extraction model to be trained meets the target evaluation condition, continuing to reduce the image resolution of the input images processed by the feature extraction model to be trained based on the resolution down-sampling strategy, and continuing the training.
If the extraction evaluation information of the feature extraction model to be trained meets the target evaluation condition, the image resolution of its input images can be reduced further. Training therefore continues after the resolution is reduced again based on the resolution down-sampling strategy, and whether to reduce the resolution yet again depends on whether the extraction evaluation information still meets the target evaluation condition after the next reduction.
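Putting S501 to S504 together, a minimal sketch of the outer search loop (the helper names `train_until_converged` and `evaluate`, and the accuracy-threshold stopping condition, are assumptions; the condition mirrors the A − 0.005 criterion of the embodiment below):

```python
import copy

def search_target_model(reference_model, resolution, stride, threshold,
                        train_until_converged, evaluate):
    prev_model, prev_resolution = reference_model, resolution
    while True:
        resolution -= stride                          # reduce input resolution once
        model = copy.deepcopy(prev_model)             # init from previous parameters
        train_until_converged(model, resolution)      # multi-round iterative training
        if evaluate(model, resolution) <= threshold:  # fails target evaluation condition
            return prev_model, prev_resolution        # output previous-step model
        prev_model, prev_resolution = model, resolution
```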
S403, performing feature extraction processing on the compressed image to be extracted using the target feature extraction model to obtain the target feature information of the image to be extracted.
After the compressed image to be extracted is obtained, the target feature extraction model may perform feature extraction processing on it to obtain the target feature information of the image to be extracted.
As an embodiment, after the target feature information of the image to be extracted is obtained, it may be processed correspondingly based on a target image processing task to obtain a target processing result for the image to be extracted, where the target image processing task includes one or more of an image classification task, an image recognition task, an image detection task, or other image processing tasks.
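A minimal end-to-end sketch of S401 to S403 plus the optional task step (assuming PyTorch; `extractor`, `task_head`, and the default target resolution are assumptions):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def process_image(image, extractor, task_head, target_resolution=64):
    compressed = F.interpolate(image, size=(target_resolution,) * 2,
                               mode="bilinear", align_corners=False)  # S402
    features = extractor(compressed)       # S403: target feature information
    return task_head(features)             # target processing result
```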
The following describes an example of the training process in the feature extraction method provided in the embodiment of the present application. The feature extraction model to be trained is denoted M_t; when t = 0, it is the trained reference feature extraction model. The image resolution of the input images of the feature extraction model to be trained is denoted R_t; when t = 0, it is the image resolution of the input images of the reference feature extraction model. The model parameters of the feature extraction model to be trained are denoted W_t; when t = 0, they are the model parameters of the reference feature extraction model. The compressed training sample images are denoted x_t = d_t(x_{t-1}) ∈ A_t; when t = 0, A_0 is the set of training sample images in the sample set, for which this embodiment uses the training sample images in MS1M-ArcFace. The compressed verification sample images are denoted B_t; when t = 0, B_0 is the set of verification sample images in the sample set, for which this embodiment uses the verification sample images in LFW.
The MS1M-ArcFace data set contains 5.8 million face images of 85,000 identities, each with a resolution of 112x112. The LFW data set contains 6000 face image pairs in total: in 3000 pairs the 2 face images belong to the same person, and in 3000 pairs the 2 face images belong to different persons. All face images in the LFW data set are aligned and scaled to 112x112 resolution by preprocessing.
Referring to fig. 6a, the reference feature extraction model adopts the feature extraction part of SE-ResNet18, which outputs 256-dimensional feature vectors; the target image task is completed by the fully connected layer of SE-ResNet18, which outputs 85,000-dimensional classification probability vectors.
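A minimal interface sketch of this embodiment (assuming PyTorch; the backbone body is a stand-in, and only the 256-dimensional feature output and the 85,000-way head follow the text):

```python
import torch
import torch.nn as nn

NUM_IDENTITIES = 85_000

feature_extractor = nn.Sequential(     # stand-in for the SE-ResNet18 feature part
    nn.Conv2d(3, 256, kernel_size=7, stride=2, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),                      # -> 256-dimensional feature vector f_t
)
task_head = nn.Linear(256, NUM_IDENTITIES)  # -> 85,000-way class scores p_t (probabilities after softmax)

x = torch.randn(2, 3, 112, 112)
print(task_head(feature_extractor(x)).shape)  # torch.Size([2, 85000])
```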
Please refer to fig. 6b, which is a schematic diagram of a training process of the feature extraction model to be trained.
S601, constructing the model M_{t-1} from the model obtained by training after the previous resolution reduction; with each compressed training sample image x_{t-1} ∈ A_{t-1}, of image resolution R_{t-1} = 112x112, as the input image, obtaining the historical feature information f_{t-1} = M_{t-1}(x_{t-1}, W_{t-1}) of each compressed training sample image x_{t-1}.
S602, based on the model M_{t-1}, reducing the image resolution to R_t = (112-16)x(112-16) to obtain the model M_t.
S603, using each down-sampled compressed training sample image x_t as the input image, training the model M_t to obtain the training feature information f_t = M_t(x_t, W_t); using the image processing end to perform corresponding processing on each f_t to obtain each training processing result p_t.
S604, determining, according to formula (1), the feature information error L_fd between the training feature information f_t and the historical feature information f_{t-1}; a squared-error feature-distillation form is assumed here:

L_fd = ||f_t − f_{t-1}||²  (1)
The implicit knowledge learned by M_{t-1} may be lost after the input image is compressed. Through feature distillation, the feature distillation loss L_fd transfers the implicit knowledge learned by M_{t-1} to the model M_t, so that the feature extraction capability of the feature extraction model is preserved while the image resolution is reduced.
S605, determining, according to formula (2), the processing result error L_t between the training processing result p_t and the sample processing result y; a cross-entropy classification loss is assumed here:

L_t = CrossEntropy(p_t, y)  (2)
S606, determining, according to formula (3), the training loss L of the model M_t, so as to determine whether the training loss of the model M_t reaches the training target:

L = a × L_fd + b × L_t  (3)

where a and b are the weight coefficients of the feature information error L_fd and the processing result error L_t, respectively.
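A minimal sketch of this combined loss (assuming PyTorch; the squared-error and cross-entropy forms follow the assumed formulas (1) and (2) above, and a and b are hyperparameters):

```python
import torch
import torch.nn.functional as F

def training_loss(f_t, f_prev, logits, y, a=1.0, b=1.0):
    l_fd = F.mse_loss(f_t, f_prev.detach())  # feature information error, formula (1)
    l_t = F.cross_entropy(logits, y)         # processing result error, formula (2)
    return a * l_fd + b * l_t                # combined training loss, formula (3)
```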
S607, if the training target is reached, the model M_t may be used to perform feature extraction on the compressed verification sample images B_t to obtain the verification feature information, and the image processing end performs corresponding processing on the verification feature information to obtain the verification processing results;
determining the extraction evaluation information of the model M_t from the errors between the verification processing results and the sample processing results, thereby obtaining the image task accuracy represented by the extraction evaluation information;
if the training target is not reached, adjusting the model parameters of the model M_t and continuing the training.
S608, if the image task accuracy is greater than the specified threshold A − 0.005, continuing to reduce the image resolution; if the image task accuracy is less than or equal to the specified threshold A − 0.005, taking the model M_{t-1} trained after the previous resolution reduction as the target feature extraction model. Here, A is the image task accuracy of the trained reference feature extraction model, and 0.005 is the maximum allowed drop in recognition accuracy.
When t = 3, the input resolution of the face recognition model is 64x64 and the image task accuracy still meets the minimum recognition accuracy requirement. The input image resolution of the face recognition model SE-ResNet18 is thus reduced from the initial 112x112 to 64x64, while the recognition accuracy drops by less than 0.5 point and the computational complexity falls to (64/112)² ≈ 32.65% of that before acceleration, a reduction of 67.35%.
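The complexity figure follows from the quadratic scaling of convolutional cost with the input side length (an assumption that holds when all other model dimensions are unchanged):

```latex
\left(\frac{64}{112}\right)^{2} = \left(\frac{4}{7}\right)^{2} = \frac{16}{49} \approx 0.3265,
\qquad 1 - 0.3265 = 0.6735
```

that is, the accelerated model costs about 32.65% of the original, a reduction of 67.35%.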
Based on the same inventive concept, the embodiment of the present application provides a feature extraction device, which is equivalent to the server discussed above and can implement the corresponding functions of the feature extraction method described above. Referring to fig. 7, the apparatus includes an obtaining module 701 and a processing module 702, wherein:
an acquisition module 701, configured to obtain an image to be extracted;
a processing module 702, configured to compress the image to be extracted based on target resolution information to obtain a compressed image to be extracted, wherein the target resolution information is generated as follows: determining, for a target feature extraction model, extraction evaluation information for input images of different image resolutions, respectively, and generating the target resolution information based on the minimum of the image resolutions whose extraction evaluation information satisfies a target evaluation condition;
the processing module 702 is further configured to: perform feature extraction processing on the compressed image to be extracted using the target feature extraction model to obtain the target feature information of the image to be extracted.
In a possible embodiment, the processing module 702 is further configured to:
the method comprises the steps of performing feature extraction processing on a compressed image to be processed by adopting a target feature extraction model to obtain target feature information of the image to be extracted, and then performing corresponding processing on the target feature information based on a target image processing task to obtain a target processing result aiming at the image to be extracted, wherein the target image processing task comprises one or more of an image classification task, an image recognition task or an image detection task.
In a possible embodiment, the target feature extraction model is obtained by training according to the following method, and the processing module 702 is further configured to:
acquiring a trained reference feature extraction model as a feature extraction model to be trained, wherein the reference feature extraction model is obtained by training based on a sample set, and the sample set comprises training sample images which are not subjected to compression processing;
and sequentially reducing the image resolution of the input images processed by the feature extraction model to be trained based on a resolution down-sampling strategy, and determining the extraction evaluation information of the feature extraction model to be trained, until the extraction evaluation information of the feature extraction model to be trained does not meet the target evaluation condition; then outputting the feature extraction model to be trained obtained after the previous resolution reduction as the trained target feature extraction model.
In a possible embodiment, each time the resolution of the image is reduced, the following operations are executed by the processing module 702:
based on a resolution down-sampling strategy, respectively compressing each training sample image to obtain each compressed training sample image;
performing multiple rounds of iterative training on the feature extraction model to be trained based on each compressed training sample image until the training loss of the feature extraction model to be trained reaches a training target, and determining extraction evaluation information of the feature extraction model to be trained;
and if the extraction evaluation information of the feature extraction model to be trained does not meet the target evaluation condition, outputting the feature extraction model obtained after the previous resolution reduction as the trained target feature extraction model.
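Filling in the `train_fn` callback from the sketch above, one possible (hypothetical) reading of a single resolution step is:

```python
def train_fn(model, resolution, training_sample_images, compress,
             run_training_round, loss_target: float, max_rounds: int = 100):
    """Sketch of one resolution step: compress every training sample image,
    then iterate training rounds until the training loss reaches the target.
    All callbacks and thresholds here are hypothetical placeholders."""
    compressed_images = [compress(image, resolution)
                         for image in training_sample_images]
    for _ in range(max_rounds):
        loss = run_training_round(model, compressed_images)
        if loss <= loss_target:
            break  # training loss has reached the training target
    return model
```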
In a possible embodiment, for multiple rounds of iterative training, the following operations are respectively performed, and the processing module 702 is specifically configured to:
performing feature extraction processing on the compressed training sample image by adopting a feature extraction model to be trained to obtain training feature information corresponding to the compressed training sample image;
based on the target image processing task, performing corresponding processing on the training feature information to obtain a training processing result corresponding to the compressed training sample image;
and if the training loss of the feature extraction model to be trained is determined to reach the training target based on the obtained training feature information and the training processing result, determining the extraction evaluation information of the feature extraction model to be trained.
In a possible embodiment, the sample set further includes sample processing results corresponding to the training sample images for the target image processing task; the processing module 702 is specifically configured to:
acquiring historical feature information obtained after a historical feature extraction model performs feature extraction processing on a compressed training sample image, wherein the historical feature extraction model is the feature extraction model obtained by training after the previous resolution reduction;
obtaining a sample processing result corresponding to a training sample image which is not subjected to compression processing and corresponds to a compressed training sample image in a sample set;
and if it is determined, based on a feature information error between the training feature information and the historical feature information and a processing result error between the training processing result and the sample processing result, that the training loss of the feature extraction model to be trained reaches the training target, determining the extraction evaluation information of the feature extraction model to be trained.
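This combined objective resembles feature-level knowledge distillation from the previous-resolution model. A hedged sketch of one training round follows; PyTorch, the specific loss functions, and the weighting `alpha` are assumptions of this sketch, not prescribed by the embodiment:

```python
import torch
import torch.nn.functional as F

def training_round_loss(model, history_model, task_head,
                        compressed_batch, sample_results, alpha: float = 0.5):
    """Sketch: fit the sample processing results of the uncompressed images
    while staying close to the historical model's features. `alpha` is a
    hypothetical weighting between the two error terms."""
    training_features = model(compressed_batch)
    with torch.no_grad():  # the historical model is fixed during this step
        historical_features = history_model(compressed_batch)

    # Feature information error between training and historical features.
    feature_error = F.mse_loss(training_features, historical_features)

    # Processing result error between the training processing result and the
    # sample processing result (a classification task is assumed here).
    training_result = task_head(training_features)
    result_error = F.cross_entropy(training_result, sample_results)

    return alpha * feature_error + (1 - alpha) * result_error
```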
In a possible embodiment, the sample set further includes each verification sample image that has not been subjected to compression processing and a sample processing result corresponding to each verification sample image for the target image processing task; the processing module 702 is specifically configured to:
based on a resolution down-sampling strategy, respectively compressing each verification sample image to obtain each compressed verification sample image;
respectively carrying out feature extraction processing on each compressed verification sample image by adopting a feature extraction model to be trained to obtain verification feature information corresponding to each verification sample image;
correspondingly processing each piece of verification feature information based on the target image processing task, to obtain the verification processing result corresponding to each compressed verification sample image;
and determining extraction evaluation information of the feature extraction model to be trained based on the obtained errors between the verification processing results and the corresponding sample processing results.
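When the extraction evaluation information is, for example, a recognition accuracy, the verification pass could be sketched as follows (hypothetical names; batching and framework details omitted):

```python
def extraction_evaluation(model, task_head, verification_images,
                          sample_results, compress_fn):
    """Sketch: task accuracy on compressed verification sample images serves
    as the extraction evaluation information of the model to be trained."""
    correct = 0
    for image, expected in zip(verification_images, sample_results):
        compressed = compress_fn(image)             # resolution down-sampling strategy
        verification_features = model(compressed)   # feature extraction processing
        verification_result = task_head(verification_features)
        correct += int(verification_result.argmax() == expected)
    return correct / len(verification_images)       # extraction evaluation information
```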
Based on the same inventive concept, the embodiment of the present application provides a computer device, and the computer device 800 is described below.
Referring to fig. 8, the feature extraction apparatus may run on a computer device 800, on which current and historical versions of a data storage program, together with the application software corresponding to the data storage program, may be installed. The computer device 800 includes a display unit 840, a processor 880, and a memory 820, where the display unit 840 includes a display panel 841 for displaying an interface for user interaction, and the like.
In one possible embodiment, the Display panel 841 may be configured in the form of a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED) or the like.
The processor 880 is used to read a computer program and then execute the method defined by the computer program; for example, the processor 880 reads a data storage program or a file so as to run the data storage program on the computer device 800 and display a corresponding interface on the display unit 840. The processor 880 may include one or more general-purpose processors, and may further include one or more DSPs (Digital Signal Processors) for performing related operations, so as to implement the technical solutions provided in the embodiments of the present application.
The memory 820 typically includes internal memory and external memory; the internal memory may be Random Access Memory (RAM), Read-Only Memory (ROM), or cache (CACHE), and the external memory may be a hard disk, an optical disk, a USB disk, a floppy disk, or a tape drive. The memory 820 is used for storing computer programs, including the application programs corresponding to each client and the like, as well as other data, which may include data generated after the operating system or the application programs are run, including system data (e.g., configuration parameters of the operating system) and user data. The program instructions in the embodiments of the present application are stored in the memory 820, and the processor 880 executes the program instructions stored in the memory 820 to implement any one of the feature extraction methods discussed in the preceding figures.
The display unit 840 is used to receive input numerical or character information and contact touch operations or non-contact gestures, and to generate signal inputs related to user settings and function control of the computer device 800. Specifically, in the embodiment of the present application, the display unit 840 may include a display panel 841. The display panel 841, such as a touch screen, may collect touch operations of a user on or near it (e.g., operations performed on or near the display panel 841 with a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection device according to a preset program.
In one possible embodiment, the display panel 841 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of a user, detects the signal generated by a touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 880, and can receive and execute commands from the processor 880.
The display panel 841 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the display unit 840, the computer device 800 may also include an input unit 830, which may include a graphical input device 831 and other input devices 832, where the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
In addition to the above, computer device 800 may also include a power supply 890 for powering the other modules, audio circuitry 860, near field communication module 870, and RF circuitry 810. The computer device 800 may also include one or more sensors 850, such as acceleration sensors, light sensors, pressure sensors, and the like. The audio circuit 860 specifically includes a speaker 861, a microphone 862, and the like, for example, the computer device 800 may collect the sound of the user through the microphone 862 and perform corresponding operations.
For one embodiment, the number of the processors 880 may be one or more, and the processors 880 and the memory 820 may be coupled or relatively independent.
Processor 880 of fig. 8 may be used to implement the functionality of acquisition module 701 and processing module 702 of fig. 7, as an example.
As an example, the processor 880 in fig. 8 may be used to implement the corresponding functions of the server 102 discussed above.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (11)

1. A method of feature extraction, comprising:
obtaining an image to be extracted;
compressing the image to be extracted based on target resolution information to obtain a compressed image to be extracted, wherein the target resolution information is generated by the following method: respectively determining extraction evaluation information of a target feature extraction model for input images of different image resolutions, and generating the target resolution information based on the minimum value among the image resolutions whose extraction evaluation information meets a target evaluation condition;
and performing feature extraction processing on the compressed image to be extracted by adopting the target feature extraction model to obtain target feature information of the image to be extracted.
2. The method according to claim 1, wherein after performing feature extraction processing on the compressed image to be extracted by adopting the target feature extraction model to obtain the target feature information of the image to be extracted, the method further comprises:
correspondingly processing the target feature information based on a target image processing task to obtain a target processing result for the image to be extracted, wherein the target image processing task comprises one or more of an image classification task, an image recognition task, or an image detection task.
3. The method according to claim 1 or 2, wherein the target feature extraction model is obtained by training with the following method:
acquiring a trained reference feature extraction model as a feature extraction model to be trained, wherein the reference feature extraction model is obtained by training based on a sample set, and the sample set comprises training sample images which are not subjected to compression processing;
and sequentially reducing the image resolution of the input images processed by the feature extraction model to be trained based on a resolution down-sampling strategy, and determining the extraction evaluation information of the feature extraction model to be trained after each reduction, until the extraction evaluation information of the feature extraction model to be trained does not meet the target evaluation condition; and outputting the feature extraction model obtained by training after the previous resolution reduction as the trained target feature extraction model.
4. The method according to claim 3, wherein sequentially reducing the image resolution of the input images processed by the feature extraction model to be trained based on the resolution down-sampling strategy, determining the extraction evaluation information of the feature extraction model to be trained until the extraction evaluation information does not satisfy the target evaluation condition, and outputting the feature extraction model obtained after the previous resolution reduction as the trained target feature extraction model comprises:
each time the image resolution is reduced, the following operations are performed:
based on the resolution down-sampling strategy, respectively compressing the training sample images to obtain compressed training sample images;
performing multiple rounds of iterative training on the feature extraction model to be trained based on each compressed training sample image until the training loss of the feature extraction model to be trained reaches a training target, and determining extraction evaluation information of the feature extraction model to be trained;
and if the extraction evaluation information of the feature extraction model to be trained does not meet the target evaluation condition, outputting the feature extraction model obtained after the previous resolution reduction as the trained target feature extraction model.
5. The method according to claim 4, wherein performing multiple rounds of iterative training on the feature extraction model to be trained based on the compressed training sample images until the training loss of the feature extraction model to be trained reaches a training target, and determining extraction evaluation information of the feature extraction model to be trained comprises:
for multiple rounds of iterative training, the following operations are respectively performed:
performing feature extraction processing on a compressed training sample image by using the feature extraction model to be trained to obtain training feature information corresponding to the compressed training sample image;
based on a target image processing task, correspondingly processing the training feature information to obtain a training processing result corresponding to the compressed training sample image;
and if the training loss of the feature extraction model to be trained is determined to reach the training target based on the obtained training feature information and the training processing result, determining the extraction evaluation information of the feature extraction model to be trained.
6. The method according to claim 5, wherein the sample set further includes sample processing results corresponding to the training sample images for the target image processing task;
if it is determined that the training loss of the feature extraction model to be trained reaches the training target based on the obtained training feature information and the training processing result, determining extraction evaluation information of the feature extraction model to be trained, including:
acquiring historical feature information obtained after a historical feature extraction model performs feature extraction processing on the compressed training sample image, wherein the historical feature extraction model is the feature extraction model obtained by training after the previous resolution reduction;
obtaining a sample processing result corresponding to a training sample image which is not subjected to compression processing and corresponds to the compressed training sample image in the sample set;
and if it is determined, based on a feature information error between the training feature information and the historical feature information and a processing result error between the training processing result and the sample processing result, that the training loss of the feature extraction model to be trained reaches the training target, determining the extraction evaluation information of the feature extraction model to be trained.
7. The method according to claim 5 or 6, wherein the sample set further comprises each verification sample image that has not been subjected to compression processing and a sample processing result corresponding to each verification sample image for the target image processing task; the determining of the extraction evaluation information of the feature extraction model to be trained includes:
based on the resolution down-sampling strategy, respectively compressing the verification sample images to obtain the compressed verification sample images;
respectively performing feature extraction processing on each compressed verification sample image by using the feature extraction model to be trained to obtain verification feature information corresponding to each verification sample image;
correspondingly processing each piece of verification feature information based on the target image processing task, to obtain the verification processing result corresponding to each compressed verification sample image;
and determining the extraction evaluation information of the feature extraction model to be trained based on the error between each obtained verification processing result and the corresponding sample processing result.
8. A feature extraction device characterized by comprising:
an acquisition module: configured to obtain an image to be extracted;
a processing module: configured to compress the image to be extracted based on target resolution information to obtain a compressed image to be extracted, wherein the target resolution information is generated by the following method: respectively determining extraction evaluation information of a target feature extraction model for input images of different image resolutions, and generating the target resolution information based on the minimum value among the image resolutions whose extraction evaluation information meets a target evaluation condition;
the processing module is further configured to: perform feature extraction processing on the compressed image to be extracted by adopting the target feature extraction model to obtain target feature information of the image to be extracted.
9. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method according to any one of claims 1 to 7 when executed by a processor.
10. A computer device, comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing the method according to any one of claims 1 to 7 according to the obtained program instructions.
11. A computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202111128152.6A 2021-09-26 2021-09-26 Feature extraction method and device, computer equipment and storage medium Pending CN114283460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111128152.6A CN114283460A (en) 2021-09-26 2021-09-26 Feature extraction method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114283460A true CN114283460A (en) 2022-04-05

Family

ID=80868622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111128152.6A Pending CN114283460A (en) 2021-09-26 2021-09-26 Feature extraction method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114283460A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372722A (en) * 2023-12-06 2024-01-09 广州炫视智能科技有限公司 Target identification method and identification system
CN117372722B (en) * 2023-12-06 2024-03-22 广州炫视智能科技有限公司 Target identification method and identification system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination