CN109241988A - Feature extracting method and device, electronic equipment, storage medium, program product - Google Patents
Feature extracting method and device, electronic equipment, storage medium, program product
- Publication number
- CN109241988A (application CN201810779116.8A)
- Authority
- CN
- China
- Prior art keywords
- feature
- network
- sample
- second neural network
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application provide a feature extraction method and apparatus, an electronic device, a storage medium, and a program product. The method includes: obtaining data to be processed; and performing feature extraction on the data to be processed using a first neural network to obtain a first feature, where the similarity between the first feature and a second feature is greater than a preset threshold, and the second feature is a feature obtained by performing feature extraction on the data to be processed using a second neural network. Because the similarity between the obtained first feature and the second feature is greater than the preset threshold, the first neural network achieves performance similar to that of the second neural network through a network structure different from the second neural network. A fast, lightweight first neural network is thus able to imitate the feature space of a high-performance but redundant and slow second neural network, realizing network compression and acceleration.
Description
Technical field
This application relates to the technical field of computer vision, and in particular to a feature extraction method and apparatus, an electronic device, a storage medium, and a program product.
Background art
In recent years, convolutional neural networks (Convolutional Neural Networks, CNNs) have surpassed traditional algorithms in most computer vision tasks. However, when convolutional neural networks are actually applied to real-world scenarios, serious problems remain. First, because network depth is currently pushed to the extreme in the industry to achieve optimal performance, the number of network parameters is enormous. Second, convolutional layers and fully connected layers require a large number of floating-point matrix multiplications, so the computational overhead is also very large. Although some networks can run in real time on a GPU, once the same network is deployed to a platform with weaker computing power, such as a mobile phone or an embedded device, the efficiency gap can reach 50 times or more, failing to meet the demands of real-time operation.
Summary of the invention
Embodiments of the present application provide a feature extraction technique.
According to one aspect of the embodiments of the present application, a feature extraction method is provided, comprising:
obtaining data to be processed;
performing feature extraction on the data to be processed using a first neural network to obtain a first feature, where the similarity between the first feature and a second feature is greater than a preset threshold, and the second feature is a feature obtained by performing feature extraction on the data to be processed using a second neural network.
Optionally, before performing feature extraction on the data using the first neural network to obtain the first feature, the method further includes:
training the first neural network using the second neural network, where the second neural network is a neural network trained in advance.
Optionally, training the first neural network using the second neural network comprises:
performing feature extraction on sample data using the first neural network to obtain a first sample feature;
obtaining a second sample feature of the sample data corresponding to the second neural network;
determining a first loss based on the first sample feature and the second sample feature;
training the first neural network using the first loss.
Optionally, obtaining the second sample feature of the sample data corresponding to the second neural network comprises:
obtaining, from a memory, the second sample feature of the sample data obtained based on the second neural network.
Optionally, before obtaining from the memory the second sample feature of the sample data obtained using the second neural network, the method further includes:
performing feature extraction on the sample data using the second neural network to obtain the second sample feature;
storing the sample data and its corresponding second sample feature in the memory.
Optionally, the memory includes at least one record, each record storing one piece of sample data and its corresponding second sample feature;
obtaining from the memory the second sample feature of the sample data obtained based on the second neural network comprises:
obtaining the corresponding record in the memory based on the sample data;
obtaining the second sample feature corresponding to the sample data from the record.
Optionally, obtaining the second sample feature of the sample data corresponding to the second neural network comprises:
performing feature extraction on the sample data using the second neural network to obtain the second sample feature.
Optionally, determining the first loss based on the first sample feature and the second sample feature comprises:
determining the first loss based on the distance between the first sample feature and the second sample feature.
Optionally, determining the first loss based on the distance between the first sample feature and the second sample feature comprises:
obtaining the distance between the first sample feature and the second sample feature based on the cosine similarity between the first sample feature and the second sample feature;
determining the first loss based on the obtained distance.
Optionally, before training the first neural network using the first loss, the method further includes:
determining a second loss using a loss function based on the first sample feature;
training the first neural network using the first loss comprises:
determining a network loss based on the first loss and the second loss;
training the first neural network based on the network loss.
Optionally, determining the network loss based on the first loss and the second loss comprises:
obtaining the network loss as a weighted sum of the first loss and the second loss.
Optionally, determining the second loss using a loss function based on the first sample feature comprises:
determining the second loss using a loss function based on the first sample feature and an annotated feature corresponding to the sample data.
According to another aspect of the embodiments of the present application, a feature extraction apparatus is provided, comprising:
a data obtaining unit, configured to obtain data to be processed;
a feature extraction unit, configured to perform feature extraction on the data using a first neural network to obtain a first feature, where the similarity between the first feature and a second feature is greater than a preset threshold, and the second feature is a feature obtained by performing feature extraction on the data to be processed using a second neural network.
Optionally, the apparatus further includes:
a network training unit, configured to train the first neural network using the second neural network, where the second neural network is a neural network trained in advance.
Optionally, the network training unit comprises:
a first sample feature extraction module, configured to perform feature extraction on sample data using the first neural network to obtain a first sample feature;
a second sample feature extraction module, configured to obtain a second sample feature of the sample data corresponding to the second neural network;
a first loss determining module, configured to determine a first loss based on the first sample feature and the second sample feature;
a first network training module, configured to train the first neural network using the first loss.
Optionally, the second sample feature extraction module is configured to obtain, from a memory, the second sample feature of the sample data obtained based on the second neural network.
Optionally, the second sample feature extraction module is further configured to perform feature extraction on the sample data using the second neural network to obtain the second sample feature, and to store the sample data and its corresponding second sample feature in the memory.
Optionally, the memory includes at least one record, each record storing one piece of sample data and its corresponding second sample feature;
the second sample feature extraction module is specifically configured to obtain the corresponding record in the memory based on the sample data, and to obtain the second sample feature corresponding to the sample data from the record.
Optionally, the second sample feature extraction module is specifically configured to perform feature extraction on the sample data using the second neural network to obtain the second sample feature.
Optionally, the loss determining module is configured to determine the first loss based on the distance between the first sample feature and the second sample feature.
Optionally, the loss determining module is specifically configured to obtain the distance between the first sample feature and the second sample feature based on the cosine similarity between the first sample feature and the second sample feature, and to determine the first loss based on the obtained distance.
Optionally, the network training unit further comprises:
a second loss determining module, configured to determine a second loss using a loss function based on the first sample feature;
the network training unit is specifically configured to determine a network loss based on the first loss and the second loss, and to train the first neural network based on the network loss.
Optionally, when determining the network loss based on the first loss and the second loss, the network training unit is configured to obtain the network loss as a weighted sum of the first loss and the second loss.
Optionally, the second loss determining module is specifically configured to determine the second loss using a loss function based on the first sample feature and an annotated feature corresponding to the sample data.
According to another aspect of the embodiments of the present application, an electronic device is provided, including a processor, where the processor includes the feature extraction apparatus described in any one of the above.
According to yet another aspect of the embodiments of the present application, an electronic device is provided, comprising: a memory for storing executable instructions; and a processor configured to communicate with the memory to execute the executable instructions so as to complete the operations of the feature extraction method described in any one of the above.
According to another aspect of the embodiments of the present application, a computer-readable storage medium is provided for storing computer-readable instructions, where the instructions, when executed, perform the operations of the feature extraction method described in any one of the above.
According to another aspect of the embodiments of the present application, a computer program product is provided, including computer-readable code, where when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the feature extraction method described in any one of the above.
Based on the feature extraction method and apparatus, electronic device, storage medium, and program product provided by the above embodiments of the present application, data to be processed is obtained; feature extraction is performed on the data to be processed using a first neural network to obtain a first feature, where the similarity between the first feature and a second feature is greater than a preset threshold, and the second feature is a feature obtained by performing feature extraction on the same data to be processed using a second neural network. Because the similarity between the obtained first feature and the second feature is greater than the preset threshold, the first neural network achieves performance similar to that of the second neural network through a network structure different from the second neural network. A fast, lightweight first neural network is thus able to imitate the feature space of a high-performance but redundant and slow second neural network, realizing network compression and acceleration.
The technical solution of the present application is described in further detail below through the accompanying drawings and embodiments.
Brief description of the drawings
The accompanying drawings, which constitute a part of the specification, describe embodiments of the present application and, together with the description, serve to explain the principles of the application.
The application can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a feature extraction method according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of training the first neural network in the feature extraction method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a feature extraction apparatus according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device suitable for implementing the terminal device or server of the embodiments of the present application.
Detailed description of the embodiments
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the application.
At the same time, it should be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is in fact merely illustrative and in no way serves as any limitation on the application or its use.
Technologies, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such technologies, methods, and devices should be considered part of the specification.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
The embodiments of the present application can be applied to computer systems/servers, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations suitable for use with computer systems/servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments including any of the above systems, and the like.
Computer systems/servers can be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, target programs, components, logic, data structures, and the like, which perform specific tasks or implement specific abstract data types. Computer systems/servers can also be implemented in distributed cloud computing environments, where tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
It should be noted that, for ease of description, the concepts of a teacher network and a student network are borrowed in the following embodiments. Each embodiment of the present application uses the student network to imitate the feature space of the teacher network; that is, for the same image, the feature extracted by the student network approaches the feature extracted by the teacher network. Therefore, the concepts of teacher network and student network should not be understood as limiting the application.
Fig. 1 is a schematic flowchart of a feature extraction method according to an embodiment of the present application. As shown in Fig. 1, the method comprises:
Step 110: obtain data to be processed.
In the technical field of computer vision, the data to be processed is usually a captured image or picture. In this embodiment, the obtained data to be processed can be an image or another type of data; the application does not limit the specific type of the data or the specific method of obtaining it.
Step 120: perform feature extraction on the data using a first neural network to obtain a first feature, where the similarity between the first feature and a second feature is greater than a preset threshold, and the second feature is a feature obtained by performing feature extraction on the data to be processed using a second neural network.
The first feature and the second feature may respectively correspond to a first feature vector and a second feature vector, where the first feature vector and the second feature vector have the same dimension; or the first feature and the second feature may respectively correspond to a first feature map and a second feature map, where the first feature map and the second feature map have the same size. The application does not limit the specific forms of the first feature and the second feature.
In one or more optional embodiments, the second neural network serves as the teacher network, the first neural network serves as the student network, and the feature extraction capability of the trained teacher network is fully utilized to train the student network (the second feature being obtained based on the second neural network). Because the teacher network achieves good performance on the designated task, the second features it extracts also possess good properties such as compactness and strong separability. In this embodiment, the student network is made to learn the teacher network's feature extraction capability, so that for the same image the features extracted by the teacher network and the student network are as close as possible. When, for any input image, the student network can output a feature close to the one output by the teacher network, the faster and smaller student network (the first neural network) has reached performance close to that of the teacher network (the second neural network).
Based on the feature extraction method provided by the above embodiment, data to be processed is obtained; feature extraction is performed on the data to be processed using a first neural network to obtain a first feature, where the similarity between the first feature and a second feature is greater than a preset threshold, and the second feature is a feature obtained by performing feature extraction on the same data to be processed using a second neural network. Because the similarity between the obtained first feature and the second feature is greater than the preset threshold, the first neural network reaches performance similar to that of the second neural network. A fast, lightweight first neural network is allowed to imitate the feature space of a high-performance but redundant and slow second neural network, thereby realizing network compression and acceleration.
In one or more optional embodiments, before operation 120, the method can further include:
training the first neural network using the second neural network, where the second neural network is a neural network trained in advance.
In this embodiment, the first neural network is trained using a second neural network that has already been trained for the current task, so that the resulting first neural network approaches the performance of the second neural network on that task, while its training process requires much less computational overhead than the training process of the second neural network. A neural network achieving the same performance is thereby simplified and its number of parameters reduced, and the trained first neural network is convenient to apply in practice.
Fig. 2 is a schematic flowchart of training the first neural network in the feature extraction method according to an embodiment of the present application. As shown in Fig. 2, in this embodiment, training the first neural network using the second neural network comprises:
Step 210: perform feature extraction on sample data using the first neural network to obtain a first sample feature.
The sample data can be a sample image, a sample picture, or data in another form. During the training of the first neural network, the sample data may have annotation information (for example, an annotated feature) or may have none. The sample data is processed by the first neural network to obtain the corresponding first sample feature; at this point, the first neural network is an untrained first neural network.
Step 220: obtain a second sample feature of the sample data corresponding to the second neural network.
Optionally, the sample data input to the first neural network is also input to the second neural network. The sample data can be input to the first neural network and the second neural network at the same time, that is, step 210 and step 220 are performed simultaneously; or the sample data can be input to the first neural network and the second neural network in sequence, that is, step 210 can be performed before step 220, or step 220 before step 210. This embodiment does not limit the order in which the sample data is input to the first neural network and the second neural network.
The second neural network performs feature extraction on the sample data to obtain the second sample feature, or the already-extracted second sample feature is obtained from a memory.
Step 230: determine a first loss based on the first sample feature and the second sample feature, and train the first neural network using the first loss.
To make the performance of the first neural network approach that of the second neural network, a first loss is obtained using the first sample feature and the second sample feature, and the first loss is used to adjust the parameters of the first neural network through backpropagation (for example, a reverse gradient algorithm), so that the difference between the first sample feature output by the first neural network and the second sample feature output by the second neural network becomes less than a set value; that is, the feature extraction performance of the first neural network approaches that of the second neural network.
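The adjustment just described can be sketched as a toy gradient step (a minimal sketch under stated assumptions: a one-layer linear student, a fixed pre-extracted teacher feature, and an illustrative learning rate — not the patent's actual networks):

```python
import numpy as np

def mimic_loss(student_feat, teacher_feat):
    # first loss: squared Euclidean distance between the first sample
    # feature (student) and the second sample feature (teacher)
    return float(np.sum((student_feat - teacher_feat) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=4)
x /= np.linalg.norm(x)          # unit-norm toy input keeps the steps stable
t = rng.normal(size=3)          # fixed teacher (second sample) feature for x
W = rng.normal(size=(3, 4))     # parameters of the hypothetical linear student

lr = 0.05
losses = []
for _ in range(100):
    s = W @ x                               # student (first sample) feature
    losses.append(mimic_loss(s, t))
    grad_W = 2.0 * np.outer(s - t, x)       # gradient of ||W x - t||^2 w.r.t. W
    W -= lr * grad_W                        # backpropagation-style update

# the first loss shrinks as the student's feature approaches the teacher's
assert losses[-1] < losses[0]
```

The gradient update drives the student's output toward the teacher's fixed feature, which is the sense in which the difference between the two features falls below a set value.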
In some optional embodiments, step 220 can obtain, from a memory, the second sample feature of the sample data obtained based on the second neural network.
Because the second neural network (the teacher network) is trained for optimal performance, GPU memory is usually pushed to the limit during its training. Therefore, when training the first neural network (the student network), if the same image had to pass through both the first neural network and the second neural network simultaneously to compute features, GPU memory would very likely be insufficient and training could not proceed. Even if the GPU could support processing by the first neural network and the second neural network at the same time, the second neural network is much slower than the first neural network, which would make training too slow and extremely inefficient.
Therefore, this embodiment proposes obtaining from a memory the second sample features obtained in advance by processing the sample data with the second neural network, which prevents the second neural network from occupying too many resources during the training of the first neural network and improves training speed and efficiency.
Optionally, before obtaining the second sample feature from the memory, the method can further include:
performing feature extraction on the sample data using the second neural network to obtain the second sample feature;
storing the sample data and its corresponding second sample feature in the memory.
To train the first neural network end to end, it is first necessary to perform feature extraction on the data in the training set with the second neural network and save the results in a storage device (memory); this can be called offline (off-line) feature extraction. In this way, the feature (for example, a feature vector or feature map) of every image in the training set is obtained. For each image, it is then only necessary to extract the first feature with the first neural network and read the corresponding second feature directly from the storage device, which realizes efficient and fast end-to-end training.
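The offline extraction and lookup described above can be sketched as follows (the `teacher_extract` stand-in for the second neural network and the dict-backed store are illustrative assumptions; any keyed storage device would serve):

```python
import numpy as np

def teacher_extract(image):
    # stand-in for the (slow, pre-trained) second neural network's forward pass
    return np.asarray(image, dtype=float).mean(keepdims=True)

def build_feature_store(dataset):
    # offline pass: one record per sample, mapping a sample id to its
    # second sample feature extracted by the teacher network
    return {sample_id: teacher_extract(img) for sample_id, img in dataset.items()}

def lookup(store, sample_id):
    # during student training, read the cached feature instead of
    # re-running the second neural network
    return store[sample_id]

dataset = {"img_0": [0.0, 2.0], "img_1": [4.0, 6.0]}
store = build_feature_store(dataset)
assert lookup(store, "img_1")[0] == 5.0
```

The one-time offline pass trades storage for speed: the expensive teacher forward pass runs once per training image rather than once per training step.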
Specifically, the memory includes at least one record, each record storing one piece of sample data and its corresponding second sample feature;
obtaining the second sample feature from the memory comprises:
obtaining the corresponding record in the memory based on the sample data;
obtaining the second sample feature corresponding to the sample data from the record.
Because the training set contains a large amount of data, and the training process requires the first sample feature of each piece of sample data to correspond to the second sample feature of that same data before the first neural network can be trained, this embodiment proposes recording the second sample features corresponding to the different pieces of sample data; when a corresponding second sample feature is needed, the corresponding record is obtained by matching on the sample data.
In other optional embodiments, step 220 can also perform feature extraction on the sample data using the second neural network to obtain the second sample feature.
In this embodiment, feature extraction is performed on the sample data using the second neural network in real time, and the first neural network is trained based on the second sample features obtained in real time together with the first sample features obtained by real-time processing. In this case, the sample data does not need to be processed by the second neural network in advance, saving preprocessing time.
In one or more optional embodiments, determining the first loss based on the first sample feature and the second sample feature comprises:
determining the first loss based on the distance between the first sample feature and the second sample feature.
In a specific example, optionally, suppose the total number of images in the training set is N, where x_i is the i-th input image, the second sample feature extracted by the second neural network is f(x_i), and the first sample feature extracted by the first neural network is g(x_i). The first loss used when training the first neural (student) network can then be calculated with formula (1):

L_mimic = ||f(x_i) - g(x_i)||^2    formula (1)

where L_mimic is the first loss corresponding to the i-th image. The network structure of the first neural network and that of the second neural network can be the same or different, but the dimensions of the output first sample feature and second sample feature (the first sample feature and the second sample feature both being feature vectors) must be consistent.
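Assuming the first loss is a squared Euclidean distance between the two feature vectors (an assumed reading of formula (1), whose body is not fully reproduced here), a minimal sketch is:

```python
import numpy as np

def mimic_loss_l2(f_xi, g_xi):
    # first loss as the squared Euclidean distance between the teacher
    # feature f(x_i) and the student feature g(x_i); the two vectors
    # must share the same dimension, as the text requires
    f_xi = np.asarray(f_xi, dtype=float)
    g_xi = np.asarray(g_xi, dtype=float)
    assert f_xi.shape == g_xi.shape, "feature dimensions must be consistent"
    return float(np.sum((f_xi - g_xi) ** 2))

assert mimic_loss_l2([1.0, 2.0], [1.0, 2.0]) == 0.0   # identical features
assert mimic_loss_l2([1.0, 0.0], [0.0, 0.0]) == 1.0
```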
Optionally, the distance between the first sample feature and the second sample feature is obtained based on the cosine similarity between the first sample feature and the second sample feature;
the first loss is determined based on the obtained distance.
In practical applications, having the first neural network directly imitate the feature space of the second neural network is still a very strong (hard) constraint: if training uses the distance between features (for example, the Euclidean distance) as the distance metric and forces the first neural network to directly fit the features of the second neural network, problems of difficult optimization or poor performance may be encountered.
To solve the problem of an overly strong constraint, the cosine similarity can be used as the distance metric, thereby relaxing the overly strong constraint, accelerating the convergence rate of the first neural network during training, and improving the final performance. The first-loss calculation formula can then use formula (2):

L_mimic = 1 - (f(x_i) · g(x_i)) / (||f(x_i)|| ||g(x_i)||)    formula (2)

where L_mimic is the first loss corresponding to the i-th image, the total number of images in the training set is N, x_i is the i-th input image, the second sample feature extracted by the second neural network is f(x_i), and the first sample feature extracted by the first neural network is g(x_i).
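Assuming formula (2) takes the first loss as one minus the cosine similarity (the formula body is not fully reproduced above, so this is an assumed reading), a minimal sketch is:

```python
import numpy as np

def mimic_loss_cosine(f_xi, g_xi):
    # first loss as 1 - cosine similarity between the teacher feature
    # f(x_i) and the student feature g(x_i)
    f_xi = np.asarray(f_xi, dtype=float)
    g_xi = np.asarray(g_xi, dtype=float)
    cos = f_xi @ g_xi / (np.linalg.norm(f_xi) * np.linalg.norm(g_xi))
    return float(1.0 - cos)

# parallel vectors give zero loss regardless of magnitude -- the
# relaxation that a plain Euclidean distance would not provide
assert abs(mimic_loss_cosine([1.0, 0.0], [3.0, 0.0])) < 1e-12
assert abs(mimic_loss_cosine([1.0, 0.0], [0.0, 1.0]) - 1.0) < 1e-12
```

Note the design consequence: the student only needs to match the direction of the teacher's feature, not its scale, which loosens the hard constraint discussed above.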
In one or more optional embodiments, before training the first neural network with the first loss, the method further includes:
determining a second loss using a loss function based on the first sample feature;
training the first neural network using the first loss comprises:
determining a network loss based on the first loss and the second loss;
training the first neural network based on the network loss.
Although simply letting the first neural network learn the feature space of the second neural network is feasible, additional supervisory signals can be added to further improve the performance of the first neural network; the network loss is then determined from the added supervisory signals together with the distance between the first sample feature and the second sample feature. For example, an additional supervisory signal can be obtained from the first sample feature by incorporating the method used to train the second neural network: if the second neural network was trained with a normalized exponential (softmax) classification algorithm, the additional supervisory signal is the loss function Lsoftmax. This application does not restrict which algorithm the second neural network uses; besides the softmax classification algorithm, it may also be a triplet loss algorithm, a center loss algorithm, an angular margin loss algorithm, and so on, and different algorithms yield different network losses.
Optionally, determining the network loss based on the first loss and the second loss comprises: obtaining the network loss as a weighted sum of the first loss and the second loss.
In one example, if the second loss is obtained from the loss function Lsoftmax, the network loss can be computed with formula (3):
L = Lmimic + αLsoftmax (formula (3))
where Lmimic is the first loss obtained from formula (1) or formula (2), and Lsoftmax is the second loss, obtained from the first neural network using the softmax classification algorithm. Here α is a weight (a hyperparameter) that balances the importance of the two loss functions. Its value is obtained through repeated experiments rather than being a fixed best value, and is determined by the final test performance of the first neural network: during each performance test of the first neural network the weight is held constant, the resulting test performance indicates whether the current weight is satisfactory, and after multiple trials the weight value giving the best test performance is used as the weight in formula (3).
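The weighted combination of formula (3) can be sketched as follows. This is an illustrative stand-in, assuming the cosine-based mimic loss of formula (2) for Lmimic and a standard softmax cross-entropy for Lsoftmax; the patent does not fix either implementation:

```python
import numpy as np

def mimic_loss(g_feats, f_feats):
    """First loss (formula (2) style): mean (1 - cosine similarity) over the batch."""
    g = g_feats / np.linalg.norm(g_feats, axis=1, keepdims=True)
    f = f_feats / np.linalg.norm(f_feats, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(g * f, axis=1)))

def softmax_loss(logits, labels):
    """Second loss: softmax cross-entropy on the student's classification output."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return float(-np.mean(log_probs[np.arange(len(labels)), labels]))

def network_loss(g_feats, f_feats, logits, labels, alpha):
    """Formula (3): L = L_mimic + alpha * L_softmax, with alpha a tuned hyperparameter."""
    return mimic_loss(g_feats, f_feats) + alpha * softmax_loss(logits, labels)
```

Here alpha would be chosen by the repeated-testing procedure described above rather than set analytically.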
Optionally, determining the second loss using a loss function based on the first sample feature comprises: determining the second loss using a loss function based on the first sample feature and the annotation feature corresponding to the sample data.
Specifically, any loss function can be used to determine the second loss from the first sample feature; it may be the same as, or different from, the loss function used to obtain the loss value of the second neural network, and this application does not specifically restrict it.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be carried out by hardware under the control of program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Fig. 3 is a structural schematic diagram of an embodiment of the feature extraction apparatus of the present application. The apparatus of this embodiment can be used to implement each of the above method embodiments of the present application. As shown in Fig. 3, the apparatus of this embodiment includes:
Data acquisition unit 31, for obtaining the data to be processed.
In the field of computer vision, the data to be processed are usually acquired images or pictures. In this embodiment, the acquired data to be processed may be images or other types of data; this application does not restrict the specific type of the data or the specific method of obtaining it.
Feature extraction unit 32, for performing feature extraction on the data using the first neural network to obtain a first feature, where the similarity between the first feature and a second feature is greater than a preset threshold, the second feature being a feature obtained by performing feature extraction on the data to be processed using the second neural network.
The first feature and the second feature may respectively correspond to a first feature vector and a second feature vector, the dimensions of the two vectors being consistent; or they may respectively correspond to a first feature map and a second feature map of identical size. This application does not restrict the specific forms of the first feature and the second feature.
In one or more optional embodiments, the second neural network serves as the teacher network and the first neural network as the student network, and the feature extraction capability of the trained teacher network is fully exploited to train the student network (the second feature being obtained from the second neural network). Because the teacher network achieves good performance on the designated task, the second feature it extracts also possesses good properties such as compactness and strong separability. In this embodiment the student network learns the teacher network's feature extraction capability, so that for the same image the features extracted by the teacher network and the student network are as close as possible. When, for any input image, the student network can output a feature close to the teacher network's output, the faster and smaller student network (the first neural network) has reached performance close to that of the teacher network (the second neural network).
With the feature extraction apparatus provided by the above embodiment, data to be processed are obtained, and feature extraction is performed on the data using the first neural network to obtain a first feature whose similarity to a second feature is greater than a preset threshold, the second feature being the feature corresponding to the data obtained using the second neural network. Since the similarity between the obtained first feature and the second feature exceeds the preset threshold, the first neural network has reached performance similar to that of the second neural network.
In one or more optional embodiments, the apparatus may further include:
A network training unit, for training the first neural network using the second neural network, the second neural network being a pre-trained neural network.
In this embodiment, the first neural network is trained using the second neural network, which has already been trained for the current task, so that the resulting first neural network performs close to the second neural network on that task while its training process involves far less computational overhead than the training process of the second neural network. Both the parameter count and the size of a neural network achieving the same performance are thereby reduced, and the trained first neural network is convenient to apply in practice.
Optionally, the network training unit includes:
A first sample feature extraction module, for performing feature extraction on sample data using the first neural network to obtain a first sample feature;
A second sample feature extraction module, for obtaining the second sample feature of the sample data corresponding to the second neural network;
A first loss determination module, for determining a first loss based on the first sample feature and the second sample feature;
A first network training module, for training the first neural network using the first loss.
To bring the performance of the first neural network close to that of the second neural network, the first loss is obtained from the first sample feature and the second sample feature, and the parameters of the first neural network are adjusted through backpropagation (e.g., a reverse gradient algorithm) using the first loss, so that the difference between the first sample feature output by the first neural network and the second sample feature output by the second neural network becomes less than a set value, i.e., the feature extraction performance of the first neural network approaches that of the second neural network.
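As a toy illustration of this training step (not the patent's implementation), a single linear layer standing in for the student can be fitted to fixed teacher features by gradient descent on a squared-distance first loss:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))            # sample data: 64 inputs of dimension 8
W_teacher = rng.normal(size=(8, 4))     # stands in for the trained second neural network
F = X @ W_teacher                       # second sample features (fixed targets)

W_student = np.zeros((8, 4))            # first neural network: one linear layer
lr = 0.05
for _ in range(500):
    G = X @ W_student                   # first sample features
    grad = 2.0 * X.T @ (G - F) / len(X) # gradient of the mean squared feature distance
    W_student -= lr * grad              # backpropagation (gradient descent) step

# After training, the student's features differ from the teacher's by less than a set value.
first_loss = float(np.mean(np.sum((X @ W_student - F) ** 2, axis=1)))
```

In the patent's setting the student is of course a full neural network updated by backpropagation through all its layers; the gradient step above plays the same role for a one-layer stand-in.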
In some alternative embodiments, the second sample feature extraction module is configured to obtain, from memory, the second sample feature of the sample data obtained based on the second neural network.
Because training the second neural network (the teacher network) for optimal performance usually pushes GPU memory to its limit, if the same image had to pass through both the first neural network and the second neural network simultaneously to compute features while training the first neural network (the student network), GPU memory would very likely be insufficient and training could not proceed. Even if the GPU could support processing by the first and second neural networks simultaneously, the second neural network is much slower than the first, which would make training excessively slow and inefficient.
Therefore, this embodiment proposes obtaining from memory the second sample features computed in advance from the sample data by the second neural network. This avoids the second neural network occupying too many resources while the first neural network is being trained, and improves training speed and efficiency.
Optionally, the second sample feature extraction module is further configured to perform feature extraction on the sample data using the second neural network to obtain the second sample feature, and to store the sample data and its corresponding second sample feature in the memory.
Optionally, the memory contains at least one record, each record saving one sample data item and its corresponding second sample feature; the second sample feature extraction module is specifically configured to obtain the corresponding record from the memory based on the sample data, and to obtain the second sample feature corresponding to the sample data from that record.
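A minimal sketch of such a record-based cache, with hypothetical names: the teacher's features are computed once before student training and looked up by sample identifier afterwards:

```python
import numpy as np

class TeacherFeatureCache:
    """One record per sample: the sample's identifier and its second sample feature."""
    def __init__(self, teacher_fn):
        self.teacher_fn = teacher_fn    # the (slow) second neural network
        self.records = {}

    def precompute(self, samples):
        # Run the teacher once per sample before student training starts.
        for sample_id, data in samples.items():
            self.records[sample_id] = self.teacher_fn(data)

    def lookup(self, sample_id):
        # During student training: fetch the cached second sample feature.
        return self.records[sample_id]

# Toy teacher: a fixed linear projection standing in for the trained network.
proj = np.arange(6, dtype=float).reshape(3, 2)
teacher = lambda x: x @ proj

cache = TeacherFeatureCache(teacher)
cache.precompute({"img_0": np.ones(3), "img_1": np.zeros(3)})
```

The design trades memory for speed: the expensive teacher forward pass happens exactly once per sample, and training iterations only pay for a dictionary lookup.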
In other optional embodiments, the second sample feature extraction module is specifically configured to perform feature extraction on the sample data using the second neural network to obtain the second sample feature.
In this embodiment, feature extraction is performed on the sample data in real time using the second neural network, and the first neural network is trained on the first sample features, also obtained in real time, using the second sample features obtained in real time. The sample data then need not be processed by the second neural network in advance, saving preprocessing time.
In one or more optional embodiments, the loss determination module is configured to determine the first loss based on the distance between the first sample feature and the second sample feature.
Optionally, the loss determination module is specifically configured to obtain the distance between the first sample feature and the second sample feature based on the cosine similarity between them, and to determine the first loss from the obtained distance.
In one or more optional embodiments, the network training unit further includes:
A second loss determination module, for determining a second loss using a loss function based on the first sample feature.
The network training unit is specifically configured to determine a network loss based on the first loss and the second loss, and to train the first neural network based on the network loss.
Although simply letting the first neural network learn the feature space of the second neural network is feasible, additional supervisory signals can be added to further improve the performance of the first neural network; the network loss is then determined from the added supervisory signals together with the distance between the first sample feature and the second sample feature. For example, an additional supervisory signal can be obtained from the first sample feature by incorporating the method used to train the second neural network: if the second neural network was trained with a normalized exponential (softmax) classification algorithm, the additional supervisory signal is the loss function Lsoftmax. This application does not restrict which algorithm the second neural network uses; besides the softmax classification algorithm, it may also be a triplet loss algorithm, a center loss algorithm, an angular margin loss algorithm, and so on, and different algorithms yield different network losses.
Optionally, when determining the network loss based on the first loss and the second loss, the network training unit is configured to obtain the network loss as a weighted sum of the first loss and the second loss.
Optionally, the second loss determination module is specifically configured to determine the second loss using a loss function based on the first sample feature and the annotation feature corresponding to the sample data.
According to another aspect of the embodiments of the present application, an electronic device is provided, including a processor, the processor including the feature extraction apparatus of any of the above embodiments.
According to another aspect of the embodiments of the present application, an electronic device is provided, comprising: a memory, for storing executable instructions;
and a processor, for communicating with the memory to execute the executable instructions so as to complete the operations of any of the above feature extraction methods.
The embodiments of the invention also provide an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring now to Fig. 4, it shows a structural schematic diagram of an electronic device 400 suitable for implementing a terminal device or server of the embodiments of the present application. As shown in Fig. 4, the electronic device 400 includes one or more processors, a communication unit, and the like. The one or more processors are, for example, one or more central processing units (CPU) 401 and/or one or more graphics processors (GPU) 413; a processor can perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 402 or loaded from a storage section 408 into a random access memory (RAM) 403. The communication unit 412 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card.
The processor can communicate with the read-only memory 402 and/or the random access memory 403 to execute executable instructions, connect to the communication unit 412 through a bus 404, and communicate with other target devices through the communication unit 412, thereby completing the operations corresponding to any method provided by the embodiments of the present application, for example: obtaining data to be processed; performing feature extraction on the data using a first neural network to obtain a first feature, where the similarity between the first feature and a second feature is greater than a preset threshold, the second feature being a feature obtained by performing feature extraction on the data to be processed using a second neural network.
In addition, the RAM 403 can also store various programs and data needed for the operation of the apparatus. The CPU 401, ROM 402, and RAM 403 are connected to each other through the bus 404. Where RAM 403 is present, ROM 402 is an optional module. The RAM 403 stores executable instructions, or writes executable instructions into the ROM 402 at runtime, and the executable instructions cause the central processing unit (CPU) 401 to perform the operations corresponding to the above communication method. An input/output (I/O) interface 405 is also connected to the bus 404. The communication unit 412 may be integrated, or may be provided with multiple sub-modules (e.g., multiple IB network cards) linked to the bus.
The I/O interface 405 is connected to the following components: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker, and the like; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read from it can be installed into the storage section 408 as needed.
It should be noted that the architecture shown in Fig. 4 is only one optional implementation. In concrete practice, the number and types of the components in Fig. 4 may be selected, deleted, added, or replaced according to actual needs. Different functional components may also be arranged separately or integrally: for example, the GPU 413 and the CPU 401 may be arranged separately, or the GPU 413 may be integrated on the CPU 401; the communication unit may be arranged separately, or integrated on the CPU 401 or the GPU 413; and so on. These interchangeable embodiments all fall within the protection scope of the present disclosure.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for executing the method shown in the flowchart; the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: obtaining data to be processed; performing feature extraction on the data using a first neural network to obtain a first feature, where the similarity between the first feature and a second feature is greater than a preset threshold, the second feature being a feature obtained by performing feature extraction on the data to be processed using a second neural network. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 409, and/or installed from the removable medium 411. When the computer program is executed by the central processing unit (CPU) 401, the above functions defined in the methods of the present application are performed.
According to another aspect of the embodiments of the present application, a computer-readable storage medium is provided for storing computer-readable instructions which, when executed, perform the operations of any of the above feature extraction methods.
According to another aspect of the embodiments of the present application, a computer program product is provided, including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing any of the above feature extraction methods.
Each embodiment in this specification is described in a progressive manner; each embodiment highlights its differences from the other embodiments, and the same or similar parts of the embodiments can be cross-referenced. Since the system embodiments substantially correspond to the method embodiments, their description is relatively brief, and the relevant parts can refer to the description of the method embodiments.
The methods and apparatus of the present application may be implemented in many ways, for example, by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the method is merely for illustration; the steps of the methods of the present application are not limited to the order specifically described above unless otherwise specifically stated. In addition, in some embodiments, the present application may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present application. Thus, the present application also covers a recording medium storing a program for executing the methods according to the present application.
The description of the present application is given for the purposes of illustration and description; it is not exhaustive and does not limit the application to the disclosed form. Many modifications and variations will be obvious to those of ordinary skill in the art. The embodiments were selected and described in order to better illustrate the principles and practical applications of the application, and to enable those skilled in the art to understand the application and to design various embodiments, with various modifications, suited to particular uses.
Claims (10)
1. A feature extraction method, characterized by comprising:
obtaining data to be processed;
performing feature extraction on the data to be processed using a first neural network to obtain a first feature, wherein the similarity between the first feature and a second feature is greater than a preset threshold, and the second feature is a feature obtained by performing feature extraction on the data to be processed using a second neural network.
2. The method according to claim 1, characterized in that, before the performing feature extraction on the data using the first neural network to obtain the first feature, the method further comprises:
training the first neural network using the second neural network, the second neural network being a pre-trained neural network.
3. The method according to claim 2, characterized in that the training the first neural network using the second neural network comprises:
performing feature extraction on sample data using the first neural network to obtain a first sample feature;
obtaining a second sample feature of the sample data corresponding to the second neural network;
determining a first loss based on the first sample feature and the second sample feature;
training the first neural network using the first loss.
4. The method according to claim 3, characterized in that the obtaining the second sample feature of the sample data corresponding to the second neural network comprises:
obtaining, from memory, the second sample feature of the sample data obtained based on the second neural network.
5. The method according to claim 4, characterized in that, before the obtaining, from memory, the second sample feature of the sample data obtained using the second neural network, the method further comprises:
performing feature extraction on the sample data using the second neural network to obtain the second sample feature;
storing the sample data and its corresponding second sample feature in the memory.
6. A feature extraction apparatus, characterized by comprising:
a data acquisition unit, for obtaining data to be processed;
a feature extraction unit, for performing feature extraction on the data using a first neural network to obtain a first feature, wherein the similarity between the first feature and a second feature is greater than a preset threshold, and the second feature is a feature obtained by performing feature extraction on the data to be processed using a second neural network.
7. An electronic device, characterized by including a processor, the processor including the feature extraction apparatus according to claim 6.
8. An electronic device, characterized by comprising: a memory, for storing executable instructions;
and a processor, for communicating with the memory to execute the executable instructions so as to complete the operations of the feature extraction method according to any one of claims 1 to 5.
9. A computer-readable storage medium for storing computer-readable instructions, characterized in that the instructions, when executed, perform the operations of the feature extraction method according to any one of claims 1 to 5.
10. A computer program product, including computer-readable code, characterized in that, when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the feature extraction method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810779116.8A CN109241988A (en) | 2018-07-16 | 2018-07-16 | Feature extracting method and device, electronic equipment, storage medium, program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109241988A true CN109241988A (en) | 2019-01-18 |
Family
ID=65071943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810779116.8A Pending CN109241988A (en) | 2018-07-16 | 2018-07-16 | Feature extracting method and device, electronic equipment, storage medium, program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109241988A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107247989A (en) * | 2017-06-15 | 2017-10-13 | 北京图森未来科技有限公司 | A kind of neural network training method and device |
CN107977707A (en) * | 2017-11-23 | 2018-05-01 | 厦门美图之家科技有限公司 | A kind of method and computing device for resisting distillation neural network model |
CN108122031A (en) * | 2017-12-20 | 2018-06-05 | 杭州国芯科技股份有限公司 | A kind of neutral net accelerator architecture of low-power consumption |
CN108229652A (en) * | 2017-11-28 | 2018-06-29 | 北京市商汤科技开发有限公司 | Neural network model moving method and system, electronic equipment, program and medium |
CN108229651A (en) * | 2017-11-28 | 2018-06-29 | 北京市商汤科技开发有限公司 | Neural network model moving method and system, electronic equipment, program and medium |
CN108229534A (en) * | 2017-11-28 | 2018-06-29 | 北京市商汤科技开发有限公司 | Neural network model moving method and system, electronic equipment, program and medium |
Legal events: 2018-07-16 — application CN201810779116.8A filed in China; CN109241988A status Pending.
Non-Patent Citations (3)
Title |
---|
ZHENGHUA C. et al.: "Distilling the Knowledge From Handcrafted Features for Human Activity Recognition", IEEE Transactions on Industrial Informatics *
GE Shiming et al.: "Face Recognition Based on Deep Feature Distillation" (基于深度特征蒸馏的人脸识别), Journal of Beijing Jiaotong University *
HUANG Zhenhua: "Theory and Key Technologies of Big Data Information Recommendation" (大数据信息推荐理论与关键技术), Beijing: Science and Technology Literature Press, 31 December 2016 *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109816035A (en) * | 2019-01-31 | 2019-05-28 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN109816035B (en) * | 2019-01-31 | 2022-10-11 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN110009052A (en) * | 2019-04-11 | 2019-07-12 | 腾讯科技(深圳)有限公司 | A kind of method of image recognition, the method and device of image recognition model training |
CN110009052B (en) * | 2019-04-11 | 2022-11-18 | 腾讯科技(深圳)有限公司 | Image recognition method, image recognition model training method and device |
JP6994588B2 (en) | 2019-06-21 | 2022-01-14 | ワン・コネクト・スマート・テクノロジー・カンパニー・リミテッド・(シェンチェン) | Face feature extraction model training method, face feature extraction method, equipment, equipment and storage medium |
WO2020253127A1 (en) * | 2019-06-21 | 2020-12-24 | 深圳壹账通智能科技有限公司 | Facial feature extraction model training method and apparatus, facial feature extraction method and apparatus, device, and storage medium |
JP2021532434A (en) * | 2019-06-21 | 2021-11-25 | ワン・コネクト・スマート・テクノロジー・カンパニー・リミテッド・(シェンチェン) | Face feature extraction model Training method, face feature extraction method, device, equipment and storage medium |
CN110415297B (en) * | 2019-07-12 | 2021-11-05 | 北京三快在线科技有限公司 | Positioning method and device and unmanned equipment |
CN110415297A (en) * | 2019-07-12 | 2019-11-05 | 北京三快在线科技有限公司 | Localization method, device and unmanned equipment |
CN110880018A (en) * | 2019-10-29 | 2020-03-13 | 北京邮电大学 | Convolutional neural network target classification method based on novel loss function |
CN110880018B (en) * | 2019-10-29 | 2023-03-14 | 北京邮电大学 | Convolutional neural network target classification method |
CN112115928A (en) * | 2020-11-20 | 2020-12-22 | 城云科技(中国)有限公司 | Training method and detection method of neural network based on illegal parking vehicle labels |
CN112560978A (en) * | 2020-12-23 | 2021-03-26 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic device and storage medium |
CN112560978B (en) * | 2020-12-23 | 2023-09-12 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113505797A (en) * | 2021-09-09 | 2021-10-15 | 深圳思谋信息科技有限公司 | Model training method and device, computer equipment and storage medium |
CN115061769A (en) * | 2022-08-08 | 2022-09-16 | 杭州实在智能科技有限公司 | Self-iteration RPA interface element matching method and system for supporting cross-resolution |
CN116128768A (en) * | 2023-04-17 | 2023-05-16 | 中国石油大学(华东) | Unsupervised image low-illumination enhancement method with denoising module |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109241988A (en) | Feature extracting method and device, electronic equipment, storage medium, program product | |
TWI773189B (en) | Artificial-intelligence-based object detection method, device, equipment and computer-readable storage medium |
CN106599789B (en) | Video classification recognition method and device, data processing equipment and electronic equipment |
CN109299716A (en) | Neural network training method, image segmentation method, device, equipment and medium |
CN108229296A (en) | Face skin attribute recognition method and device, electronic equipment, storage medium |
CN108229298A (en) | Neural network training and face recognition method and device, equipment, storage medium |
CN110046537A (en) | System and method for dynamic face analysis using a recurrent neural network |
CN106548192B (en) | Neural-network-based image processing method, device and electronic equipment |
CN108960036A (en) | 3D human body pose prediction method, apparatus, medium and equipment |
CN108460338A (en) | Human pose estimation method and device, electronic equipment, storage medium, program |
CN108830288A (en) | Image processing method, neural network training method, device, equipment and medium |
CN108228686A (en) | Method, apparatus and electronic equipment for image-text matching |
CN108960086A (en) | Multi-pose human target tracking method based on generative adversarial network positive sample enhancement |
CN109558832A (en) | Human pose detection method, device, equipment and storage medium |
CN110222700A (en) | SAR image recognition method and device based on multi-scale features and broad learning |
CN108304921A (en) | Convolutional neural network training method, image processing method and device |
CN110147721A (en) | Three-dimensional face recognition method, model training method and device |
CN108416059A (en) | Image description model training method and device, equipment, medium, program |
Lee et al. | Minimizing trajectory curvature of ODE-based generative models |
CN108228700A (en) | Image description model training method, device, electronic equipment and storage medium |
CN108235116A (en) | Feature propagation method and device, electronic equipment, program and medium | |
US20230053911A1 (en) | Detecting an object in an image using multiband and multidirectional filtering | |
CN117152363B (en) | Three-dimensional content generation method, device and equipment based on pre-trained language model |
CN110008961A (en) | Real-time text recognition method, device, computer equipment and storage medium |
CN110457677A (en) | Entity-relationship recognition method and device, storage medium, computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190118 |