CN110458218A - Image classification method and device, sorter network training method and device - Google Patents
- Publication number: CN110458218A (application CN201910702266.3A)
- Authority
- CN
- China
- Prior art keywords
- feature
- network
- training
- group
- classifier
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/253 — Fusion techniques of extracted features
- G06N3/08 — Neural networks; learning methods
- G06V30/194 — Character recognition; references adjustable by an adaptive method, e.g. learning
Abstract
This disclosure relates to an image classification method and apparatus and a classification network training method and apparatus. The image classification method comprises: extracting a first feature map of an image to be processed; splitting the first feature map according to a segmentation coefficient to obtain a group of second feature maps; performing fusion processing on the second feature maps to obtain a first feature vector; and inputting the first feature vector into a classification network to obtain a classification result, according to which the image to be processed is classified. According to embodiments of the disclosure, second feature maps can be obtained from the segmentation coefficient and the first feature map of the image to be processed and fused into a first feature vector, so that richer feature information is captured and classification accuracy is improved. The fusion processing retains the feature information of each second feature map, making the feature information rich, while reducing information redundancy and improving processing efficiency.
Description
Technical field
This disclosure relates to the field of computer technology, and in particular to an image classification method and apparatus and a classification network training method and apparatus.
Background technique
In the related art, when a neural network extracts features of a target object in an image, it treats the target object as a whole. However, factors such as the appearance and posture of the target object, the proportion of the image it occupies, the complexity of the background, and how completely the object appears in the image may affect the extraction result, leading to lower image classification accuracy.
Summary of the invention
The present disclosure proposes an image classification method and apparatus and a classification network training method and apparatus.
According to one aspect of the disclosure, an image classification method is provided, comprising:
extracting a first feature map of an image to be processed;
splitting the first feature map according to at least one segmentation coefficient to obtain, for each segmentation coefficient, a group of second feature maps, each group containing at least one second feature map;
performing fusion processing on the second feature maps in each group to obtain a first feature vector corresponding to each segmentation coefficient; and
inputting each first feature vector into its corresponding classification network to obtain the classification result of each classification network, and classifying the image to be processed according to the classification results of all the classification networks.
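The four claimed steps can be sketched end to end as follows. This is a minimal NumPy illustration under assumed shapes; the split, fusion, and classification functions here are illustrative stand-ins (pooling, concatenation, a random linear classifier, and a majority vote), not the actual networks of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(42)
first_map = rng.standard_normal((8, 12, 6))   # placeholder first feature map (C, H, W)

def split(fmap, n, mu):
    """S12: split along height into n strips of height (1 + (n-1)*mu)/n * H."""
    C, H, W = fmap.shape
    h = round(H * (1 + (n - 1) * mu) / n)
    if n == 1:
        return [fmap]
    step = (H - h) // (n - 1)                 # offset between consecutive strips
    return [fmap[:, k * step:k * step + h, :] for k in range(n)]

def fuse(group):
    """S13: pool each strip to a vector and concatenate (toy fusion processing)."""
    return np.concatenate([g.mean(axis=(1, 2)) for g in group])

def classify(vec, weights):
    """S14: a toy linear classification network followed by argmax."""
    return int(np.argmax(weights @ vec))

coeffs = [(1, 0.0), (2, 1 / 3)]               # two segmentation coefficients
results = []
for n, mu in coeffs:
    f = fuse(split(first_map, n, mu))          # first feature vector for this coefficient
    w = rng.standard_normal((3, f.size))       # per-coefficient classifier over 3 classes
    results.append(classify(f, w))
final = max(set(results), key=results.count)   # combine all classification results by vote
print(len(results), final in (0, 1, 2))        # -> 2 True
```

Note that each segmentation coefficient yields its own feature vector and its own classifier; only the final vote combines them, which mirrors the claim's "classifying according to the classification results of all the classification networks".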
According to the image classification method of the embodiments of the disclosure, second feature maps can be obtained from the segmentation coefficients and the first feature map of the image to be processed and fused into at least one first feature vector, so that at least one category of features of the image to be processed is captured. Richer feature information improves classification accuracy when classifying by features, and the fusion processing both retains the feature information of each second feature map, making the feature information rich, and reduces information redundancy, lowering the consumption of processing resources and improving processing efficiency.
In a possible implementation, performing fusion processing on the second feature maps in each group to obtain the first feature vector corresponding to each segmentation coefficient comprises: performing first dimensionality-reduction processing on at least one second feature map in a second target feature map group, the second target feature map group being any one or more of the groups of second feature maps; and obtaining the first feature vector corresponding to each segmentation coefficient from each group after the first dimensionality reduction.
In a possible implementation, obtaining the first feature vector corresponding to each segmentation coefficient from each group after the first dimensionality reduction comprises: concatenating the second feature maps in each group after the first dimensionality reduction to obtain a third feature map for each group; and performing second dimensionality-reduction processing on each third feature map to obtain the first feature vector corresponding to each segmentation coefficient.
In a possible implementation, the classification networks are distributed over different processors. In this way, the resource consumption of each processor is reduced, the complexity of the classification processing is lowered, and the situation where a single processing resource cannot meet the processing demand is avoided.
In a possible implementation, the segmentation coefficient includes a number of segments and a segmentation overlap: the number of segments indicates how many parts the first feature map is split into, and the segmentation overlap indicates the degree of overlap between the parts after splitting.
According to one aspect of the disclosure, a classification network training method is provided, comprising:
inputting a sample image into a feature extraction network to obtain a first training feature map of the sample image;
splitting the first training feature map according to at least one segmentation coefficient to obtain, for each segmentation coefficient, a group of second training feature maps, each group containing at least one second training feature map;
performing fusion processing on the second training feature maps in each group to obtain a first training feature vector corresponding to each segmentation coefficient; and
training at least one classification network according to the first training feature vector corresponding to each segmentation coefficient, wherein each classification network corresponds to one first training feature vector.
In a possible implementation, training at least one classification network according to the first feature vector corresponding to each segmentation coefficient comprises: inputting each first training feature vector into its corresponding classification network to obtain the classification result of each classification network; determining the network loss corresponding to each classification network according to its classification result and the annotation information of the sample image; and adjusting the network parameters of each classification network and of the feature extraction network according to the network losses corresponding to the classification networks.
In a possible implementation, adjusting the network parameters of each classification network and of the feature extraction network according to the network losses comprises: adjusting the parameters of each classification network according to its own network loss; and adjusting the parameters of the feature extraction network according to the network losses of all the classification networks. In this way, each classification network can be trained independently according to its own network loss, so the training of the classification networks can be completed by multiple processing resources, lightening the load on each processing resource, reducing training complexity, and avoiding the situation where a single processing resource cannot meet the training demand.
In a possible implementation, adjusting the network parameters of the feature extraction network according to the network losses corresponding to the classification networks comprises: computing a weighted sum of the network losses of the classification networks to obtain the network loss of the feature extraction network; and adjusting the parameters of the feature extraction network according to that loss.
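The weighted-sum rule for the feature extraction network's loss can be sketched as below. The weight values and loss values are illustrative assumptions; the patent does not prescribe specific weights.

```python
import numpy as np

def extractor_loss(classifier_losses, weights=None):
    """Network loss of the feature extraction network: a weighted sum of the
    per-classification-network losses (uniform weights if none are given)."""
    losses = np.asarray(classifier_losses, dtype=float)
    if weights is None:
        weights = np.full(len(losses), 1.0 / len(losses))
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights, losses))

# Three classification networks, each adjusted with its own loss; the
# extractor is adjusted with the weighted sum over all three.
per_net = [0.9, 0.6, 0.3]
print(extractor_loss(per_net))                    # uniform weights -> 0.6
print(extractor_loss(per_net, [0.5, 0.3, 0.2]))   # 0.45 + 0.18 + 0.06 = 0.69
```

Because each classification network's own update depends only on its own loss, only this scalar needs to be aggregated across processing resources when the networks are trained on separate processors.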
According to one aspect of the disclosure, an image classification apparatus is provided, comprising:
a first extraction module configured to extract a first feature map of an image to be processed;
a first segmentation module configured to split the first feature map according to at least one segmentation coefficient to obtain, for each segmentation coefficient, a group of second feature maps, each group containing at least one second feature map;
a first fusion module configured to perform fusion processing on the second feature maps in each group to obtain a first feature vector corresponding to each segmentation coefficient; and
a classification module configured to input each first feature vector into its corresponding classification network to obtain the classification result of each classification network, and to classify the image to be processed according to the classification results of all the classification networks.
In a possible implementation, the first fusion module is further configured to: perform first dimensionality-reduction processing on at least one second feature map in a second target feature map group, the second target feature map group being any one or more of the groups of second feature maps; and obtain the first feature vector corresponding to each segmentation coefficient from each group after the first dimensionality reduction.
In a possible implementation, the first fusion module is further configured to: concatenate the second feature maps in each group after the first dimensionality reduction to obtain a third feature map for each group; and perform second dimensionality-reduction processing on each third feature map to obtain the first feature vector corresponding to each segmentation coefficient.
In a possible implementation, the classification networks are distributed over different processors.
In a possible implementation, the segmentation coefficient includes a number of segments and a segmentation overlap: the number of segments indicates how many parts the first feature map is split into, and the segmentation overlap indicates the degree of overlap between the parts after splitting.
According to one aspect of the disclosure, a classification network training apparatus is provided, comprising:
a second extraction module configured to input a sample image into a feature extraction network to obtain a first training feature map of the sample image;
a second segmentation module configured to split the first training feature map according to at least one segmentation coefficient to obtain, for each segmentation coefficient, a group of second training feature maps, each group containing at least one second training feature map;
a second fusion module configured to perform fusion processing on the second training feature maps in each group to obtain a first training feature vector corresponding to each segmentation coefficient; and
a training module configured to train at least one classification network according to the first training feature vector corresponding to each segmentation coefficient, wherein each classification network corresponds to one first training feature vector.
In a possible implementation, the training module is further configured to: input each first training feature vector into its corresponding classification network to obtain the classification result of each classification network; determine the network loss corresponding to each classification network according to its classification result and the annotation information of the sample image; and adjust the network parameters of each classification network and of the feature extraction network according to the network losses corresponding to the classification networks.
In a possible implementation, the training module is further configured to: adjust the parameters of each classification network according to its own network loss; and adjust the parameters of the feature extraction network according to the network losses of all the classification networks.
In a possible implementation, the training module is further configured to: compute a weighted sum of the network losses of the classification networks to obtain the network loss of the feature extraction network; and adjust the parameters of the feature extraction network according to that loss.
According to one aspect of the disclosure, an electronic device is provided, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the above method.
According to one aspect of the disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored; when executed by a processor, the computer program instructions implement the above method.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Other features and aspects of the disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the technical solutions of the disclosure.
Fig. 1 shows a flowchart of an image classification method according to an embodiment of the disclosure;
Fig. 2 shows a flowchart of a classification network training method according to an embodiment of the disclosure;
Fig. 3 shows an application diagram of a classification network training method according to an embodiment of the disclosure;
Fig. 4 shows a block diagram of an image classification apparatus according to an embodiment of the disclosure;
Fig. 5 shows a block diagram of a classification network training apparatus according to an embodiment of the disclosure;
Fig. 6 shows a block diagram of an electronic device according to an embodiment of the disclosure;
Fig. 7 shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed description of embodiments
Various exemplary embodiments, features, and aspects of the disclosure are described in detail below with reference to the accompanying drawings, in which identical reference numerals denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily to scale unless specifically noted. The word "exemplary" herein means "serving as an example, embodiment, or illustration"; any embodiment described as "exemplary" should not be construed as preferred over or advantageous to other embodiments.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of them; for example, "at least one of A, B, and C" may mean any one or more elements selected from the set consisting of A, B, and C.
Numerous specific details are given in the following embodiments to better illustrate the disclosure. Those skilled in the art will understand that the disclosure can equally be implemented without certain of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the disclosure.
Fig. 1 shows a flowchart of an image classification method according to an embodiment of the disclosure. As shown in Fig. 1, the method comprises:
step S11: extracting a first feature map of an image to be processed;
step S12: splitting the first feature map according to at least one segmentation coefficient to obtain, for each segmentation coefficient, a group of second feature maps, each group containing at least one second feature map;
step S13: performing fusion processing on the second feature maps in each group to obtain a first feature vector corresponding to each segmentation coefficient;
step S14: inputting each first feature vector into its corresponding classification network to obtain the classification result of each classification network, and classifying the image to be processed according to the classification results of all the classification networks.
According to the image classification method of the embodiments of the disclosure, second feature maps can be obtained from the segmentation coefficients and the first feature map of the image to be processed and fused into at least one first feature vector, capturing richer feature information and thereby improving classification accuracy when classifying by features. The fusion processing retains the feature information of each second feature map, making the feature information rich, while reducing information redundancy, lowering the consumption of processing resources, and improving processing efficiency.
In a possible implementation, the image classification method may be executed by a terminal device or another processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the other processing device may be a server, a cloud server, etc. In some possible implementations, the method may be realized by a processor invoking computer-readable instructions stored in a memory.
In a possible implementation, in step S11, the extraction of the first feature map of the image to be processed may be performed by a feature extraction network. The feature extraction network may be a deep-learning neural network with multiple network levels, such as a convolutional neural network, or any other network; no limitation is imposed here. The image to be processed may contain one or more target objects, such as people, vehicles, or articles. The image to be processed may be input into the feature extraction network, whose network levels perform feature extraction in sequence; the network levels may include convolutional layers, activation layers, and the like. In an example, after the activation layer, the feature extraction network may output the first feature map of the image to be processed.
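As a toy illustration of one network level of convolution followed by activation, here is a minimal NumPy sketch (a single-channel 3×3 "valid" cross-correlation plus ReLU). The kernel and input are illustrative assumptions; the disclosure's feature extraction network is an unspecified deep network.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' cross-correlation: a toy convolutional layer."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def relu(x):
    """A toy activation layer."""
    return np.maximum(x, 0.0)

img = np.arange(36, dtype=float).reshape(6, 6)   # stand-in for the input image
kernel = np.array([[0., 1., 0.],
                   [1., -4., 1.],
                   [0., 1., 0.]])                 # Laplacian-like kernel
first_feature_map = relu(conv2d_valid(img, kernel))
print(first_feature_map.shape)                    # (4, 4)
```

On this linear-ramp input the Laplacian response is zero everywhere, so the map after ReLU is all zeros; with a real image and learned kernels the output would carry the extracted features.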
In a possible implementation, in step S12, the segmentation coefficient includes a number of segments and a segmentation overlap: the number of segments indicates how many parts the first feature map is split into, and the segmentation overlap indicates the degree of overlap between the parts. In an example, the i-th segmentation coefficient (1 ≤ i ≤ n, where n is the number of segmentation coefficients and i and n are integers) splits the first feature map into i second feature maps, which constitute the i-th group of second feature maps; the height of each second feature map is (1 + (i − 1)μ)/i of the height of the first feature map, where μ is the degree of overlap.
In an example, the number of second feature maps may be 1, i.e. the first feature map is not split and is itself the second feature map.
In an example, the number of second feature maps may be 2, i.e. the first feature map is split into two second feature maps, which may overlap to some extent. The overlapping portion is determined by the degree of overlap between the second feature maps, which may be expressed as the fraction of the area of the first feature map occupied by the overlap region. For example, the first feature map may be split along the height direction into two second feature maps whose width matches that of the first feature map and whose height is 2/3 of the first feature map, i.e. the degree of overlap is 1/3. With three segments, the first feature map may be split into three second feature maps each with a height of 1/2 of the first feature map and a degree of overlap of 1/4, and so on, up to n second feature maps each with a height of (1 + (n − 1)μ)/n of the first feature map and a degree of overlap of μ. The overlap between neighboring second feature maps makes the classification results more robust: when the target object is blurred, has a complex posture, occupies a small proportion of the image, sits in a complex background, or appears incompletely in the image, classification is more difficult; since the second feature maps in a group share a portion of the same features, there is a connection between them, the differences between them are reduced, the training process becomes more stable, and the robustness of the classification results is improved.
In an example, other partitioning schemes may also be used: splitting along the vertical direction, splitting along an annular direction, splitting the first feature map into multiple irregular shapes, etc.; the disclosure imposes no restriction on the partitioning scheme.
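The strip heights in the worked examples above satisfy h = (1 + (n − 1)·μ)/n of the full height H; this relation is reconstructed from the examples (n = 2, μ = 1/3 gives 2/3; n = 3, μ = 1/4 gives 1/2), not quoted from the patent. A quick exact check:

```python
from fractions import Fraction

def strip_height(n, mu):
    """Height of each second feature map as a fraction of the first feature
    map's height, for n segments with degree of overlap mu."""
    mu = Fraction(mu)
    return (1 + (n - 1) * mu) / n

print(strip_height(1, 0))                    # 1   (no split)
print(strip_height(2, Fraction(1, 3)))       # 2/3 (two strips, overlap 1/3)
print(strip_height(3, Fraction(1, 4)))       # 1/2 (three strips, overlap 1/4)

# Coverage check: n strips of height h minus (n - 1) overlap regions of
# height mu must tile the full height exactly.
for n, mu in [(2, Fraction(1, 3)), (3, Fraction(1, 4)), (5, Fraction(1, 6))]:
    h = strip_height(n, mu)
    assert n * h - (n - 1) * mu == 1
```

The coverage identity n·h − (n − 1)·μ = 1 holds for any n and μ under this relation, confirming the strips exactly cover the first feature map.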
In a possible implementation, in step S13, fusion processing may be performed on the second feature maps in each group to obtain the first feature vector corresponding to each segmentation coefficient. Step S13 may include: performing first dimensionality-reduction processing on at least one second feature map in a second target feature map group, the second target feature map group being any one or more of the groups of second feature maps; and obtaining the first feature vector corresponding to each segmentation coefficient from each group after the first dimensionality reduction.
In an example, taking the i-th group of second feature maps, first dimensionality-reduction processing may be performed separately on each of the i second feature maps in the group, yielding a second feature vector for each second feature map (i.e. the group after dimensionality reduction), and the first feature vector corresponding to the i-th segmentation coefficient may be obtained from these second feature vectors. Obtaining the first feature vector corresponding to each segmentation coefficient from each group after the dimensionality reduction comprises: concatenating the second feature maps in each group after the first dimensionality reduction to obtain a third feature map for each group; and performing second dimensionality-reduction processing on each third feature map to obtain the first feature vector corresponding to each segmentation coefficient.
In an example, the second feature maps in each group after the first dimensionality reduction may be concatenated, i.e. the i second feature vectors are concatenated, for example by treating each second feature vector as one feature channel of the third feature map, i.e. the second feature vectors are fused into the third feature map. Dimensionality reduction may then be applied to the third feature map, yielding the first feature vector f_i corresponding to the target classification network. In this way, the first feature vectors f_1, f_2, …, f_n corresponding to the 1st through n-th classification networks can be determined.
In an example, the first dimensionality reduction may also be omitted for the second feature maps in a group: the second feature maps may be concatenated directly to obtain the third feature map, and second dimensionality-reduction processing applied to the third feature map to obtain the first feature vector.
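The two-stage fusion above can be sketched as follows. Global average pooling is an assumed choice for the first dimensionality reduction and a linear projection for the second; the patent leaves the concrete reduction operations open.

```python
import numpy as np

def fuse_group(second_maps, proj):
    """Fuse one group of second feature maps into a first feature vector:
    first reduction  - global-average-pool each (C, H, W) map to a C-vector;
    concatenation    - stack the vectors as channels of a 'third feature map';
    second reduction - linearly project the flattened third feature map."""
    vecs = [m.mean(axis=(1, 2)) for m in second_maps]   # first dimensionality reduction
    third = np.stack(vecs, axis=0)                       # (i, C) third feature map
    return proj @ third.reshape(-1)                      # second dimensionality reduction

rng = np.random.default_rng(1)
group = [rng.standard_normal((8, 4, 6)) for _ in range(2)]  # i = 2 strips, C = 8
proj = rng.standard_normal((5, 2 * 8))                       # projects 16 dims -> 5
f_2 = fuse_group(group, proj)
print(f_2.shape)                                             # (5,)
```

Each segmentation coefficient would use its own projection, producing the vectors f_1 … f_n fed to the corresponding classification networks.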
In a possible implementation, in step S14, each segmentation coefficient may correspond to its own classification network, i.e. the number of classification networks may be n. The first feature vector corresponding to each segmentation coefficient may be input into the classification network corresponding to that segmentation coefficient, and each classification network outputs its own classification result.
In a possible implementation, the n first feature vectors contain feature information at different segmentation granularities; for example, f_1 may contain the feature information of the full first feature map, while f_2 may contain the feature information of the two second feature maps the first feature map is split into, and so on. The n first feature vectors may be input into their corresponding classification networks for processing; each classification network performs feature extraction on its first feature vector to obtain its classification result. For example, the i-th classification network performs feature extraction on the i-th first feature vector f_i to obtain the i-th classification result. In an example, the classification result may be a feature vector representing the category of the image to be processed; the disclosure imposes no restriction on the classification result.
In this example, the image to be processed and a reference image may each be input into the feature extraction network, classification results for both may be obtained from each sorter network, and the feature similarity between the classification results of the image to be processed and the reference image may be determined. For example, if this feature similarity is greater than or equal to a similarity threshold, the target object in the image to be processed and the target object in the reference image belong to the same class; that is, the image to be processed and the reference image may be classified into the same class; otherwise, they may be classified into different classes. Alternatively, the classification result may be used to compare the image to be processed against multiple reference images, for example by determining the feature similarity between the classification result and the reference feature of each reference image, selecting the target reference feature with the highest similarity to the classification result, and classifying the image to be processed into the same class as the reference image corresponding to the target reference feature. The classification results of multiple images to be processed may also each be compared with one or more reference images so as to classify the multiple images, for example by determining the class of each video frame in a segment of video (e.g., classifying video frames that include the target object into one class and video frames that do not into another).
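The similarity-threshold comparison described above can be illustrated with a small sketch. The cosine measure matches the "cosine similarity" the text mentions, while the 0.8 threshold and the plain-list vectors are hypothetical:

```python
# Toy sketch of matching a query classification result against reference
# features by cosine similarity, as described above. The threshold value
# is an assumption for illustration only.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(query, references, threshold=0.8):
    """Return the index of the most similar reference feature, or None if
    even the best similarity falls below the threshold."""
    sims = [cosine_similarity(query, r) for r in references]
    best = max(range(len(sims)), key=sims.__getitem__)
    return best if sims[best] >= threshold else None
```

For example, `classify([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0]])` matches the second reference, while a query with no sufficiently similar reference yields `None`.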
In one possible implementation, the sorter networks are distributed across different processors, for example across multiple GPUs (Graphics Processing Units). There may be many sorter networks, involving many network parameters, so the classification processing is computationally heavy. However, since the processing of each sorter network is independent of the others, the sorter networks can be distributed across multiple processors for classification processing: for example, each GPU may execute the classification processing of only one sorter network, or the sorter networks may be divided into M groups (M being a positive integer), with the classification processing of the M groups executed by M GPUs respectively. In other examples, the processors may also include CPUs (Central Processing Units), FPGAs (Field-Programmable Gate Arrays), MCUs (Microcontroller Units), etc.; the present disclosure places no restriction on the processors.
In this way, the resource occupation of each processor can be reduced, the complexity of the classification processing lowered, and the situation avoided in which a single processing resource cannot satisfy the processing demand.
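The distribution scheme above (one sorter network per GPU, or M groups executed by M GPUs) amounts to a simple assignment of heads to devices. A toy round-robin grouping, with the actual device placement left out, might look like:

```python
# Toy sketch of distributing n sorter networks over M processors, as
# described above. Only the grouping logic is shown; real placement would
# move each network to its device (e.g., a specific GPU).

def assign_to_devices(num_heads, num_devices):
    """Round-robin assignment of sorter-network indices to processors."""
    return {d: [h for h in range(num_heads) if h % num_devices == d]
            for d in range(num_devices)}

print(assign_to_devices(4, 2))  # {0: [0, 2], 1: [1, 3]}
```

With 4 sorter networks and 2 GPUs, each GPU handles an independent group of 2 heads, matching the "M groups on M GPUs" option in the text.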
According to the image classification method of the embodiments of the present disclosure, second feature maps can be obtained from the first feature map of the sample image by means of the division coefficients, and fusion processing applied to the second feature maps, so that richer feature information is obtained while information redundancy is reduced and processing efficiency is improved. Moreover, classifying the image to be processed according to the multiple classification results output by the multiple sorter networks can improve classification accuracy.
In one possible implementation, before the feature extraction network and the sorter networks are used, the feature extraction network may be trained.
Fig. 2 shows a flow chart of the sorter network training method according to an embodiment of the present disclosure. As shown in Fig. 2, the method includes:
In step S21, a sample image is input into the feature extraction network to obtain a first training feature map of the sample image;
In step S22, the first training feature map is split according to at least one division coefficient to obtain, for each division coefficient, a second training feature map group obtained after splitting according to that division coefficient, each second training feature map group including at least one second training feature map;
In step S23, fusion processing is performed on the second training feature maps in each second training feature map group to obtain a first training feature vector corresponding to each division coefficient;
In step S24, at least one sorter network is trained according to the first training feature vectors corresponding to the division coefficients, wherein each sorter network corresponds to one of the first training feature vectors.
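Steps S21 to S24 can be sketched as plain functions over toy list-based "feature maps". The identity extractor, equal-width split, and mean-pool fusion below are simplifying assumptions, not the patent's actual networks:

```python
# Toy sketch of steps S21-S23: extract a feature map, split it per
# division coefficient, and fuse each group into a training feature
# vector. All operations are hypothetical stand-ins.

def extract_features(sample):                      # S21: identity stand-in
    return [float(x) for x in sample]

def split_feature_map(fmap, n_parts):              # S22: equal-width split
    size = len(fmap) // n_parts
    return [fmap[i * size:(i + 1) * size] for i in range(n_parts)]

def fuse(groups):                                  # S23: mean-pool each part
    return [sum(g) / len(g) for g in groups]

sample = [1.0, 2.0, 3.0, 4.0]
fmap = extract_features(sample)
vectors = {n: fuse(split_feature_map(fmap, n)) for n in (1, 2)}
print(vectors)  # {1: [2.5], 2: [1.5, 3.5]}
```

Each resulting vector would then be fed to the sorter network associated with its division coefficient (step S24).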
In one possible implementation, in step S21, the sample image may be input into the feature extraction network, whose multiple network levels perform feature extraction processing in sequence. The network levels may include convolutional layers, activation layers, etc.; in this example, after the activation layer, the feature extraction network may output the first training feature map of the sample image.
In one possible implementation, in step S22, the first training feature map may be split in the manner described above for splitting the image to be processed, obtaining a second training feature map group corresponding to each division coefficient, each group including at least one second training feature map. In other implementations, the splitting may also be performed in other ways, without limitation here.
In one possible implementation, in step S23, fusion processing may be performed on the second training feature maps in each second training feature map group in the manner described above for fusing the second feature maps, to obtain the first training feature vector corresponding to each division coefficient. Fusion processing may also be performed on the second training feature maps in other ways to obtain the first training feature vectors corresponding to the division coefficients, without limitation here.
In one possible implementation, in step S24, the sorter network corresponding to each division coefficient may be trained according to the first training feature vector corresponding to that division coefficient.
In one possible implementation, step S24 may include: inputting each first training feature vector into its corresponding sorter network to obtain the classification result of each sorter network; determining, according to the classification result of each sorter network and the annotation information of the sample image, the network loss corresponding to each sorter network; and adjusting, according to the network loss corresponding to each sorter network, the network parameters of each sorter network and of the feature extraction network.
In one possible implementation, the first training feature vector corresponding to each division coefficient may be input into the sorter network corresponding to that division coefficient, and each sorter network may output a classification result.
In one possible implementation, the network loss corresponding to each sorter network may be determined according to the classification result of that sorter network and the annotation information of the sample image. In this example, the annotation information of the sample image may be the feature similarity between the sample image and a reference image: if the target objects in the sample image and the reference image are the same person, the feature similarity between the sample image and the reference image is annotated as 1; if they are not the same person, it is annotated as 0.
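The 0/1 similarity labels above suggest a simple per-classifier loss. A squared-error stand-in (the patent does not fix the exact loss form, so this is an assumption) could be:

```python
# Toy per-classifier loss: penalize the gap between the predicted
# feature similarity and the annotated similarity (1 = same identity,
# 0 = different identity). Squared error is an illustrative choice.

def network_loss(similarity, label):
    """Squared-error stand-in for the network loss L_i of one sorter network."""
    return (similarity - label) ** 2

# Three heads with predicted similarities and their 0/1 annotations:
losses = [network_loss(s, y) for s, y in [(0.9, 1), (0.2, 0), (0.7, 1)]]
```

Each element of `losses` plays the role of one L_i; the heads are penalized independently, which is what allows the per-head training described below.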
In this example, the features of the reference image may be obtained through the feature extraction network and the at least one sorter network: each sorter network may output a feature of the reference image, so n sorter networks yield n reference image features. The feature similarity between each sorter network's classification result for the sample image and its feature of the reference image may be determined, and the network loss corresponding to each sorter network determined from the difference between this feature similarity and the annotated feature similarity. For example, the i-th sorter network may output a classification result for the sample image (the i-th classification result) and a feature of the reference image (the i-th feature); the feature similarity between the i-th classification result and the i-th feature (e.g., the cosine similarity) may be determined, and the network loss Li corresponding to the i-th sorter network determined from the difference between this feature similarity and the annotation information of the sample image. In this way, the network losses L1, L2, ..., Ln corresponding to the sorter networks may be determined respectively.
In one possible implementation, the network parameters of each sorter network and of the feature extraction network are adjusted according to the above network losses. Here, adjusting the network parameters of each sorter network and of the feature extraction network according to the network loss corresponding to each sorter network includes: adjusting the network parameters of each sorter network according to its corresponding network loss; and adjusting the network parameters of the feature extraction network according to the network losses corresponding to the sorter networks.
In this example, the network parameters of each sorter network may be adjusted according to that sorter network's own network loss, and the adjustments may be mutually independent; that is, a sorter network may adjust its own network parameters using only its own network loss, independently of the network losses and parameters of the other sorter networks. For example, the gradient ∂Li/∂p of the i-th network loss Li with respect to each parameter of the i-th sorter network may be determined (p denoting any network parameter of the i-th sorter network), and each network parameter adjusted by gradient descent in the direction that minimizes the network loss. In this way, the network parameters of each sorter network may be adjusted according to the corresponding network losses L1, L2, ..., Ln respectively.
In this example, there may be multiple sorter networks involving many network parameters; during training, the occupancy of processing resources such as GPUs is high, training complexity is high, and a single processing resource may not be able to carry the training of too many sorter networks. Since the adjustment of the network parameters of each sorter network can be mutually independent, the sorter networks may be trained on multiple processing resources such as GPUs: for example, each GPU may execute the training task of only one sorter network, or the multiple sorter networks may be divided into M groups (M being a positive integer) and the M groups trained by M GPUs respectively. Further, since the training of each group of sorter networks is independent of the others, the groups may be trained in batches: one or more groups of sorter networks may be selected and trained, and after the selected groups complete training, other groups are selected and trained. The present disclosure places no restriction on the number of processing resources used.
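Because each sorter network is adjusted only from its own loss, each head's gradient-descent update is independent of the others, which is exactly what permits placing heads on separate processing resources. A toy update rule under that assumption:

```python
# Toy sketch of independent per-head gradient descent: each sorter
# network's parameters are updated from its own gradients only. Values
# and the learning rate are illustrative.

def sgd_step(params, grads, lr=0.5):
    """One gradient-descent update for a single sorter network."""
    return [p - lr * g for p, g in zip(params, grads)]

heads = [[1.0, 2.0], [3.0, 4.0]]       # parameters of two sorter networks
grads = [[0.5, 0.5], [1.0, -1.0]]      # each head's own loss gradients
heads = [sgd_step(p, g) for p, g in zip(heads, grads)]
print(heads)  # [[0.75, 1.75], [2.5, 4.5]]
```

Since no head's update reads another head's loss or parameters, the list comprehension could just as well run on separate devices, one head per GPU, as the text describes.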
In this example, the network parameters of the feature extraction network may be adjusted according to the network losses L1, L2, ..., Ln corresponding to the sorter networks. The feature extraction network is the neural network at the front end of the sorter networks, and its network loss may be determined from all the network losses above. Adjusting the network parameters of the feature extraction network according to the network losses corresponding to the sorter networks includes: performing a weighted summation of the network losses corresponding to the sorter networks to obtain the network loss of the feature extraction network; and adjusting the network parameters of the feature extraction network according to this network loss. In this example, a summation or weighted summation may be applied to L1, L2, ..., Ln to obtain the network loss L of the feature extraction network, the gradient of L with respect to each network parameter of the feature extraction network may be determined, and each network parameter may then be adjusted by gradient descent in the direction that minimizes the network loss.
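The summation or weighted summation of L1, L2, ..., Ln into the feature extraction network's loss L can be sketched directly (equal weights as the unweighted default; the weight values are illustrative):

```python
# Toy sketch of combining the per-classifier losses into the feature
# extraction network's loss L by (weighted) summation, as described above.

def extractor_loss(head_losses, weights=None):
    """Weighted sum of the losses L_1..L_n; plain sum if no weights given."""
    if weights is None:
        weights = [1.0] * len(head_losses)
    return sum(w * l for w, l in zip(weights, head_losses))

L = extractor_loss([0.5, 0.25, 0.25], [1.0, 2.0, 2.0])  # 0.5 + 0.5 + 0.5
print(L)  # 1.5
```

Gradient descent on L then updates only the shared front-end extractor, while each head keeps its own independent update as sketched earlier.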
In another example, the feature extraction network may be trained first; after its training is complete, the network parameters of the feature extraction network are fixed, and each sorter network is then trained separately.
In one possible implementation, the feature extraction network and each sorter network may undergo multiple rounds of network adjustment in the manner above, that is, training over multiple training cycles, and the training of the feature extraction network and the sorter networks is completed when a training condition is met. The training condition may include a number of training iterations (that is, a number of training cycles); for example, the training condition is met when the number of iterations reaches a preset number. Alternatively, the training condition may concern the magnitude or convergence of the network losses; for example, the training condition is met when the network losses L1, L2, ..., Ln are less than or equal to a loss threshold or converge into a preset interval. After training is complete, the trained feature extraction network and sorter networks are obtained and can be used in the classification processing of images. During classification processing, the feature extraction network and all the sorter networks may be used, to obtain the classification results output by all sorter networks; or the feature extraction network and one or a subset of the sorter networks may be used, to obtain the classification results output by that one or that subset. The present disclosure places no restriction on the training condition.
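The training condition above (an iteration budget, or all losses at or below a threshold) can be expressed as a small predicate; the default budget and threshold below are illustrative assumptions:

```python
# Toy sketch of the training condition described above: stop when the
# iteration budget is exhausted or every head loss is at or below a
# loss threshold (a simple stand-in for convergence into an interval).

def training_done(step, losses, max_steps=1000, loss_threshold=0.01):
    """True once training should stop under either condition."""
    return step >= max_steps or all(l <= loss_threshold for l in losses)
```

For example, `training_done(1000, [1.0])` stops on the iteration budget, while `training_done(5, [0.005, 0.002])` stops because all head losses are under the threshold.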
In this way, each sorter network can be trained according to its own network loss, making the training of the sorter networks mutually independent; the training of the sorter networks can be completed by multiple processing resources, lightening the resource occupation of each processing resource, reducing training complexity, and avoiding the situation in which a single processing resource cannot satisfy the training demand.
In one possible implementation, classification processing may be performed on an image to be processed through the trained feature extraction network and at least one trained sorter network, obtaining the classification result of the image to be processed. In this example, the image to be processed may include one or more target objects, and a target object may be a person, a vehicle, an article, etc.
In one possible implementation, one or a subset of the sorter networks together with the feature extraction network may be used to classify the image to be processed, or all the sorter networks together with the feature extraction network may be used. The trained feature extraction network may extract the first feature map of the image to be processed and split it according to the division coefficients; dimension reduction is applied to the second feature maps obtained after splitting to obtain the second feature vector of each second feature map, and the second feature vectors may further be spliced and reduced in dimension to obtain the first feature vectors. The first feature vectors are then input into the sorter networks for processing to obtain the classification results.
In this example, the classification result is a result in vector form; the feature similarity (e.g., cosine similarity) between the classification result and the feature of a reference image may be determined, and the class of the image to be processed determined according to this feature similarity. For example, if the feature similarity is greater than or equal to a similarity threshold, the image to be processed and the reference image may be classified into one class. Alternatively, the classification result may be used to compare the image to be processed against multiple reference images, for example by determining the feature similarity between the classification result and the reference feature of each reference image, selecting the target reference feature with the highest similarity to the classification result, and classifying the image to be processed into the same class as the reference image corresponding to the target reference feature. The classification results of multiple images to be processed may also each be compared with one or more reference images to classify the multiple images; for example, the class of each video frame in a segment of video may be determined (e.g., video frames including the target object classified into one class and video frames not including it into another).
According to the sorter network training method of the embodiments of the present disclosure, at least one first feature vector can be obtained from the first feature map of the sample image by means of the division coefficients, so that at least one class of features of the sample image is obtained and the neural network acquires feature information at different segmentation granularities. This makes the obtained feature information richer, which helps obtain varied features when the target object's appearance is unclear in the shot, its posture is complex, it occupies a small proportion of the image, the background is complex, or the target object appears incompletely in the image, improving the performance of the neural network. Moreover, training each sorter network according to its own network loss makes the training of the sorter networks mutually independent, so the training can be completed by multiple processing resources, lightening the resource occupation of each processing resource, reducing training complexity, and avoiding the situation in which a single processing resource cannot satisfy the training demand. Further, compared with training only one or a subset of the sorter networks, training all the sorter networks and the feature extraction network can improve the performance of the feature extraction network and each sorter network; during classification of an image to be processed, the feature extraction network together with one, a subset, or all of the sorter networks may then be used, improving the flexibility of using the sorter networks and the classification accuracy.
Fig. 3 shows an application schematic diagram of the sorter network training method according to an embodiment of the present disclosure. As shown in Fig. 3, in the training process of the neural network, a sample image including one or more target objects may be input into the feature extraction network, which may output the first training feature map of the sample image.
In one possible implementation, the first training feature map may be split according to multiple division coefficients. In this example, the first training feature map may be split according to the division coefficient corresponding to each sorter network, obtaining the second training feature maps corresponding to that sorter network. In this example, the number of second training feature maps into which the first training feature map is divided may be 1, that is, the first training feature map is not split and is itself the second training feature map; the first training feature map may also be divided into 2, 3, or 4 second training feature maps.
In one possible implementation, for the second training feature maps of any sorter network, dimension reduction may be applied to each second training feature map to obtain the second training feature vectors. The second training feature vectors may be spliced and reduced in dimension to obtain the first training feature vector that can be input into that sorter network. In this way, the first feature vectors f1, f2, f3, f4 that can be input into the respective sorter networks may be obtained.
In one possible implementation, each sorter network and the feature extraction network may be trained using the first training feature vectors corresponding to the sorter networks. The first training feature vectors f1, f2, f3, f4 above may be input into the respective sorter networks; as shown in the figure, the 1st, 2nd, 3rd, and 4th classification results are obtained, and the network losses L1, L2, L3, L4 corresponding to the sorter networks may be determined according to the classification results and the annotation information of the sample image.
In one possible implementation, the network parameters of each sorter network may be adjusted according to that sorter network's own network loss: the network parameters of the first sorter network may be adjusted according to L1, those of the second sorter network according to L2, those of the third sorter network according to L3, and those of the fourth sorter network according to L4. The adjustments of the network parameters of the sorter networks may be mutually independent, so the sorter networks may be trained by multiple processing resources such as GPUs; for example, each GPU may execute the training task of only one sorter network, that is, the training tasks of the four sorter networks may be executed by 4 GPUs.
In one possible implementation, the network loss L of the feature extraction network may be determined from the network losses L1, L2, L3, L4 corresponding to the sorter networks; for example, a summation or weighted summation may be applied to L1, L2, L3, L4 to obtain the network loss L of the feature extraction network, and the network parameters of the feature extraction network adjusted according to L.
In one possible implementation, the feature extraction network and each sorter network may be trained multiple times in the manner above, obtaining the trained sorter networks and feature extraction network, which may then be used to classify images to be processed. For example, classification processing may be performed on an image to be processed through the feature extraction network and one, a subset, or all of the sorter networks, obtaining one or more classification results of the image to be processed, and the image to be processed classified using the one or more classification results.
In one possible implementation, the sorter network training method and image classification method may be used in the classification processing of video frames; for example, to search for a certain pedestrian in a surveillance video, the class of each video frame in the surveillance video may be determined through the feature extraction network and one or more sorter networks, that is, video frames including the pedestrian are classified into one class and video frames not including the pedestrian into another. The present disclosure places no restriction on the application fields of the sorter network training method and image classification method.
It can be understood that the method embodiments mentioned above in the present disclosure may, without violating principle or logic, be combined with one another to form combined embodiments; owing to limited space, the present disclosure does not elaborate further.
In addition, the present disclosure further provides an image classification apparatus, a sorter network training apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method sections, which are not repeated here.
Those skilled in the art will understand that, in the methods of the specific embodiments above, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Fig. 4 shows a block diagram of the image classification apparatus according to an embodiment of the present disclosure. As shown in Fig. 4, the apparatus includes:
a first extraction module 11, configured to extract the first feature map of an image to be processed;
a first splitting module 12, configured to split the first feature map according to at least one division coefficient to obtain, for each division coefficient, a second feature map group obtained after splitting according to that division coefficient, each second feature map group including at least one second feature map;
a first fusion module 13, configured to perform fusion processing on the second feature maps in each second feature map group to obtain a first feature vector corresponding to each division coefficient;
a classification module 14, configured to input the first feature vectors into the corresponding sorter networks to obtain the classification result corresponding to each sorter network, and to classify the image to be processed according to the classification results obtained from all the sorter networks.
In one possible implementation, the first fusion module is further configured to:
perform a first dimension reduction on at least one second feature map in a second target feature map group, the second target feature map group being any one or more of the second feature map groups;
obtain, according to the second feature map groups after the first dimension reduction, the first feature vector corresponding to each division coefficient.
In one possible implementation, the first fusion module is further configured to:
splice the second feature maps in each second feature map group after the first dimension reduction to obtain third feature maps;
perform a second dimension reduction on the third feature maps to obtain the first feature vectors corresponding to the division coefficients.
In one possible implementation, the sorter networks are distributed across different processors.
In one possible implementation, the division coefficient includes a dividing number and a division overlap degree, the dividing number indicating the number of features obtained after the first feature map is split, and the division overlap degree indicating the degree of overlap between the features after the first feature map is split.
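The dividing number and overlap degree can be illustrated on a 1-D toy feature map; the window-width formula below is one plausible layout, not necessarily the patent's:

```python
# Toy sketch of a division coefficient (dividing number, overlap degree):
# split a 1-D "feature map" into n_parts windows, with `overlap` entries
# shared between neighbouring windows. The layout formula is an assumption.

def split_with_overlap(fmap, n_parts, overlap):
    """Split fmap into n_parts overlapping windows covering the whole map."""
    width = (len(fmap) + (n_parts - 1) * overlap) // n_parts
    stride = width - overlap
    return [fmap[i * stride:i * stride + width] for i in range(n_parts)]

parts = split_with_overlap([1, 2, 3, 4, 5, 6], 2, 2)
print(parts)  # [[1, 2, 3, 4], [3, 4, 5, 6]]
```

With a dividing number of 2 and an overlap degree of 2, the two windows share two entries; an overlap of 0 reduces to the plain equal-width split.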
Fig. 5 shows a block diagram of the sorter network training apparatus according to an embodiment of the present disclosure. As shown in Fig. 5, the apparatus includes:
a second extraction module 21, configured to input a sample image into the feature extraction network to obtain the first training feature map of the sample image;
a second splitting module 22, configured to split the first training feature map according to at least one division coefficient to obtain, for each division coefficient, a second training feature map group obtained after splitting according to that division coefficient, each second training feature map group including at least one second training feature map;
a second fusion module 23, configured to perform fusion processing on the second training feature maps in each second training feature map group to obtain a first training feature vector corresponding to each division coefficient;
a training module 24, configured to train at least one sorter network according to the first training feature vectors corresponding to the division coefficients, wherein each sorter network corresponds to one of the first training feature vectors.
In one possible implementation, the training module is further configured to:
input each first training feature vector into the corresponding sorter network to obtain the classification result of each sorter network;
determine, according to the classification result of each sorter network and the annotation information of the sample image, the network loss corresponding to each sorter network;
adjust, according to the network losses corresponding to the sorter networks, the network parameters of each sorter network and of the feature extraction network.
In one possible implementation, the training module is further configured to:
adjust the network parameters of each sorter network according to its corresponding network loss;
adjust the network parameters of the feature extraction network according to the network losses corresponding to the sorter networks.
In one possible implementation, the training module is further configured to:
perform a weighted summation of the network losses corresponding to the sorter networks to obtain the network loss of the feature extraction network;
adjust the network parameters of the feature extraction network according to the network loss of the feature extraction network.
In some embodiments, the functions or modules of the apparatuses provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for their specific implementation, refer to the descriptions of the method embodiments above, which, for brevity, are not repeated here.
The embodiments of the present disclosure also propose a computer-readable storage medium on which computer program instructions are stored, the computer program instructions implementing the above methods when executed by a processor. The computer-readable storage medium may be a non-volatile computer-readable storage medium.
The embodiments of the present disclosure also propose an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above methods.
The electronic device may be provided as a terminal, a server, or a device in another form.
Fig. 6 is a block diagram of an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to Fig. 6, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the electronic device 800. Examples of such data include instructions for any application or method operated on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 806 supplies power to the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor component 814 can detect the open/closed status of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor component 814 can also detect a change in position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above method.
Fig. 7 is a block diagram of an electronic device 1900 according to an exemplary embodiment. For example, the electronic device 1900 may be provided as a server. Referring to Fig. 7, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions to perform the above method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions; the electronic circuit can execute the computer-readable program instructions to implement various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, where the instructions cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions executed on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies found in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. An image classification method, characterized in that the method comprises:
extracting a first feature map of an image to be processed;
splitting the first feature map according to at least one division coefficient to obtain, for each division coefficient, a second feature map group produced by the corresponding split, the second feature map group comprising at least one second feature map;
performing fusion processing on the second feature maps in each second feature map group to obtain a first feature vector corresponding to each division coefficient;
inputting each first feature vector into its corresponding classification network to obtain a classification result from each classification network, and classifying the image to be processed according to the classification results obtained from all the classification networks.
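The steps of claim 1 can be sketched as follows. This is an illustrative NumPy sketch under assumed specifics — a (C, H, W) feature map, non-overlapping horizontal splits, and global-average-pooling fusion — none of which are mandated by the claim:

```python
import numpy as np

def split_feature_map(fmap, num_parts):
    # Split a (C, H, W) first feature map along the height axis into
    # num_parts second feature maps (division coefficient = num_parts).
    return np.array_split(fmap, num_parts, axis=1)

def fuse(parts):
    # Fusion processing (assumed here: global average pooling per part,
    # then concatenation) yields one first feature vector.
    return np.concatenate([p.mean(axis=(1, 2)) for p in parts])

fmap = np.zeros((8, 6, 4))  # hypothetical first feature map
# One first feature vector per division coefficient; each vector would
# be fed to its own classification network.
vectors = {n: fuse(split_feature_map(fmap, n)) for n in (1, 2, 3)}
```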
2. The method according to claim 1, characterized in that performing fusion processing on the second feature maps in each second feature map group to obtain the first feature vector corresponding to each division coefficient comprises:
performing a first dimension reduction on at least one second feature map in a second target feature map group, the second target feature map group being any one or more of the second feature map groups;
obtaining the first feature vector corresponding to each division coefficient according to each second feature map group after the first dimension reduction.
3. The method according to claim 2, characterized in that obtaining the first feature vector corresponding to each division coefficient according to each second feature map group after the first dimension reduction comprises:
concatenating the second feature maps in each second feature map group after the first dimension reduction to obtain respective third feature maps;
performing a second dimension reduction on the third feature maps to obtain the first feature vector corresponding to each division coefficient.
4. The method according to any one of claims 1-3, characterized in that the classification networks are respectively distributed across different processors.
5. The method according to any one of claims 1-3, characterized in that the division coefficient comprises a division number and a division overlap, the division number indicating the number of features obtained by splitting the first feature map, and the division overlap indicating the degree of overlap between the features obtained by splitting the first feature map.
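As an illustration of a division coefficient that comprises both a division number and a division overlap, the following sketch (an assumption for illustration, not the patented implementation) splits along the height axis into windows that share `overlap` rows with their neighbours:

```python
import numpy as np

def split_with_overlap(fmap, num_parts, overlap):
    # Choose the window height so that num_parts windows, each sharing
    # `overlap` rows with its neighbour, exactly cover the (C, H, W) map.
    h = fmap.shape[1]
    window = (h + (num_parts - 1) * overlap) // num_parts
    stride = window - overlap
    return [fmap[:, i * stride : i * stride + window, :]
            for i in range(num_parts)]

fmap = np.arange(48, dtype=float).reshape(2, 6, 4)
parts = split_with_overlap(fmap, num_parts=2, overlap=2)
# The two windows cover rows 0-3 and 2-5, sharing rows 2-3.
```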
6. A classification network training method, characterized by comprising:
inputting a sample image into a feature extraction network to obtain a first training feature map of the sample image;
splitting the first training feature map according to at least one division coefficient to obtain, for each division coefficient, a second training feature map group produced by the corresponding split, the second training feature map group comprising at least one second training feature map;
performing fusion processing on the second training feature maps in each second training feature map group to obtain a first training feature vector corresponding to each division coefficient;
training at least one classification network according to the first training feature vector corresponding to each division coefficient, wherein each classification network corresponds to one of the first training feature vectors.
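A minimal sketch of the training step in claim 6, under assumed specifics that the claim does not mandate: linear classifiers, softmax cross-entropy, three classes, and fused vectors of length 16·n for division coefficient n. All names and shapes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(vec, label, weights, lr=0.1, num_classes=3):
    # One cross-entropy gradient step for a single linear
    # classification network on its first training feature vector.
    probs = softmax(vec @ weights)
    grad = np.outer(vec, probs - np.eye(num_classes)[label])
    return weights - lr * grad

# One classifier per division coefficient (here 1 and 2); the fused
# vector for coefficient n is assumed to have length 16 * n.
classifiers = {n: rng.normal(size=(16 * n, 3)) * 0.01 for n in (1, 2)}

vec = np.ones(16)  # hypothetical first training feature vector
before = softmax(vec @ classifiers[1])
classifiers[1] = train_step(vec, label=0, weights=classifiers[1])
after = softmax(vec @ classifiers[1])
```

Each classifier is updated from its own loss; the shared feature extraction network would receive the combined (weighted) losses, as described in the training module above.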
7. An image classification apparatus, characterized in that the apparatus comprises:
a first extraction module, configured to extract a first feature map of an image to be processed;
a first splitting module, configured to split the first feature map according to at least one division coefficient to obtain, for each division coefficient, a second feature map group produced by the corresponding split, the second feature map group comprising at least one second feature map;
a first fusion module, configured to perform fusion processing on the second feature maps in each second feature map group to obtain a first feature vector corresponding to each division coefficient;
a classification module, configured to input each first feature vector into its corresponding classification network to obtain a classification result from each classification network, and to classify the image to be processed according to the classification results obtained from all the classification networks.
8. A classification network training apparatus, characterized by comprising:
a second extraction module, configured to input a sample image into a feature extraction network to obtain a first training feature map of the sample image;
a second splitting module, configured to split the first training feature map according to at least one division coefficient to obtain, for each division coefficient, a second training feature map group produced by the corresponding split, the second training feature map group comprising at least one second training feature map;
a second fusion module, configured to perform fusion processing on the second training feature maps in each second training feature map group to obtain a first training feature vector corresponding to each division coefficient;
a training module, configured to train at least one classification network according to the first training feature vector corresponding to each division coefficient, wherein each classification network corresponds to one of the first training feature vectors.
9. An electronic device, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 6.
10. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910702266.3A CN110458218B (en) | 2019-07-31 | 2019-07-31 | Image classification method and device and classification network training method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910702266.3A CN110458218B (en) | 2019-07-31 | 2019-07-31 | Image classification method and device and classification network training method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110458218A true CN110458218A (en) | 2019-11-15 |
CN110458218B CN110458218B (en) | 2022-09-27 |
Family
ID=68484342
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910702266.3A Active CN110458218B (en) | 2019-07-31 | 2019-07-31 | Image classification method and device and classification network training method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110458218B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180253622A1 (en) * | 2017-03-06 | 2018-09-06 | Honda Motor Co., Ltd. | Systems for performing semantic segmentation and methods thereof |
CN108805181A (en) * | 2018-05-25 | 2018-11-13 | 深圳大学 | A kind of image classification device and sorting technique based on more disaggregated models |
CN109255334A (en) * | 2018-09-27 | 2019-01-22 | 中国电子科技集团公司第五十四研究所 | Remote sensing image terrain classification method based on deep learning semantic segmentation network |
CN109447169A (en) * | 2018-11-02 | 2019-03-08 | 北京旷视科技有限公司 | The training method of image processing method and its model, device and electronic system |
CN109829920A (en) * | 2019-02-25 | 2019-05-31 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN109784424A (en) * | 2019-03-26 | 2019-05-21 | 腾讯科技(深圳)有限公司 | A kind of method of image classification model training, the method and device of image procossing |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111242162A (en) * | 2019-12-27 | 2020-06-05 | 北京地平线机器人技术研发有限公司 | Training method and device of image classification model, medium and electronic equipment |
CN111242230A (en) * | 2020-01-17 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Image processing method and image classification model training method based on artificial intelligence |
CN111444819A (en) * | 2020-03-24 | 2020-07-24 | 北京百度网讯科技有限公司 | Cutting frame determining method, network training method, device, equipment and storage medium |
CN111444819B (en) * | 2020-03-24 | 2024-01-23 | 北京百度网讯科技有限公司 | Cut frame determining method, network training method, device, equipment and storage medium |
WO2022029482A1 (en) * | 2020-08-01 | 2022-02-10 | Sensetime International Pte. Ltd. | Target object identification method and apparatus |
CN112132099A (en) * | 2020-09-30 | 2020-12-25 | 腾讯科技(深圳)有限公司 | Identity recognition method, palm print key point detection model training method and device |
CN112258527A (en) * | 2020-11-02 | 2021-01-22 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN112529913A (en) * | 2020-12-14 | 2021-03-19 | 北京达佳互联信息技术有限公司 | Image segmentation model training method, image processing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN110458218B (en) | 2022-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110458218A (en) | Image classification method and device, sorter network training method and device | |
CN109816764A (en) | Image generating method and device, electronic equipment and storage medium | |
CN109800744A (en) | Image clustering method and device, electronic equipment and storage medium | |
CN106339680B (en) | Face key independent positioning method and device | |
CN109618184A (en) | Method for processing video frequency and device, electronic equipment and storage medium | |
CN109522910A (en) | Critical point detection method and device, electronic equipment and storage medium | |
CN109697734A (en) | Position and orientation estimation method and device, electronic equipment and storage medium | |
CN109087238A (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN109784255A (en) | Neural network training method and device and recognition methods and device | |
CN109871883A (en) | Neural network training method and device, electronic equipment and storage medium | |
CN106295515B (en) | Determine the method and device of the human face region in image | |
CN108764069A (en) | Biopsy method and device | |
CN110060262A (en) | A kind of image partition method and device, electronic equipment and storage medium | |
CN110503023A (en) | Biopsy method and device, electronic equipment and storage medium | |
CN110390394A (en) | Criticize processing method and processing device, electronic equipment and the storage medium of normalization data | |
CN109829863A (en) | Image processing method and device, electronic equipment and storage medium | |
CN110298310A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109658401A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109344832A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109801270A (en) | Anchor point determines method and device, electronic equipment and storage medium | |
CN109977847A (en) | Image generating method and device, electronic equipment and storage medium | |
CN110060215A (en) | Image processing method and device, electronic equipment and storage medium | |
CN108985176A (en) | image generating method and device | |
CN109978891A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109934275A (en) | Image processing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||