CN108280469A - A kind of supermarket's commodity image recognition methods based on rarefaction representation - Google Patents
A supermarket commodity image recognition method based on sparse representation
- Publication number: CN108280469A (application CN201810038033.3A)
- Authority
- CN
- China
- Prior art keywords
- descriptor
- training set
- test set
- key point
- commodity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention discloses a supermarket commodity image recognition method based on sparse representation. Image data of commodities is first collected and divided into a training set and a test set. After the image data of each set is processed, local descriptors are obtained for the key points of the training set and of the test set. A sparse dictionary is trained on the local descriptors of the training-set key points; the trained dictionary sparsely encodes the local descriptors, from which global feature descriptors are computed. An SVM classifier is trained on the class labels of the image data and the global feature descriptors of the training set using the SVM classification method, and the global feature descriptors of the test set are then fed to the classifier to complete commodity recognition. The recognition method of the invention achieves high recognition accuracy on supermarket commodities.
Description
Technical field
The present invention relates to the field of computer vision and intelligent recognition technology, and in particular to a supermarket commodity image recognition method based on sparse representation.
Background art
With the rapid development of computers and related technologies, artificial intelligence is gradually permeating people's lives. The unmanned supermarket, a hot topic within the smart-city concept, offers great convenience, but its operation faces a problem that is hard to avoid: losses caused by the unconscientious behavior of some consumers. With the tremendous progress of machine vision and image processing technology, developing a novel and effective commodity image recognition system based on image processing is an effective means of solving both the theft-loss problem and automatic checkout in unmanned supermarkets. A commodity image recognition system analyzes an input image with image analysis algorithms, determines the commodity category it belongs to, and settles the consumer's bill; in this way automatic commodity recognition and checkout are achieved, and the problem of stolen or damaged goods is also addressed.
Summary of the invention
To solve the above problems, the object of the present invention is to provide a supermarket commodity image recognition method based on sparse representation that can recognize supermarket commodities more accurately.
The technical solution adopted by the present invention to solve the problem is as follows.
A supermarket commodity image recognition method based on sparse representation, comprising the following steps:
A. Collect image data of commodities and divide it into a training set and a test set; segment, merge, and extract each image of the training set and the test set to obtain the complete commodity region of each commodity;
B. Select key points in the corresponding commodity regions of the training set and the test set;
C. Perform feature extraction on each key point to obtain a local descriptor for it;
D. From the local descriptors of the training-set key points, learn an over-complete sparse dictionary from the image data using a sparse dictionary training method;
E. Use the learned sparse dictionary to sparsely represent the local descriptors of the key points of every image in the training set and the test set, and compute the global feature descriptors of the training set and of the test set;
F. Train an SVM classifier on the class labels of the image data and the global feature descriptors of the training set using the SVM classification method;
G. The SVM classifier recognizes commodities from the global feature descriptors of the test set.
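The steps A to G above can be walked through end to end on synthetic data. Everything below is a hypothetical toy stand-in, not the disclosed implementation: the "descriptors" are just image rows, the "dictionary" is the identity matrix, and a nearest-centroid rule stands in for the SVM classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_local_descriptors(image, n_keypoints=5, dim=8):
    # Steps B/C stand-in: take the first rows of the image as "descriptors".
    return image[:n_keypoints, :dim]

def sparse_code(descriptors, dictionary):
    # Step E stand-in: project the descriptors onto the dictionary atoms.
    return descriptors @ dictionary.T

def global_descriptor(codes):
    # Step E: superpose the sparse local codes into one global vector.
    return codes.sum(axis=0)

dictionary = np.eye(8)                       # Step D stand-in
train = [rng.standard_normal((16, 16)) for _ in range(6)]
labels = np.array([0, 0, 0, 1, 1, 1])

feats = np.array([global_descriptor(sparse_code(extract_local_descriptors(im),
                                                dictionary)) for im in train])

# Step F stand-in: per-class mean ("centroid") instead of a real SVM.
centroids = np.array([feats[labels == c].mean(axis=0) for c in (0, 1)])

def classify(image):                         # Step G
    f = global_descriptor(sparse_code(extract_local_descriptors(image), dictionary))
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

pred = classify(train[0])
print(pred in (0, 1))  # prints True
```

The pipeline shape (local descriptors, sparse coding over a dictionary, sum pooling, classification) matches the steps; each stage can be swapped for the real component described later in the text.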
Further, in step A each image of the training set and the test set is segmented using the SLIC-based superpixel segmentation method, yielding small image blocks. Dividing the image into many small blocks makes it convenient to analyse blocks with high similarity.
Further, in step A the images of the training set and the test set are merged using a region-merging method that combines small image blocks of high similarity. After merging, the combined image blocks may still carry some superfluous border regions.
Further, in step A the images of the training set and the test set are processed by extraction, specifically: brightness, color, and texture are taken as the features of the merged image blocks; a global contrast is computed in each feature space to generate a brightness saliency map, a color saliency map, and a texture saliency map; the average of these three maps is taken as the saliency map of the merged block; and the salient part of the map is extracted to obtain the complete commodity region. Segmenting the complete commodity region out of the merged blocks with this saliency-detection-based segmentation algorithm prevents the superfluous border regions from affecting the subsequent training process.
Further, in step B the key points of the corresponding commodity regions of the training set and the test set are selected by running SURF key-point detection on the commodity regions of each set; the detected points serve as the key points of those regions.
Further, in step C feature extraction is performed on each key point, specifically: within the 16*16-pixel neighborhood centred on the key point, an RGB color histogram feature and a SURF texture feature are extracted as the local descriptor of the key point.
Further, in step E the learned sparse dictionary is used to sparsely represent the local descriptors of the key points of every image in the training set and the test set, the sparse representation over the dictionary being computed with the orthogonal matching pursuit algorithm.
Further, in step E the global feature descriptors of the training set and of the test set are computed as follows: once all sparsely represented local descriptors of the training set and the test set are obtained, the feature vectors of the sparse local descriptors are superposed to form the global feature descriptor of each image.
The beneficial effects of the invention are as follows. The method first collects commodity image data and divides it into a training set, used to train the sparse dictionary and the SVM classifier, and a test set, used for the final recognition. From the local descriptors of the training-set key points, an over-complete sparse dictionary is learned with a sparse dictionary training method. The dictionary then sparsely represents the local descriptors of the training-set key points, from which the global feature descriptors of the training set are computed; the SVM classification method trains an SVM classifier on these descriptors and the class labels of the images. The dictionary likewise sparsely represents the local descriptors of the test-set key points to yield, after computation, the test-set global feature descriptors, which are fed to the SVM classifier to complete commodity recognition. The recognition method of the invention achieves high recognition accuracy on supermarket commodities.
Description of the drawings
The invention is further described below with reference to the accompanying drawings and examples.
Fig. 1 is a flowchart of the supermarket commodity image recognition method based on sparse representation of the present invention;
Fig. 2 is a diagram of the recognition framework of the supermarket commodity image recognition method based on sparse representation of the present invention.
Specific embodiments
Referring to Fig. 1 and Fig. 2, the supermarket commodity image recognition method based on sparse representation of the present invention comprises two stages: training and testing. After supermarket commodity images are acquired, the image data is divided into a training set and a test set. Processing the images of each set yields the local descriptors of the training set and of the test set. A sparse dictionary is then trained on the local descriptors of the training set, and the trained dictionary sparsely represents the local descriptors of both sets. Computing over the sparsely represented local descriptors gives the global feature descriptor of the training set and of the test set. The SVM classification method trains an SVM classifier on the class labels of the images and the global feature descriptors of the training set; the classifier then recognizes the global feature descriptors of the test set, completing the recognition of the commodities.
Since images are affected by illumination changes during shooting, a normalization step during image processing reduces the influence of illumination variation on commodity color.
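A minimal sketch of such a normalization. The patent does not specify which scheme is used; per-channel zero-mean, unit-variance scaling is assumed here, which removes a global brightness offset and gain:

```python
import numpy as np

def normalize(image):
    # Assumed scheme: shift each channel to zero mean and unit variance.
    img = image.astype(np.float64)
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True)
    return (img - mean) / (std + 1e-8)   # epsilon guards flat channels

img = np.random.default_rng(1).uniform(0, 255, size=(32, 32, 3))
out = normalize(img)
print(np.allclose(out.mean(axis=(0, 1)), 0.0))  # prints True
```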
In both the training and the testing stage, local descriptors are obtained from the image data of the training set and the test set, and the steps for the two image sets are identical. First, the superpixel segmentation method based on Simple Linear Iterative Clustering (SLIC) segments the image into many small blocks; the region-merging method then combines blocks of high similarity; after region merging, a saliency-detection-based image segmentation algorithm segments out the complete commodity region.
After the commodity region is segmented out, key-point detection is performed. Because the packaging of supermarket commodities varies widely, SURF key-point detection is applied directly to the commodity region, and the 100 strongest key points are chosen as the key points of the image; if fewer than 100 key points are detected, all detected points are used as the key points of the image.
Within the 16*16-pixel neighborhood centred on each key point, a color histogram feature (48 dimensions in total) and a SURF texture feature (128 dimensions) are extracted as the local descriptor of the key point. For the color feature, the invention extracts color histograms in four common color spaces: the RGB color histogram, the YCbCr color histogram, the Lab color histogram, and the hue color histogram. For the texture feature, the invention describes the neighborhood of the point with the SURF algorithm: the segmented image is first sampled at equal density to choose key points; the 16*16-pixel neighborhood centred on a key point is divided into 4*4 subregions; for each point of a subregion the horizontal Haar wavelet response dx and the vertical response dy are computed; and the Haar responses of all points in the subregion are summed to obtain a 4-dimensional vector (Σdx, Σdy, Σ|dx|, Σ|dy|). To describe the texture of the key point more accurately, the summations of dy and |dy| are carried out separately for the cases dx > 0 and dx ≤ 0; the SURF feature computed in this way has 128 dimensions.
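The 128-dimensional texture descriptor can be sketched as follows. Two assumptions are made: the Haar responses are approximated by simple pixel differences, and, to reach the stated 128 dimensions (8 values per 4*4 subregion), the dx sums are also split by the sign of dy, mirroring the stated split of the dy sums by the sign of dx, in the spirit of the extended SURF descriptor:

```python
import numpy as np

def surf128_descriptor(patch):
    # patch: 16x16 grayscale neighborhood centred on a keypoint.
    p = patch.astype(np.float64)
    dx = np.zeros_like(p); dy = np.zeros_like(p)
    dx[:, 1:] = p[:, 1:] - p[:, :-1]     # crude horizontal Haar response
    dy[1:, :] = p[1:, :] - p[:-1, :]     # crude vertical Haar response
    desc = []
    for i in range(0, 16, 4):
        for j in range(0, 16, 4):        # 4x4 grid of 4x4-pixel subregions
            cx = dx[i:i+4, j:j+4].ravel()
            cy = dy[i:i+4, j:j+4].ravel()
            pos = cx > 0                 # split by the sign of dx (as stated)
            desc += [cy[pos].sum(), cy[~pos].sum(),
                     np.abs(cy[pos]).sum(), np.abs(cy[~pos]).sum()]
            posy = cy > 0                # assumed mirror split by sign of dy
            desc += [cx[posy].sum(), cx[~posy].sum(),
                     np.abs(cx[posy]).sum(), np.abs(cx[~posy]).sum()]
    return np.array(desc)                # 16 cells x 8 values = 128 dims

d = surf128_descriptor(np.arange(256).reshape(16, 16))
print(d.shape)  # prints (128,)
```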
After the above processing, the local descriptors of the key points of the training set and of the test set are obtained.
From the local descriptors of the training-set key points, a sparse dictionary training method learns an over-complete sparse dictionary from the training images. The dictionary then sparsely represents the local descriptors of the key points of the training set and of the test set; computing over the sparsely represented local descriptors yields the global feature descriptor of each set. The training-set global feature descriptors are used for the subsequent SVM classifier training; the trained classifier recognizes the test-set global feature descriptors to complete the recognition of the commodities.
The global feature descriptor is computed by directly superposing the feature vectors of the sparse local descriptors.
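The superposition described above is a simple sum over the sparse local codes; a minimal sketch on synthetic codes:

```python
import numpy as np

# One row per keypoint: sparse local descriptors with few nonzeros.
sparse_codes = np.array([[0.0, 2.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 3.0]])
# Sum-pool the local codes into one global feature descriptor.
global_desc = sparse_codes.sum(axis=0)
print(global_desc)  # prints [1. 3. 3.]
```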
The sparse representation model can be expressed as follows. Given a dictionary D = [d1, d2, ..., dK] (D ∈ R^(n×K)), a signal y ∈ R^n can be sparsely represented as a linear combination of the atoms of dictionary D, subject to the condition that the reconstruction error ε of the signal is minimal; the formula is y ≈ Dx with ||y − Dx||_2 < ε, where x is the sparse representation coefficient vector of signal y. In the dictionary learning process, an over-complete dictionary is obtained by iterating sparse coding and dictionary updates under the least-mean-square-error principle. Given the training data Y = [y1, ..., yN] for the dictionary to be learned, the problem is solved as min_{D,X} ||Y − DX||_F^2 subject to ||x_i||_0 ≤ T0 for every i, where X is the sparse coefficient matrix and the number of nonzero sparse representation coefficients of each signal does not exceed T0.
After the over-complete dictionary is obtained, Orthogonal Matching Pursuit (OMP) sparsely represents the local features, producing sparse local features whose characteristic is that most elements of the feature vector are zero and only a few are nonzero.
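A minimal numpy sketch of Orthogonal Matching Pursuit over a random dictionary with l2-normalised atoms. The greedy atom selection and least-squares refit are the standard OMP steps, not code from the patent, and the dictionary here is synthetic:

```python
import numpy as np

def omp(D, y, t0):
    # Greedily pick at most t0 atoms (columns of D, assumed l2-normalised)
    # and least-squares fit y on the selected support.
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(t0):
        idx = int(np.argmax(np.abs(D.T @ residual)))  # best-correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef           # orthogonal refit
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 20))
D /= np.linalg.norm(D, axis=0)            # normalise the atoms
x_true = np.zeros(20); x_true[[3, 11]] = [1.5, -2.0]
y = D @ x_true                            # a 2-sparse synthetic signal
x_hat = omp(D, y, t0=2)
print(np.count_nonzero(x_hat) <= 2)       # prints True
```

The result is sparse by construction: at most T0 coefficients are nonzero, matching the constraint ||x||_0 ≤ T0 above.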
Once the global feature descriptors of the training set and of the test set have been obtained, the SVM classification method trains an SVM classifier on the class labels of the images and the training-set global feature descriptors; the classifier then recognizes the test-set global feature descriptors, completing the recognition of the commodities.
The support vector machine (SVM) is a classical classification technique. Its basic idea is, following the structural risk minimization principle, to find a maximal classification boundary between the differently labelled data, partitioning the classes into different regions. For two-dimensional data the boundary of the linear classifier is a straight line; the supermarket commodity features addressed in the present invention are high-dimensional, so the boundary of the linear classifier sought here is a hyperplane. In the present invention the hyperplane is defined by the classification function f(x) = w^T x + b: when f(x) = 0 the point x lies on the hyperplane, and the other cases indicate the class the commodity belongs to. For any point y in space, let y0 be its orthogonal projection onto the hyperplane, let w be the vector perpendicular to the hyperplane, and let d be the distance between the point y and the hyperplane; then y = y0 + d·(w/||w||), and since y0 lies on the hyperplane, substituting into the hyperplane equation gives d = |w^T y + b| / ||w||. It can be seen from this formula that the goal of the support vector machine is to find the separating surface that maximizes the margin; in essence this is an optimization problem, solved with the Lagrangian method, which trains the SVM to classify the target data.
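The distance formula d = |w^T y + b| / ||w|| can be checked numerically on a small hand-picked example:

```python
import numpy as np

# Hyperplane f(x) = w^T x + b = 0 with w = (3, 4), b = -5,
# and a test point y = (5, 5).
w = np.array([3.0, 4.0])
b = -5.0
y = np.array([5.0, 5.0])
d = abs(w @ y + b) / np.linalg.norm(w)   # |3*5 + 4*5 - 5| / 5
print(d)  # prints 6.0
```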
The present invention was evaluated on supermarket commodities of 11 selected classes, comprising 2664 supermarket commodity images with on average 240 images per class; the test set contains 972 images, on average 88 images per class. The recognition results of this method on the 11 classes of supermarket commodities are shown in the table below. As can be seen from the table, the method of the invention recognizes supermarket commodities accurately, correctly identifying the class of almost every commodity.
When segmenting, merging, and extracting the commodity image data, the method of the present invention uses several algorithms, as follows.
For segmentation, the SLIC-based superpixel method is used. A superpixel is a small region composed of adjacent pixels with similar features; such regions preserve the boundary information of the image rather completely. Compared with traditional image segmentation methods, the superpixel segmentation algorithm implemented in the present invention introduces a constraint that makes the segmented subregions more compact. Relative to pixel-level image processing, operating on superpixels greatly reduces the amount of data to be processed, and the superpixel contours remain complete and compact, so the boundary of the target region can be extracted efficiently. The present invention uses the SLIC superpixel algorithm: for each pixel, the color features of the Lab color model and the pixel coordinates are combined into a 5-dimensional feature vector, a distance metric is defined on this vector, and the image pixels are locally clustered. The concrete procedure is:
1. Convert the image from the RGB to the Lab color model, and for each pixel combine its (L, a, b) values and (x, y) coordinates into a 5-dimensional vector V = [L, a, b, x, y];
2. Initialize the cluster centres: given a user-defined superpixel number K, distribute seed points evenly over the image. For an image with N pixels divided into K superpixels of equal size N/K, the distance between neighboring seed points is approximately S = sqrt(N/K);
3. Reselect each seed point within the n*n neighborhood of the seed: compute the gradient value of every pixel in the neighborhood, find the minimum, and move the seed point to the pixel with the minimal gradient;
4. Assign a class label to each pixel, limiting the search range of SLIC to 2S*2S around each seed;
5. Measure color distance and spatial distance: for each pixel, compute its distance to the seed points as dc = sqrt((lj − li)^2 + (aj − ai)^2 + (bj − bi)^2) and ds = sqrt((xj − xi)^2 + (yj − yi)^2), where dc is the color distance and ds the spatial distance; the final distance metric is D = sqrt((dc/Nc)^2 + (ds/Ns)^2), where Ns is the maximal spatial distance within a class, defined as S = sqrt(N/K) and applicable to every cluster, and Nc is the maximal color distance. Since each pixel computes distances to several seed points, the seed point with the minimal distance is taken as the cluster centre of that pixel.
6. Iterate: repeat steps 3 to 5 until convergence.
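Steps 1 to 6 can be sketched as a simplified SLIC-style clustering. For brevity this sketch uses a grayscale image, so the feature vector is (intensity, x, y) rather than the full (L, a, b, x, y), and the gradient-based seed adjustment of step 3 is omitted; the grid seeding, the 2S*2S search window, and the combined colour/space metric follow the procedure above:

```python
import numpy as np

def slic_like(img, k=4, m=10.0, iters=5):
    # Simplified SLIC-style local k-means on a grayscale image.
    h, w = img.shape
    S = int(np.sqrt(h * w / k))                  # grid step ~ sqrt(N/K)
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([img.astype(np.float64), xs, ys], axis=-1)
    centres = feats[S // 2::S, S // 2::S].reshape(-1, 3)  # grid seeding
    labels = np.zeros((h, w), dtype=int)
    for _ in range(iters):
        dist = np.full((h, w), np.inf)
        for ci, c in enumerate(centres):
            # restrict the search to a 2S x 2S window around the centre
            y0, y1 = int(max(c[2] - S, 0)), int(min(c[2] + S, h))
            x0, x1 = int(max(c[1] - S, 0)), int(min(c[1] + S, w))
            win = feats[y0:y1, x0:x1]
            dc = np.abs(win[..., 0] - c[0])                 # colour distance
            ds = np.hypot(win[..., 1] - c[1], win[..., 2] - c[2])
            d = np.sqrt(dc ** 2 + (ds / S) ** 2 * m ** 2)   # combined metric
            better = d < dist[y0:y1, x0:x1]
            dist[y0:y1, x0:x1][better] = d[better]
            labels[y0:y1, x0:x1][better] = ci
        for ci in range(len(centres)):                      # update centres
            mask = labels == ci
            if mask.any():
                centres[ci] = feats[mask].mean(axis=0)
    return labels

img = np.zeros((16, 16)); img[:, 8:] = 255.0   # toy two-tone image
lab = slic_like(img, k=4)
print(lab.shape)  # prints (16, 16)
```

The compactness weight m plays the role of the colour/space balance in the metric; larger m yields more regular, compact superpixels.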
For merging the small image blocks, the region-merging method proceeds as follows:
1. Determine adjacent regions: traverse all superpixels in the image and find all neighbors of each superpixel. Inter-region adjacency is usually represented with a region adjacency graph (RAG), whose node set N = {N1, N2, ..., Nm} represents the regions, node Ni standing for region Ri in the image;
2. For each superpixel region Ri, determine from the region adjacency matrix all regions Rj adjacent to Ri;
3. Compute the similarity measure Sij between Ri and Rj: compute the normalized RGB three-channel color histogram of each region, obtaining vectors Hi = [hri; hgi; hbi] and Hj = [hrj; hgj; hbj], and compute the Euclidean distance Sij between the color histograms of region Ri and each adjacent region Rj;
4. Merge similar superpixels: if Sij is less than a threshold T, merge the two superpixels.
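Steps 3 and 4 (histogram similarity and thresholded merging) can be sketched as follows; the bin count and the threshold T are illustrative assumptions, not values given in the text:

```python
import numpy as np

def rgb_hist(region_pixels, bins=8):
    # Normalised per-channel histogram, concatenated: H = [hr; hg; hb].
    hs = [np.histogram(region_pixels[:, c], bins=bins, range=(0, 256))[0]
          for c in range(3)]
    h = np.concatenate(hs).astype(np.float64)
    return h / h.sum()

def should_merge(region_a, region_b, threshold=0.2):
    # Merge two adjacent superpixels when the Euclidean distance between
    # their colour histograms falls below the threshold T.
    return np.linalg.norm(rgb_hist(region_a) - rgb_hist(region_b)) < threshold

rng = np.random.default_rng(0)
red_a = np.column_stack([rng.integers(200, 256, 50),
                         rng.integers(0, 30, 50), rng.integers(0, 30, 50)])
red_b = np.column_stack([rng.integers(200, 256, 50),
                         rng.integers(0, 30, 50), rng.integers(0, 30, 50)])
blue = np.column_stack([rng.integers(0, 30, 50),
                        rng.integers(0, 30, 50), rng.integers(200, 256, 50)])
print(should_merge(red_a, red_b), should_merge(red_a, blue))
```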
For the extraction step on the merged regions, a saliency-detection-based image segmentation algorithm is used. Saliency detection picks out the most eye-catching region or target in an image; this requires a concrete quantitative index that highlights the pixels or regions of interest. The saliency value is normally chosen as that index: the larger the saliency value, the more salient the pixel or region, and conversely a non-salient pixel or region is an unimportant part. Whether a pixel or region is salient depends on the pixels or regions adjacent to it, i.e. saliency must be expressed through the difference from the surrounding pixels; this difference is called contrast. Contrast measures the degree of difference between an object or region and its neighborhood; objects with high contrast attract more attention, so contrast computations of this kind are widely used in saliency detection algorithms, and many features can serve to compute contrast, such as color features, texture features, brightness, and frequency features.
The present invention uses brightness, color, and texture as the features of the superpixel blocks, computes the global contrast in each feature space, and takes it as the saliency value of the corresponding feature space.
First, the contrast based on brightness. As the most basic feature of an image, brightness reflects its light level well. The contrast computed from brightness is expressed as C_I(i) = Σ_{j=1..N, j≠i} |I_i − I_j|, where N is the number of superpixel blocks and I_i the brightness value of superpixel block i. On this basis, the spatial positions of the superpixel blocks also have a considerable influence on contrast: the farther the distance, the smaller the influence on the pixel currently computed. Taking spatial position as a parameter, each pixel block is weighted so as to embody the influence of spatial distance; the contrast after spatial-distance weighting is C_I(i) = Σ_{j≠i} exp(−||P_i − P_j||^2 / σ^2) · |I_i − I_j|, where σ takes the empirical value 100 and P_i denotes the position of superpixel block i.
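The spatially weighted brightness contrast can be computed directly from the formula above (with σ = 100 as stated); the block intensities and positions here are toy values:

```python
import numpy as np

# C_i = sum_j exp(-||P_i - P_j||^2 / sigma^2) * |I_i - I_j|
def brightness_contrast(intensities, positions, sigma=100.0):
    I = np.asarray(intensities, dtype=np.float64)
    P = np.asarray(positions, dtype=np.float64)
    diff = np.abs(I[:, None] - I[None, :])               # |I_i - I_j|
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)  # ||P_i - P_j||^2
    w = np.exp(-d2 / sigma ** 2)                         # spatial weighting
    return (w * diff).sum(axis=1)

# Three toy superpixel blocks: the bright one near a dark neighbour
# should come out most salient.
I = [10.0, 200.0, 15.0]
P = [[0, 0], [5, 0], [300, 0]]
C = brightness_contrast(I, P)
print(np.argmax(C))  # prints 1
```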
Second, the contrast based on the color feature. Color is a driving feature for distinguishing pixels; the global contrast of the color feature is computed in the Lab color space as C_c(i) = Σ_{j≠i} sqrt((l_i − l_j)^2 + (a_i − a_j)^2 + (b_i − b_j)^2), where l, a, b are the three channel components of the Lab color space.
Finally, the contrast based on the texture feature. Regions with denser texture attract more attention in an image. Texture features are generally extracted with filters, the most representative method being Gabor filtering, which can extract texture features from the image at different scales and angles. The mathematical expression of the two-dimensional Gabor filter is g(x, y) = exp(−(x'^2 + γ^2 y'^2) / (2σ^2)) · cos(2π x'/λ + φ), where x' = x cos θ + y sin θ, y' = −x sin θ + y cos θ, λ is the wavelength of the filter, θ the direction of the filter, φ the phase, σ the standard deviation of the filter, and γ the aspect ratio of the Gabor filter. The present invention filters the image with four Gabor filters whose directions are 0, 45, 90, and 135 degrees respectively, with wavelength λ = 14, phase φ, and aspect ratio γ = 0.5.
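The two-dimensional Gabor kernel above can be built directly in numpy. λ = 14, γ = 0.5, and the four orientations are taken from the text; the phase φ = 0 and standard deviation σ = 5 are assumptions, since their values are not legible here:

```python
import numpy as np

def gabor_kernel(size, theta, lam=14.0, phi=0.0, sigma=5.0, gamma=0.5):
    # g(x, y) = exp(-(x'^2 + gamma^2 y'^2) / (2 sigma^2)) * cos(2 pi x'/lam + phi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xp = x * np.cos(theta) + y * np.sin(theta)    # x' rotated coordinate
    yp = -x * np.sin(theta) + y * np.cos(theta)   # y' rotated coordinate
    return np.exp(-(xp ** 2 + gamma ** 2 * yp ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xp / lam + phi)

# The four orientations used in the text: 0, 45, 90, 135 degrees.
kernels = [gabor_kernel(15, np.deg2rad(t)) for t in (0, 45, 90, 135)]
print(kernels[0].shape)  # prints (15, 15)
```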
After the global contrasts computed from brightness, color, and texture are obtained, each serves as the saliency value of the corresponding feature space for every superpixel block, generating the corresponding saliency map. The saliency map of the image is then the average of the three feature maps computed above: sm = (sm_I + sm_color + sm_texture) / 3, where sm_I, sm_color, and sm_texture denote the brightness, color, and texture feature maps respectively. The salient part is extracted using the resulting saliency map of the image, completing the segmentation of the image.
The supermarket commodity image recognition method based on sparse representation of the present invention collects image data of supermarket commodities and, through computation and analysis, obtains an over-complete sparse dictionary and an SVM classifier. The sparse dictionary sparsely encodes the local descriptors of the processed test-set images and the global feature descriptors are computed; the SVM classifier then recognizes the test-set global feature descriptors, completing commodity recognition. The method of the invention recognizes supermarket commodities accurately, determines the commodity class, and settles the consumer's bill, thereby achieving automatic commodity recognition and checkout and also solving the problem of stolen or damaged goods.
The above are only preferred embodiments of the present invention; the invention is not limited to the above embodiments. Any implementation that achieves the technical effect of the invention by the same means shall fall within the scope of protection of the invention.
Claims (8)
1. A supermarket commodity image recognition method based on sparse representation, characterized by comprising the following steps:
A. acquiring commodity image data and dividing it into a training set and a test set, and segmenting, merging, and extracting each image of the training set and the test set to obtain the complete commodity region in each image;
B. selecting key points within the corresponding commodity regions of the training set and the test set;
C. performing feature extraction on each key point to obtain a local descriptor of each key point;
D. learning an over-complete sparse dictionary from the local descriptors of the training-set key points by means of a sparse dictionary training method;
E. sparsely representing, with the learned sparse dictionary, the local descriptors of the key points of each image in the training set and the test set, and computing the global feature descriptor of each training-set and test-set image;
F. training an SVM classifier on the image categories and the global feature descriptors of the training set by means of an SVM classification method;
G. identifying the commodities with the SVM classifier according to the global feature descriptors of the test set.
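Steps F and G train and then apply an SVM to the global feature descriptors. As a minimal self-contained stand-in for an SVM library, the following sketch trains a linear SVM by subgradient descent on the hinge loss; the function names and hyperparameters are illustrative and not from the patent:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Tiny linear SVM trained by subgradient descent on the hinge loss.
    Labels y must be in {-1, +1}; lam is the L2 regularization weight."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:   # point inside the margin: hinge loss active
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                        # correctly classified: only weight decay
                w -= lr * lam * w
    return w, b

def svm_predict(w, b, X):
    """Step G: classify global feature descriptors with the trained model."""
    return np.sign(X @ w + b)
```

In practice, a multi-class SVM over the commodity categories (e.g. one-vs-rest) would replace this two-class sketch.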
2. The supermarket commodity image recognition method based on sparse representation according to claim 1, characterized in that: the segmentation of each image of the training set and the test set in step A uses a SLIC-based superpixel segmentation method, which segments the image data into small image blocks.
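The SLIC superpixel step of claim 2 can be sketched as grid-seeded k-means in a joint (intensity, position) space. This is a simplified single-channel version for illustration only (real SLIC works in CIELAB color space and restricts the search window around each seed); names and parameters are assumptions:

```python
import numpy as np

def slic_superpixels(img, n_segments=4, n_iter=5, compactness=10.0):
    """Simplified single-channel SLIC: grid-seeded k-means in (intensity, y, x) space."""
    h, w = img.shape
    step = max(1, int(np.sqrt(h * w / n_segments)))  # seed spacing on a regular grid
    ys, xs = np.meshgrid(np.arange(step // 2, h, step),
                         np.arange(step // 2, w, step), indexing='ij')
    centers = np.stack([img[ys, xs].ravel(),
                        ys.ravel().astype(float),
                        xs.ravel().astype(float)], axis=1)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    feats = np.stack([img.ravel(),
                      yy.ravel().astype(float),
                      xx.ravel().astype(float)], axis=1)
    scale = compactness / step  # weight of spatial distance vs. intensity distance
    for _ in range(n_iter):
        d_color = (feats[:, None, 0] - centers[None, :, 0]) ** 2
        d_space = ((feats[:, None, 1:] - centers[None, :, 1:]) ** 2).sum(-1)
        labels = np.argmin(d_color + scale ** 2 * d_space, axis=1)
        for k in range(len(centers)):  # recompute each cluster center
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(axis=0)
    return labels.reshape(h, w)
```

Each label region is one small image block (superpixel) passed on to the merging step.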
3. The supermarket commodity image recognition method based on sparse representation according to claim 2, characterized in that: the merging of each image of the training set and the test set in step A uses a region-merging method to merge small image blocks of high similarity.
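The region merging of claim 3 can be sketched as a union-find pass over adjacent block pairs, merging those whose mean-feature distance falls below a similarity threshold. The interface (block means, adjacency list, threshold) is an assumption for illustration:

```python
import numpy as np

def merge_similar_blocks(block_means, adjacency, thresh=0.1):
    """Union-find merge of adjacent blocks whose mean-feature distance is below thresh."""
    parent = list(range(len(block_means)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in adjacency:
        dist = np.linalg.norm(np.asarray(block_means[i]) - np.asarray(block_means[j]))
        if dist < thresh:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(block_means))]
```

Blocks sharing a final label form one merged region.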
4. The supermarket commodity image recognition method based on sparse representation according to claim 3, characterized in that: the extraction applied to each image of the training set and the test set in step A comprises the following specific steps: using luminance, color, and texture as the feature spaces of the merged image blocks, performing a global contrast computation in each feature space to generate the corresponding luminance saliency map, color saliency map, and texture saliency map; taking the average of these three saliency maps as the saliency map of the merged image blocks; and extracting the salient part of this saliency map to obtain the complete commodity region.
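The global contrast computation of claim 4 can be sketched for one feature space as follows: the saliency of a block is its summed feature distance to all other blocks. The patent computes one such map per feature space (luminance, color, texture) and averages them; the function below is an illustrative sketch for a single feature space:

```python
import numpy as np

def global_contrast_saliency(block_feats):
    """Saliency of each merged block = sum of feature distances to all other blocks,
    normalized to [0, 1]."""
    f = np.asarray(block_feats, dtype=float)
    d = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)  # pairwise distances
    sal = d.sum(axis=1)
    return sal / sal.max() if sal.max() > 0 else sal
```

A block that differs strongly from all others (e.g. the commodity against a uniform background) gets the highest saliency.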
5. The supermarket commodity image recognition method based on sparse representation according to claim 1, characterized in that: the key points of the corresponding commodity regions of the training set and the test set in step B are chosen by applying the SURF algorithm to detect key points in the commodity regions of the training set and the test set respectively, the detected points serving as the key points of the training-set and test-set commodity regions.
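SURF detects key points by thresholding the determinant of the Hessian (which it approximates with box filters over integral images). The sketch below illustrates that principle with plain finite differences instead of SURF's box filters; it is an illustration of the detection idea, not the SURF implementation:

```python
import numpy as np

def det_hessian_keypoints(img, thresh=0.1):
    """Determinant-of-Hessian response, the principle behind SURF's detector.
    Returns (y, x) coordinates where the response exceeds thresh."""
    gy, gx = np.gradient(img.astype(float))   # first derivatives
    gyy, gyx = np.gradient(gy)                # second derivatives
    gxy, gxx = np.gradient(gx)
    det_h = gxx * gyy - gxy * gyx             # blob-like structures give det > 0
    return np.argwhere(det_h > thresh)
```

In practice one would use a SURF implementation directly (e.g. OpenCV's contrib module) rather than this sketch.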
6. The supermarket commodity image recognition method based on sparse representation according to claim 1, characterized in that: the local descriptor of each key point in step C is obtained by the following specific steps: extracting, within the 16*16 pixel neighborhood centered on the key point, the RGB color histogram feature and the SURF texture features, which together form the local descriptor of the key point.
7. The supermarket commodity image recognition method based on sparse representation according to claim 1, characterized in that: the learned sparse dictionary in step E sparsely represents the local descriptors of the key points of each image in the training set and the test set, wherein the sparse representation over the dictionary is computed with the orthogonal matching pursuit algorithm.
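Orthogonal matching pursuit greedily selects the dictionary atom most correlated with the current residual and refits the coefficients on the selected support by least squares. A minimal numpy sketch, assuming the dictionary columns are unit-norm:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: select k atoms (columns of D, assumed
    unit-norm) and least-squares refit the coefficients each round."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef
```

The resulting k-sparse coefficient vector is the sparse representation of one local descriptor over the learned dictionary.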
8. The supermarket commodity image recognition method based on sparse representation according to claim 1, characterized in that: the global feature descriptors of the training set and the test set are computed in step E by the following specific steps: after the sparse representations of all local descriptors of the training set and the test set are obtained, the feature vectors of the sparse local descriptors are superposed to obtain the global feature descriptor of each image.
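The superposition of claim 8 amounts to sum-pooling the sparse codes of all key points of one image into a single fixed-length vector, sketched below for illustration:

```python
import numpy as np

def global_descriptor(sparse_codes):
    """Superpose (sum-pool) the sparse codes of all key points of one image
    into a single global feature descriptor."""
    return np.sum(np.asarray(sparse_codes, dtype=float), axis=0)
```

The pooled vector has the dictionary's dimensionality regardless of how many key points the image contains, so it can be fed directly to the SVM of steps F and G.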
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810038033.3A CN108280469A (en) | 2018-01-16 | 2018-01-16 | A kind of supermarket's commodity image recognition methods based on rarefaction representation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108280469A true CN108280469A (en) | 2018-07-13 |
Family
ID=62803614
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810038033.3A Pending CN108280469A (en) | 2018-01-16 | 2018-01-16 | A kind of supermarket's commodity image recognition methods based on rarefaction representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108280469A (en) |
- 2018-01-16: application CN201810038033.3A filed, published as CN108280469A (status: Pending)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170213109A1 (en) * | 2014-03-31 | 2017-07-27 | Los Alamos National Security, Llc | Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding |
CN103985130A (en) * | 2014-05-27 | 2014-08-13 | 华东理工大学 | Image significance analysis method for complex texture images |
CN105389550A (en) * | 2015-10-29 | 2016-03-09 | 北京航空航天大学 | Remote sensing target detection method based on sparse guidance and significant drive |
CN105488536A (en) * | 2015-12-10 | 2016-04-13 | 中国科学院合肥物质科学研究院 | Agricultural pest image recognition method based on multi-feature deep learning technology |
CN105844292A (en) * | 2016-03-18 | 2016-08-10 | 南京邮电大学 | Image scene labeling method based on conditional random field and secondary dictionary study |
CN106446909A (en) * | 2016-09-06 | 2017-02-22 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | Chinese food image feature extraction method |
Non-Patent Citations (1)
Title |
---|
Lü Wei, "Fruit Image Classification and Implementation Based on Sparse Representation and Convolutional Neural Networks", China Master's Theses Full-text Database, Agricultural Science and Technology series *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109711399A (en) * | 2018-11-05 | 2019-05-03 | 北京三快在线科技有限公司 | Shop recognition methods based on image, device, electronic equipment |
CN109711399B (en) * | 2018-11-05 | 2021-04-27 | 北京三快在线科技有限公司 | Shop identification method and device based on image and electronic equipment |
CN109977251A (en) * | 2019-03-05 | 2019-07-05 | 武汉摩小超科技有限公司 | A method of building identifies commodity based on RGB histogram feature |
CN112991238A (en) * | 2021-02-22 | 2021-06-18 | 上海市第四人民医院 | Texture and color mixing type food image segmentation method, system, medium and terminal |
CN112991238B (en) * | 2021-02-22 | 2023-08-22 | 上海市第四人民医院 | Food image segmentation method, system and medium based on texture and color mixing |
CN112990062A (en) * | 2021-03-30 | 2021-06-18 | 北京中电兴发科技有限公司 | Method for managing cooperative work of multiple homogeneous intelligent algorithms to improve accuracy |
CN112990062B (en) * | 2021-03-30 | 2022-05-31 | 北京中电兴发科技有限公司 | Method for managing cooperative work of multiple homogeneous intelligent algorithms to improve accuracy |
CN114358795A (en) * | 2022-03-18 | 2022-04-15 | 武汉乐享技术有限公司 | Payment method and device based on human face |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Karlekar et al. | SoyNet: Soybean leaf diseases classification | |
Xie et al. | Efficient and robust cell detection: A structured regression approach | |
Zheng et al. | Fast and robust segmentation of white blood cell images by self-supervised learning | |
He et al. | Accurate text localization in natural image with cascaded convolutional text network | |
US10074006B2 (en) | Methods and systems for disease classification | |
Li et al. | A survey of recent advances in visual feature detection | |
JP6710135B2 (en) | Cell image automatic analysis method and system | |
CN108280469A (en) | A kind of supermarket's commodity image recognition methods based on rarefaction representation | |
Alexe et al. | Measuring the objectness of image windows | |
CN109522908A (en) | Image significance detection method based on area label fusion | |
Soni et al. | Text detection and localization in natural scene images based on text awareness score | |
CN108710916B (en) | Picture classification method and device | |
CN112926652B (en) | Fish fine granularity image recognition method based on deep learning | |
Wu et al. | Scene text detection using adaptive color reduction, adjacent character model and hybrid verification strategy | |
CN107067037B (en) | Method for positioning image foreground by using LL C criterion | |
Liu et al. | A novel color-texture descriptor based on local histograms for image segmentation | |
CN112991238A (en) | Texture and color mixing type food image segmentation method, system, medium and terminal | |
Li et al. | SDBD: A hierarchical region-of-interest detection approach in large-scale remote sensing image | |
Wang et al. | Dermoscopic image segmentation through the enhanced high-level parsing and class weighted loss | |
Sima et al. | Bottom-up merging segmentation for color images with complex areas | |
CN111815582A (en) | Two-dimensional code area detection method for improving background prior and foreground prior | |
Saikumar et al. | Colour based image segmentation using fuzzy c-means clustering | |
Kulwa et al. | Segmentation of weakly visible environmental microorganism images using pair-wise deep learning features | |
Pham et al. | CNN-based character recognition for license plate recognition system | |
Liang et al. | Image segmentation and recognition for multi-class chinese food |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180713 |