CN108447061A - Merchandise information processing method, device, computer equipment and storage medium


Info

Publication number
CN108447061A
Authority
CN
China
Prior art keywords
commodity
picture
neural network
deep learning
network model
Prior art date
Legal status
Granted
Application number
CN201810097478.9A
Other languages
Chinese (zh)
Other versions
CN108447061B (en)
Inventor
陈健聪
康平陆
杨新宇
Current Assignee
Shenzhen Axmtec Co ltd
Original Assignee
Shenzhen Axmtec Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Axmtec Co., Ltd.
Priority claimed from application CN201810097478.9A
Publication of CN108447061A
Application granted
Publication of CN108447061B
Legal status: Expired - Fee Related


Classifications

    • G06T 7/11 — Image analysis; Segmentation, edge detection; Region-based segmentation
    • G06N 3/045 — Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
    • G06N 3/084 — Neural networks; Learning methods; Backpropagation, e.g. using gradient descent
    • G06T 7/50 — Image analysis; Depth or shape recovery
    • G06T 7/90 — Image analysis; Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to a merchandise information processing method, apparatus, computer equipment, and storage medium. A color-depth picture of a commodity to be identified, composed of a color image and a depth image, is obtained; the color-depth picture is input into a trained deep-learning neural network model, which identifies the commodity category and commodity location corresponding to the commodity to be identified; and the commodity value information of the commodity is determined from the commodity category and commodity location. Because the deep-learning neural network processes a color image enriched with depth information, it identifies the commodity category and location more accurately, and the commodity value information is then determined from them.

Description

Merchandise information processing method, device, computer equipment and storage medium
Technical field
This application relates to the field of computer technology, and more particularly to a merchandise information processing method, apparatus, computer equipment, and storage medium.
Background technology
With the development of computer technology, computer-based applications have become increasingly widespread. In supermarkets and warehouses, merchandise information must be recorded when settling sales and stocking goods, and when merchandise is sold, its category and value must be determined. As computer technology has advanced, commodity recognition has become increasingly intelligent. Traditional checkout identifies commodities by scanning barcodes, which requires considerable manpower; moreover, barcodes may be missing or unreadable, making recognition slow.
Invention content
In view of the above technical problems, it is necessary to provide a merchandise information processing method, apparatus, computer equipment, and storage medium that can, through a deep-learning neural network model, quickly identify the merchandise information corresponding to a commodity in a color-depth picture.
A merchandise information processing method, the method including: obtaining a color-depth picture of a commodity to be identified, the picture being composed of a color image and a depth image; inputting the color-depth picture into a trained deep-learning neural network model, which identifies the commodity category and commodity location corresponding to the commodity to be identified; and determining the commodity value information of the commodity to be identified according to the commodity category and the commodity location.
In one embodiment, after the step of obtaining the color-depth picture of the commodity to be identified composed of a color image and a depth image, the method further includes: when it is detected that the commodity region contained in the color-depth picture occupies less than a preset threshold proportion of the picture, detecting the human limb region in the color-depth picture; determining a first target region according to the human limb region; and determining a second target region according to the first target region, the area of the second target region being smaller than the area of the first target region.
In one embodiment, the deep-learning neural network model includes a convolutional layer, and the step of inputting the color-depth picture into the trained model to identify the commodity category and commodity location includes: obtaining standard commodity proposal boxes and inputting them into the deep-learning neural network model; obtaining the normalized color-depth picture and downsampling it through the convolutional layer to obtain a downsampled convolutional feature map; performing sliding-window sampling on the convolutional feature map to obtain sampled images, and mapping the sampled images onto the color-depth picture to obtain the depth-sampled image; and dividing the depth-sampled image into segmentation regions according to the standard commodity proposal boxes, then identifying each segmentation region to obtain the corresponding commodity category and location.
In one embodiment, the step of generating the trained deep-learning neural network model includes: obtaining a standard commodity picture set and a standard commodity label set, the label set containing commodity category information and location information, and dividing the picture set into a training data set and a test data set, each standard commodity picture being a color-depth picture containing depth information; training the model on the training data set to obtain a trained model; testing the trained model with the test data set to obtain a recognition result set; determining a test recognition rate from the recognition result set and the standard commodity label set; and, when the test recognition rate on the test data set reaches a preset threshold, taking the trained model as the final trained deep-learning neural network model.
In one embodiment, the deep-learning neural network model includes parameters, and the step of training the model on the training data set includes: updating the parameters of the model according to each standard commodity picture in the training data set; and, when the network has learned every standard commodity picture in the training data set and parameter updating has stopped, obtaining the trained deep-learning neural network model.
A merchandise information processing apparatus, the apparatus including:
a data acquisition module for obtaining a color-depth picture of the commodity to be identified, composed of a color image and a depth image;
a commodity recognition module for inputting the color-depth picture into the trained deep-learning neural network model, which identifies the commodity category and commodity location corresponding to the commodity; and
a commodity value information module for determining the commodity value information of the commodity to be identified according to the commodity category and the commodity location.
In one embodiment, the merchandise information processing apparatus includes:
a position detection module for detecting the human limb region in the color-depth picture when it is detected that the commodity region contained in the picture occupies less than a preset threshold proportion of it;
a first target region determination module for determining a first target region according to the human limb region; and
a second target region determination module for determining a second target region according to the first target region, the area of the second target region being smaller than the area of the first target region.
In one embodiment, the commodity recognition module includes:
a proposal box acquisition unit for obtaining standard commodity proposal boxes and inputting them into the deep-learning neural network model;
a convolutional feature map acquisition unit for obtaining the normalized color-depth picture and downsampling it through the convolutional layer to obtain a downsampled convolutional feature map;
a depth-sampled image acquisition unit for performing sliding-window sampling on the convolutional feature map to obtain sampled images, and mapping the sampled images onto the depth map to obtain the depth-sampled image; and
a commodity recognition unit for dividing the depth-sampled image into segmentation regions according to the standard commodity proposal boxes, and identifying each segmentation region to obtain the corresponding commodity category and location.
A computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the above merchandise information processing method.
A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the above merchandise information processing method.
In the above merchandise information processing method, apparatus, computer equipment, and storage medium, a color-depth picture of the commodity to be identified, composed of a color image and a depth image, is obtained; the color-depth picture is input into the trained deep-learning neural network model, which identifies the commodity category and commodity location corresponding to the commodity to be identified; and the commodity value information of the commodity is determined according to the commodity category and the commodity location. Identifying the color-depth picture with the trained deep-learning neural network yields the commodity category and location in the picture with improved recognition accuracy, so that the commodity value information corresponding to the category is also more accurate.
Description of the drawings
Fig. 1 is an application environment diagram of the merchandise information processing method in one embodiment;
Fig. 2 is a flow diagram of the merchandise information processing method in one embodiment;
Fig. 3 is a flow diagram of the merchandise information processing method in another embodiment;
Fig. 4 is a scene diagram of commodity regions in one embodiment;
Fig. 5 is a flow diagram of the commodity identification step in another embodiment;
Fig. 6 is a flow diagram of the neural network generation step in one embodiment;
Fig. 7 is a flow diagram of the neural network training step in one embodiment;
Fig. 8 is a structural block diagram of the merchandise information processing apparatus in one embodiment;
Fig. 9 is a structural block diagram of the merchandise information processing apparatus in another embodiment;
Fig. 10 is a structural block diagram of the commodity recognition module in one embodiment;
Fig. 11 is a structural block diagram of the merchandise information processing apparatus in a further embodiment;
Fig. 12 is a structural block diagram of the training module in one embodiment;
Fig. 13 is an internal structure block diagram of a computer device in one embodiment.
Specific implementation mode
The application is further described in detail below with reference to the accompanying drawings and embodiments, so that its objects, technical solutions, and advantages may be more clearly understood. It should be appreciated that the specific embodiments described herein are intended only to explain the application, not to limit it.
The merchandise information processing method provided by this application can be applied in the application environment shown in Fig. 1, in which terminal 102 communicates with server 104 over a network. Terminal 102 obtains a color-depth picture of the commodity to be identified, composed of a color image and a depth image; the trained deep-learning neural network model identifies the picture, yielding the commodity category and commodity location corresponding to the commodity to be identified; and the commodity value information of the commodity is determined from the identified category and location. Alternatively, server 104 may receive the color-depth picture of the commodity to be identified sent by the terminal; the deep-learning neural network model on server 104 identifies the picture, yielding the commodity category and location corresponding to the commodity, from which the commodity value information is determined. Terminal 102 may be, but is not limited to, a personal computer, laptop, smartphone, tablet computer, or portable wearable device; server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a merchandise information processing method is provided. Taking its application to the terminal in Fig. 1 as an example, the method includes the following steps:
Step S202: obtain a color-depth picture of the commodity to be identified, composed of a color image and a depth image.
Specifically, a color-depth picture is an image containing both a color image and a depth image. A depth image takes the distance (depth) from the image collector to each point in the scene as its pixel values, and thus directly reflects the geometry of the visible surfaces in the scene. The color-depth (RGB-D: Red Green Blue-Depth) picture captured by the shooting device contains the commodity region, which describes the features of the commodity to be identified, including its shape and texture features.
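The composition of an RGB-D picture described above can be sketched as pairing each RGB pixel with the depth value at the same coordinates, giving a four-channel image. The following is a minimal illustration of that data layout (the patent discloses no code; the list-of-rows representation and millimetre depth units are assumptions for the example):

```python
def make_rgbd(color, depth):
    """Combine an RGB image and a depth map of the same size into RGB-D pixels.

    color: list of rows of (r, g, b) tuples
    depth: list of rows of depth values (distance from the sensor, e.g. in mm)
    """
    assert len(color) == len(depth) and len(color[0]) == len(depth[0])
    return [
        [(r, g, b, d) for (r, g, b), d in zip(color_row, depth_row)]
        for color_row, depth_row in zip(color, depth)
    ]

# A 1x2 toy image: two pixels, at depths 500 mm and 1200 mm.
rgbd = make_rgbd([[(255, 0, 0), (0, 255, 0)]], [[500, 1200]])
```

Each resulting pixel carries both color information (shape, texture) and depth information (geometry), which is what allows the network below to describe the commodity more completely than a plain color image.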
Step S204: input the color-depth picture into the trained deep-learning neural network model, which identifies the commodity category and commodity location corresponding to the commodity to be identified.
The trained deep-learning neural network model is obtained by training on a large number of color-depth pictures carrying merchandise information. By learning from many such pictures, the model learns the features of commodities in each category, so that it can extract the features in an image accurately and quickly.
Specifically, the color-depth picture of the commodity to be identified is input into the trained deep-learning neural network model; the network extracts the commodity features from the picture, determines the commodity category from those features, and determines the commodity location from the correspondence between commodity category and location.
In one embodiment, the color-depth picture of the commodity to be identified is repeatedly segmented according to preset segmentation rules; the segmentation regions produced by the various partitioning schemes are each identified to obtain corresponding recognition results; the best recognition result among them is taken as the target recognition result, and the commodity category and location are determined from it.
Step S206: determine the commodity value information of the commodity to be identified according to the commodity category and commodity location.
Specifically, commodity value information is information indicating the value of a commodity, such as its selling price or production cost. Different commodity categories and locations correspond to different commodity value information; once the category and location are determined, the value information is determined from the correspondence among commodity category, commodity location, and commodity value information.
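The correspondence between (category, location) and value information described above amounts to a lookup table. A minimal sketch follows; all product names, locations, and prices here are invented for illustration and do not come from the patent:

```python
# Hypothetical price table: (category, location) -> unit price.
# The same category may carry different value information in different locations,
# e.g. a promotional bin versus a regular shelf.
PRICE_TABLE = {
    ("cola_330ml", "shelf_A"): 3.0,
    ("cola_330ml", "promo_bin"): 2.5,
    ("instant_noodles", "shelf_B"): 4.5,
}

def commodity_value(category, location, table=PRICE_TABLE):
    """Return the value information for a recognised commodity, or None if unknown."""
    return table.get((category, location))

price = commodity_value("cola_330ml", "promo_bin")  # discounted location
```

A real system would likely back this table with a product database rather than an in-memory dict; the point is only that value information is keyed on both category and location.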
In the above merchandise information processing method, the color-depth picture of the commodity to be identified is input into the trained deep-learning neural network model, which identifies it and yields the commodity category and location, from which the commodity value information is determined. The color-depth picture contains both depth information and color information, so the commodity described jointly by the two is more accurate, and the deep-learning neural network model can extract more accurate commodity features. With this rich, accurate information and a fast, accurate network model, the identified commodity category is more accurate, and the commodity value information determined from the more accurate category and location is likewise more accurate.
As shown in Fig. 3, in one embodiment, the method further includes, after step S202:
Step S208: when it is detected that the commodity region contained in the color-depth picture occupies less than a preset threshold proportion of the picture, detect the human limb region in the color-depth picture.
Specifically, the color-depth picture to be identified is preprocessed. Because inconsistent shooting angles and shooting distances cause the proportion of the picture occupied by the commodity region to vary, the color-depth picture is localized before recognition. The proportion of the commodity region within the whole picture is computed; when this proportion is below the preset threshold, the human limbs in the color-depth picture are detected so as to determine the human limb region.
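The area-proportion check described above can be sketched as follows. The patent does not specify the threshold value; 0.2 below is an arbitrary placeholder, and the pixel-count interface is an assumption for illustration:

```python
def commodity_area_ratio(region_pixels, image_width, image_height):
    """Fraction of the whole picture occupied by the commodity region."""
    return region_pixels / float(image_width * image_height)

def needs_limb_based_relocation(region_pixels, width, height, threshold=0.2):
    # When the commodity occupies too small a fraction of the frame (e.g. shot
    # from far away), fall back to detecting the human limb region and
    # re-localizing the commodity around it (steps S210/S212).
    return commodity_area_ratio(region_pixels, width, height) < threshold

# A 10,000-pixel commodity region in a 640x480 picture covers only ~3.3%,
# so limb-based relocation would be triggered.
trigger = needs_limb_based_relocation(10_000, 640, 480)
```

The actual limb detector is a separate component the patent leaves unspecified; this sketch covers only the gating decision.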
Step S210: determine a first target region according to the human limb region.
Step S212: determine a second target region according to the first target region, the area of the second target region being smaller than the area of the first target region.
Specifically, the first target region is an initial localization region whose area is larger than that of the commodity region and which contains the commodity region. The first target region is determined from the human limb region, and the second target region is then determined from the first target region. The second target region is a relocated region that still contains the commodity region, with an area smaller than that of the first target region. Localizing the commodity through the position of the human limbs reduces localization error.
In one embodiment, the region obtained by the first localization is enlarged, and the second target region is determined from the enlarged first target region.
As shown in Fig. 4, the first target region is region a, the second target region is region b, and the commodity region is region c. The area of region a is larger than that of region b, and the area of region c is smaller than that of region b.
As shown in Fig. 5, in one embodiment, step S204 includes:
Step S2042: obtain standard commodity proposal boxes, and input them into the deep-learning neural network model.
Specifically, at least one preset standard commodity proposal box is obtained and input into the deep-learning neural network model. Standard commodity proposal boxes are custom-defined boxes with multiple different aspect ratios; the aspect ratio of a standard commodity proposal box may be determined from the resolution of the depth image, or from the resolution of the sampled image.
Step S2044: obtain the normalized color-depth picture, and downsample it through the convolutional layer to obtain a downsampled convolutional feature map.
Specifically, normalization means processing all color-depth pictures to the same resolution. Downsampling the color-depth picture through the convolutional layer means extracting features from it through convolution operations to obtain the convolutional feature image. For example, every picture is resized to a resolution of 416 × 416 and then downsampled 32-fold through the convolutional layers, yielding a 13 × 13 convolutional feature map.
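The 416 × 416 → 13 × 13 arithmetic in the example above is simply the normalized resolution divided by the downsampling stride. A one-function sketch (the specific numbers come from the patent's example; other resolutions are assumed to be valid as long as they are multiples of the stride):

```python
def conv_feature_map_size(resolution, stride):
    """Spatial size of the convolutional feature map after stride-fold downsampling."""
    assert resolution % stride == 0, "normalize the picture to a multiple of the stride"
    return resolution // stride

# Normalizing every picture to 416x416 and downsampling 32x yields a 13x13 map.
size = conv_feature_map_size(416, 32)
```

This is why normalization matters: with a fixed 32-fold stride, only inputs whose side length is a multiple of 32 produce a clean integer feature-map size.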
Step S2046: perform sliding-window sampling on the convolutional feature map to obtain sampled images, and map the sampled images onto the color-depth picture to obtain the depth-sampled image.
Specifically, sliding-window sampling sets a window and samples the image by sliding the window across it. A sliding window is slid over the convolutional feature map to produce the sampled images, which are then mapped onto the color-depth picture to obtain the depth-sampled image.
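The sliding-window sampling above can be sketched by enumerating the window positions on the feature map. The window size and step below are arbitrary illustrative choices, not values disclosed by the patent:

```python
def sliding_windows(feature_size, window, step):
    """Top-left coordinates of every window position on a square feature map."""
    positions = []
    for y in range(0, feature_size - window + 1, step):
        for x in range(0, feature_size - window + 1, step):
            positions.append((x, y))
    return positions

# A 3x3 window slid with step 1 over a 13x13 feature map visits 11x11 positions;
# each position corresponds to a patch that is later mapped back onto the
# full-resolution color-depth picture.
positions = sliding_windows(13, 3, 1)
```

Mapping a window back to the original picture is a coordinate scaling by the downsampling stride (e.g. 32), which this sketch omits.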
Step S2048: divide the depth-sampled image into segmentation regions according to the standard commodity proposal boxes, and identify each segmentation region to obtain the corresponding commodity category and location.
Specifically, the trained deep-learning neural network model identifies, in the depth-sampled image, the commodity region corresponding to each standard commodity proposal box, yielding a commodity recognition result per proposal box. A custom algorithm then selects one of the standard commodity proposal boxes, based on the per-box recognition results, as the target commodity proposal box; its recognition result is taken as the final commodity category, and the commodity location is determined from that category. By setting multiple standard commodity proposal boxes, recognition results are obtained for multiple commodity regions, and choosing the best result among them as the final one improves recognition accuracy.
In one embodiment, the commodity category and location are determined from the recognition probability of each standard commodity proposal box's result; for example, the result of the proposal box with the highest recognition probability is chosen as the identified commodity category.
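The highest-probability selection just described is an argmax over the per-proposal-box results. A minimal sketch (the tuple layout and the example detections are assumptions for illustration):

```python
def best_proposal(results):
    """Select the result with the highest recognition probability.

    results: list of (proposal_box, category, probability) triples,
             one per standard commodity proposal box.
    """
    return max(results, key=lambda r: r[2])

detections = [
    ("box_1:1", "apple", 0.62),
    ("box_2:1", "apple", 0.91),   # highest-confidence proposal wins
    ("box_1:2", "orange", 0.40),
]
box, category, prob = best_proposal(detections)
```

A production detector would typically also suppress overlapping boxes (non-maximum suppression), which the patent's "custom algorithm" leaves unspecified.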
As shown in Fig. 6, in one embodiment, the merchandise information processing method further includes:
Step S214: obtain a standard commodity picture set and a standard commodity label set, and divide the picture set into a training data set and a test data set, each standard commodity picture being a color-depth picture containing depth information.
Specifically, the standard commodity picture set is a set composed of multiple color-depth pictures containing depth information. It may be an image data set crawled from the network, an image data set captured directly from a shooting device, or a combination of the two. The standard commodity picture set is divided into a training data set, used to train the deep-learning neural network model, and a test data set, used to test the trained model.
Step S216: train the deep-learning neural network model on the training data set to obtain the trained model.
Specifically, the training data set is input into the deep-learning neural network model, which automatically trains on each picture in the set, yielding the trained deep-learning neural network model.
Step S218: test the trained deep-learning neural network model using the test data set to obtain a recognition result set.
Specifically, each picture of the test data set is input into the trained deep-learning neural network model, which identifies it and produces a recognition result; the recognition results for all pictures form the recognition result set.
Step S220: determine the test recognition rate from the recognition result set and the standard commodity label set.
Specifically, using the correspondence between standard commodity labels and pictures, each recognition result is judged correct or incorrect; the number of correctly recognized pictures divided by the total number of test pictures gives the test recognition rate, which measures the recognition capability of the trained deep-learning neural network model.
Step S222: when the test recognition rate on the test data set reaches a preset threshold, take the trained deep-learning neural network model as the final trained model.
Specifically, if the test recognition rate reaches the preset threshold, the recognition capability of the trained deep-learning neural network model meets expectations, and the trained model is used directly as the final trained model. If the test recognition rate does not reach the preset threshold, the model's recognition capability does not meet expectations; its parameters must be adjusted and the model retrained.
A neural network model trained on a large number of labeled color-depth pictures can quickly and accurately extract the feature set in a picture, and the commodity category determined from that feature set is more accurate.
As shown in fig. 7, in one embodiment, step S216 includes:
Step S2162 updates deep learning neural network model according to each standard merchandise picture in training data set Parameter.
Specifically, during training deep learning nerve net, each standard merchandise figure of learning training data acquisition system The feature of piece is different since the image content of different pictures is not quite identical in learning process, therefore when extracting feature The feature that picture extracts is inconsistent, needs to be weighted the feature extracted so that final recognition result meets pre- Phase result.Therefore feature weight can be constantly adjusted in learning process, that is, update the ginseng of deep learning neural network model Number.
Step S2164: when the deep learning neural network has learned every standard commodity picture in the training data set, stop updating the parameters of the deep learning neural network model to obtain the trained deep learning neural network model.
Specifically, once the deep learning neural network has learned every standard commodity picture in the training data set, the parameters of the model are no longer updated; after the parameters are fixed, training is finished and the trained deep learning neural network model is obtained. Adjusting the parameters of the deep learning neural network helps it recognize, across the various scenes in which the color-depth picture of a commodity may be captured, the commodity to be identified, improving the recognition accuracy of the model.
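The stopping rule above — update once per training picture, then freeze the parameters — can be sketched as a single pass over the training set. `update` is a stand-in for the real gradient step; all names here are assumptions for illustration.

```python
def train_one_pass(params, training_set, update):
    """Apply `update` once for each training sample, then freeze the
    parameters and return them: training is finished after every
    standard commodity picture has been learned."""
    for sample in training_set:
        params = update(params, sample)
    return params  # no further updates after this point
```
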
As shown in Figure 8, in one embodiment, a commodity information processing apparatus 200 includes:
Data acquisition module 202, configured to obtain the color-depth picture of the commodity to be identified, composed of a color image and a depth picture.
Commodity identification module 204, configured to input the color-depth picture into the trained deep learning neural network model and identify the commodity category and commodity location corresponding to the commodity.
Commodity value information module 206, configured to determine the commodity value information of the commodity to be identified according to the commodity category and the commodity location.
As shown in Figure 9, in one embodiment, the commodity information processing apparatus 200 includes:
Position detection module 208, configured to detect the human limb position region in the color-depth picture when the commodity region contained in the color-depth picture is detected to occupy less than a preset threshold proportion of the color-depth picture.
First target region determination module 210, configured to determine the first target region according to the human limb position region.
Second target region determination module 212, configured to determine the second target region according to the first target region, the region area of the second target region being smaller than the region area of the first target region.
As shown in Figure 10, in one embodiment, the commodity identification module 204 includes:
Commodity suggestion box acquisition unit 2042, configured to obtain the standard commodity suggestion boxes and input them into the deep learning neural network model.
Convolution feature map acquisition unit 2044, configured to obtain the normalized color-depth picture, downsample the normalized color-depth picture through the convolutional layer, and obtain the downsampled convolution feature map.
Depth-sampled image acquisition unit 2046, configured to perform sliding-window sampling on the convolution feature map to obtain sampled images, and to map the sampled images into the depth map to obtain the depth-sampled image.
Commodity identification unit 2048, configured to divide the depth-sampled image into regions according to the standard commodity suggestion boxes to obtain the corresponding segmented regions, and to identify the segmented regions to obtain the corresponding commodity category and commodity location.
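The sliding-window sampling and the mapping back to the depth map might be sketched as follows, assuming the convolutional layers downsample the picture by a fixed stride. The window size, step, and stride of 16 are illustrative assumptions, not values specified in this application.

```python
def sliding_windows(feat_w, feat_h, win=3, step=1):
    """Top-left corners of every win x win sliding window that fits
    on a feat_w x feat_h convolution feature map."""
    return [(x, y)
            for y in range(0, feat_h - win + 1, step)
            for x in range(0, feat_w - win + 1, step)]

def map_to_depth(window_xy, stride=16):
    """Project a feature-map coordinate back to depth-map pixels,
    assuming the convolutional layers downsampled by `stride`."""
    x, y = window_xy
    return (x * stride, y * stride)
```

Because each feature-map cell corresponds to a stride-sized patch of the input, a window at feature coordinate (2, 1) lands at pixel (32, 16) of the depth map under a stride of 16.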
As shown in Figure 11, in one embodiment, the commodity information processing apparatus 200 further includes:
Picture set acquisition module 214, configured to obtain the standard commodity picture set and the standard commodity label set, the label set containing commodity category information and location information, and to divide the standard commodity picture set into a training data set and a test data set; each standard commodity picture is a color-depth picture containing depth information.
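The division of the labelled picture set into training and test subsets might look like the following sketch. The 80/20 ratio and the fixed seed are assumptions for illustration; the application does not specify a split ratio.

```python
import random

def split_dataset(pictures, train_ratio=0.8, seed=0):
    """Shuffle the labelled pictures deterministically, then split them
    into a (training, test) pair of disjoint subsets."""
    items = list(pictures)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]
```

A deterministic seed keeps the split reproducible, so the test recognition rate measured later is comparable across training runs.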
Training module 216, configured to obtain the trained deep learning neural network model by training on the training data set.
Test module 218, configured to test the trained deep learning neural network model with the test data set to obtain a recognition result set.
Recognition rate computation module 220, configured to determine the test recognition rate according to the recognition result set and the standard commodity label set.
Model determination module 222, configured to take the deep learning neural network model obtained after training as the trained deep learning neural network model when the test recognition rate on the test data set reaches the preset threshold.
As shown in Figure 12, in one embodiment, the training module 216 includes:
Parameter update unit 2162, configured to update the parameters of the deep learning neural network model according to each standard commodity picture in the training data set.
Training unit 2164, configured to stop updating the parameters of the deep learning neural network model once the deep learning neural network has learned every standard commodity picture in the training data set, obtaining the trained deep learning neural network model.
For the specific limitations of the commodity information processing apparatus, refer to the limitations of the commodity information processing method above, which are not repeated here. Each module in the commodity information processing apparatus may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in software form in the memory of the computer device, so that the processor can invoke them to perform the corresponding operations.
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in Figure 13. The computer device includes a processor, a memory, a network interface, a display screen, and an input apparatus connected through a system bus. The processor provides computing and control capability. The memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The network interface communicates with external terminals through a network connection. When executed by the processor, the computer program implements a commodity information processing method. The display screen may be a liquid crystal display or an electronic ink display; the input apparatus may be a touch layer covering the display screen, a button, trackball, or touchpad on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will understand that the structure shown in Figure 13 is merely a block diagram of the part of the structure relevant to the solution of this application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor. When executing the computer program, the processor implements the following steps: obtaining a color-depth picture of the commodity to be identified, composed of a color image and a depth picture; inputting the color-depth picture into the trained deep learning neural network model and identifying the commodity category and commodity location corresponding to the commodity to be identified; and determining the commodity value information of the commodity to be identified according to the commodity category and the commodity location.
In one embodiment, when executing the computer program, the processor also implements the following steps: when the commodity region contained in the color-depth picture is detected to occupy less than a preset threshold proportion of the color-depth picture, detecting the human limb position region in the color-depth picture; determining the first target region according to the human limb position region; and determining the second target region according to the first target region, the region area of the second target region being smaller than that of the first target region.
In one embodiment, when executing the computer program, the processor also implements the following steps: obtaining the standard commodity suggestion boxes and inputting them into the deep learning neural network model; obtaining the normalized color-depth picture, downsampling it through the convolutional layer, and obtaining the downsampled convolution feature map; performing sliding-window sampling on the convolution feature map to obtain sampled images, and mapping the sampled images into the depth map to obtain the depth-sampled image; and dividing the depth-sampled image into regions according to the standard commodity suggestion boxes to obtain the corresponding segmented regions, identifying the segmented regions, and obtaining the corresponding commodity category and commodity location.
In one embodiment, when executing the computer program, the processor also implements the following steps: obtaining the standard commodity picture set and the standard commodity label set, the label set containing commodity category information and location information; dividing the standard commodity picture set into a training data set and a test data set, each standard commodity picture being a color-depth picture containing depth information; obtaining the trained deep learning neural network model by training on the training data set; testing the trained deep learning neural network model with the test data set to obtain a recognition result set; determining the test recognition rate according to the recognition result set and the standard commodity label set; and, when the test recognition rate on the test data set reaches the preset threshold, taking the deep learning neural network model obtained after training as the trained deep learning neural network model.
In one embodiment, when executing the computer program, the processor also implements the following steps: updating the parameters of the deep learning neural network model according to each standard commodity picture in the training data set; and stopping the update of the parameters once the deep learning neural network has learned every standard commodity picture in the training data set, obtaining the trained deep learning neural network model.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the following steps: obtaining a color-depth picture of the commodity to be identified, composed of a color image and a depth picture; inputting the color-depth picture into the trained deep learning neural network model and identifying the commodity category and commodity location corresponding to the commodity to be identified; and determining the commodity value information of the commodity to be identified according to the commodity category and the commodity location.
In one embodiment, when executed by the processor, the computer program also implements the following steps: when the commodity region contained in the color-depth picture is detected to occupy less than a preset threshold proportion of the color-depth picture, detecting the human limb position region in the color-depth picture; determining the first target region according to the human limb position region; and determining the second target region according to the first target region, the region area of the second target region being smaller than that of the first target region.
In one embodiment, when executed by the processor, the computer program also implements the following steps: the deep learning neural network model includes a convolutional layer, and the step of inputting the color-depth picture into the trained deep learning neural network model and identifying the commodity category and commodity location corresponding to the commodity includes: obtaining the standard commodity suggestion boxes and inputting them into the deep learning neural network model; obtaining the normalized color-depth picture, downsampling it through the convolutional layer, and obtaining the downsampled convolution feature map; performing sliding-window sampling on the convolution feature map to obtain sampled images, and mapping the sampled images into the depth map to obtain the depth-sampled image; and dividing the depth-sampled image into regions according to the standard commodity suggestion boxes to obtain the corresponding segmented regions, identifying the segmented regions, and obtaining the corresponding commodity category and commodity location.
In one embodiment, when executed by the processor, the computer program also implements the following steps: obtaining the standard commodity picture set and the standard commodity label set; dividing the standard commodity picture set into a training data set and a test data set, each standard commodity picture being a color-depth picture containing depth information; obtaining the trained deep learning neural network model by training on the training data set; testing the trained deep learning neural network model with the test data set to obtain a recognition result set; determining the test recognition rate according to the recognition result set and the standard commodity label set; and, when the test recognition rate on the test data set reaches the preset threshold, taking the deep learning neural network model obtained after training as the trained deep learning neural network model.
In one embodiment, when executed by the processor, the computer program also implements the following steps: updating the parameters of the deep learning neural network model according to each standard commodity picture in the training data set; and stopping the update of the parameters once the deep learning neural network has learned every standard commodity picture in the training data set, obtaining the trained deep learning neural network model.
Those of ordinary skill in the art will understand that all or part of the flows in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed it may include the flows of the embodiments of each method above. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of this application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of this application, and these all fall within the protection scope of this application. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (10)

1. A commodity information processing method, the method comprising:
obtaining a color-depth picture of a commodity to be identified, composed of a color image and a depth picture;
inputting the color-depth picture into a trained deep learning neural network model and identifying the commodity category and commodity location corresponding to the commodity to be identified; and
determining the commodity value information of the commodity to be identified according to the commodity category and the commodity location.
2. The method according to claim 1, characterized in that after the step of obtaining the color-depth picture of the commodity to be identified, composed of a color image and a depth picture, the method further comprises:
when the commodity region contained in the color-depth picture is detected to occupy less than a preset threshold proportion of the color-depth picture, detecting the human limb position region in the color-depth picture;
determining a first target region according to the human limb position region; and
determining a second target region according to the first target region, the region area of the second target region being smaller than the region area of the first target region.
3. The method according to claim 1, characterized in that the deep learning neural network model comprises a convolutional layer, and the step of inputting the color-depth picture into the trained deep learning neural network model and identifying the commodity category and commodity location corresponding to the commodity comprises:
obtaining standard commodity suggestion boxes and inputting the commodity suggestion boxes into the deep learning neural network model;
obtaining the normalized color-depth picture, downsampling the normalized color-depth picture through the convolutional layer, and obtaining the downsampled convolution feature map;
performing sliding-window sampling on the convolution feature map to obtain sampled images, and mapping the sampled images into the color-depth picture to obtain the depth-sampled image; and
dividing the depth-sampled image into regions according to the standard commodity suggestion boxes to obtain the corresponding segmented regions, identifying the segmented regions, and obtaining the corresponding commodity category and commodity location.
4. The method according to claim 1, characterized in that the step of generating the trained deep learning neural network model comprises:
obtaining a standard commodity picture set and a standard commodity label set, the label set containing commodity category information and location information, and dividing the standard commodity picture set into a training data set and a test data set, each standard commodity picture being a color-depth picture containing depth information;
obtaining the trained deep learning neural network model by training on the training data set;
testing the trained deep learning neural network model with the test data set to obtain a recognition result set;
determining a test recognition rate according to the recognition result set and the standard commodity label set; and
when the test recognition rate of the test data set reaches a preset threshold, taking the deep learning neural network model obtained after training as the trained deep learning neural network model.
5. The method according to claim 4, characterized in that the deep learning neural network model comprises parameters, and the step of obtaining the trained deep learning neural network model by training on the training data set comprises:
updating the parameters of the deep learning neural network model according to each standard commodity picture in the training data set; and
when the deep learning neural network has learned every standard commodity picture in the training data set, stopping the update of the parameters of the deep learning neural network model to obtain the trained deep learning neural network model.
6. A commodity information processing apparatus, characterized in that the apparatus comprises:
a data acquisition module, configured to obtain the color-depth picture corresponding to the commodity to be identified;
a commodity identification module, configured to input the color-depth picture into the trained deep learning neural network model and identify the commodity category and commodity location corresponding to the commodity; and
a commodity value information module, configured to determine the commodity value information of the commodity to be identified according to the commodity category and the commodity location.
7. The apparatus according to claim 6, characterized in that the apparatus further comprises:
a position detection module, configured to detect the human limb position region in the color-depth picture when the commodity region contained in the color-depth picture occupies less than a preset threshold proportion of the color-depth picture;
a first target region determination module, configured to determine the first target region according to the human limb position region; and
a second target region determination module, configured to determine the second target region according to the first target region, the region area of the second target region being smaller than the region area of the first target region.
8. The apparatus according to claim 6, characterized in that the commodity identification module comprises:
a commodity suggestion box acquisition unit, configured to obtain the standard commodity suggestion boxes and input the commodity suggestion boxes into the deep learning neural network model;
a convolution feature map acquisition unit, configured to obtain the normalized color-depth picture, downsample the normalized color-depth picture through the convolutional layer, and obtain the downsampled convolution feature map;
a depth-sampled image acquisition unit, configured to perform sliding-window sampling on the convolution feature map to obtain sampled images, and to map the sampled images into the depth map to obtain the depth-sampled image; and
a commodity identification unit, configured to divide the depth-sampled image into regions according to the standard commodity suggestion boxes to obtain the corresponding segmented regions, and to identify the segmented regions to obtain the corresponding commodity category and commodity location.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program implements the steps of the method of any one of claims 1 to 5 when executed by a processor.
CN201810097478.9A 2018-01-31 2018-01-31 Commodity information processing method and device, computer equipment and storage medium Expired - Fee Related CN108447061B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810097478.9A CN108447061B (en) 2018-01-31 2018-01-31 Commodity information processing method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN108447061A true CN108447061A (en) 2018-08-24
CN108447061B CN108447061B (en) 2020-12-08

Family

ID=63191535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810097478.9A Expired - Fee Related CN108447061B (en) 2018-01-31 2018-01-31 Commodity information processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108447061B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784385A (en) * 2018-12-29 2019-05-21 广州海昇计算机科技有限公司 A kind of commodity automatic identifying method, system, device and storage medium
CN110210334A (en) * 2019-05-15 2019-09-06 广州影子科技有限公司 Pig inspects method and device, pig random check system and computer storage medium by random samples
CN110689005A (en) * 2019-09-05 2020-01-14 上海零眸智能科技有限公司 Commodity identification method based on deep learning fusion position and shape information
CN110991372A (en) * 2019-12-09 2020-04-10 河南中烟工业有限责任公司 Method for identifying cigarette brand display condition of retail merchant
WO2020073601A1 (en) * 2018-10-09 2020-04-16 深兰科技(上海)有限公司 Goods recognition method, goods recognition apparatus, and storage medium
CN111126110A (en) * 2018-10-31 2020-05-08 杭州海康威视数字技术股份有限公司 Commodity information identification method, settlement method and device and unmanned retail system
CN111191551A (en) * 2019-12-23 2020-05-22 深圳前海达闼云端智能科技有限公司 Commodity detection method and device
CN111310706A (en) * 2020-02-28 2020-06-19 创新奇智(上海)科技有限公司 Commodity price tag identification method and device, electronic equipment and storage medium
CN112653900A (en) * 2020-12-21 2021-04-13 Oppo广东移动通信有限公司 Target display method, device and equipment in video live broadcast

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825494A (en) * 2015-08-31 2016-08-03 维沃移动通信有限公司 Image processing method and mobile terminal


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020073601A1 (en) * 2018-10-09 2020-04-16 深兰科技(上海)有限公司 Goods recognition method, goods recognition apparatus, and storage medium
CN111126110A (en) * 2018-10-31 2020-05-08 杭州海康威视数字技术股份有限公司 Commodity information identification method, settlement method and device and unmanned retail system
CN111126110B (en) * 2018-10-31 2024-01-05 杭州海康威视数字技术股份有限公司 Commodity information identification method, settlement method, device and unmanned retail system
CN109784385A (en) * 2018-12-29 2019-05-21 广州海昇计算机科技有限公司 A kind of commodity automatic identifying method, system, device and storage medium
CN110210334B (en) * 2019-05-15 2022-01-04 广州影子科技有限公司 Pig spot check method and device, pig spot check system and computer storage medium
CN110210334A (en) * 2019-05-15 2019-09-06 广州影子科技有限公司 Pig inspects method and device, pig random check system and computer storage medium by random samples
CN110689005A (en) * 2019-09-05 2020-01-14 上海零眸智能科技有限公司 Commodity identification method based on deep learning fusion position and shape information
CN110991372A (en) * 2019-12-09 2020-04-10 河南中烟工业有限责任公司 Method for identifying cigarette brand display condition of retail merchant
CN111191551B (en) * 2019-12-23 2023-07-14 达闼机器人股份有限公司 Commodity detection method and device
CN111191551A (en) * 2019-12-23 2020-05-22 深圳前海达闼云端智能科技有限公司 Commodity detection method and device
CN111310706A (en) * 2020-02-28 2020-06-19 创新奇智(上海)科技有限公司 Commodity price tag identification method and device, electronic equipment and storage medium
CN111310706B (en) * 2020-02-28 2022-10-21 创新奇智(上海)科技有限公司 Commodity price tag identification method and device, electronic equipment and storage medium
CN112653900A (en) * 2020-12-21 2021-04-13 Oppo广东移动通信有限公司 Target display method, device and equipment in video live broadcast

Also Published As

Publication number Publication date
CN108447061B (en) 2020-12-08

Similar Documents

Publication Publication Date Title
CN108447061A (en) Merchandise information processing method, device, computer equipment and storage medium
WO2021047232A1 (en) Interaction behavior recognition method, apparatus, computer device, and storage medium
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN109165645B (en) Image processing method and device and related equipment
CN105825524B (en) Method for tracking target and device
CN111340126B (en) Article identification method, apparatus, computer device, and storage medium
CN111259889A (en) Image text recognition method and device, computer equipment and computer storage medium
CN109670452A (en) Method for detecting human face, device, electronic equipment and Face datection model
CN108416902B (en) Real-time object identification method and device based on difference identification
CN109583489A (en) Defect classifying identification method, device, computer equipment and storage medium
CN108985159A (en) Human-eye model training method, eye recognition method, apparatus, equipment and medium
CN108985155A (en) Mouth model training method, mouth recognition methods, device, equipment and medium
CN108710866A (en) Chinese mold training method, Chinese characters recognition method, device, equipment and medium
CN109508638A (en) Face Emotion identification method, apparatus, computer equipment and storage medium
JP2020507836A (en) Tracking surgical items that predicted duplicate imaging
CN109086711A (en) Facial Feature Analysis method, apparatus, computer equipment and storage medium
CN107808120A (en) Glasses localization method, device and storage medium
CN106529499A (en) Fourier descriptor and gait energy image fusion feature-based gait identification method
CN108647625A (en) A kind of expression recognition method and device
KR102141302B1 (en) Object detection method based 0n deep learning regression model and image processing apparatus
CN109886153A (en) A kind of real-time face detection method based on depth convolutional neural networks
CN110097018A (en) Transformer substation instrument detection method and device, computer equipment and storage medium
CN113920309B (en) Image detection method, image detection device, medical image processing equipment and storage medium
CN111144398A (en) Target detection method, target detection device, computer equipment and storage medium
CN111832561B (en) Character sequence recognition method, device, equipment and medium based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201208

Termination date: 20220131