CN112381184B - Image detection method, image detection device, electronic equipment and computer readable medium - Google Patents

Image detection method, image detection device, electronic equipment and computer readable medium Download PDF

Info

Publication number
CN112381184B
CN112381184B CN202110051707.5A CN202110051707A CN112381184B CN 112381184 B CN112381184 B CN 112381184B CN 202110051707 A CN202110051707 A CN 202110051707A CN 112381184 B CN112381184 B CN 112381184B
Authority
CN
China
Prior art keywords
image
training sample
value
sample
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110051707.5A
Other languages
Chinese (zh)
Other versions
CN112381184A (en
Inventor
王涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shilianzhonghe Technology Co ltd
Original Assignee
Beijing Missfresh Ecommerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Missfresh Ecommerce Co Ltd filed Critical Beijing Missfresh Ecommerce Co Ltd
Priority to CN202110051707.5A priority Critical patent/CN112381184B/en
Publication of CN112381184A publication Critical patent/CN112381184A/en
Application granted granted Critical
Publication of CN112381184B publication Critical patent/CN112381184B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the disclosure discloses an image detection method, an image detection device, an electronic device and a computer readable medium. One embodiment of the method comprises: acquiring a first article image and a second article image of an article to be detected; respectively carrying out picture feature extraction processing on the first article image and the second article image to obtain a first article image vector and a second article image vector; performing fusion processing on the first article image vector and the second article image vector to generate a fusion vector; inputting the fusion vector into a pre-trained image detection model to obtain an image detection result; and sending the image detection result to a settlement device with a display function and a storage function to perform settlement processing. The embodiment realizes the detection of the articles acquired by the user from multiple angles, and improves the accuracy of the article detection result. Thus, the error rate of item settlement is reduced.

Description

Image detection method, image detection device, electronic equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to an image detection method, an image detection device, an electronic device, and a computer-readable medium.
Background
With the development of internet technology, more and more automatic sales counter appears. When a user obtains an article through an automatic sales counter, the automatic sales counter usually detects the article obtained by the user by using a conventional Radio Frequency Identification (RFID) technology, and performs settlement processing according to a detection result.
However, the following technical problems generally exist in the detection method:
firstly, articles acquired by a user cannot be detected from multiple angles, so that the accuracy of the detection result of the articles is not high, and the error rate of the settlement of the articles is high;
secondly, when the conventional rfid technology detects an object, the relationship among the information included in the image is not considered comprehensively, so that the accuracy of detecting the object is low, and the error rate of the object settlement is high.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose image detection methods, apparatuses, electronic devices, and computer-readable media to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an image detection method, including: acquiring a first article image and a second article image of an article to be detected; respectively carrying out picture feature extraction processing on the first article image and the second article image to obtain a first article image vector and a second article image vector; performing fusion processing on the first article image vector and the second article image vector to generate a fusion vector; inputting the fusion vector into a pre-trained image detection model to obtain an image detection result; and sending the image detection result to a settlement device with a display function and a storage function to perform settlement processing.
In some embodiments, the determining a loss value of the at least one training sample based on the at least one training sample, the image detection result corresponding to each of the at least one training sample, the first difference value set, and the second difference value set includes:
generating a loss value for the at least one training sample by a formula:
Figure 520138DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 700584DEST_PATH_IMAGE002
a value representing the height loss of the image,
Figure 300192DEST_PATH_IMAGE003
representing the number of training samples comprised by the at least one training sample,
Figure 907891DEST_PATH_IMAGE004
indicating the sequence number of the training samples in the at least one training sample,
Figure 225740DEST_PATH_IMAGE005
representing the second of the at least one training sample
Figure 193565DEST_PATH_IMAGE004
Each training sample includes a sample image height value,
Figure 647680DEST_PATH_IMAGE006
representing the second of the at least one training sample
Figure 426280DEST_PATH_IMAGE004
Each training sample includes a first difference value corresponding to a sample image height value,
Figure 231425DEST_PATH_IMAGE007
a preset height adjustment value is represented,
Figure 19253DEST_PATH_IMAGE008
representing the second of the at least one training sample
Figure 577142DEST_PATH_IMAGE004
The image height value corresponding to each training sample,
Figure 526643DEST_PATH_IMAGE009
to represent
Figure 553505DEST_PATH_IMAGE010
And
Figure 145023DEST_PATH_IMAGE011
the maximum value of (a) is,
Figure 308151DEST_PATH_IMAGE012
indicating a pre-set image property loss value,
Figure 694133DEST_PATH_IMAGE013
representing the second of the at least one training sample
Figure 723138DEST_PATH_IMAGE004
Each training sample includes a sample attribute value that,
Figure 852768DEST_PATH_IMAGE014
representing the second of the at least one training sample
Figure 870403DEST_PATH_IMAGE004
Each training sample includes a second difference value corresponding to the sample attribute value,
Figure 958444DEST_PATH_IMAGE015
representing the second of the at least one training sample
Figure 694319DEST_PATH_IMAGE004
The image attribute values corresponding to each training sample,
Figure 893219DEST_PATH_IMAGE016
to represent
Figure 765361DEST_PATH_IMAGE017
And
Figure 7992DEST_PATH_IMAGE018
the maximum value of (a) is,
Figure 496742DEST_PATH_IMAGE019
representing a loss value of the at least one training sample.
In a second aspect, some embodiments of the present disclosure provide an image detection apparatus, the apparatus comprising: an acquisition unit configured to acquire a first item image and a second item image of an item to be detected; the extraction unit is configured to respectively perform picture feature extraction processing on the first article image and the second article image to obtain a first article image vector and a second article image vector; a fusion unit configured to perform fusion processing on the first item image vector and the second item image vector to generate a fusion vector; the input unit is configured to input the fusion vector into a pre-trained image detection model to obtain an image detection result; a transmission unit configured to transmit the image detection result to a settlement apparatus having a display function and a storage function to perform settlement processing.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: by the image detection method of some embodiments of the present disclosure, the accuracy of the article detection result is improved, and the error rate of the article settlement is reduced. Specifically, the reason why the accuracy of the article detection result is not high is that: the article obtained by the user cannot be detected from multiple angles, so that the accuracy of the detection result of the article is not high, and the error rate of the article settlement is high. Based on this, the image detection method of some embodiments of the present disclosure first acquires a first article image and a second article image of an article to be detected. Thus, data support is provided for detecting the article to be detected from two different angles. And secondly, respectively carrying out picture feature extraction processing on the first article image and the second article image to obtain a first article image vector and a second article image vector. Then, the first item image vector and the second item image vector are fused to generate a fusion vector. Therefore, the change of the article to be detected is conveniently considered comprehensively, and data support is provided for improving the accuracy of the article detection result. And then, inputting the fusion vector into a pre-trained image detection model to obtain an image detection result. Therefore, the object obtained by the user is detected from multiple angles, and the accuracy of the object detection result is improved. Thus, the error rate of item settlement is reduced.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and elements are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of an image detection method according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of an image detection method according to the present disclosure;
FIG. 3 is an image detection model in some embodiments of an image detection method according to the present disclosure;
FIG. 4 is a flow chart of further embodiments of an image detection method according to the present disclosure;
FIG. 5 is a schematic block diagram of some embodiments of an image detection apparatus according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of an image detection method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may acquire a first item image 102 and a second item image 103 of an item to be detected. Next, the computing device 101 may perform picture feature extraction processing on the first item image 102 and the second item image 103 respectively to obtain a first item image vector 104 and a second item image vector 105. Then, the computing device 101 may perform a fusion process on the first item image vector 104 and the second item image vector 105 to generate a fusion vector 106. Next, the computing device 101 may input the above-mentioned fusion vector 106 into a pre-trained image detection model 107, resulting in an image detection result 108. Finally, the computing apparatus 101 can transmit the above-described image detection result 108 to the settlement apparatus 109 having a display function and a storage function to perform settlement processing.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above. It may be implemented, for example, as multiple software or software modules to provide distributed services, or as a single software or software module. And is not particularly limited herein.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of an image detection method according to the present disclosure is shown. The method may be performed by the computing device 101 of fig. 1. The image detection method comprises the following steps:
step 201, a first article image and a second article image of an article to be detected are acquired.
In some embodiments, an executing subject of the image detection method (e.g., the computing device 101 shown in fig. 1) may acquire the first item image and the second item image of the item to be detected from the device terminal through a wired connection manner or a wireless connection manner. Here, the article to be detected may refer to the same article stored in a vending cabinet and obtained by the user from the vending cabinet. Here, the first article image may be an image of the user when the article is stored in a vending cabinet before the user acquires the article. Here, the second item image may refer to an image when the user stores the item in a vending cabinet after acquiring the item. For example, the first item image or the second item image may include, but is not limited to, at least one of: item name, item attribute value (price), item height value, etc.
Step 202, performing picture feature extraction processing on the first article image and the second article image respectively to obtain a first article image vector and a second article image vector.
In some embodiments, the executing entity may perform picture feature extraction processing on the first item image and the second item image respectively through a pre-trained initial image extraction network model to obtain a first item image vector and a second item image vector. Here, the initial image extraction Network model may be a VGG (Visual Geometry Group Network) 16 model, a VGG19 model, or the like. For example, the first item image vector may be [0, 0, 0, 1, 0], and the second item image vector may be [0, 0, 1, 0, 0 ].
In some optional implementations of some embodiments, the executing subject may further obtain the first item image vector and the second item image vector by:
first, referring to fig. 3, the first article image is input to a pre-trained image feature extraction network to obtain a first article image vector. Wherein, the picture feature extraction network comprises: a convolutional network 301 and a pooling network 304, wherein the convolutional network 301 comprises a first convolutional layer 3011 and a second convolutional layer 3012, and the pooling network 304 comprises a first pooling layer 3041 and a second pooling layer 3042.
In practice, the above-mentioned first step may comprise the following sub-steps:
the first substep is to input the first object image to the first convolution layer 3011 and the second convolution layer 3012, respectively, to obtain a first image feature sequence 302 and a second image feature sequence 303.
A second substep of inputting the first image feature sequence 302 and the second image feature sequence 303 into the first pooling layer 3041 to obtain a first pooled feature vector sequence set 305. Here, the first pooling layer 3041 may be used for feature compression and dimensionality reduction.
A third substep of inputting the first image feature sequence 302 and the second image feature sequence 303 into the second pooling layer 3042 to obtain a second pooled feature vector sequence set 306. Here, the first pooling layer 3042 may be used for feature compression and dimensionality reduction.
A fourth sub-step, performing a splicing process on each first pooled feature vector sequence in the first pooled feature vector sequence set 305 and a second pooled feature vector sequence corresponding to the first pooled feature vector sequence to generate a spliced pooled feature vector sequence, so as to obtain a spliced pooled feature vector sequence set 307. In practice, the first pooled feature vector sequence set 305 may be [ [0.5, 0, 0, 0, 0], [0, 0.5, 0, 0, 0] ]. The second pooled feature vector sequence set 306 may be [ [1, 0, 0, 0, 0], [0, 1, 0, 0, 0] ]. Thus, the resulting stitched pooled feature vector sequence set 307 may be [ [0.5, 0, 0, 0, 0, 1, 0, 0, 0], [0, 0.5, 0, 0, 0, 0, 1, 0, 0, 0] ].
And a fifth substep, performing fusion processing on each spliced pooling characteristic vector sequence in the spliced pooling characteristic vector sequence set to obtain a fused pooling characteristic vector as a first article image vector. Here, the fusion process may refer to a stitching process. For example, each of the merged pooled feature vector sequences in the merged pooled feature vector sequence set 307 is subjected to a fusion process to obtain a fused pooled feature vector [0.5, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 1, 0, 0, 0] as the first item image vector 104.
And secondly, inputting the second object image to the picture feature extraction network to obtain a second object image vector. In practice, reference is made to fig. 3 and the description of the relevant contents.
Step 203, performing a fusion process on the first item image vector and the second item image vector to generate a fusion vector.
In some embodiments, the execution subject may perform a fusion process on the first item image vector and the second item image vector to generate a fusion vector. Here, the fusion process may refer to a stitching process. For example, the first item image vector may be [0, 0, 0, 1, 0], and the second item image vector may be [0, 0, 1, 0, 0 ]. And performing fusion processing on the first item image vector and the second item image vector to generate a fusion vector [0, 0, 0, 1, 0, 0, 0, 1, 0, 0 ].
And 204, inputting the fusion vector into a pre-trained image detection model to obtain an image detection result.
In some embodiments, the executing entity may input the fusion vector into a pre-trained image detection model to obtain an image detection result. Here, the pre-trained image detection model may be a network model of a variety of results. For example, a CNN (Convolutional Neural Networks) model, an RNN (Recurrent Neural Networks) model, or a DNN (Deep Neural Networks) model, and the like. Of course, the model can be built according to actual needs.
Step 205, sending the image detection result to a settlement device with a display function and a storage function to perform settlement processing.
In some embodiments, the execution body may transmit the image detection result to a settlement device having a display function and a storage function to perform a settlement process. For example, the image detection result may be "10 items for a, total price 30 dollars". The settlement device "001" can perform settlement processing based on "10 items a, total price 30 yuan".
The above embodiments of the present disclosure have the following advantages: by the image detection method of some embodiments of the present disclosure, the accuracy of the article detection result is improved, and the error rate of the article settlement is reduced. Specifically, the reason why the accuracy of the article detection result is not high is that: the article obtained by the user cannot be detected from multiple angles, so that the accuracy of the detection result of the article is not high, and the error rate of the article settlement is high. Based on this, the image detection method of some embodiments of the present disclosure first acquires a first article image and a second article image of an article to be detected. Thus, data support is provided for detecting the article to be detected from two different angles. And secondly, respectively carrying out picture feature extraction processing on the first article image and the second article image to obtain a first article image vector and a second article image vector. Then, the first item image vector and the second item image vector are fused to generate a fusion vector. Therefore, the change of the article to be detected is conveniently considered comprehensively, and data support is provided for improving the accuracy of the article detection result. And then, inputting the fusion vector into a pre-trained image detection model to obtain an image detection result. Therefore, the object obtained by the user is detected from multiple angles, and the accuracy of the object detection result is improved. Thus, the error rate of item settlement is reduced.
With further reference to fig. 4, a flow 400 of further embodiments of an image detection method according to the present disclosure is shown. The method may be performed by the computing device 101 of fig. 1. The image detection method comprises the following steps:
step 401, a first article image and a second article image of an article to be detected are acquired.
Step 402, performing picture feature extraction processing on the first article image and the second article image respectively to obtain a first article image vector and a second article image vector.
Step 403, performing fusion processing on the first item image vector and the second item image vector to generate a fusion vector.
In some embodiments, the specific implementation manner and technical effects of steps 401 and 403 may refer to steps 201 and 203 in those embodiments corresponding to fig. 2, which are not described herein again.
And 404, inputting the fusion vector into a pre-trained image detection model to obtain an image detection result.
In some embodiments, the subject performing the image detection method (e.g., computing device 101 shown in fig. 1) may input the above-described fusion vector into a pre-trained image detection model by various methods, resulting in an image detection result. The image detection model is obtained by training through the following steps;
in the first step, a training sample set is obtained. Wherein, the training samples in the training sample set include: a sample image, a sample name, a sample image height value, and a sample attribute value. Here, the sample attribute value may refer to a value transfer attribute value (price) of the item. For example, the training sample may be "chocolate. png, chocolate, 15cm, 20 yuan".
And secondly, inputting a sample image included in at least one training sample in the training sample set into the initial neural network to obtain an image detection result corresponding to each training sample in the at least one training sample. The image detection result comprises an image name, an image height value and an image attribute value. Here, the initial Neural network model may be CNN (Convolutional Neural Networks), RNN (Recurrent Neural Networks), or the like without model training.
In practice, the network structure of the initial neural network needs to be determined before the second step described above. For example, it is necessary to determine which layers the initial neural network model includes, the connection order relationship between layers, and which neurons each layer includes, the weight (weight) and bias term (bias) corresponding to each neuron, the activation function of each layer, and so on. As an example, when the initial neural network model is a deep convolutional neural network, since the deep convolutional neural network is a multi-layer neural network, it needs to be determined which layers the deep convolutional neural network includes (e.g., convolutional layers, pooling layers, fully-connected layers, classifiers, etc.), the connection order relationship between layers, and which network parameters each layer includes (e.g., weights, bias terms, convolution step sizes), etc. Among other things, convolutional layers may be used to extract information features. For each convolution layer, it can determine how many convolution kernels there are, the size of each convolution kernel, the weight of each neuron in each convolution kernel, the bias term corresponding to each convolution kernel, the step size between two adjacent convolutions, and the like. And the pooling layer is used for performing dimension reduction processing on the characteristic information.
And thirdly, determining the loss value of the at least one training sample through a loss function. Here, a loss function may be used to determine a loss value for at least one training sample. The loss function may include, but is not limited to: mean square error loss function (MSE), hinge loss function (SVM), cross entropy loss function (cross entropy), and the like.
In some optional implementations of some embodiments, the third step may include the following sub-steps:
in the first substep, a difference between an image height value included in an image detection result corresponding to each of the at least one training sample and a sample image height value included in the training sample is determined as a first difference, and a first difference group is obtained.
And a second substep of determining a difference between an image attribute value included in the image detection result corresponding to each of the at least one training sample and a sample attribute value included in the training sample as a second difference, thereby obtaining a second difference value set.
A third substep of determining a loss value of the at least one training sample based on the at least one training sample, the image detection result corresponding to each of the at least one training sample, the first difference value set, and the second difference value set. Generating a loss value of the at least one training sample by a formula:
Figure 702595DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 960401DEST_PATH_IMAGE002
a value representing the height loss of the image,
Figure 124667DEST_PATH_IMAGE003
indicating the number of training samples comprised by said at least one training sample,
Figure 835134DEST_PATH_IMAGE004
indicating the sequence number of the training sample in the at least one training sample,
Figure 110257DEST_PATH_IMAGE005
representing the second of said at least one training sample
Figure 753728DEST_PATH_IMAGE004
Each training sample includes a sample image height value,
Figure 818722DEST_PATH_IMAGE006
representing the second of said at least one training sample
Figure 282065DEST_PATH_IMAGE004
Each training sample includes a first difference value corresponding to a sample image height value,
Figure 360879DEST_PATH_IMAGE007
a preset height adjustment value is represented,
Figure 593277DEST_PATH_IMAGE008
representing the second of said at least one training sample
Figure 833766DEST_PATH_IMAGE004
The image height value corresponding to each training sample,
Figure 784404DEST_PATH_IMAGE009
to represent
Figure 401330DEST_PATH_IMAGE010
And
Figure 488235DEST_PATH_IMAGE011
the maximum value of (a) is,
Figure 414472DEST_PATH_IMAGE012
indicating a pre-set image property loss value,
Figure 586827DEST_PATH_IMAGE013
representing the second of said at least one training sample
Figure 741865DEST_PATH_IMAGE004
Each training sample includes a sample attribute value that,
Figure 948855DEST_PATH_IMAGE014
representing the second of said at least one training sample
Figure 531146DEST_PATH_IMAGE004
Each training sample includes a second difference value corresponding to the sample attribute value,
Figure 190798DEST_PATH_IMAGE015
representing the second of said at least one training sample
Figure 149527DEST_PATH_IMAGE004
The image attribute values corresponding to each training sample,
Figure 211024DEST_PATH_IMAGE016
to represent
Figure 479063DEST_PATH_IMAGE017
And
Figure 626010DEST_PATH_IMAGE018
the maximum value of (a) is,
Figure 122851DEST_PATH_IMAGE019
representing a loss value of the at least one training sample.
And fourthly, in response to the fact that the loss value is smaller than or equal to the preset threshold value, determining the initial neural network as the image detection model.
And fifthly, responding to the fact that the loss value is larger than the preset threshold value, adjusting network parameters of the initial neural network, using unused training samples to form a training sample set, using the adjusted initial neural network as the initial neural network, and executing the processing steps again.
The above formula and its related content are used as an invention point of the present disclosure, and solve the technical problem mentioned in the background art, i.e. the relationship among the information included in the image is not comprehensively considered when the conventional rfid technology detects the object, and further, the accuracy of detecting the object is low, and the error rate of the object settlement is further high. The factors that contribute to the long waiting time of the user are often as follows: when the traditional radio frequency identification technology is used for detecting an article, the relation among all information contained in an image is not comprehensively considered, and further, the accuracy of detecting the article is low. If the above factors are solved, the effect of reducing the waiting time of the user can be achieved. To achieve this, the present disclosure employs different lightweight loss functions to sum the loss values for the item image height values and the image attribute values. The sum of the obtained loss values can reach the preset threshold value in a short time, and the convergence speed of the model is further accelerated. Therefore, the problem that the relation among all information contained in the image is not comprehensively considered when the traditional radio frequency identification technology is used for detecting the object is solved. Furthermore, the accuracy of detecting the article is improved, so that the error rate of the article settlement is reduced.
Step 405, sending the image detection result to a settlement device with a display function and a storage function to perform settlement processing.
In some embodiments, the specific implementation manner and technical effects of step 405 may refer to step 205 in those embodiments corresponding to fig. 2, and are not described herein again.
As can be seen from fig. 4, compared with the description of some embodiments corresponding to fig. 2, the flow 400 of the image detection method in some embodiments corresponding to fig. 4 embodies a training step of the image detection model. Different lightweight loss functions are adopted to sum the loss values of the object image height value and the image attribute value. The sum of the obtained loss values can reach the preset threshold value in a short time, and the convergence speed of the model is further accelerated. Therefore, the problem that the relation among all information contained in the image is not comprehensively considered when the traditional radio frequency identification technology is used for detecting the object is solved. Furthermore, the accuracy of detecting the article is improved, so that the error rate of the article settlement is reduced.
With further reference to fig. 5, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of an image detection apparatus, which correspond to those of the method embodiments described above with reference to fig. 2, and which may be applied in particular to various electronic devices.
As shown in fig. 5, the image detection apparatus 500 of some embodiments includes: an acquisition unit 501, an extraction unit 502, a fusion unit 503, an input unit 504, and a transmission unit 505. Wherein the acquiring unit 501 is configured to acquire a first item image and a second item image of an item to be detected; the extracting unit 502 is configured to perform picture feature extraction processing on the first item image and the second item image respectively to obtain a first item image vector and a second item image vector; the fusion unit 503 is configured to perform fusion processing on the first item image vector and the second item image vector to generate a fusion vector; the input unit 504 is configured to input the fusion vector into a pre-trained image detection model, resulting in an image detection result; the transmission unit 505 is configured to transmit the above-described image detection result to a settlement apparatus having a display function and a storage function to perform settlement processing.
In some optional implementations of some embodiments, the extraction unit 502 is further configured to: inputting the first article image to a pre-trained picture feature extraction network to obtain a first article image vector; and inputting the second article image into the picture feature extraction network to obtain a second article image vector.
In some optional implementations of some embodiments, the picture feature extraction network includes: the convolutional network comprises a first convolutional layer and a second convolutional layer, and the pooling network comprises a first pooling layer and a second pooling layer.
In some optional implementations of some embodiments, the extraction unit 502 is further configured to: inputting the first article image into the first convolution layer and the second convolution layer respectively to obtain a first image feature sequence and a second image feature sequence; inputting the first image feature sequence and the second image feature sequence into the first pooling layer to obtain a first pooled feature vector sequence set; inputting the first image feature sequence and the second image feature sequence into the second pooling layer to obtain a second pooled feature vector sequence set; splicing each first pooled feature vector sequence in the first pooled feature vector sequence set and a second pooled feature vector sequence corresponding to the first pooled feature vector sequence to generate a spliced pooled feature vector sequence, so as to obtain a spliced pooled feature vector sequence set; and performing fusion processing on each spliced pooling characteristic vector sequence in the spliced pooling characteristic vector sequence set to obtain a fused pooling characteristic vector as a first article image vector.
It will be understood that the elements described in the apparatus 500 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.
Referring now to FIG. 6, a block diagram of an electronic device (e.g., computing device 101 of FIG. 1) 600 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network Protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the Internet (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a first article image and a second article image of an article to be detected; respectively carrying out picture feature extraction processing on the first article image and the second article image to obtain a first article image vector and a second article image vector; performing fusion processing on the first article image vector and the second article image vector to generate a fusion vector; inputting the fusion vector into a pre-trained image detection model to obtain an image detection result; and sending the image detection result to a settlement device with a display function and a storage function to perform settlement processing.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C + +, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, an extraction unit, a fusion unit, an input unit, and a transmission unit. Here, the names of these units do not constitute a limitation on the units themselves in some cases, and for example, the display unit may also be described as "a unit that transmits the above-described image detection result to a settlement apparatus having a display function and a storage function to perform settlement processing".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept as defined above. For example, the above features and (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure are mutually replaced to form the technical solution.

Claims (8)

1. An image detection method, comprising:
acquiring a first article image and a second article image of an article to be detected;
respectively carrying out picture feature extraction processing on the first article image and the second article image to obtain a first article image vector and a second article image vector;
performing fusion processing on the first item image vector and the second item image vector to generate a fusion vector;
inputting the fusion vector into a pre-trained image detection model to obtain an image detection result;
sending the image detection result to a settlement device with a display function and a storage function to perform settlement processing;
the image detection model is obtained by training through the following steps:
obtaining a training sample set, wherein training samples in the training sample set comprise: a sample image, a sample name, a sample image height value, and a sample attribute value;
based on the training sample set, the following processing steps are performed:
inputting a sample image included in at least one training sample in a training sample set into an initial neural network to obtain an image detection result corresponding to each training sample in the at least one training sample, wherein the image detection result includes an image name, an image height value and an image attribute value;
determining a loss value of the at least one training sample by a loss function;
determining the initial neural network as an image detection model in response to determining that the loss value is less than or equal to a preset threshold value;
wherein the determining a loss value for the at least one training sample comprises:
determining a difference value between an image height value included in an image detection result corresponding to each training sample in the at least one training sample and a sample image height value included in the training sample as a first difference value, so as to obtain a first difference value group;
determining a difference value between an image attribute value included in an image detection result corresponding to each training sample in the at least one training sample and a sample attribute value included in the training sample as a second difference value, so as to obtain a second difference value group;
determining a loss value of the at least one training sample based on the at least one training sample, the image detection result corresponding to each of the at least one training sample, the first difference value set, and the second difference value set;
wherein the determining a loss value of the at least one training sample based on the at least one training sample, the image detection result corresponding to each of the at least one training sample, the first difference value set, and the second difference value set comprises:
generating a loss value for the at least one training sample by a formula:
Figure 501465DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 418605DEST_PATH_IMAGE002
a value representing the height loss of the image,
Figure 729501DEST_PATH_IMAGE003
representing the number of training samples comprised by the at least one training sample,
Figure 390290DEST_PATH_IMAGE004
indicating the sequence number of the training samples in the at least one training sample,
Figure 204662DEST_PATH_IMAGE005
representing the second of the at least one training sample
Figure 558283DEST_PATH_IMAGE004
Each training sample includes a sample image height value,
Figure 841628DEST_PATH_IMAGE006
representing the second of the at least one training sample
Figure 306107DEST_PATH_IMAGE004
Each training sample includes a first difference value corresponding to a sample image height value,
Figure 974986DEST_PATH_IMAGE007
a preset height adjustment value is represented,
Figure 765087DEST_PATH_IMAGE008
representing the second of the at least one training sample
Figure 50575DEST_PATH_IMAGE004
The image height value corresponding to each training sample,
Figure 787587DEST_PATH_IMAGE009
to represent
Figure 91398DEST_PATH_IMAGE010
And
Figure 52401DEST_PATH_IMAGE011
the maximum value of (a) is,
Figure 294026DEST_PATH_IMAGE012
indicating a pre-set image property loss value,
Figure 365888DEST_PATH_IMAGE013
representing the second of the at least one training sample
Figure 540517DEST_PATH_IMAGE004
Each training sample includes a sample attribute value that,
Figure 891995DEST_PATH_IMAGE014
representing the second of the at least one training sample
Figure 152075DEST_PATH_IMAGE004
Each training sample includes a second difference value corresponding to the sample attribute value,
Figure 496469DEST_PATH_IMAGE015
representing the second of the at least one training sample
Figure 260025DEST_PATH_IMAGE004
The image attribute values corresponding to each training sample,
Figure 31672DEST_PATH_IMAGE016
to represent
Figure 779049DEST_PATH_IMAGE017
And
Figure 179330DEST_PATH_IMAGE018
the maximum value of (a) is,
Figure 62973DEST_PATH_IMAGE019
representing a loss value of the at least one training sample.
2. The method according to claim 1, wherein the performing picture feature extraction processing on the first item image and the second item image respectively to obtain a first item image vector and a second item image vector comprises:
inputting the first article image to a pre-trained picture feature extraction network to obtain a first article image vector;
and inputting the second object image into the picture feature extraction network to obtain a second object image vector.
3. The method of claim 2, wherein the picture feature extraction network comprises: a convolutional network comprising a first convolutional layer and a second convolutional layer, and a pooling network comprising a first pooling layer and a second pooling layer.
4. The method of claim 3, wherein the inputting the first item image to a pre-trained picture feature extraction network to obtain a first item image vector comprises:
inputting the first article image into the first convolution layer and the second convolution layer respectively to obtain a first image feature sequence and a second image feature sequence;
inputting the first image feature sequence and the second image feature sequence into the first pooling layer to obtain a first pooled feature vector sequence set;
inputting the first image feature sequence and the second image feature sequence into the second pooling layer to obtain a second pooled feature vector sequence set;
splicing each first pooled feature vector sequence in the first pooled feature vector sequence set and a second pooled feature vector sequence corresponding to the first pooled feature vector sequence to generate a spliced pooled feature vector sequence, so as to obtain a spliced pooled feature vector sequence set;
and performing fusion processing on each spliced pooling characteristic vector sequence in the spliced pooling characteristic vector sequence set to obtain a fused pooling characteristic vector as a first article image vector.
5. The method of claim 1, wherein the method further comprises:
in response to determining that the loss value is greater than the preset threshold, adjusting network parameters of the initial neural network, and composing a training sample set using unused training samples, performing the processing step again with the adjusted initial neural network as the initial neural network.
6. An image detection apparatus comprising:
an acquisition unit configured to acquire a first item image and a second item image of an item to be detected;
the extraction unit is configured to respectively perform picture feature extraction processing on the first article image and the second article image to obtain a first article image vector and a second article image vector;
a fusion unit configured to perform fusion processing on the first item image vector and the second item image vector to generate a fusion vector;
an input unit configured to input the fusion vector into a pre-trained image detection model, resulting in an image detection result, wherein the image detection model is trained by the following steps:
obtaining a training sample set, wherein training samples in the training sample set comprise: a sample image, a sample name, a sample image height value, and a sample attribute value;
based on the training sample set, the following processing steps are performed:
inputting a sample image included in at least one training sample in a training sample set into an initial neural network to obtain an image detection result corresponding to each training sample in the at least one training sample, wherein the image detection result includes an image name, an image height value and an image attribute value;
determining a loss value of the at least one training sample by a loss function;
determining the initial neural network as an image detection model in response to determining that the loss value is less than or equal to a preset threshold value;
wherein the determining a loss value for the at least one training sample comprises:
determining a difference value between an image height value included in an image detection result corresponding to each training sample in the at least one training sample and a sample image height value included in the training sample as a first difference value, so as to obtain a first difference value group;
determining a difference value between an image attribute value included in an image detection result corresponding to each training sample in the at least one training sample and a sample attribute value included in the training sample as a second difference value, so as to obtain a second difference value group;
determining a loss value of the at least one training sample based on the at least one training sample, the image detection result corresponding to each of the at least one training sample, the first difference value set, and the second difference value set;
wherein the determining a loss value of the at least one training sample based on the at least one training sample, the image detection result corresponding to each of the at least one training sample, the first difference value set, and the second difference value set comprises:
generating a loss value for the at least one training sample by a formula:
Figure 536679DEST_PATH_IMAGE020
wherein the content of the first and second substances,
Figure 240193DEST_PATH_IMAGE002
a value representing the height loss of the image,
Figure 926389DEST_PATH_IMAGE021
representing the number of training samples comprised by the at least one training sample,
Figure 680850DEST_PATH_IMAGE004
indicating the sequence number of the training samples in the at least one training sample,
Figure 794299DEST_PATH_IMAGE005
representing the second of the at least one training sample
Figure 985109DEST_PATH_IMAGE004
Each training sample includes a sample image height value,
Figure 474997DEST_PATH_IMAGE006
representing the second of the at least one training sample
Figure 67652DEST_PATH_IMAGE004
Each training sample includes a first difference value corresponding to a sample image height value,
Figure 617582DEST_PATH_IMAGE007
a preset height adjustment value is represented,
Figure 295688DEST_PATH_IMAGE008
representing the second of the at least one training sample
Figure 323687DEST_PATH_IMAGE004
The image height value corresponding to each training sample,
Figure 505269DEST_PATH_IMAGE022
to represent
Figure 740948DEST_PATH_IMAGE010
And
Figure 906350DEST_PATH_IMAGE011
the maximum value of (a) is,
Figure 738039DEST_PATH_IMAGE012
indicating a pre-set image property loss value,
Figure 39708DEST_PATH_IMAGE023
representing the second of the at least one training sample
Figure 197020DEST_PATH_IMAGE004
Each training sample includes a sample attribute value that,
Figure 849718DEST_PATH_IMAGE014
representing the second of the at least one training sample
Figure 970252DEST_PATH_IMAGE004
Each training sample includes a second difference value corresponding to the sample attribute value,
Figure 392006DEST_PATH_IMAGE015
representing the second of the at least one training sample
Figure 454640DEST_PATH_IMAGE004
The image attribute values corresponding to each training sample,
Figure 329055DEST_PATH_IMAGE024
to represent
Figure 768126DEST_PATH_IMAGE017
And
Figure 44387DEST_PATH_IMAGE018
the maximum value of (a) is,
Figure 277922DEST_PATH_IMAGE019
representing a loss value of the at least one training sample;
a transmission unit configured to transmit the image detection result to a settlement apparatus having a display function and a storage function to perform a settlement process.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
8. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-5.
CN202110051707.5A 2021-01-15 2021-01-15 Image detection method, image detection device, electronic equipment and computer readable medium Active CN112381184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110051707.5A CN112381184B (en) 2021-01-15 2021-01-15 Image detection method, image detection device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110051707.5A CN112381184B (en) 2021-01-15 2021-01-15 Image detection method, image detection device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN112381184A CN112381184A (en) 2021-02-19
CN112381184B true CN112381184B (en) 2021-05-25

Family

ID=74581858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110051707.5A Active CN112381184B (en) 2021-01-15 2021-01-15 Image detection method, image detection device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN112381184B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808187A (en) * 2021-09-18 2021-12-17 京东鲲鹏(江苏)科技有限公司 Disparity map generation method and device, electronic equipment and computer readable medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960119A (en) * 2018-06-28 2018-12-07 武汉市哈哈便利科技有限公司 A kind of commodity recognizer of the multi-angle video fusion for self-service cabinet
CN109684950A (en) * 2018-12-12 2019-04-26 联想(北京)有限公司 A kind of processing method and electronic equipment
CN109697801A (en) * 2017-10-23 2019-04-30 北京京东尚科信息技术有限公司 Self-help settlement equipment, method, apparatus, medium and electronic equipment
CN111339887A (en) * 2020-02-20 2020-06-26 深圳前海达闼云端智能科技有限公司 Commodity identification method and intelligent container system
CN111340126A (en) * 2020-03-03 2020-06-26 腾讯云计算(北京)有限责任公司 Article identification method and device, computer equipment and storage medium
CN111353540A (en) * 2020-02-28 2020-06-30 创新奇智(青岛)科技有限公司 Commodity category identification method and device, electronic equipment and storage medium
CN111415461A (en) * 2019-01-08 2020-07-14 虹软科技股份有限公司 Article identification method and system and electronic equipment
CN111553889A (en) * 2020-04-16 2020-08-18 上海扩博智能技术有限公司 Method, system, equipment and storage medium for comparing commodity placement positions on goods shelf
CN111626201A (en) * 2020-05-26 2020-09-04 创新奇智(西安)科技有限公司 Commodity detection method and device and readable storage medium
CN111738245A (en) * 2020-08-27 2020-10-02 创新奇智(北京)科技有限公司 Commodity identification management method, commodity identification management device, server and readable storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697801A (en) * 2017-10-23 2019-04-30 北京京东尚科信息技术有限公司 Self-help settlement equipment, method, apparatus, medium and electronic equipment
CN108960119A (en) * 2018-06-28 2018-12-07 武汉市哈哈便利科技有限公司 A kind of commodity recognizer of the multi-angle video fusion for self-service cabinet
CN109684950A (en) * 2018-12-12 2019-04-26 联想(北京)有限公司 A kind of processing method and electronic equipment
CN111415461A (en) * 2019-01-08 2020-07-14 虹软科技股份有限公司 Article identification method and system and electronic equipment
CN111339887A (en) * 2020-02-20 2020-06-26 深圳前海达闼云端智能科技有限公司 Commodity identification method and intelligent container system
CN111353540A (en) * 2020-02-28 2020-06-30 创新奇智(青岛)科技有限公司 Commodity category identification method and device, electronic equipment and storage medium
CN111340126A (en) * 2020-03-03 2020-06-26 腾讯云计算(北京)有限责任公司 Article identification method and device, computer equipment and storage medium
CN111553889A (en) * 2020-04-16 2020-08-18 上海扩博智能技术有限公司 Method, system, equipment and storage medium for comparing commodity placement positions on goods shelf
CN111626201A (en) * 2020-05-26 2020-09-04 创新奇智(西安)科技有限公司 Commodity detection method and device and readable storage medium
CN111738245A (en) * 2020-08-27 2020-10-02 创新奇智(北京)科技有限公司 Commodity identification management method, commodity identification management device, server and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-task cascade deep convolutional neural networks for large-scale commodity recognition;Xiaofeng Zou等;《Springer》;20190701;第1-15页 *
基于YOLOv3 的轻量级目标检测网络;齐榕等;《计算机应用与软件》;20201031;第37卷(第10期);参见第2.2节 *

Also Published As

Publication number Publication date
CN112381184A (en) 2021-02-19

Similar Documents

Publication Publication Date Title
CN109800732B (en) Method and device for generating cartoon head portrait generation model
CN111967467A (en) Image target detection method and device, electronic equipment and computer readable medium
CN112800276A (en) Video cover determination method, device, medium and equipment
CN112381184B (en) Image detection method, image detection device, electronic equipment and computer readable medium
CN112381074B (en) Image recognition method and device, electronic equipment and computer readable medium
CN113468344A (en) Entity relationship extraction method and device, electronic equipment and computer readable medium
CN110765304A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN112529672B (en) Article information pushing method and device, electronic equipment and computer readable medium
CN115272760A (en) Small sample smoke image fine classification method suitable for forest fire smoke detection
CN111680754B (en) Image classification method, device, electronic equipment and computer readable storage medium
CN111949860B (en) Method and apparatus for generating a relevance determination model
CN113780239A (en) Iris recognition method, iris recognition device, electronic equipment and computer readable medium
CN113255812A (en) Video frame detection method and device and electronic equipment
CN114202758A (en) Food information generation method and device, electronic equipment and medium
CN112990135B (en) Device control method, device, electronic device and computer readable medium
CN113283115B (en) Image model generation method and device and electronic equipment
CN113239943B (en) Three-dimensional component extraction and combination method and device based on component semantic graph
CN114155366B (en) Dynamic cabinet image recognition model training method and device, electronic equipment and medium
CN114625876B (en) Method for generating author characteristic model, method and device for processing author information
CN113819989B (en) Article packaging method, apparatus, electronic device and computer readable medium
CN111311616B (en) Method and apparatus for segmenting an image
CN111582458A (en) Method and apparatus, device, and medium for processing feature map
CN114627352A (en) Article information generation method and device, electronic equipment and computer readable medium
CN116188887A (en) Attribute recognition pre-training model generation method and attribute recognition model generation method
CN117076920A (en) Model training method, information generating method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240411

Address after: Room A-T0445, Building 3, No. 20 Yong'an Road, Shilong Economic Development Zone, Mentougou District, Beijing, 102300 (Cluster Registration)

Patentee after: Beijing Shilianzhonghe Technology Co.,Ltd.

Country or region after: China

Address before: 100102 room 801, 08 / F, building 7, yard 34, Chuangyuan Road, Chaoyang District, Beijing

Patentee before: BEIJING MISSFRESH E-COMMERCE Co.,Ltd.

Country or region before: China