Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the relevant invention are shown in the drawings. The embodiments and the features of the embodiments in the present disclosure may be combined with each other in the absence of conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules, or units, and are not used for limiting the order of, or the interdependence between, the functions performed by these devices, modules, or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that, unless the context clearly dictates otherwise, they should be read as "one or more".
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of an image detection method according to some embodiments of the present disclosure.
In the application scenario of Fig. 1, the computing device 101 may first acquire a first item image 102 and a second item image 103 of an item to be detected. Next, the computing device 101 may perform picture feature extraction processing on the first item image 102 and the second item image 103 respectively to obtain a first item image vector 104 and a second item image vector 105. Then, the computing device 101 may perform fusion processing on the first item image vector 104 and the second item image vector 105 to generate a fusion vector 106. Next, the computing device 101 may input the fusion vector 106 into a pre-trained image detection model 107 to obtain an image detection result 108. Finally, the computing device 101 may transmit the image detection result 108 to a settlement device 109 having a display function and a storage function to perform settlement processing.
The computing device 101 may be hardware or software. When implemented as hardware, it may be a distributed cluster composed of multiple servers or terminal devices, or a single server or terminal device. When implemented as software, it may be installed in the hardware devices enumerated above, for example as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is imposed here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to Fig. 2, a flow 200 of some embodiments of an image detection method according to the present disclosure is shown. The method may be performed by the computing device 101 of Fig. 1. The image detection method comprises the following steps:
Step 201, a first item image and a second item image of an item to be detected are acquired.
In some embodiments, an executing subject of the image detection method (e.g., the computing device 101 shown in Fig. 1) may acquire the first item image and the second item image of the item to be detected from a device terminal through a wired or wireless connection. Here, the item to be detected may refer to an item that is stored in a vending cabinet and taken from the vending cabinet by a user. The first item image may be an image of the item as stored in the vending cabinet before the user takes the item, and the second item image may be an image of the item as stored in the vending cabinet after the user has taken the item. For example, the information associated with the first item image or the second item image may include, but is not limited to, at least one of: an item name, an item attribute value (price), an item height value, and the like.
Step 202, performing picture feature extraction processing on the first item image and the second item image respectively to obtain a first item image vector and a second item image vector.
In some embodiments, the executing subject may perform picture feature extraction processing on the first item image and the second item image respectively through a pre-trained initial image extraction network model to obtain a first item image vector and a second item image vector. Here, the initial image extraction network model may be a VGG16 (Visual Geometry Group network) model, a VGG19 model, or the like. For example, the first item image vector may be [0, 0, 0, 1, 0], and the second item image vector may be [0, 0, 1, 0, 0].
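As a toy stand-in for such an extractor (purely illustrative: fixed random projections rather than a trained VGG model, and all names are assumptions), the sketch below maps an image to a fixed-length one-hot vector of the kind shown in the example above:

```python
import numpy as np

def toy_feature_extractor(image: np.ndarray, n_features: int = 5) -> np.ndarray:
    """Toy stand-in for a VGG-style extractor.

    Projects the flattened image through fixed (untrained) random weights,
    then one-hot encodes the strongest response, yielding a vector such as
    [0, 0, 0, 1, 0].
    """
    rng = np.random.default_rng(0)                 # fixed seed: fixed "weights"
    weights = rng.standard_normal((n_features, image.size))
    responses = weights @ image.ravel()            # one response per feature
    vector = np.zeros(n_features)
    vector[int(np.argmax(responses))] = 1.0        # one-hot of the max response
    return vector

first_vec = toy_feature_extractor(np.ones((4, 4)))
```

A real implementation would instead take the activations of an intermediate layer of a pre-trained network; the one-hot step here only mirrors the shape of the example vectors in the text.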
In some optional implementations of some embodiments, the executing subject may further obtain the first item image vector and the second item image vector by:
First, referring to Fig. 3, the first item image is input into a pre-trained picture feature extraction network to obtain a first item image vector. The picture feature extraction network comprises a convolutional network 301 and a pooling network 304, wherein the convolutional network 301 comprises a first convolutional layer 3011 and a second convolutional layer 3012, and the pooling network 304 comprises a first pooling layer 3041 and a second pooling layer 3042.
In practice, the above-mentioned first step may comprise the following sub-steps:
The first substep is to input the first item image into the first convolutional layer 3011 and the second convolutional layer 3012 respectively, to obtain a first image feature sequence 302 and a second image feature sequence 303.
A second substep of inputting the first image feature sequence 302 and the second image feature sequence 303 into the first pooling layer 3041 to obtain a first pooled feature vector sequence set 305. Here, the first pooling layer 3041 may be used for feature compression and dimensionality reduction.
A third substep of inputting the first image feature sequence 302 and the second image feature sequence 303 into the second pooling layer 3042 to obtain a second pooled feature vector sequence set 306. Here, the second pooling layer 3042 may also be used for feature compression and dimensionality reduction.
A fourth sub-step, performing a splicing process on each first pooled feature vector sequence in the first pooled feature vector sequence set 305 and the second pooled feature vector sequence corresponding to that first pooled feature vector sequence to generate a spliced pooled feature vector sequence, so as to obtain a spliced pooled feature vector sequence set 307. In practice, the first pooled feature vector sequence set 305 may be [[0.5, 0, 0, 0, 0], [0, 0.5, 0, 0, 0]], and the second pooled feature vector sequence set 306 may be [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0]]. Thus, the resulting spliced pooled feature vector sequence set 307 may be [[0.5, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0.5, 0, 0, 0, 0, 1, 0, 0, 0]].
And a fifth substep, performing a fusion process on each spliced pooled feature vector sequence in the spliced pooled feature vector sequence set to obtain a fused pooled feature vector as the first item image vector. Here, the fusion process may refer to a splicing process. For example, the spliced pooled feature vector sequences in the spliced pooled feature vector sequence set 307 are fused to obtain the fused pooled feature vector [0.5, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 1, 0, 0, 0] as the first item image vector 104.
And secondly, inputting the second item image into the picture feature extraction network to obtain a second item image vector. In practice, reference may be made to Fig. 3 and the related description above.
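The sub-steps above can be traced end to end on toy data. The NumPy sketch below (shapes and values purely illustrative, taken from the worked example) shows the pooling-style compression and then the splicing and fusion sub-steps, which in this example reduce to concatenation:

```python
import numpy as np

# Pooling layers compress features, e.g. keeping the max of each window:
pooled = np.array([1.0, 3.0, 2.0, 0.5]).reshape(2, 2).max(axis=1)  # -> [3.0, 2.0]

# Example first and second pooled feature vector sequence sets from the text.
first_pooled_set = [np.array([0.5, 0, 0, 0, 0]), np.array([0, 0.5, 0, 0, 0])]
second_pooled_set = [np.array([1.0, 0, 0, 0, 0]), np.array([0, 1.0, 0, 0, 0])]

# Fourth sub-step: splice each first sequence with its corresponding
# second sequence (pair them up, then concatenate each pair).
spliced_set = [np.concatenate(pair) for pair in zip(first_pooled_set, second_pooled_set)]

# Fifth sub-step: fuse (concatenate) all spliced sequences into a single
# 20-element first item image vector.
first_item_image_vector = np.concatenate(spliced_set)
```

Here "splicing" and "fusion" are both modeled as `np.concatenate`, matching the text's statement that the fusion process may refer to a splicing process.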
Step 203, performing a fusion process on the first item image vector and the second item image vector to generate a fusion vector.
In some embodiments, the executing subject may perform a fusion process on the first item image vector and the second item image vector to generate a fusion vector. Here, the fusion process may refer to a splicing process. For example, the first item image vector may be [0, 0, 0, 1, 0], and the second item image vector may be [0, 0, 1, 0, 0]. Fusing the first item image vector and the second item image vector generates the fusion vector [0, 0, 0, 1, 0, 0, 0, 1, 0, 0].
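Using the example vectors above, the fusion of step 203 is a plain concatenation; a minimal sketch:

```python
import numpy as np

first_item_image_vector = np.array([0, 0, 0, 1, 0])
second_item_image_vector = np.array([0, 0, 1, 0, 0])

# Fusion here is a splicing (concatenation) of the two image vectors.
fusion_vector = np.concatenate([first_item_image_vector, second_item_image_vector])
# -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```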
Step 204, inputting the fusion vector into a pre-trained image detection model to obtain an image detection result.
In some embodiments, the executing subject may input the fusion vector into a pre-trained image detection model to obtain an image detection result. Here, the pre-trained image detection model may be any of a variety of network structures, for example, a CNN (Convolutional Neural Network) model, an RNN (Recurrent Neural Network) model, or a DNN (Deep Neural Network) model. Of course, the model may also be built according to actual needs.
Step 205, sending the image detection result to a settlement device with a display function and a storage function to perform settlement processing.
In some embodiments, the executing subject may transmit the image detection result to a settlement device having a display function and a storage function to perform settlement processing. For example, the image detection result may be "10 units of item A, total price 30 yuan". The settlement device "001" may then perform settlement processing based on "10 units of item A, total price 30 yuan".
The above embodiments of the present disclosure have the following advantages: the image detection method of some embodiments of the present disclosure improves the accuracy of the item detection result and reduces the error rate of item settlement. Specifically, the accuracy of existing item detection results is low because the item obtained by the user cannot be detected from multiple angles, which in turn leads to a high error rate of item settlement. Based on this, the image detection method of some embodiments of the present disclosure first acquires a first item image and a second item image of an item to be detected, which provides data support for detecting the item to be detected from two different angles. Second, picture feature extraction processing is performed on the first item image and the second item image respectively to obtain a first item image vector and a second item image vector. Then, the first item image vector and the second item image vector are fused to generate a fusion vector, which makes it convenient to comprehensively consider changes in the item to be detected and provides data support for improving the accuracy of the item detection result. Next, the fusion vector is input into a pre-trained image detection model to obtain an image detection result. In this way, the item obtained by the user is detected from multiple angles, the accuracy of the item detection result is improved, and the error rate of item settlement is thereby reduced.
With further reference to Fig. 4, a flow 400 of further embodiments of an image detection method according to the present disclosure is shown. The method may be performed by the computing device 101 of Fig. 1. The image detection method comprises the following steps:
Step 401, a first item image and a second item image of an item to be detected are acquired.
Step 402, performing picture feature extraction processing on the first item image and the second item image respectively to obtain a first item image vector and a second item image vector.
Step 403, performing fusion processing on the first item image vector and the second item image vector to generate a fusion vector.
In some embodiments, the specific implementation manner and technical effects of steps 401 to 403 may refer to steps 201 to 203 in the embodiments corresponding to Fig. 2, and are not described herein again.
Step 404, inputting the fusion vector into a pre-trained image detection model to obtain an image detection result.
In some embodiments, the executing subject of the image detection method (e.g., the computing device 101 shown in Fig. 1) may input the fusion vector into a pre-trained image detection model by various methods to obtain an image detection result. The image detection model is obtained by training through the following steps:
in the first step, a training sample set is obtained. Wherein, the training samples in the training sample set include: a sample image, a sample name, a sample image height value, and a sample attribute value. Here, the sample attribute value may refer to a value transfer attribute value (price) of the item. For example, the training sample may be "chocolate. png, chocolate, 15cm, 20 yuan".
In the second step, a sample image included in at least one training sample in the training sample set is input into an initial neural network to obtain an image detection result corresponding to each training sample in the at least one training sample. The image detection result includes an image name, an image height value, and an image attribute value. Here, the initial neural network model may be an untrained CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), or the like.
In practice, the network structure of the initial neural network needs to be determined before the second step described above. For example, it is necessary to determine which layers the initial neural network model includes, the connection order between the layers, which neurons each layer includes, the weight and bias term corresponding to each neuron, the activation function of each layer, and so on. As an example, when the initial neural network model is a deep convolutional neural network, which is a multi-layer neural network, it is necessary to determine which layers the deep convolutional neural network includes (e.g., convolutional layers, pooling layers, fully-connected layers, classifiers), the connection order between the layers, and which network parameters each layer includes (e.g., weights, bias terms, convolution step sizes). Among these, the convolutional layers may be used to extract image features. For each convolutional layer, it is necessary to determine the number of convolution kernels, the size of each convolution kernel, the weight of each neuron in each convolution kernel, the bias term corresponding to each convolution kernel, the step size between two adjacent convolutions, and the like. The pooling layers are used to perform dimensionality reduction on the feature information.
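As a small illustration of the per-layer parameters just listed (a hypothetical helper, not part of the disclosed method), the number of trainable parameters of a convolutional layer follows directly from its kernel geometry:

```python
def conv_layer_param_count(in_channels: int, out_channels: int, kernel_size: int) -> int:
    """Trainable parameters of a 2D convolutional layer.

    Each of the `out_channels` convolution kernels has
    `in_channels * kernel_size**2` weights plus one bias term.
    """
    weights_per_kernel = in_channels * kernel_size ** 2
    return out_channels * (weights_per_kernel + 1)

# e.g. a 3x3 convolution mapping a 3-channel image to 64 feature maps:
n_params = conv_layer_param_count(3, 64, 3)  # 64 * (3*9 + 1) = 1792
```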
In the third step, a loss value of the at least one training sample is determined through a loss function. Here, the loss function may include, but is not limited to: a mean square error (MSE) loss function, a hinge loss function, a cross-entropy loss function, and the like.
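Minimal NumPy forms of two of the named loss functions (illustrative standard implementations, not the claimed formula):

```python
import numpy as np

def mse_loss(pred, target) -> float:
    """Mean square error loss over a batch of values."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return float(np.mean((pred - target) ** 2))

def cross_entropy_loss(probs, label: int) -> float:
    """Cross-entropy loss for one sample, given predicted class probabilities."""
    return float(-np.log(np.asarray(probs, float)[label]))

# Predicted vs. sample image height values for two training samples:
height_loss = mse_loss([17.0, 10.5], [15.0, 10.0])  # (2.0**2 + 0.5**2) / 2 = 2.125
```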
In some optional implementations of some embodiments, the third step may include the following sub-steps:
In the first substep, a difference between the image height value included in the image detection result corresponding to each training sample in the at least one training sample and the sample image height value included in that training sample is determined as a first difference, so as to obtain a first difference value set.
And a second substep of determining a difference between an image attribute value included in the image detection result corresponding to each of the at least one training sample and a sample attribute value included in the training sample as a second difference, thereby obtaining a second difference value set.
A third substep of determining a loss value of the at least one training sample based on the at least one training sample, the image detection result corresponding to each training sample in the at least one training sample, the first difference value set, and the second difference value set. As an example, the loss value of the at least one training sample may be generated by a formula of the following form:

$$L \;=\; \frac{1}{N}\sum_{i=1}^{N}\max\!\big(d^{(h)}_{i}-\alpha,\;0\big) \;+\; \beta\cdot\frac{1}{N}\sum_{i=1}^{N}\max\!\big(d^{(p)}_{i},\;0\big)$$

wherein:
$N$ represents the number of training samples included in the at least one training sample;
$i$ represents the sequence number of a training sample in the at least one training sample;
$h_{i}$ represents the sample image height value included in the $i$-th training sample;
$\hat{h}_{i}$ represents the image height value corresponding to the $i$-th training sample;
$d^{(h)}_{i} = \hat{h}_{i} - h_{i}$ represents the first difference corresponding to the sample image height value of the $i$-th training sample;
$\alpha$ represents a preset height adjustment value;
the first summation term represents the image height loss value;
$\max(\cdot,\cdot)$ represents the maximum of its two arguments;
$\beta$ represents a preset image attribute loss value;
$p_{i}$ represents the sample attribute value included in the $i$-th training sample;
$\hat{p}_{i}$ represents the image attribute value corresponding to the $i$-th training sample;
$d^{(p)}_{i} = \hat{p}_{i} - p_{i}$ represents the second difference corresponding to the sample attribute value of the $i$-th training sample;
$L$ represents the loss value of the at least one training sample.
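The loss described above combines a height term (with a preset height adjustment value acting as a margin) and an attribute term (scaled by a preset attribute loss weight). Since the original formula symbols were not fully preserved here, the NumPy sketch below is one hinge-style assumption of that combination, with `alpha` and `beta` as the assumed names of the two preset values:

```python
import numpy as np

def detection_loss(h_true, h_pred, p_true, p_pred, alpha=1.0, beta=0.1) -> float:
    """Hinge-style loss over height and attribute (price) differences.

    `alpha` (preset height adjustment value) and `beta` (preset image
    attribute loss value) are assumptions about the original formula.
    """
    h_true, h_pred = np.asarray(h_true, float), np.asarray(h_pred, float)
    p_true, p_pred = np.asarray(p_true, float), np.asarray(p_pred, float)
    d_h = h_pred - h_true                       # first differences
    d_p = p_pred - p_true                       # second differences
    height_loss = np.mean(np.maximum(d_h - alpha, 0.0))
    attr_loss = beta * np.mean(np.maximum(d_p, 0.0))
    return float(height_loss + attr_loss)

# Two training samples: heights in cm, attribute values (prices) in yuan.
loss = detection_loss([15.0, 10.0], [17.0, 10.5], [20.0, 30.0], [22.0, 30.0])
# height term: mean(max([2-1, 0.5-1], 0)) = 0.5; attribute term: 0.1 * 1 = 0.1
```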
And fourthly, in response to determining that the loss value is less than or equal to a preset threshold, the initial neural network is determined as the image detection model.
And fifthly, in response to determining that the loss value is greater than the preset threshold, the network parameters of the initial neural network are adjusted, a training sample set is formed from unused training samples, the adjusted initial neural network is taken as the initial neural network, and the above training steps are performed again.
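The fourth and fifth steps describe a train-until-threshold loop. A toy sketch with a one-parameter model and gradient descent (the model, data, and learning rate are all illustrative, not the disclosed network):

```python
import numpy as np

def train_until_threshold(xs, ys, threshold=1e-4, lr=0.1, max_rounds=1000):
    """Adjust a single weight w (model: y = w * x) until the MSE loss
    drops to the preset threshold, mirroring steps four and five above."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    w = 0.0
    for _ in range(max_rounds):
        pred = w * xs
        loss = float(np.mean((pred - ys) ** 2))
        if loss <= threshold:          # step four: loss small enough, training done
            break
        grad = float(np.mean(2 * (pred - ys) * xs))
        w -= lr * grad                 # step five: adjust the network parameter
    return w, loss

# Fit y = 2x from three samples; converges in a handful of rounds.
w, final_loss = train_until_threshold([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```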
The above formula and its related content serve as an invention point of the present disclosure and solve the technical problem mentioned in the background art: when a conventional radio frequency identification (RFID) technology is used to detect an item, the relationships among the pieces of information contained in the image are not comprehensively considered, so the accuracy of item detection is low and the error rate of item settlement is correspondingly high. This factor also contributes to long user waiting times: because the relationships among the pieces of information contained in the image are not comprehensively considered, the accuracy of item detection is low. If this factor is addressed, the user waiting time can be reduced. To achieve this effect, the present disclosure adopts different lightweight loss functions to sum the loss values for the item image height value and the image attribute value. The sum of the loss values can reach the preset threshold in a shorter time, which accelerates the convergence of the model. This solves the problem that the relationships among the pieces of information contained in an image are not comprehensively considered when a conventional radio frequency identification technology detects an item, improves the accuracy of item detection, and thereby reduces the error rate of item settlement.
Step 405, sending the image detection result to a settlement device with a display function and a storage function to perform settlement processing.
In some embodiments, the specific implementation manner and technical effects of step 405 may refer to step 205 in those embodiments corresponding to fig. 2, and are not described herein again.
As can be seen from Fig. 4, compared with the description of some embodiments corresponding to Fig. 2, the flow 400 of the image detection method in some embodiments corresponding to Fig. 4 embodies the training steps of the image detection model. Different lightweight loss functions are adopted to sum the loss values of the item image height value and the image attribute value, so that the sum of the loss values can reach the preset threshold in a shorter time, which accelerates the convergence of the model. This solves the problem that the relationships among the pieces of information contained in an image are not comprehensively considered when a conventional radio frequency identification technology detects an item, improves the accuracy of item detection, and thereby reduces the error rate of item settlement.
With further reference to fig. 5, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of an image detection apparatus, which correspond to those of the method embodiments described above with reference to fig. 2, and which may be applied in particular to various electronic devices.
As shown in fig. 5, the image detection apparatus 500 of some embodiments includes: an acquisition unit 501, an extraction unit 502, a fusion unit 503, an input unit 504, and a transmission unit 505. Wherein the acquiring unit 501 is configured to acquire a first item image and a second item image of an item to be detected; the extracting unit 502 is configured to perform picture feature extraction processing on the first item image and the second item image respectively to obtain a first item image vector and a second item image vector; the fusion unit 503 is configured to perform fusion processing on the first item image vector and the second item image vector to generate a fusion vector; the input unit 504 is configured to input the fusion vector into a pre-trained image detection model, resulting in an image detection result; the transmission unit 505 is configured to transmit the above-described image detection result to a settlement apparatus having a display function and a storage function to perform settlement processing.
In some optional implementations of some embodiments, the extraction unit 502 is further configured to: inputting the first article image to a pre-trained picture feature extraction network to obtain a first article image vector; and inputting the second article image into the picture feature extraction network to obtain a second article image vector.
In some optional implementations of some embodiments, the picture feature extraction network includes: the convolutional network comprises a first convolutional layer and a second convolutional layer, and the pooling network comprises a first pooling layer and a second pooling layer.
In some optional implementations of some embodiments, the extraction unit 502 is further configured to: input the first item image into the first convolutional layer and the second convolutional layer respectively to obtain a first image feature sequence and a second image feature sequence; input the first image feature sequence and the second image feature sequence into the first pooling layer to obtain a first pooled feature vector sequence set; input the first image feature sequence and the second image feature sequence into the second pooling layer to obtain a second pooled feature vector sequence set; perform a splicing process on each first pooled feature vector sequence in the first pooled feature vector sequence set and the second pooled feature vector sequence corresponding to that first pooled feature vector sequence to generate a spliced pooled feature vector sequence, so as to obtain a spliced pooled feature vector sequence set; and perform a fusion process on each spliced pooled feature vector sequence in the spliced pooled feature vector sequence set to obtain a fused pooled feature vector as the first item image vector.
It will be understood that the elements described in the apparatus 500 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.
Referring now to FIG. 6, a block diagram of an electronic device (e.g., computing device 101 of FIG. 1) 600 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a first item image and a second item image of an item to be detected; perform picture feature extraction processing on the first item image and the second item image respectively to obtain a first item image vector and a second item image vector; perform fusion processing on the first item image vector and the second item image vector to generate a fusion vector; input the fusion vector into a pre-trained image detection model to obtain an image detection result; and send the image detection result to a settlement device having a display function and a storage function to perform settlement processing.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor including an acquisition unit, an extraction unit, a fusion unit, an input unit, and a transmission unit. Here, the names of these units do not in some cases constitute a limitation on the units themselves; for example, the transmission unit may also be described as "a unit that transmits the image detection result to a settlement device having a display function and a storage function to perform settlement processing".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the inventive scope of the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.