CN115187800A - Artificial intelligence commodity inspection method, device and medium based on deep learning - Google Patents

Artificial intelligence commodity inspection method, device and medium based on deep learning Download PDF

Info

Publication number
CN115187800A
CN115187800A
Authority
CN
China
Prior art keywords
commodity
image
target
pixel
area image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210800571.8A
Other languages
Chinese (zh)
Inventor
陈强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Ruanjiang Turing Artificial Intelligence Technology Co ltd
Original Assignee
Chongqing Ruanjiang Turing Artificial Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Ruanjiang Turing Artificial Intelligence Technology Co ltd filed Critical Chongqing Ruanjiang Turing Artificial Intelligence Technology Co ltd
Priority to CN202210800571.8A priority Critical patent/CN115187800A/en
Publication of CN115187800A publication Critical patent/CN115187800A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/752 Contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an artificial intelligence commodity inspection method, device and medium based on deep learning. A first area image of a target area is acquired after the container door is closed, and commodity information is identified from it; after the door is closed again, a second area image of the target area is acquired and commodity information is identified from it; distinguishing information is then obtained by comparing the commodity information from the two acquisitions. The image of the commodity being taken out is also collected, the target commodity information is identified from it, and the distinguishing information is matched against the target commodity information; the removed commodity passes inspection when the two are consistent, which ensures the accuracy of the commodity inspection.

Description

Artificial intelligence commodity inspection method, device and medium based on deep learning
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an artificial intelligence commodity inspection method, device and medium based on deep learning.
Background
Artificial Intelligence (AI) is a technical science that studies and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence. It is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Since the birth of artificial intelligence, its theories and technologies have matured steadily and its application fields have expanded continuously; the technological products it brings in the future can be expected to act as "containers" of human intelligence. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is not human intelligence, but it can think like a human and may even exceed human intelligence.
Artificial intelligence is widely used in commerce, and machine vision is used to inspect the goods in sales containers. Current intelligent-counter vision schemes mainly use a fisheye lens as the visual collector to detect and identify the commodities in the counter. Compared with an ordinary lens, a fisheye lens has a larger viewing angle, so the visual information of a whole shelf layer of commodities can be collected even when the layer height of the container is limited. However, under the fisheye viewing angle such a scheme cannot distinguish commodities of different types and specifications (for example, commodities of different heights) inside the container.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus and a medium for artificial intelligence commodity inspection based on deep learning, which can accurately identify commodities taken out of containers.
In order to achieve the purpose, the invention adopts the following technical scheme:
The artificial intelligence commodity inspection method based on deep learning of the invention comprises the following steps:
acquiring a first area image, a second area image and a target image of a target area in a container, wherein the first area image is acquired after the container door is closed, the second area image is acquired when the container door is closed next time, and the target image is an image of the commodity taken out of the container;
identifying the target image to obtain target commodity information; identifying the first area image to obtain a first commodity list, and identifying the second area image to obtain a second commodity list;
comparing the first commodity list with the second commodity list to obtain distinguishing information;
and comparing the target commodity information with the distinguishing information, and completing the commodity inspection when the target commodity information matches the distinguishing information.
Further, acquiring the target image in the container comprises:
acquiring position information of a commodity;
and comparing the position information of the commodity with a preset position threshold range, acquiring an image of the preset position when the position information of the commodity exceeds the position threshold range, and taking the image of the preset position as a target image.
Further, the step of identifying the target image and obtaining the target commodity information includes:
extracting commodity contour features from the target image;
acquiring a first training data set containing the commodity contour features, and inputting the first training data set into a preset convolutional neural network to generate a first commodity identification model;
and inputting the target image into the first commodity identification model, and outputting the target commodity information.
Further, extracting the commodity contour features from the target image comprises the following steps:
converting the target image into a grayscale image, where the mathematical expression of the grayscale conversion is:
GRAY(A_i) = (R^2.2 × 0.2937 + G^2.2 × 0.6274 + B^2.2 × 0.0753)^(1/2.2)
where R is the red value of pixel A_i, G is the green value of pixel A_i, and B is the blue value of pixel A_i;
scanning the pixels A_i in the grayscale image one by one and obtaining the gray value GRAY(A_i) of each pixel A_i;
scanning line by line and calculating the difference GRAY(A_i) - GRAY(A_{i-1}) between pixel A_i and the adjacent pixel A_{i-1}, and when |GRAY(A_i) - GRAY(A_{i-1})| > α, where α is a first threshold, marking pixel A_i and the adjacent pixel A_{i-1} to obtain marked pixels A_j;
taking a marked pixel A_j as a contour pixel when the following judgment conditions are met, the judgment conditions at least including:
the marked pixel A_j has at least one adjacent pixel A_{j-1} in the horizontal and vertical directions satisfying |GRAY(A_i) - GRAY(A_{i-1})| > α;
the marked pixel A_j has at most three adjacent pixels A_{j-1} in the horizontal and vertical directions satisfying |GRAY(A_i) - GRAY(A_{i-1})| < β, where β is a second threshold;
and merging the contour pixels to obtain the commodity contour features.
Further, recognizing the first area image to obtain a first commodity list, and recognizing the second area image to obtain a second commodity list, including:
extracting commodity features of the first area image and the second area image;
acquiring a second training data set containing commodity characteristics, inputting the second training data set into a preset convolutional neural network, and generating a second commodity identification model;
inputting the first area image into the second commodity identification model to obtain an identification result, and establishing the first commodity list according to the identification result; and inputting the second area image into the second commodity identification model to obtain an identification result, and establishing the second commodity list according to the identification result.
Further, extracting the commodity features of the first area image comprises:
performing binarization processing on the first area image to obtain a first black-and-white image;
selecting any pixel A_k from the first black-and-white image; when at least one pixel A_{k-1} with a gray value different from that of pixel A_k exists in the first black-and-white image, retaining pixel A_k; when no pixel A_{k-1} with a gray value different from that of pixel A_k exists in the first black-and-white image, removing pixel A_k;
extracting a plurality of closed feature blocks from the retained pixels A_k, and taking the closed feature blocks whose number of repeated occurrences exceeds a preset threshold as the commodity features of the first area image.
Further, extracting the commodity features of the second area image comprises:
performing binarization processing on the second area image to obtain a second black-and-white image;
selecting any pixel A_k' from the second black-and-white image; when at least one pixel A_{k-1}' with a gray value different from that of pixel A_k' exists in the second black-and-white image, retaining pixel A_k'; when no pixel A_{k-1}' with a gray value different from that of pixel A_k' exists in the second black-and-white image, removing pixel A_k';
extracting a plurality of closed feature blocks from the retained pixels A_k', and taking the closed feature blocks whose number of repeated occurrences exceeds a preset threshold as the commodity features of the second area image.
The present invention also provides an artificial intelligence commodity inspection device, comprising:
the acquisition module is used for acquiring a first area image, a second area image and a target image of a target area in a container, wherein the first area image is acquired after the container door is closed, the second area image is acquired when the container door is closed next time, and the target image is an image of the commodity taken out of the container;
the identification module is used for identifying the target image to obtain target commodity information; identifying the first area image to obtain a first commodity list, and identifying the second area image to obtain a second commodity list;
the comparison module is used for comparing the first commodity list with the second commodity list to obtain distinguishing information;
and the inspection module is used for comparing the target commodity information with the distinguishing information and finishing the inspection of the commodity when the target commodity information is matched with the distinguishing information.
The invention also provides a storage medium, wherein a computer program is stored, and when the computer program is loaded and executed by a processor, the artificial intelligence commodity inspection method based on deep learning is realized.
The present invention also provides a computer apparatus comprising: a processor, and a memory; wherein the memory is for storing a computer program; the processor is used for loading and executing the computer program to enable the computer device to execute the artificial intelligence commodity inspection method based on deep learning.
The beneficial effects of the invention are as follows: in the artificial intelligence commodity inspection method, device and medium based on deep learning, a first area image of the target area is acquired after the door is closed and commodity information is identified from it; after the door is closed again, a second area image of the target area is acquired and commodity information is identified from it; distinguishing information is then obtained by comparing the commodity information from the two acquisitions. The image of the commodity being taken out is also collected, the target commodity information is identified, and the distinguishing information is matched against the target commodity information; the removed commodity passes inspection when the two are consistent, which ensures the accuracy of the commodity inspection.
Drawings
The invention is further described below with reference to the following figures and examples:
FIG. 1 is a schematic flow chart of a method for inspecting a commodity according to the present invention;
fig. 2 is a schematic structural diagram of the commodity inspection device of the present invention.
Detailed Description
The following embodiments of the present invention are provided by way of specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the present invention; the drawings show only the components related to the present invention and are not drawn according to the number, shape and size of the components in an actual implementation. The type, quantity and proportion of each component in an actual implementation may be changed arbitrarily, and the component layout may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention, however, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details.
At present, unmanned sales containers can be seen on streets and lanes, and particularly in residential areas. Unmanned sales counters greatly facilitate people's daily life and strengthen the connection between the physical economy and the Internet economy. However, existing unmanned counters still have many problems, such as inaccurate identification of the goods taken out. When a shopper repeatedly takes out and returns a commodity, an identification error is easily caused.
In order to solve the above problem, as shown in fig. 1: the artificial intelligence commodity inspection method based on deep learning of the embodiment comprises the following steps:
S101, acquiring a first area image, a second area image and a target image of a target area in a container, wherein the first area image is acquired after the container door is closed, the second area image is acquired when the container door is closed next time, and the target image is an image of the commodity taken out of the container;
S102, identifying the target image to obtain target commodity information; identifying the first area image to obtain a first commodity list, and identifying the second area image to obtain a second commodity list;
S103, comparing the first commodity list with the second commodity list to obtain distinguishing information;
S104, comparing the target commodity information with the distinguishing information, and completing the commodity inspection when the target commodity information matches the distinguishing information.
The distinguishing information indicates which commodities are missing between the two door closings, and the target commodity information provides a visual judgment of which commodity was actually taken away; accurate commodity inspection is therefore obtained after two rounds of identification and verification, avoiding inspection mistakes.
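To make the two-stage verification concrete, the following is a minimal Python sketch (not part of the patent) of the list comparison and matching logic, assuming the recognition steps already return lists of commodity names; all function and variable names are placeholders.

```python
from collections import Counter

def compute_distinguishing_info(first_list, second_list):
    """Commodities present in the first (pre-purchase) list but missing from
    the second (post-purchase) list, i.e. candidates for what was taken out."""
    return Counter(first_list) - Counter(second_list)

def inspect(first_list, second_list, target_items):
    """Inspection passes when the visually identified target commodities match
    the difference between the two shelf snapshots."""
    return compute_distinguishing_info(first_list, second_list) == Counter(target_items)

# Hypothetical example: one cola is on the shelf before the purchase and gone
# afterwards, and the door-side camera also identified a cola being taken out.
assert inspect(["cola", "chips", "water"], ["chips", "water"], ["cola"])
```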
In some embodiments, acquiring the target image in the container comprises:
S201, acquiring the position information of the commodity;
S202, comparing the position information of the commodity with a preset position threshold range, collecting an image of a preset position when the position information of the commodity exceeds the position threshold range, and taking the image of the preset position as the target image.
Specifically, in this embodiment the position information of the commodity is detected by a sensor (such as an infrared sensor or an RFID radio-frequency antenna) arranged at the door of the container, so position information is actually generated only after the commodity is carried to the door. The preset threshold range lies within the container; when the position of the commodity leaves the container, the camera inside the container is started to collect an image of the preset position, the preset position being the position corresponding to the container door.
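As an illustration only, the trigger logic described above might look like the following sketch; the sensor interface, the threshold values and the capture callback are hypothetical stand-ins for the door sensor and the in-container camera.

```python
def outside_threshold(position_mm, threshold_range=(0.0, 450.0)):
    """True once the sensed commodity position leaves the preset range,
    i.e. the item has been carried past the container door."""
    low, high = threshold_range
    return not (low <= position_mm <= high)

def acquire_target_image(read_door_sensor, capture_door_frame):
    """Poll the door sensor and grab a frame of the preset (door) position as
    soon as the commodity position exceeds the threshold range (step S202)."""
    position = read_door_sensor()
    if outside_threshold(position):
        return capture_door_frame()  # image of the preset position -> target image
    return None
```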
In some embodiments, identifying the target image and obtaining the target commodity information includes:
S301, extracting the commodity contour features from the target image;
S302, acquiring a first training data set containing the commodity contour features, and inputting the first training data set into a preset convolutional neural network to generate a first commodity identification model;
S303, inputting the target image into the first commodity identification model and outputting the target commodity information.
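The patent does not disclose the layout of the preset convolutional neural network, so the following PyTorch sketch only indicates what training the first commodity identification model on contour-feature images could look like; the architecture, the 128×128 single-channel input and the hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class CommodityCNN(nn.Module):
    """Small convolutional classifier standing in for the 'preset convolutional
    neural network' of step S302 (architecture assumed, not from the patent)."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumes 128x128 contour-feature images: two 2x poolings -> 32 channels of 32x32.
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 32, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

def train_first_model(train_loader, num_classes, epochs=5):
    """Train on the first training data set of contour features (step S302)."""
    model = CommodityCNN(num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```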
In some embodiments, extracting the commodity contour feature from the target image comprises:
S401, converting the target image into a grayscale image, where the mathematical expression of the grayscale conversion is:
GRAY(A_i) = (R^2.2 × 0.2937 + G^2.2 × 0.6274 + B^2.2 × 0.0753)^(1/2.2)
where R is the red value of pixel A_i, G is the green value of pixel A_i, and B is the blue value of pixel A_i;
in this embodiment, the contour is obtained by using a grayscale image, because when the commodity is taken out from the container, the collected image is very complex, and the clothing of the consumer is photographed, so if the binarization method is used, it is difficult to obtain the binarization threshold, and the pattern of the target commodity is likely to be erased, so the contour feature is directly collected by using the grayscale image.
S402, scanning the pixels A_i in the grayscale image one by one and obtaining the gray value GRAY(A_i) of each pixel A_i;
S403, scanning line by line and calculating the difference GRAY(A_i) - GRAY(A_{i-1}) between pixel A_i and the adjacent pixel A_{i-1}, and when |GRAY(A_i) - GRAY(A_{i-1})| > α, where α is the first threshold, marking pixel A_i and the adjacent pixel A_{i-1} to obtain marked pixels A_j;
S404, taking a marked pixel A_j as a contour pixel when the following judgment conditions are met, the judgment conditions at least including:
the marked pixel A_j has at least one adjacent pixel A_{j-1} in the horizontal and vertical directions satisfying |GRAY(A_i) - GRAY(A_{i-1})| > α;
the marked pixel A_j has at most three adjacent pixels A_{j-1} in the horizontal and vertical directions satisfying |GRAY(A_i) - GRAY(A_{i-1})| < β, where β is the second threshold;
S405, merging the contour pixels to obtain the commodity contour features. In general, contour pixels lie on a continuous line, so when the two conditions of step S404 are met, all contour lines in the image can be roughly extracted; the contours are then recognized by the identification model to obtain information such as the type and specification of the commodity.
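A literal, unoptimized Python sketch of steps S402 to S405 follows; the thresholds α and β are left as parameters, and the treatment of pixels at the image border is an assumption the patent does not specify.

```python
import numpy as np

def extract_contour_pixels(gray, alpha, beta):
    """Mark pixels whose horizontal neighbour differs by more than alpha
    (step S403), then keep a marked pixel as a contour pixel when at least one
    of its four horizontal/vertical neighbours differs by more than alpha and
    at most three of them differ by less than beta (step S404)."""
    h, w = gray.shape
    diff = np.abs(np.diff(gray, axis=1))        # |GRAY(A_i) - GRAY(A_{i-1})| per row
    marked = np.zeros((h, w), dtype=bool)
    marked[:, 1:] |= diff > alpha               # mark A_i
    marked[:, :-1] |= diff > alpha              # and its left neighbour A_{i-1}

    contour = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            if not marked[y, x]:
                continue
            over = under = 0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d = abs(gray[y, x] - gray[ny, nx])
                    over += d > alpha
                    under += d < beta
            if over >= 1 and under <= 3:        # judgment conditions of step S404
                contour[y, x] = True
    return contour                              # step S405 merges these pixels into contours
```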
In some embodiments, identifying the first area image to obtain a first commodity list and identifying the second area image to obtain a second commodity list comprises:
S501, extracting the commodity features of the first area image and the second area image;
S502, acquiring a second training data set containing the commodity features, inputting the second training data set into a preset convolutional neural network, and generating a second commodity identification model;
S503, inputting the first area image into the second commodity identification model to obtain an identification result and establishing the first commodity list according to the identification result; inputting the second area image into the second commodity identification model to obtain an identification result and establishing the second commodity list according to the identification result.
In this embodiment, the recognition model is trained in the same way as the first commodity identification model above, and all the commodities in the first area image and the second area image are then judged, so that a first commodity list and a second commodity list are generated. The first commodity list contains the commodity information before the consumer makes a purchase, and the second commodity list contains the commodity information after the consumer has purchased and closed the door.
In some embodiments, extracting the commodity features of the first area image comprises:
S601, performing binarization processing on the first area image to obtain a first black-and-white image;
S602, selecting any pixel A_k from the first black-and-white image; when at least one pixel A_{k-1} with a gray value different from that of pixel A_k exists in the first black-and-white image, retaining pixel A_k; when no pixel A_{k-1} with a gray value different from that of pixel A_k exists in the first black-and-white image, removing pixel A_k;
S603, extracting a plurality of closed feature blocks from the retained pixels A_k, and taking the closed feature blocks whose number of repeated occurrences exceeds a preset threshold as the commodity features of the first area image.
In this recognition scene the goods in the container are generally placed neatly, so feature extraction is not difficult: the contours of the goods are extracted directly by binarization, and because of the packaging and the shape of the goods there are usually several closed patterns, such as the trademark on a packaging bag, the printing on a packaging bag, or the cap of a beverage bottle. Simply extracting these features makes the article easy to identify. The commodity features in this embodiment therefore include the trademark on the packaging bag, the printing on the packaging bag, the cap of a beverage bottle, the characters on the packaging bag, the overall shape of the package, and so on. Because recognition is performed with a recognition model, a model trained on these extracted features achieves higher recognition accuracy.
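Steps S601 to S603 could be sketched with OpenCV as follows; the Otsu threshold, the use of external contours as "closed feature blocks" and the coarse area-based grouping of repeated blocks are assumptions, since the patent does not state how the blocks are extracted or how repetitions are counted.

```python
import cv2

def extract_commodity_features(region_image_bgr, min_repeats=2):
    """Binarize the shelf image, extract closed feature blocks as external
    contours, and keep only blocks whose (roughly identical) shape repeats
    more than a preset number of times, e.g. logos or caps appearing on
    several units of the same commodity."""
    gray = cv2.cvtColor(region_image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # step S601

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    buckets = {}
    for c in contours:
        key = int(cv2.contourArea(c)) // 50     # coarse shape key (assumed grouping rule)
        buckets.setdefault(key, []).append(c)

    # Step S603: feature blocks whose repetitions exceed the preset threshold.
    return [c for group in buckets.values() if len(group) > min_repeats for c in group]
```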
In some embodiments, extracting the commodity features of the second area image comprises:
S701, performing binarization processing on the second area image to obtain a second black-and-white image;
S702, selecting any pixel A_k' from the second black-and-white image; when at least one pixel A_{k-1}' with a gray value different from that of pixel A_k' exists in the second black-and-white image, retaining pixel A_k'; when no pixel A_{k-1}' with a gray value different from that of pixel A_k' exists in the second black-and-white image, removing pixel A_k';
S703, extracting a plurality of closed feature blocks from the retained pixels A_k', and taking the closed feature blocks whose number of repeated occurrences exceeds a preset threshold as the commodity features of the second area image.
Similarly, in this embodiment the contours of the commodities are extracted directly by binarization, and because of the packaging and the shape of the goods there are many closed patterns, such as the trademark on a packaging bag, the printing on a packaging bag, or the cap of a beverage bottle. Simply extracting these features makes the article easy to identify. The commodity features in this embodiment therefore include the trademark on the packaging bag, the printing on the packaging bag, the cap of a beverage bottle, the characters on the packaging bag, the overall shape of the package, and so on. Because recognition is performed with a recognition model, a model trained on these extracted features achieves higher recognition accuracy.
In the artificial intelligence commodity inspection method based on deep learning of the invention, a first area image of the target area is acquired after the door is closed and commodity information is identified from it; after the door is closed again, a second area image of the target area is acquired and commodity information is identified from it; distinguishing information is then obtained by comparing the commodity information from the two acquisitions. The image of the commodity being taken out is also collected, the target commodity information is identified from it, and the distinguishing information is matched against the target commodity information; the removed commodity passes inspection when the two are consistent, which ensures the accuracy of the commodity inspection.
As shown in fig. 2, the present invention also provides an artificial intelligence commodity inspection device, comprising:
the acquisition module is used for acquiring a first area image, a second area image and a target image of a target area in a container, wherein the first area image is acquired after the container door is closed, the second area image is acquired when the container door is closed next time, and the target image is an image of the commodity taken out of the container;
the identification module is used for identifying the target image to obtain target commodity information; identifying the first area image to obtain a first commodity list, and identifying the second area image to obtain a second commodity list;
the comparison module is used for comparing the first commodity list with the second commodity list to obtain distinguishing information;
and the inspection module is used for comparing the target commodity information with the distinguishing information and finishing the inspection of the commodity when the target commodity information is matched with the distinguishing information.
The artificial intelligence commodity inspection device acquires a first area image of the target area after the door is closed and identifies commodity information from it; after the door is closed again, a second area image of the target area is acquired and commodity information is identified from it; distinguishing information is then obtained by comparing the commodity information from the two acquisitions. The image of the commodity being taken out is also collected, the target commodity information is identified from it, and the distinguishing information is matched against the target commodity information; the removed commodity passes inspection when the two are consistent, which ensures the accuracy of the commodity inspection.
The present embodiment also provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements any of the methods in the present embodiments.
The present embodiment further provides an electronic terminal, including: a processor and a memory;
the memory is used for storing computer programs, and the processor is used for executing the computer programs stored in the memory so as to enable the terminal to execute the method in the embodiment.
The computer-readable storage medium in the embodiment can be understood by those skilled in the art as follows: all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The electronic terminal provided by the embodiment comprises a processor, a memory, a transceiver and a communication interface, wherein the memory and the communication interface are connected with the processor and the transceiver and are used for completing mutual communication, the memory is used for storing a computer program, the communication interface is used for carrying out communication, and the processor and the transceiver are used for operating the computer program so that the electronic terminal can execute the steps of the method.
In this embodiment, the memory may include a Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Finally, although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. The artificial intelligence commodity inspection method based on deep learning is characterized by comprising the following steps:
acquiring a first area image, a second area image and a target image of a target area in a container, wherein the first area image is acquired after the container door is closed, the second area image is acquired when the container door is closed next time, and the target image is an image of a commodity taken out of the container;
identifying the target image to obtain target commodity information; identifying the first area image to obtain a first commodity list, and identifying the second area image to obtain a second commodity list;
comparing the first commodity list with the second commodity list to obtain distinguishing information;
and comparing the target commodity information with the distinguishing information, and completing the commodity inspection when the target commodity information matches the distinguishing information.
2. The artificial intelligence commodity inspection method based on deep learning of claim 1, wherein: acquiring a target image in a container, comprising:
acquiring position information of a commodity;
and comparing the position information of the commodity with a preset position threshold range, acquiring an image of the preset position when the position information of the commodity exceeds the position threshold range, and taking the image of the preset position as a target image.
3. The artificial intelligence commodity inspection method based on deep learning of claim 1, wherein: the step of identifying the target image and obtaining the target commodity information comprises the following steps:
extracting commodity contour features from the target image;
acquiring a first training data set containing the commodity contour features, and inputting the first training data set into a preset convolutional neural network to generate a first commodity identification model;
and inputting the target image into the first commodity identification model, and outputting the target commodity information.
4. The artificial intelligence commodity inspection method based on deep learning of claim 3, characterized in that extracting the commodity contour features from the target image comprises:
converting the target image into a grayscale image, where the mathematical expression of the grayscale conversion is:
GRAY(A_i) = (R^2.2 × 0.2937 + G^2.2 × 0.6274 + B^2.2 × 0.0753)^(1/2.2)
where R is the red value of pixel A_i, G is the green value of pixel A_i, and B is the blue value of pixel A_i;
scanning the pixels A_i in the grayscale image one by one and obtaining the gray value GRAY(A_i) of each pixel A_i;
scanning line by line and calculating the difference GRAY(A_i) - GRAY(A_{i-1}) between pixel A_i and the adjacent pixel A_{i-1}, and when |GRAY(A_i) - GRAY(A_{i-1})| > α, where α is a first threshold, marking pixel A_i and the adjacent pixel A_{i-1} to obtain marked pixels A_j;
taking a marked pixel A_j as a contour pixel when the following judgment conditions are met, the judgment conditions at least including:
the marked pixel A_j has at least one adjacent pixel A_{j-1} in the horizontal and vertical directions satisfying |GRAY(A_i) - GRAY(A_{i-1})| > α;
the marked pixel A_j has at most three adjacent pixels A_{j-1} in the horizontal and vertical directions satisfying |GRAY(A_i) - GRAY(A_{i-1})| < β, where β is a second threshold;
and merging the contour pixels to obtain the commodity contour features.
5. The artificial intelligence commodity inspection method based on deep learning of claim 1, characterized in that: identifying the first area image to obtain a first commodity list, identifying the second area image to obtain a second commodity list, comprising:
extracting commodity features of the first area image and the second area image;
acquiring a second training data set containing commodity characteristics, inputting the second training data set into a preset convolutional neural network, and generating a second commodity identification model;
inputting the first area image into the second commodity identification model to obtain an identification result, and establishing the first commodity list according to the identification result; and inputting the second area image into the second commodity identification model to obtain an identification result, and establishing the second commodity list according to the identification result.
6. The artificial intelligence commodity inspection method based on deep learning of claim 5, wherein extracting the commodity features of the first area image comprises:
performing binarization processing on the first area image to obtain a first black-and-white image;
selecting any pixel A_k from the first black-and-white image; when at least one pixel A_{k-1} with a gray value different from that of pixel A_k exists in the first black-and-white image, retaining pixel A_k; when no pixel A_{k-1} with a gray value different from that of pixel A_k exists in the first black-and-white image, removing pixel A_k;
extracting a plurality of closed feature blocks from the retained pixels A_k, and taking the closed feature blocks whose number of repeated occurrences exceeds a preset threshold as the commodity features of the first area image.
7. The artificial intelligence commodity inspection method based on deep learning of claim 5, wherein extracting the commodity features of the second area image comprises:
performing binarization processing on the second area image to obtain a second black-and-white image;
selecting any pixel A_k' from the second black-and-white image; when at least one pixel A_{k-1}' with a gray value different from that of pixel A_k' exists in the second black-and-white image, retaining pixel A_k'; when no pixel A_{k-1}' with a gray value different from that of pixel A_k' exists in the second black-and-white image, removing pixel A_k';
extracting a plurality of closed feature blocks from the retained pixels A_k', and taking the closed feature blocks whose number of repeated occurrences exceeds a preset threshold as the commodity features of the second area image.
8. An artificial intelligence commodity inspection device, characterized in that it comprises:
the acquisition module is used for acquiring a first area image, a second area image and a target image of a target area in a container, wherein the first area image is acquired after the container door is closed, the second area image is acquired when the container door is closed next time, and the target image is an image of the commodity taken out of the container;
the identification module is used for identifying the target image to obtain target commodity information; identifying the first area image to obtain a first commodity list, and identifying the second area image to obtain a second commodity list;
the comparison module is used for comparing the first commodity list with the second commodity list to obtain distinguishing information;
and the inspection module is used for comparing the target commodity information with the distinguishing information and finishing the inspection of the commodity when the target commodity information is matched with the distinguishing information.
9. A storage medium having a computer program stored therein, wherein the computer program when loaded and executed by a processor implements a method for artificial intelligence commodity inspection based on deep learning according to any one of claims 1 to 7.
10. A computer device, comprising: a processor, and a memory; wherein the memory is for storing a computer program; the processor is used for loading and executing the computer program to enable the computer device to execute the artificial intelligence commodity inspection method based on deep learning of any one of claims 1 to 7.
CN202210800571.8A 2022-07-06 2022-07-06 Artificial intelligence commodity inspection method, device and medium based on deep learning Pending CN115187800A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210800571.8A CN115187800A (en) 2022-07-06 2022-07-06 Artificial intelligence commodity inspection method, device and medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210800571.8A CN115187800A (en) 2022-07-06 2022-07-06 Artificial intelligence commodity inspection method, device and medium based on deep learning

Publications (1)

Publication Number Publication Date
CN115187800A true CN115187800A (en) 2022-10-14

Family

ID=83516476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210800571.8A Pending CN115187800A (en) 2022-07-06 2022-07-06 Artificial intelligence commodity inspection method, device and medium based on deep learning

Country Status (1)

Country Link
CN (1) CN115187800A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117906734A (en) * 2024-03-20 2024-04-19 深圳桑达银络科技有限公司 Automatic leveling weighing cashing system and method based on artificial intelligence

Similar Documents

Publication Publication Date Title
US11501523B2 (en) Goods sensing system and method for goods sensing based on image monitoring
US11151427B2 (en) Method and apparatus for checkout based on image identification technique of convolutional neural network
CN108549870B (en) Method and device for identifying article display
CN107617573A (en) A kind of logistics code identification and method for sorting based on multitask deep learning
CN111178355B (en) Seal identification method, device and storage medium
CN106156778A (en) The apparatus and method of the known object in the visual field identifying three-dimensional machine vision system
CN108345912A (en) Commodity rapid settlement system based on RGBD information and deep learning
CN104299006A (en) Vehicle license plate recognition method based on deep neural network
CN109034694B (en) Production raw material intelligent storage method and system based on intelligent manufacturing
CN110598752A (en) Image classification model training method and system for automatically generating training data set
US11354549B2 (en) Method and system for region proposal based object recognition for estimating planogram compliance
CN114783584A (en) Method and device for recording drug delivery receipt
CN115187800A (en) Artificial intelligence commodity inspection method, device and medium based on deep learning
CN111598076A (en) Method and device for detecting and processing date in label image
CN116245882A (en) Circuit board electronic element detection method and device and computer equipment
CN111311226A (en) Machine vision-based two-dimensional code reading method and device under complex background
CN113850167A (en) Commodity identification method and system based on edge calculation and machine deep learning
CN112257506A (en) Fruit and vegetable size identification method and device, electronic equipment and computer readable medium
CN111126110A (en) Commodity information identification method, settlement method and device and unmanned retail system
Koponen et al. Recent advancements in machine vision methods for product code recognition: A systematic review
CN114758259B (en) Package detection method and system based on X-ray object image recognition
CN103971118A (en) Detection method of wine bottles in static pictures
Ying et al. Detection of cigarette missing in packing based on deep convolutional neural network
CN112163800B (en) Management method and device for visual inspection tool
US20240029274A1 (en) System and method for detecting a trigger event for identification of an item

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination