CN117132845A - Snack classification method by scanning code coordinated image recognition and checking through partition bars - Google Patents


Info

Publication number
CN117132845A
CN117132845A (application number CN202311404116.7A)
Authority
CN
China
Prior art keywords
image
classification
snack
rectangular frame
global
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311404116.7A
Other languages
Chinese (zh)
Other versions
CN117132845B (en)
Inventor
陈建
董江凯
傅旭栋
朱健
朱小明
Current Assignee
Zhejiang Youyou Technology Co ltd
Original Assignee
Zhejiang Youyou Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Youyou Technology Co ltd filed Critical Zhejiang Youyou Technology Co ltd
Priority to CN202311404116.7A
Publication of CN117132845A
Application granted
Publication of CN117132845B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10: Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544: Methods or arrangements for sensing record carriers by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821: Further details of bar or optical code scanning devices
    • G06K7/10861: Sensing of data fields affixed to objects or articles, e.g. coded labels
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/68: Food, e.g. fruit or vegetables

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Electromagnetism (AREA)
  • Toxicology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a snack classification method that coordinates code scanning with image recognition and verifies the result by means of partition bars. Primary classification of the spread-out snacks is achieved by barcode scanning with barcode-scanning equipment arranged below a transparent conveyor belt. Based on the orientation features of the barcodes scanned in the primary classification, virtual whole images are generated and matched against the real image blocks of the snacks to screen out the objects of secondary classification, which helps improve the image classification speed in the secondary classification. In each image block that is an object of secondary classification, the first colour block with the largest size and the second colour block with the smallest size are extracted by rectangular frames of fixed size, and the size ratio of the first rectangular frame to the second rectangular frame that respectively frame-select the first and second colour blocks is used as the secondary classification feature, which reduces the computation needed to identify colour-block boundaries and improves the secondary classification speed. The primary and secondary classifications are checked by a three-level classification that integrates the advantages of the first two stages, making the classification more accurate.

Description

Snack classification method coordinating code scanning with image recognition and verified by means of partition bars
Technical Field
The invention relates to the technical field of commodity classification, in particular to bulk snack classification, and more particularly to a snack classification method that coordinates code scanning with image recognition and verifies the result by means of partition bars.
Background
The current pricing method for bulk foods such as snacks is as follows: the merchant manually sorts the snacks with the same weighing price out of the shopping basket, then weighs and prices each group together. This approach relies on staff memorising and sorting by the weighing price of every type of snack, so errors are unavoidable; moreover, when the purchased snacks span several weighing prices, they must be sorted and weighed multiple times, which is cumbersome and easily causes checkout queues.
To solve the above problems, merchants would like consumers to check out bulk snacks themselves, just as mature self-checkout machines are now widely deployed by various merchants. However, the code-scanning labels on bulk snacks are small, so scanning them is very troublesome for consumers; in addition, existing self-checkout machines cannot weigh goods in batches and cannot weigh and price together multiple types of bulk food that share the same weighing price. Existing self-checkout machines are therefore unsuitable for classifying and pricing bulk snacks.
Image recognition technology is mature and has been widely applied in recent years, and there is some research on using it to classify and recognise bulk foods. However, bulk foods are usually small, and existing image recognition techniques struggle to extract their classification features, so classification accuracy is unsatisfactory; this is the main reason existing image recognition technology is hard to apply to bulk-food classification scenarios. In addition, to classify bulk snacks accurately, existing schemes must extract their detailed features, a complex and time-consuming process, so self-service checkout efficiency based on traditional image recognition is hard to guarantee.
In summary, given that bulk snacks are too small to be scanned manually in batches, that self-checkout machines lack a weighing function, and that existing image recognition performs poorly on bulk snacks, how to break through these limitations, avoid the weighing step, and let consumers check out bulk snacks themselves is the technical problem urgently awaiting a solution in this field.
Disclosure of Invention
The invention aims to overcome the inconvenience of manually scanning the labels of bulk snacks in batches, to improve both the accuracy and the speed of bulk-snack classification by improving existing image classification algorithms, to avoid the weighing step, and to realise self-service checkout of bulk snacks. To this end it provides a snack classification method that coordinates code scanning with image recognition and verifies the result by means of partition bars.
To achieve this purpose, the invention adopts the following technical scheme:
a snack classifying method by scanning code, coordinating image recognition and checking by means of partition bars is provided, a transparent snack conveying belt is divided into a plurality of grids, and bar code scanning and image collecting equipment is arranged in each grid, and the snack classifying method comprises the following steps of
S1, each barcode-scanning and image-acquisition device attempting to scan the barcode of the snack goods spread above its grid, the grids being installed in the first area of the classification and pricing device; on scanning a first barcode, completing the primary classification and further acquiring a barcode image; a first image-acquisition device acquiring a first global map of the snacks spread over the first area;
S2, generating on the first global map a first virtual whole image of the snack type identified by the scanned first barcode, and then the machine matching and filtering out the image blocks in the first global map that are similar to the first virtual whole image;
S3, for each image block remaining in the first global map after the filtering of step S2, performing the secondary classification according to the size ratio of a fixed-size first rectangular frame, which frame-selects with the highest matching degree the first colour block of largest area in the image block, to a fixed-size second rectangular frame, which frame-selects with the highest matching degree the second colour block of smallest area in the image block;
S4, after pricing according to the primary classification result and the secondary classification result, conveying each snack to the shopping bag by the transparent conveyor belt.
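The S1-S4 flow can be sketched as a short orchestration routine. Everything here is an illustrative assumption: the block identifiers, the callable interfaces `virtual_match` and `secondary` standing in for the subsystems detailed later, and the price table are hypothetical, not part of the patent.

```python
def classify_and_price(blocks, virtual_match, secondary, unit_price):
    """Sketch of steps S1-S4: classify every image block of the first
    global map, then price per unit instead of weighing (S4)."""
    total = 0.0
    for block in blocks:
        # S1-S2: a scanned barcode, propagated through the virtual whole
        # image, classifies the matching block directly
        kind = virtual_match(block)
        if kind is None:
            # S3: fall back to the colour-block secondary classification
            kind = secondary(block)
        total += unit_price[kind]
    return total

prices = {"chips": 3.5, "candy": 1.2}
total = classify_and_price(
    ["b1", "b2"],
    virtual_match={"b1": "chips"}.get,   # b2 was not scanned successfully
    secondary=lambda b: "candy",
    unit_price=prices,
)
```

The point of the structure is that the cheap barcode path (S1-S2) handles most blocks and the image-based path (S3) only handles the remainder.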
Preferably, when step S4 is executed, pricing is also performed according to the three-level classification result, and the pricing flow is:
after the secondary classification of step S3 is completed, judging whether every image block in the first global map has been classified;
if yes, pricing according to all classification results and then conveying all snacks to the shopping bag by the transparent conveyor belt;
if not, performing the three-level classification on each unclassified image block in the first global map by means of the partition bar, and conveying each snack to the shopping bag after pricing is completed.
Preferably, the image information of the barcode image includes a first orientation feature of the scanned first barcode relative to the grid, where the first orientation feature consists of the included angles between the horizontal line and the two lines connecting the upper-left vertex p1 and the upper-right vertex p2 of the rectangular first barcode to the centre point p0 of the grid.
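The angle pair just described can be computed as follows. This is a minimal sketch in which the function name, the coordinate convention (y increasing upward, origin arbitrary) and the use of degrees are assumptions not fixed by the text.

```python
import math

def first_orientation_feature(p1, p2, p0):
    """Included angles (in degrees) between the horizontal line and the two
    lines connecting the barcode's upper-left vertex p1 and upper-right
    vertex p2 to the grid centre point p0."""
    def angle_to_horizontal(p, origin):
        return math.degrees(math.atan2(p[1] - origin[1], p[0] - origin[0]))
    return angle_to_horizontal(p1, p0), angle_to_horizontal(p2, p0)

# A level barcode whose top edge sits one unit above the grid centre:
a1, a2 = first_orientation_feature(p1=(-2.0, 1.0), p2=(2.0, 1.0), p0=(0.0, 0.0))
```

The two angles together pin down the rotation of the barcode within its grid, which is what the virtual-whole-image generation of steps A1-A2 needs.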
Preferably, in step S2, the method for similarity matching between the first virtual whole image and the image blocks includes the steps of:
A1, acquiring the whole-image information of the snack bound to the type indicated by the scanned first barcode, including the size features and shape features of the snack whole image, the position feature of the barcode in the snack whole image, and the second orientation feature of the barcode relative to the snack whole image;
A2, restoring the snack whole image corresponding to the scanned first barcode according to the whole-image information, and superimposing the second barcode with the second orientation in the snack whole image onto the first barcode with the first orientation, so that the restored snack whole image is generated on the first global map as the first virtual whole image;
A3, extracting the similarity-matching objects of the first virtual whole image from the first global map, and calculating the first area intersection ratio of each matching object with the first virtual whole image;
a4, judging whether the first area intersection ratio is larger than a preset threshold value,
if yes, judging that the similarity matching is successful;
if not, judging that the similarity matching fails.
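Steps A3-A4 reduce to an intersection-over-union test against a preset threshold. A minimal sketch follows, assuming image blocks are represented as sets of pixel coordinates and taking 0.7 as a placeholder threshold; neither assumption comes from the patent.

```python
def area_iou(pixels_a, pixels_b):
    """First area intersection ratio of two regions given as sets of
    (x, y) pixel coordinates."""
    union = len(pixels_a | pixels_b)
    return len(pixels_a & pixels_b) / union if union else 0.0

def similarity_match(block_pixels, virtual_whole_pixels, threshold=0.7):
    """Step A4: the match succeeds only when the intersection ratio
    exceeds the preset threshold."""
    return area_iou(block_pixels, virtual_whole_pixels) > threshold

square = {(x, y) for x in range(10) for y in range(10)}
shifted = {(x + 2, y) for x in range(10) for y in range(10)}   # 2-pixel offset
```

With the 2-pixel offset the ratio is 80/120 ≈ 0.67, so the match fails at the assumed 0.7 threshold while the identical region passes.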
Preferably, in step A3, the method for extracting the similarity-matching objects of the first virtual whole image from the first global map includes the steps of:
A31, screening out the second grids adjacent to the first grid in which the first barcode was scanned;
A32, extracting from the first global map a local map formed by the first grid and each adjacent second grid;
A33, identifying the image blocks in the local map as similarity-matching objects of the first virtual whole image.
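Steps A31-A33 restrict matching to the first grid and its neighbours. The sketch below assumes the grids form a rows × cols lattice addressed by (row, column); an interior grid then has 8 neighbours and an edge grid fewer, consistent with the 5-neighbour edge case mentioned later in the description.

```python
def adjacent_grids(first, rows, cols):
    """Step A31: indices of the second grids adjacent to the first grid."""
    r, c = first
    neighbours = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue                      # skip the first grid itself
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                neighbours.append((nr, nc))
    return neighbours

centre = adjacent_grids((1, 1), rows=3, cols=3)   # interior grid
edge = adjacent_grids((2, 1), rows=3, cols=3)     # bottom-edge grid
```

Limiting the local map to this neighbourhood is what keeps the per-barcode matching cost independent of the total belt size.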
Preferably, the method for performing the secondary classification in step S3 includes the steps of:
S31, according to a preset extraction strategy, extracting from the rectangular frame list a first rectangular frame of maximum size along a first direction and/or a second rectangular frame of minimum size along a second direction;
S32, frame-selecting each colour block on the image block with the first rectangular frame and/or the second rectangular frame;
S33, calculating the second area intersection ratio of the first rectangular frame with each frame-selected first colour block as the first frame-selection matching degree, and/or calculating the third area intersection ratio of the second rectangular frame with each frame-selected second colour block as the second frame-selection matching degree;
S34, judging whether a first frame-selection matching degree larger than the first frame-selection matching degree threshold and a second frame-selection matching degree larger than the second frame-selection matching degree threshold have been generated;
if yes, going to step S35;
if not, returning to step S31;
S35, calculating the size ratio of the successfully matched first rectangular frame to the second rectangular frame as the secondary classification feature, and matching the snack commodity type corresponding to that size ratio.
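The frame-selection matching degree and the final size-ratio feature of steps S33-S35 can be expressed in a few lines. The (width, height) representation of the fixed-size frames and the sample numbers are illustrative assumptions.

```python
def frame_fullness(block_pixel_area, frame):
    """Frame-selection matching degree (fullness): the pixel area of the
    colour block divided by the area of the fixed-size rectangular frame."""
    w, h = frame
    return block_pixel_area / (w * h)

def secondary_feature(first_frame, second_frame):
    """Step S35: size ratio of the matched first (largest) rectangular
    frame to the matched second (smallest) rectangular frame."""
    (w1, h1), (w2, h2) = first_frame, second_frame
    return (w1 * h1) / (w2 * h2)

fullness = frame_fullness(900, (40, 30))   # colour block fills 75% of the frame
ratio = secondary_feature((40, 30), (8, 5))
```

Because the frame sizes come from a fixed, pre-built list, only finitely many ratio values can occur, which is what makes looking up the commodity type from the feature fast.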
Preferably, the extraction strategy in step S31 includes the steps of:
S311, judging whether the last traversal of the rectangular frame list produced neither the first frame-selection matching degree nor the second frame-selection matching degree;
if yes, filtering the first rectangular frame and the second rectangular frame extracted in the last traversal out of the rectangular frame list, and then re-extracting a first rectangular frame of maximum size along the first direction and a second rectangular frame of minimum size along the second direction;
if not, going to step S312;
S312, judging whether the last traversal produced the first frame-selection matching degree;
if yes, filtering the second rectangular frame extracted in the last traversal out of the rectangular frame list, and then re-extracting a second rectangular frame of minimum size along the second direction;
if not, filtering the first rectangular frame extracted in the last traversal out of the rectangular frame list, and then re-extracting a first rectangular frame of maximum size along the first direction.
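One round of the S311-S312 extraction strategy can be sketched as below. The list-of-(width, height) frame representation and the function signature are assumptions made for illustration.

```python
def next_extraction(frames, last_first, last_second, got_first, got_second):
    """Drop the frame(s) that failed in the previous traversal, then
    re-extract the largest and smallest frames from what remains."""
    if not got_first and not got_second:
        # S311 yes-branch: neither matching degree was produced
        frames = [f for f in frames if f not in (last_first, last_second)]
    elif got_first:
        # S312 yes-branch: only the second (smallest) frame failed
        frames = [f for f in frames if f != last_second]
    else:
        # S312 no-branch: only the first (largest) frame failed
        frames = [f for f in frames if f != last_first]
    if not frames:
        return frames, None, None
    by_area = sorted(frames, key=lambda f: f[0] * f[1])
    return frames, by_area[-1], by_area[0]

frames = [(40, 30), (20, 15), (8, 5), (30, 20)]
frames, first, second = next_extraction(
    frames, last_first=(40, 30), last_second=(8, 5),
    got_first=False, got_second=False)
```

Each round shrinks the candidate list from whichever end failed, so the search over the fixed frame list terminates quickly.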
Preferably, the method for performing the three-level classification comprises the steps of:
B1, after the transparent conveyor belt spreads the snacks located in the first area a second time by means of the partition bars and conveys them to a second area, a second image-acquisition device acquiring a second global map of the snacks spread over the second area;
B2, extracting the difference areas between the second global map and the first global map;
B3, classifying the snack goods in each difference area by the secondary classification method described above.
Preferably, in step S1 a first mark is made on each grid whose primary classification succeeds, and in step S3 a second mark is made on each image block whose secondary classification succeeds; in step B2, the method for extracting the difference areas includes the steps of:
C1, the classification and pricing device, taking the second global map acquired by the second image-acquisition device as an instruction, activating the barcode-scanning function of the barcode-scanning and image-acquisition device installed in each grid without the first mark, and scanning the barcode of each snack in the second area;
C2, generating on the second global map, by the method described in steps A1-A2, a second virtual whole image of the snack type corresponding to each second barcode scanned in step C1;
C3, identifying, by the method described in steps A31-A33, the image blocks in the second global map that are similarity-matching objects of the second virtual whole images;
C4, making the second marks on the image blocks that are similar to each second virtual whole image, and taking each image block in the second global map without the second mark as a difference area.
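Step C4 reduces to set subtraction over block identifiers: whatever carries no second mark after the renewed scan-and-match pass becomes a difference area. The id-based representation below is an assumption for illustration.

```python
def difference_areas(second_global_blocks, second_marked):
    """Step C4: every image block of the second global map without a
    second mark is treated as a difference area and handed to the
    three-level classification."""
    return [b for b in second_global_blocks if b not in second_marked]

regions = difference_areas(["b1", "b2", "b3"], second_marked={"b1", "b3"})
```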
Preferably, each of the grids moves synchronously with the transparent conveyor belt.
The invention has the following beneficial effects:
1. Barcode-scanning and image-acquisition equipment is installed in each grid divided beneath the transparent conveyor belt, and barcode scanning of the bulk snacks spread above the corresponding grid is attempted through this equipment, realising the primary classification of each bulk snack and reducing the number of objects the machine must classify by image recognition in the first global map acquired of the snacks spread over the first area;
2. For bulk snacks whose barcode information is scanned successfully, a virtual whole image is generated on the first global map according to the barcode image information acquired by the barcode-scanning and image-acquisition equipment (including the first orientation feature of the barcode relative to its grid) and the binding relation between that information and the whole-image information of the corresponding snack type (including the shape and size features of the whole image, the position feature of the barcode on the whole image, and the second orientation feature of the barcode relative to the whole image); the machine then matches and filters from the first global map the image blocks similar to each virtual whole image, reducing the number of objects entering the high-accuracy secondary classification stage of image recognition;
3. In the secondary classification stage of image recognition, the first and second colour blocks with the largest and smallest sizes in an image block that is an object of secondary classification are extracted by fixed-size rectangular frames, and the size ratio of the first rectangular frame to the second rectangular frame that respectively frame-select them is used as the secondary classification feature, which reduces the computation needed to identify colour-block boundaries and increases the secondary classification speed. The advantage of computing the secondary classification feature with fixed-size rectangular frames deserves emphasis. The sizes of the frames are determined in advance from the specific sizes of the colour blocks of various sizes on the snack packaging; the same fixed-size rectangular frame can frame-select colour blocks of various sizes below its own size, but with differing frame-selection fullness (fullness, also called the frame-selection matching degree, is the ratio of the pixel area of the colour block to the area of the rectangular frame). Compared with the traditional region-based image recognition approach of automatically generating a matching rectangular frame from the area of each colour block, the computation for colour-block boundary recognition is greatly reduced: first, the frame-selection speed is improved; second, since the frame sizes are fixed and the area of each frame is known, there is no need to compute a frame size from the colour-block area as in the traditional approach; and third, the number of possible ratios between the largest-size and smallest-size rectangular frames is limited, so the commodity type corresponding to the classification feature of the snack can be matched more quickly;
4. For situations such as the stacking of bulk snacks, which affect the accuracy of the primary and secondary classifications, stacking is overcome by the partition bar: after the bulk snacks in the first global map are conveyed from the first area delimited by the partition bar to a second area, a second global map free of stacking is acquired, and the regions of the second global map other than those carrying the primary and secondary classification marks undergo the three-level classification, further reducing missed and erroneous classifications and ensuring classification accuracy;
5. In the three-level classification, the regions possibly mis-classified or missed by the primary and secondary classifications are identified by extracting the difference areas, and the techniques of the primary and secondary classifications are fused, further ensuring classification precision;
6. The number of times a commodity of the same type is classified serves as the statistic of the purchase quantity of the corresponding type of bulk snack, and the bulk snacks of that type are priced directly according to the per-package price, so the weighing step is eliminated.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings that are required to be used in the embodiments of the present invention will be briefly described below. It is evident that the drawings described below are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a step diagram of the snack classification method coordinating code scanning with image recognition and verified by means of partition bars according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the classification and pricing device;
FIG. 3 is a diagram of a comparative example of the first global map and the second global map;
FIG. 4 is an exemplary diagram of similarity matching between a generated virtual whole image and an image block;
FIG. 5 is an exemplary diagram of screening grids after a barcode has been scanned;
FIG. 6 is an exemplary diagram of frame-selecting colour blocks from a snack image with fixed-size rectangular frames.
Detailed Description
The technical scheme of the invention is further described below by the specific embodiments with reference to the accompanying drawings.
Wherein the drawings are for illustrative purposes only and are shown in schematic, non-physical, and not intended to limit the invention; for the purpose of better illustrating embodiments of the invention, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the size of the actual product; it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The same or similar reference numbers in the drawings of embodiments of the invention correspond to the same or similar components; in the description of the present invention, it should be understood that, if the terms "upper", "lower", "left", "right", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, only for convenience in describing the present invention and simplifying the description, rather than indicating or implying that the apparatus or elements being referred to must have a specific orientation, be constructed and operated in a specific orientation, so that the terms describing the positional relationships in the drawings are merely for exemplary illustration and are not to be construed as limiting the present invention, and that the specific meanings of the terms described above may be understood by those of ordinary skill in the art according to specific circumstances.
In the description of the present invention, unless explicitly stated and limited otherwise, the term "coupled" or the like should be interpreted broadly, as it may be fixedly coupled, detachably coupled, or integrally formed, as indicating the relationship of components; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between the two parts or interaction relationship between the two parts. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The snack classification method coordinating code scanning with image recognition and verified by means of partition bars provided by the embodiment of the invention, as shown in FIG. 1, comprises the following steps:
s1, after bulk snacks are paved on a first area on a transparent conveyor (a plurality of grids are divided below the transparent conveyor, a bar code sweeping and image acquisition device is installed in each grid, the sizes, the shapes and the like of all grids are preferably the same for facilitating subsequent algorithm calculation), each bar code sweeping and image acquisition device tries to sweep the bar code of the snacks above the installed grid, and completes one-level classification (the type of snacks is embedded in the bar code, the sweeping of the bar code represents the classification of the snacks), and further acquires a bar code image (the image information of the bar code image comprises the first orientation characteristic of the scanned first bar code relative to the grid, the first orientation characteristic can be determined by using the included angles between two connecting lines of the upper left vertex p1 and the upper right vertex p2 of the bar code image and the central point p0 of the grid as illustrated in fig. 4 and can also be determined by other manners, and the orientation characteristic is not described in detail here), in addition, as shown in fig. 2, the image information of the first image is preferably paved on the first area of each snacks;
S2, generating on the first global map a first virtual whole image of the snack type identified by the scanned first barcode (the generation method is set forth in steps A1-A2 below), and then the machine matching and filtering out the image blocks in the first global map that are similar to the first virtual whole image (filtering means marking the image block so that its classification terminates);
The method for similarity matching between the first virtual whole image and the image blocks specifically includes the following steps:
A1, acquiring the whole-image information of the snack bound to the type indicated by the scanned first barcode, including the size and shape features of the snack whole image, the position feature of the barcode in the snack whole image (for example, assuming the snack whole image is rectangular, the distance between the centre point of the barcode and the centre point of the whole image serves as the position feature), and the second orientation feature of the barcode relative to the snack whole image (as exemplified in FIG. 4: assuming the snack whole image is rectangular, the included angles between the horizontal line and the lines connecting the upper-left vertex p1 of the barcode with the upper-left vertex p3 of the whole image, and the upper-right vertex p2 of the barcode with the upper-right vertex p4 of the whole image, determine the second orientation feature; different snack shapes may call for different determination methods, which are not the subject of the claimed protection and are therefore not described in detail);
A2, restoring the snack whole image corresponding to the scanned first barcode according to the whole-image information, and superimposing the second barcode with the second orientation in the snack whole image onto the first barcode with the first orientation, thereby generating the restored snack whole image on the first global map as the first virtual whole image (shown by reference numeral '300' in FIG. 4). It should be noted that superimposing the first and second barcodes is essentially a process of matching and overlapping based on the orientation features of the two barcode images: for example, the 4 vertices of the first and second barcodes (both rectangular) can first be identified, and the second barcode is then laid over the first barcode in the first barcode's orientation; the superposition succeeds when each of the 4 vertices of the second barcode coincides with the corresponding vertex of the first barcode (coincidence is deemed successful as long as the overlap rate of each vertex pair is above a set overlap-rate threshold, and the overlap rate can be measured by the distance between the vertices);
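The vertex-superposition check of step A2 can be sketched with a distance test per vertex pair; using a Euclidean distance threshold in place of the "overlap rate" is the simplification the description itself suggests, and the threshold value of 3 pixels is an assumption.

```python
import math

def vertices_overlap(second_vertices, first_vertices, max_dist=3.0):
    """Superposition succeeds when each of the 4 vertices of the second
    barcode lands within max_dist of the corresponding vertex of the
    first barcode."""
    return all(math.dist(a, b) <= max_dist
               for a, b in zip(second_vertices, first_vertices))

ok = vertices_overlap([(0, 0), (10, 0), (10, 5), (0, 5)],
                      [(1, 1), (11, 0), (10, 6), (0, 4)])
bad = vertices_overlap([(0, 0), (10, 0), (10, 5), (0, 5)],
                       [(9, 9), (10, 0), (10, 5), (0, 5)])
```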
A3, extracting the similarity matching objects of the first virtual whole image from the first global map, and calculating the first area intersection ratio of each matching object with the first virtual whole image;
In step A3, the method for extracting the similarity matching objects of the first virtual whole image from the first global map specifically includes the following steps:
A31, screening out the second grids adjacent to the first grid in which the first bar code was scanned (the middle grid shown in FIG. 5); it should be noted that as many adjacent grids as exist are screened, so for an edge grid that scanned the first bar code only the 5 adjacent second grids (top, left, right, top-left and top-right) may be available;
A32, extracting from the first global map a local map formed by the first grid and each adjacent second grid;
A33, identifying an image block (reference numeral '400' in FIG. 4) in the local map as a similarity matching object of the first virtual whole image. A conventional image recognition method is used here to identify image blocks in the local map, for example by taking a region of continuous pixels in the local map as an image block.
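As a hedged sketch of the conventional recognition mentioned in step A33 (the function name and the binary-mask input are assumptions, not the application's implementation), a region of continuous pixels can be collected with a simple 4-connected flood fill:

```python
from collections import deque

def extract_image_blocks(mask):
    """Collect 4-connected regions of nonzero pixels in a 2D mask;
    each region approximates one 'image block' (a snack on the belt)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blocks = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                region, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:              # breadth-first flood fill
                    y, x = queue.popleft()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blocks.append(region)
    return blocks
```

The pixel count of each returned region is exactly the image-block area used later in the first area intersection ratio.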
In addition, the pixel area of an image block can be calculated with an existing image-area calculation method, and the area of the first virtual whole image is known in advance (the outer-package area of each snack type is calculated and stored beforehand, so when computing the first area intersection ratio in step A3 the machine only needs to read it from the database); the calculation of the first area intersection ratio in step A3 is therefore not complex.
After step A3 is performed, the similarity matching of the first virtual whole image against each image block proceeds to the step:
A4, judging whether the first area intersection ratio calculated in step A3 is greater than a preset threshold,
if so, judging that the similarity matching succeeds (the image blocks that took part in a successful similarity match are then filtered out of the first global map and excluded as subsequent secondary classification objects, reducing their number and speeding up the subsequent secondary classification);
if not, judging that the similarity matching fails.
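The decision of step A4 reduces to an intersection-over-union test on the two areas. A minimal sketch (the names and the example threshold value are assumptions; the application only requires "greater than a preset threshold"):

```python
def area_iou(area_a, area_b, intersection):
    """Intersection-over-union of two regions, given their pixel areas
    and the pixel area of their overlap."""
    union = area_a + area_b - intersection
    return intersection / union if union else 0.0

def similarity_match(block_area, virtual_area, intersection, threshold=0.8):
    """Step A4: the match succeeds when the first area intersection
    ratio exceeds the preset threshold."""
    return area_iou(block_area, virtual_area, intersection) > threshold
```

The same ratio reappears in steps S33 (second and third area intersection ratios) with the rectangular frame and color block areas substituted in.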
After completing the filtering and screening of the secondary classification objects on the first global map, as shown in FIG. 1, the snack classification method of code scanning coordinated image recognition and verification by partition bars provided in this embodiment shifts to the step:
S3, for each image block remaining in the first global map after the filtering of step S2, carrying out secondary classification on the size ratio of a fixed-size first rectangular frame that frame-selects, with the greatest frame selection matching degree, the first color block of largest area in the image block, to a fixed-size second rectangular frame that frame-selects, with the greatest frame selection matching degree, the second color block of smallest area in the image block;
In the invention, the primary classification is realized by the bar code scanning and image acquisition devices installed in the grids below the transparent conveyor belt scanning the bar codes on the snack outer packages. However, when customers spread assorted snacks on the conveyor belt, some bar codes face downward and some face upward; a bar code facing upward cannot be scanned by the device below it, so primary classification fails for that snack, which must then be further identified through the secondary classification.
The method of secondary classification is specifically described below:
Before the secondary classification, all rectangular frames in the rectangular frame database are first arranged into a rectangular frame list from largest to smallest in size; a first direction is defined as traversing the list from largest to smallest, and a second direction as traversing it from smallest to largest. The secondary classification then specifically includes the following steps:
S31, according to a preset extraction strategy, extracting from the rectangular frame list the first rectangular frame of largest size along the first direction and/or the second rectangular frame of smallest size along the second direction (whether 'and' or 'or' applies is made clear in the extraction strategy described below);
S32, frame-selecting each color block on the image block with the first rectangular frame and/or the second rectangular frame; it should be noted that the image block represents the image of a snack spread on the conveyor, and a color block is a region of continuous pixels of a certain color on that snack image. An existing image recognition algorithm identifies the color blocks; since the technology of recognizing such single-color regions is mature and the recognition algorithm is not within the claimed scope of the invention, it is not described in detail.
The method of frame-selecting a color block with the first or second rectangular frame is similar in principle to existing algorithms that adaptively generate a rectangular frame around a target image region. The difference is that the rectangular frames adopted in the invention are not adaptively generated to fit the area of the target region but are of fixed size: each frame is simply placed along the boundary of the target color block as closely as possible, which greatly reduces the amount of boundary computation for the frame-selected region. For example, as shown in FIG. 6, a first color block and a second color block are frame-selected with the extracted fixed-size first rectangular frame 500 and second rectangular frame 600, respectively;
S33, calculating the second area intersection ratio of the first rectangular frame with each frame-selected first color block as the first frame selection matching degree, and/or the third area intersection ratio of the second rectangular frame with each frame-selected second color block as the second frame selection matching degree;
S34, judging whether both a first frame selection matching degree greater than the first frame selection matching degree threshold and a second frame selection matching degree greater than the second frame selection matching degree threshold have been generated,
if yes, go to step S35;
if not, returning to the step S31;
and S35, calculating the size ratio of the successfully matched first rectangular frame to the second rectangular frame as the secondary classification feature, and matching the snack commodity type corresponding to that size ratio.
The extraction strategy described in step S31 specifically includes the following steps:
S311, judging whether the last traversal of the rectangular frame list generated neither the first frame selection matching degree nor the second frame selection matching degree,
if yes, filtering the first and second rectangular frames extracted in the previous traversal out of the rectangular frame list, and then re-extracting the largest first rectangular frame along the first direction and the smallest second rectangular frame along the second direction;
if not, go to step S312;
S312, judging whether the last traversal generated the first frame selection matching degree,
if yes, filtering the second rectangular frame extracted in the previous traversal out of the rectangular frame list, and then re-extracting the smallest second rectangular frame along the second direction;
if not, filtering the first rectangular frame extracted in the previous traversal out of the rectangular frame list, and then re-extracting the largest first rectangular frame along the first direction.
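Steps S31-S35 together with the extraction strategy of steps S311-S312 amount to a two-pointer traversal of the size-sorted rectangular frame list. The sketch below is an illustration under stated assumptions (frames are reduced to scalar sizes, and the frame selection matching degree tests are abstracted into predicates), not the claimed implementation:

```python
def secondary_classify(frames, match_large, match_small):
    """Traverse a size-descending frame list from both ends: keep the
    current largest frame until it matches the largest color block,
    keep the current smallest frame until it matches the smallest
    block, and return their size ratio as the classification feature.
    match_large / match_small stand in for the threshold tests of S34."""
    lo, hi = 0, len(frames) - 1              # lo: first direction, hi: second direction
    while lo <= hi:
        big_ok = match_large(frames[lo])
        small_ok = match_small(frames[hi])
        if big_ok and small_ok:
            return frames[lo] / frames[hi]   # S35: secondary classification feature
        if not big_ok:
            lo += 1                          # S312 'no': re-extract next largest
        if not small_ok:
            hi -= 1                          # S312 'yes': re-extract next smallest
    return None                              # no match: fall through to three-level classification
```

When neither frame matched (the S311 case) both pointers advance in the same iteration, exactly as the extraction strategy prescribes.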
When an image block undergoes secondary classification by traversing the rectangular frame list, if after the full traversal no first rectangular frame reaching the first frame selection matching degree and no second rectangular frame reaching the second frame selection matching degree have been matched, the image block is marked in the first global map as a snack image region not yet successfully classified, and three-level classification is attempted on it later.
When the snacks are spread evenly on the transparent conveyor belt (with no abnormal condition such as overlapping), primary and secondary classification achieve the desired classification precision for bulk snacks, and the purchased snacks can be priced according to the primary and secondary classification results. Each type of snack has a corresponding package weight, which can be weighed in advance and entered into the machine. At pricing time, the number of times a snack type was identified multiplied by its single-package weight gives the total weight purchased of that type, and that total weight multiplied by the type's per-weight price gives its price. Therefore, when no abnormal condition such as overlapping exists, the primary and secondary classification of steps S1-S3 completes the process, and, as shown in FIG. 1, the snack classification method of code scanning coordinated image recognition and verification by partition bars of this embodiment shifts to the step:
S4, after pricing is carried out according to the primary classification result and the secondary classification result, each snack is conveyed to the shopping bag through the transparent conveyor belt.
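The pricing arithmetic described above (identification count × pre-weighed single-package weight, then total weight × per-weight price) can be illustrated as follows; all names and units are hypothetical:

```python
def price_snacks(counts, unit_weight_g, price_per_kg):
    """Total price from recognition counts: each recognition of a snack
    type contributes that type's pre-weighed single-package weight in
    grams; the type's total weight is then billed at its per-kg price."""
    total = 0.0
    for snack, n in counts.items():
        weight_kg = n * unit_weight_g[snack] / 1000.0
        total += weight_kg * price_per_kg[snack]
    return total
```

For example, three packages of a 200 g snack priced at 50 per kilogram total 0.6 kg and thus 30 currency units.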
However, abnormal conditions such as overlapping snacks may arise from uneven spreading, causing objects of the primary and secondary classification to be missed and snacks to be under-priced. To solve this, when executing step S4, pricing is preferably also performed according to a three-level classification result, with the following flow:
After the secondary classification of step S3 is completed, it is judged whether the classification of every image block in the first global map is finished. The judging method is as follows: in step S1 a first mark is made on each grid successfully classified at the first level (from the first mark, the machine can directly determine that the image blocks formed on the first global map and bearing the first mark have been classified), and in step S3 a second mark is made on each image block successfully classified at the second level (likewise, from the second mark, the machine can directly determine that the marked image blocks have been classified); if any image block bears no mark at all, the classification of the image blocks in the first global map is judged incomplete.
If yes, after pricing according to all classification results (including the primary, secondary and three-level classification results), each snack is conveyed to the shopping bag via the transparent conveyor belt;
if not, three-level classification is performed on each unclassified image block in the first global map by means of the partition bar, and each snack is conveyed to the shopping bag after pricing is completed.
The method for performing three-level classification comprises the following steps:
B1, after the transparent conveyor belt, assisted by the partition bar 100 shown in FIG. 2 and by manual secondary spreading, conveys each snack located in the first area to the second area, a second image acquisition device (preferably installed above the second area) acquires a second global map of the snacks spread in the second area (see FIG. 3 for a comparison of the first and second global maps).
The height of the partition bar above the conveyor is preset according to the outer-package thickness of each snack type. For example, if the outer packages of all bulk snacks in the store fall into 2 thickness ranges, the first greater than 5 cm and the second 0-5 cm, the partition bar height can be set to 5 cm. Bulk snacks with outer packages thicker than 5 cm are priced by the consumer self-scanning the bar code at the code scanning device 200 shown in FIG. 2; such snacks do not participate in machine classification at any level, which reduces the number of classification objects and helps improve the overall machine classification speed. Snacks thinner than 5 cm may, after stacking, pile up to a height either below or above 5 cm: piles below 5 cm must be spread a second time manually by the merchant or consumer, while piles above 5 cm are spread a second time by the partition bar, reducing the number of snacks that must be respread manually. When many snacks are stacked, rapid manual respreading can still assist, helping the conveyor belt quickly convey each snack to the second area.
It should also be noted that the conveyor takes the completion of the primary and secondary classification on the first global map as its instruction to convey the snacks from the first area to the second area. The second image acquisition device acquires the second global map of the snacks in the second area at the moment it has detected all the grids that formed the first global map (the grids arranged under the conveyor belt are conveyed synchronously with it). It judges whether all those grids have entered the second area as follows: the first image acquisition device captures the field of view of the first global map with, for example, the grids (1), (2), (3), (4) shown in FIG. 2 as the boundary points of that field, and as long as these 4 boundary points are detected, the field of view covers all grids in the first area; the second image acquisition device then only needs to detect the same 4 boundary points in the second area to determine that all grids originally in the first area have been transferred there, and it captures the second global map within the field of view enclosed by those 4 points. The 4 boundary points can be identified by distinct grid color features, for example red, orange, yellow and blue for boundary grids (1), (2), (3) and (4) respectively, with all other grids set to transparent.
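The boundary-point check can be sketched as a simple set test (the color names follow the example above; the function and constant names are assumptions):

```python
BOUNDARY_COLOURS = {"red", "orange", "yellow", "blue"}  # boundary grids (1)-(4)

def all_grids_in_view(detected_grid_colours):
    """The second camera re-captures the global map only once every one
    of the four color-coded boundary grids has been detected in the
    second area, i.e. the whole first-area grid has been conveyed in."""
    return BOUNDARY_COLOURS.issubset(detected_grid_colours)
```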
B2, extracting the difference region between the second global map and the first global map, the extraction method specifically comprising the following steps:
As stated, in step S1 a first mark is made on each grid successfully classified at the first level (the first mark also appears in the first global map; for example, since the grids are conveyed synchronously with the belt, a grid's colored lamp can be lit once it scans a bar code, and the lamp color forms the first mark in the first global map), and in step S3 a second mark is made on each image block successfully classified at the second level (shown in the first global map by, for example, rendering the successfully classified image block in a virtual color, a virtually colored block indicating successful secondary classification). The difference region between the second global map and the first global map is then extracted through the following steps:
C1, the classification and pricing device, taking the second global map acquired by the second image acquisition device as its instruction, activates the bar code scanning function of the bar code scanning and image acquisition devices installed in the grids without a first mark, and scans the bar codes of the snacks in the second area;
C2, generating on the second global map, by the method of steps A1-A2, a second virtual whole image for the snack type corresponding to each second bar code scanned in step C1;
C3, identifying, by the method of steps A31-A33, image blocks in the second global map as similarity matching objects of the second virtual whole images;
C4, making the second mark on each image block successfully similarity-matched with a second virtual whole image, and taking each image block without the second mark in the second global map as the difference region.
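Steps C1-C4 leave the difference region as exactly the unmarked image blocks. A minimal sketch, under the assumption that image blocks and their marks are tracked by identifiers:

```python
def difference_region(blocks, marked_ids):
    """Step C4: every image block in the second global map that carries
    no mark becomes part of the difference region handed to the
    three-level classification. blocks maps block id -> block data;
    marked_ids holds the ids marked during steps S1/S3/C1-C3."""
    return {bid: blk for bid, blk in blocks.items() if bid not in marked_ids}
```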
After the difference region is extracted, the three-level classification method shifts to the step:
B3, classifying the snack commodities in each difference region using the secondary classification method described in steps S31-S35 and S311-S312.
In conclusion: primary classification of the spread snacks is realized through bar code scanning by the devices installed below the transparent conveyor belt; a virtual whole image is generated from the orientation features of each bar code scanned during primary classification and matched against the real snack image blocks to screen the secondary classification objects, which helps improve the image classification speed of the secondary classification; fixed-size rectangular frames frame-select the largest first color block and the smallest second color block in each image block subject to secondary classification, and the size ratio of the two frames serves as the secondary classification feature, reducing the computation needed to identify color-block boundaries and speeding up the secondary classification; finally, the primary and secondary classification are checked through the three-level classification, which integrates the advantages of the first two levels and makes the classification more accurate.
It should be understood that the above is merely a description of the preferred embodiments of the present application and the technical principles employed. Those skilled in the art will appreciate that various modifications, equivalents and variations can be made to the present application; such modifications fall within the scope of the present application provided they do not depart from its spirit. In addition, some terms used in the description and claims are not limiting but merely for convenience of description.

Claims (10)

1. A snack classification method of code scanning coordinated image recognition and verification by means of partition bars, characterized in that a plurality of grids are divided under a transparent conveyor belt and a bar code scanning and image acquisition device is installed in each grid, the snack classification method comprising the following steps:
S1, each bar code scanning and image acquisition device installed in a grid within a first area of the classification and pricing device attempts to scan the bar code of the snack commodity spread above its grid, completing first-level classification and further acquiring a bar code image once a first bar code is scanned; a first image acquisition device acquires a first global map of the snacks spread over the first area;
S2, generating on the first global map a first virtual whole image of the snack type corresponding to each scanned first bar code, the machine then matching and filtering out of the first global map the image blocks having similarity with the first virtual whole image;
S3, for each image block remaining in the first global map after the filtering of step S2, carrying out secondary classification on the size ratio of a fixed-size first rectangular frame that frame-selects, with the greatest frame selection matching degree, a first color block of largest area in the image block, to a fixed-size second rectangular frame that frame-selects, with the greatest frame selection matching degree, a second color block of smallest area in the image block;
s4, after pricing is carried out according to the primary classification result and the secondary classification result, each snack is conveyed to the shopping bag through the transparent conveyor belt.
2. The method of classifying snacks by means of scan code coordinated image recognition and partition bar verification according to claim 1, wherein in step S4, pricing is further performed according to the three-level classification result, and the pricing flow is as follows:
after the secondary classification of step S3 is completed, it is determined whether classification of each of the image blocks in the first global map is completed,
if yes, after pricing according to all classification results, conveying all snacks to a shopping bag through the transparent conveyor belt;
if not, three-level classification is carried out on each image block which is not classified in the first global map by means of the partition rod, and each snack is conveyed to the shopping bag after pricing is completed.
3. The method of claim 2, wherein the image information of the bar code image includes a first orientation characteristic of the scanned first bar code relative to the grid, the first orientation characteristic being the angles respectively formed with a horizontal line by the two lines connecting the upper left vertex (p1) and the upper right vertex (p2) to the center point (p0) of the grid.
4. The method of scan code coordinated image recognition and snack classification by zoning bar verification of claim 3 wherein in step S2 the method of similarity matching said first virtual whole image with said image block comprises the steps of:
a1, acquiring information of a snack whole image bound by the type pointed by the scanned first bar code, wherein the information comprises size characteristics and shape characteristics of the snack whole image, position characteristics of the bar code in the snack whole image and second orientation characteristics of the bar code relative to the snack whole image;
a2, restoring the snack whole image corresponding to the scanned first bar code according to the snack whole image information, and overlapping a second bar code with a second orientation in the snack whole image onto the first bar code with the first orientation, so as to generate the restored snack whole image into the first virtual whole image on the first global image;
A3, extracting similarity matching objects of the first virtual whole graph from the first global graph, and calculating the first area intersection ratio of each matching object and the first virtual whole graph;
a4, judging whether the first area intersection ratio is larger than a preset threshold value,
if yes, judging that the similarity matching is successful;
if not, judging that the similarity matching fails.
5. The method of scan code coordinated image recognition and snack classification by zoning bar verification of claim 4 wherein in step A3 the method of extracting similarity matching objects of the first virtual global graph from the first global graph comprises the steps of:
a31, screening out second grids adjacent to the first grids scanned by the first bar code;
a32, extracting a local graph formed by the first grid and each adjacent second grid from the first global graph;
a33, identifying the image block from the local graph as a similarity matching object of the first virtual whole graph.
6. The method of classifying snack by means of partition bar verification and scan code coordinated image recognition of claim 5 wherein, for each rectangular box in a rectangular box database, the rectangular boxes are arranged into a rectangular box list from big to small in size, and a first direction is defined for traversing the rectangular box list from big to small in size, and a second direction is defined for traversing the rectangular box list from small to big in size, the method of performing the two-stage classification in step S3 comprises the steps of:
S31, respectively extracting a first rectangular frame with a maximum size and/or extracting a second rectangular frame with a minimum size from the rectangular frame list along the first direction and/or the second direction according to a preset extraction strategy;
s32, selecting each color block on the image block by using the first rectangular frame and/or the second rectangular frame;
s33, calculating a second area intersection ratio of the first rectangular frame and each frame-selected first color block to be used as a first frame-selected matching degree, and/or calculating a third area intersection ratio of the second rectangular frame and each frame-selected second color block to be used as a second frame-selected matching degree;
s34, judging whether the first frame selection matching degree larger than a first frame selection matching degree threshold value is generated, and generating the second frame selection matching degree larger than a second frame selection matching degree threshold value,
if yes, go to step S35;
if not, returning to the step S31;
and S35, calculating the size ratio of the first rectangular frame and the second rectangular frame which are successfully matched as a secondary classification characteristic, and matching the snack commodity type corresponding to the size ratio.
7. The scan code coordinated image recognition and snack classification method by zoned bar verification of claim 6 wherein said extraction strategy in step S31 comprises the steps of:
S311, judging whether the first frame selection matching degree is not generated and the second frame selection matching degree is not generated in the last traversal of the rectangular frame list,
if yes, filtering the first rectangular frame and the second rectangular frame extracted in the previous traversal from the rectangular frame list, and then re-extracting the first rectangular frame with the maximum size and the second rectangular frame with the minimum size along the first direction and the second direction;
if not, go to step S312;
s312, judging whether the last traversal generates the first frame selection matching degree,
if yes, filtering the second rectangular frame extracted in the previous traversal from the rectangular frame list, and then re-extracting the second rectangular frame with the minimum size along the second direction;
if not, filtering the first rectangular frame extracted by the previous traversal from the rectangular frame list, and then re-extracting the first rectangular frame with the maximum size along the first direction.
8. The snack classification method of scan code coordinated image recognition and verification by means of partition bars of claim 6 or 7, wherein the method of performing three-level classification comprises the steps of:
B1, after the transparent conveyor belt, with secondary spreading by the partition bar, conveys each snack located in the first area to a second area, a second image acquisition device acquires a second global map of the snacks spread in the second area;
b2, extracting a difference area between the second global map and the first global map;
and B3, classifying the snack goods in each difference area by adopting the secondary classification method.
9. The method of classifying snack foods by means of partition bar verification and scan code coordinated image recognition of claim 8 wherein in step S1, first marking is made on each of said grids successfully classified by a first level, in step S3, second marking is made on each of said image blocks successfully classified by a second level, and in step B2, the method of extracting said difference region comprises the steps of:
C1, the classified pricing device takes the second global image acquired by the second image acquisition device as an instruction, activates the bar code scanning function of the bar code scanning and image acquisition device installed in each grid without the first mark, and scans the bar code of each snack in the second area;
c2, generating a second virtual overall diagram of snacks of the corresponding type of the second bar code scanned in the step C1 on the second global diagram by the method described in the step A1-A2;
C3, identifying the image block as a similarity matching object of the second virtual whole graph from the second global graph by the method described in the step A31-A33;
and C4, matching the image blocks with similarity with each second virtual whole image, making the second marks, and taking each image block which is not made with the second marks in the second global image as the difference area.
10. The snack classification method of scan code coordinated image recognition and verification by means of partition bars according to any one of claims 1-7 and 9, wherein each of said grids is conveyed synchronously with said transparent conveyor belt.
CN202311404116.7A 2023-10-27 2023-10-27 Snack classification method by scanning code coordinated image recognition and checking through partition bars Active CN117132845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311404116.7A CN117132845B (en) 2023-10-27 2023-10-27 Snack classification method by scanning code coordinated image recognition and checking through partition bars


Publications (2)

Publication Number Publication Date
CN117132845A true CN117132845A (en) 2023-11-28
CN117132845B CN117132845B (en) 2024-01-05

Family

ID=88851171


Country Status (1)

Country Link
CN (1) CN117132845B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118080374A (en) * 2024-04-25 2024-05-28 浙江名瑞智能装备科技股份有限公司 Multi-material combined blanking feed line

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324102A (en) * 2011-10-08 2012-01-18 北京航空航天大学 Method for automatically filling structure information and texture information of hole area of image scene
CN106781121A (en) * 2016-12-14 2017-05-31 朱明� The supermarket self-checkout intelligence system of view-based access control model analysis
CN109637056A (en) * 2018-12-13 2019-04-16 深圳众宝城贸易有限公司 Artificial-intelligence supermarket checkout system
US20200334650A1 (en) * 2019-04-16 2020-10-22 Alibaba Group Holding Limited Self-service checkout counter checkout
CN112365255A (en) * 2020-10-28 2021-02-12 中标慧安信息技术股份有限公司 Non-inductive payment method and system for supermarket
CN114970590A (en) * 2022-04-22 2022-08-30 中国计量大学 Bar code detection method
CN115481647A (en) * 2022-09-06 2022-12-16 浙江百世技术有限公司 Method for identifying telephone number in face list image
US20230071821A1 (en) * 2021-09-07 2023-03-09 Infiniq Co., Ltd. Product identification method and sales system using the same
CN115861986A (en) * 2022-12-21 2023-03-28 浙江由由科技有限公司 Non-standard product intelligent identification and loss prevention method based on supermarket self-service checkout system
CN116721411A (en) * 2023-02-09 2023-09-08 浙江由由科技有限公司 Bulk snack identification method based on machine learning
WO2023204436A1 (en) * 2022-04-20 2023-10-26 주식회사 지어소프트 Method and apparatus for processing payments for products in unmanned store

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
THI THOA MAC: "Application of Improved Yolov3 for Pill Manufacturing System", IFAC-PapersOnLine, vol. 54, no. 15, pages 544-549 *
LIU Caixia; YANG Chun: "Design of a machine-vision-based control system for a food palletizing robot", Food Industry, no. 01, pages 232-234 *
ZHANG Shuqing: "Research on deep-learning-based supermarket commodity detection and recognition algorithms", China Master's Theses Full-text Database, Information Science and Technology, vol. 2021, no. 09, pages 138-437 *
LI Xiuli: "Research on deep-learning-based image detection and recognition methods for commodities in unmanned supermarkets", China Master's Theses Full-text Database, Information Science and Technology, vol. 2021, no. 10, pages 138-120 *

Also Published As

Publication number Publication date
CN117132845B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
CN117132845B (en) Snack classification method by scanning code coordinated image recognition and checking through partition bars
CN108320404B (en) Commodity identification method and device based on neural network and self-service cash register
CA2841721C (en) Package vision sort system and method
CN109447169A Image processing method and method, device, and electronic system for training its model
CN101268478B (en) Method and apparatus for detecting suspicious activity using video analysis
CN109300263A Settlement method and device based on convolutional-neural-network image recognition
CN114819797A (en) Image acquisition device and information acquisition system for inventory management system
CN108376447A Remote weighing platform with delayed fraud intervention
CN106096932A Pricing method of an automatic dish recognition system based on tableware shape
CN109741551B (en) Commodity identification settlement method, device and system
CN110321769A Multi-size on-shelf commodity detection method
Bobbit et al. Visual item verification for fraud prevention in retail self-checkout
CN109859164A Method for PCBA appearance inspection using fast convolutional neural networks
CN109918517A Intelligent shopping system
CN115861986B (en) Non-standard intelligent identification and loss prevention method based on supermarket self-service checkout system
CN110909698A (en) Electronic scale recognition result output method, system, device and readable storage medium
CN109345735A Self-service vending machine commodity recognition method and system
CN107563461A (en) The automatic fees-collecting method and system of catering industry based on image recognition
CN113139768B (en) Goods shortage monitoring method based on unmanned vending machine
CN117437264A (en) Behavior information identification method, device and storage medium
CN114455255A (en) Abnormal cigarette sorting error detection method based on multi-feature recognition
CN110473337A Automatic vending machine based on an image acquisition device
JP7446403B2 (en) Product recognition of multiple products for checkout
CN111444796A (en) Commodity placement judgment method and device for vending robot
CN116721411A (en) Bulk snack identification method based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant